Titles | Abstracts | Years | Categories
---|---|---|---|
Word embedding and neural network on grammatical gender -- A case study
of Swedish | We analyze the information that word embeddings provide about
grammatical gender in Swedish. We hope this paper can serve as one of the
bridges connecting the methods of computational linguistics and general
linguistics. Taking nominal classification in Swedish as a case study, we first
show how the information about grammatical gender in language can be captured
by word embedding models and artificial neural networks. Then, we match our
results with previous linguistic hypotheses on assignment and usage of
grammatical gender in Swedish and analyze the errors made by the computational
model from a linguistic perspective.
| 2020 | Computation and Language |
Improving Results on Russian Sentiment Datasets | In this study, we test standard neural network architectures (CNN, LSTM,
BiLSTM) and recently introduced BERT architectures on existing Russian
sentiment evaluation datasets. We compare two variants of Russian BERT and show
that the conversational variant performs better on all sentiment tasks in this
study. The best results were achieved by the BERT-NLI model, which treats
sentiment classification as a natural language inference task. On one of the
datasets, this model practically reaches human-level performance.
| 2020 | Computation and Language |
Towards Ecologically Valid Research on Language User Interfaces | Language User Interfaces (LUIs) could improve human-machine interaction for a
wide variety of tasks, such as playing music, getting insights from databases,
or instructing domestic robots. In contrast to traditional hand-crafted
approaches, recent work attempts to build LUIs in a data-driven way using
modern deep learning methods. To satisfy the data needs of such learning
algorithms, researchers have constructed benchmarks that emphasize the quantity
of collected data at the cost of its naturalness and relevance to real-world
LUI use cases. As a consequence, research findings on such benchmarks might not
be relevant for developing practical LUIs. The goal of this paper is to
bootstrap the discussion around this issue, which we refer to as the
benchmarks' low ecological validity. To this end, we describe what we deem an
ideal methodology for machine learning research on LUIs and categorize five
common ways in which recent benchmarks deviate from it. We give concrete
examples of the five kinds of deviations and their consequences. Lastly, we
offer a number of recommendations as to how to increase the ecological validity
of machine learning research on LUIs.
| 2020 | Computation and Language |
Measuring prominence of scientific work in online news as a proxy for
impact | The impact made by a scientific paper on the work of other academics has many
established metrics, including metrics based on citation counts and social
media commenting. However, determination of the impact of a scientific paper on
the wider society is less well established. For example, is it important for
scientific work to be newsworthy? Here we present a new corpus of newspaper
articles linked to the scientific papers that they describe. We find that
Impact Case studies submitted to the UK Research Excellence Framework (REF)
2014 that refer to scientific papers mentioned in newspaper articles were
awarded a higher score in the REF assessment. The papers associated with these
case studies also feature prominently in the newspaper articles. We hypothesise
that such prominence can be a useful proxy for societal impact. We therefore
provide a novel baseline approach for measuring the prominence of scientific
papers mentioned within news articles. Our measurement of prominence is based
on semantic similarity through a graph-based ranking algorithm. We find that
scientific papers with an associated REF case study tend to have a higher
prominence score. This supports our hypothesis that linguistic
prominence in news can be used to suggest the wider non-academic impact of
scientific work.
| 2020 | Computation and Language |
GUIR at SemEval-2020 Task 12: Domain-Tuned Contextualized Models for
Offensive Language Detection | Offensive language detection is an important and challenging task in natural
language processing. We present our submissions to the OffensEval 2020 shared
task, which includes three English sub-tasks: identifying the presence of
offensive language (Sub-task A), identifying the presence of a target in
offensive language (Sub-task B), and identifying the categories of the target
(Sub-task C). Our experiments explore using a domain-tuned contextualized
language model (namely, BERT) for this task. We also experiment with different
components and configurations (e.g., a multi-view SVM) stacked upon BERT models
for specific sub-tasks. Our submissions achieve F1 scores of 91.7% in Sub-task
A, 66.5% in Sub-task B, and 63.2% in Sub-task C. We perform an ablation study
which reveals that domain tuning considerably improves the classification
performance. Furthermore, an error analysis reveals common misclassifications
made by our model and outlines directions for future research.
| 2020 | Computation and Language |
Development of POS tagger for English-Bengali Code-Mixed data | Code-mixed texts are widespread nowadays due to the advent of social media.
Since these texts combine two languages to formulate a sentence, it gives rise
to various research problems related to Natural Language Processing. In this
paper, we address one such problem, namely Part-of-Speech (POS) tagging of
code-mixed texts. We have built a system that can POS tag English-Bengali
code-mixed data where the Bengali words were written in Roman script. Our
approach initially involves the collection and cleaning of English-Bengali
code-mixed tweets. These tweets were used as a development dataset for building
our system. The proposed system is a modular approach that starts by tagging
individual tokens with their respective languages and then passes them to
different POS taggers, designed for different languages (English and Bengali,
in our case). Tags given by the two systems are later joined together and the
final result is then mapped to a universal POS tag set. Our system was
evaluated on 100 manually POS-tagged code-mixed sentences and achieved an
accuracy of 75.29%.
| 2020 | Computation and Language |
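The modular pipeline described above is compact enough to illustrate. The following is a toy sketch under stated assumptions: the language identifier, the two per-language taggers, and the tag-mapping table are tiny stand-ins, not the authors' actual components.

```python
# Toy sketch of the modular code-mixed POS pipeline: language-tag each
# token, route tokens to per-language taggers, join the outputs, then map
# to a universal tag set. All components here are illustrative stand-ins.
EN_WORDS = {"i", "love", "this", "movie"}           # toy language-ID lexicon
UNIVERSAL_MAP = {"VB": "VERB", "NN": "NOUN", "PRP": "PRON", "DT": "DET",
                 "BN_V": "VERB", "BN_N": "NOUN"}    # tagger tags -> UPOS

def identify_language(token):
    return "en" if token.lower() in EN_WORDS else "bn"

def tag_english(tokens):
    # placeholder for a real English POS tagger
    toy = {"i": "PRP", "love": "VB", "this": "DT", "movie": "NN"}
    return [(t, toy.get(t.lower(), "NN")) for t in tokens]

def tag_bengali(tokens):
    # placeholder for a Romanized-Bengali POS tagger
    toy = {"khub": "BN_N", "bhalo": "BN_N", "laglo": "BN_V"}
    return [(t, toy.get(t.lower(), "BN_N")) for t in tokens]

def pos_tag_code_mixed(tokens):
    langs = [identify_language(t) for t in tokens]
    en = iter(tag_english([t for t, l in zip(tokens, langs) if l == "en"]))
    bn = iter(tag_bengali([t for t, l in zip(tokens, langs) if l == "bn"]))
    merged = [next(en) if l == "en" else next(bn) for l in langs]
    return [(tok, UNIVERSAL_MAP.get(tag, "X")) for tok, tag in merged]

print(pos_tag_code_mixed("I love this movie khub bhalo laglo".split()))
```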
Object-and-Action Aware Model for Visual Language Navigation | Vision-and-Language Navigation (VLN) is unique in that it requires turning
relatively general natural-language instructions into robot agent actions, on
the basis of the visible environment. This requires extracting value from two
very different types of natural-language information. The first is object
descriptions (e.g., 'table', 'door'), each serving as a cue for the agent to
determine the next action by finding the item visible in the environment; the
second is action specifications (e.g., 'go straight', 'turn left'), which allow
the robot to directly predict the next movement without relying on visual
perception. However, most existing methods pay little attention to
distinguishing these two kinds of information during instruction encoding, and
conflate the matching of textual object/action encodings with the visual
perception/orientation features of candidate viewpoints. In this paper, we
propose an Object-and-Action Aware Model (OAAM) that processes these two
different forms of natural language based instruction separately. This enables
each process to match object-centered/action-centered instruction to their own
counterpart visual perception/action orientation flexibly. However, one side
effect of the above solution is that an object mentioned in the instructions
may be observed in the direction of two or more candidate viewpoints, so the
OAAM may not predict the viewpoint on the shortest path as the next action. To
handle this problem, we design a simple but effective path loss to penalize
trajectories deviating from the ground truth path. Experimental results
demonstrate the effectiveness of the proposed model and path loss, and the
superiority of their combination with a 50% SPL score on the R2R dataset and a
40% CLS score on the R4R dataset in unseen environments, outperforming the
previous state-of-the-art.
| 2020 | Computation and Language |
Biomedical and Clinical English Model Packages in the Stanza Python NLP
Library | We introduce biomedical and clinical English model packages for the Stanza
Python NLP library. These packages offer accurate syntactic analysis and named
entity recognition capabilities for biomedical and clinical text, by combining
Stanza's fully neural architecture with a wide variety of open datasets as well
as large-scale unsupervised biomedical and clinical text data. We show via
extensive experiments that our packages achieve syntactic analysis and named
entity recognition performance that is on par with or surpasses
state-of-the-art results. We further show that these models do not compromise
speed compared to existing toolkits when GPU acceleration is available, and are
made easy to download and use with Stanza's Python interface. A demonstration
of our packages is available at: http://stanza.run/bio.
| 2020 | Computation and Language |
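A brief usage sketch through Stanza's Python interface follows; the package name "craft" is taken from Stanza's biomedical documentation at the time of writing and should be treated as an assumption to verify.

```python
# Minimal usage sketch of a Stanza biomedical package: download a package,
# build a pipeline, and read off syntax and named entities.
import stanza

stanza.download("en", package="craft")          # biomedical syntax + NER
nlp = stanza.Pipeline("en", package="craft")
doc = nlp("Mutations in the BRCA1 gene increase breast cancer risk.")

for sent in doc.sentences:
    for word in sent.words:
        print(word.text, word.upos, word.deprel)   # syntactic analysis
    for ent in sent.ents:
        print(ent.text, ent.type)                  # named entities
```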
#Brexit: Leave or Remain? The Role of User's Community and Diachronic
Evolution on Stance Detection | In recent years, interest has grown in classifying the stance that users
assume within online debates. Stance has usually been addressed by considering
users' posts in isolation, although social studies highlight that social
communities may influence users' opinions. Furthermore, stance should be
studied from a diachronic perspective, since this could shed light on the
opinion-shift dynamics that unfold during a debate. We analyzed the political
discussion in the UK about the Brexit referendum
on Twitter, proposing a novel approach and annotation schema for stance
detection, with the main aim of investigating the role of features related to
social network community and diachronic stance evolution. Classification
experiments show that such features provide very useful clues for detecting
stance.
| 2020 | Computation and Language |
Mirostat: A Neural Text Decoding Algorithm that Directly Controls
Perplexity | Neural text decoding is important for generating high-quality texts using
language models. To generate high-quality text, popular decoding algorithms
like top-k, top-p (nucleus), and temperature-based sampling truncate or distort
the unreliable low probability tail of the language model. Though these methods
generate high-quality text after parameter tuning, they are ad hoc. Not much is
known about the control they provide over the statistics of the output, which
is important since recent reports show text quality is highest for a specific
range of likelihoods. Here, first we provide a theoretical analysis of
perplexity in top-k, top-p, and temperature sampling, finding that
cross-entropy behaves approximately linearly as a function of p in top-p
sampling whereas it is a nonlinear function of k in top-k sampling, under
Zipfian statistics. We use this analysis to design a feedback-based adaptive
top-k text decoding algorithm called mirostat that generates text (of any
length) with a predetermined value of perplexity, and thereby high-quality text
without any tuning. Experiments show that for low values of k and p in top-k
and top-p sampling, perplexity drops significantly with generated text length,
which is also correlated with excessive repetitions in the text (the boredom
trap). On the other hand, for large values of k and p, we find that perplexity
increases with generated text length, which is correlated with incoherence in
the text (confusion trap). Mirostat avoids both traps: experiments show that
cross-entropy has a near-linear relation with repetition in generated text.
This relation is almost independent of the sampling method but slightly
dependent on the model used. Hence, for a given language model, control over
perplexity also gives control over repetitions. Experiments with human raters
for fluency, coherence, and quality further verify our findings.
| 2021 | Computation and Language |
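The feedback idea above is compact enough to sketch. The following is a simplified control loop in the spirit of mirostat, not the paper's exact algorithm (which estimates k from Zipfian statistics): after each sampled token, the observed surprisal is compared with the target and the truncation threshold is adjusted.

```python
# Simplified feedback-controlled top-k sampling: keep tokens whose
# surprisal is below a running threshold mu, then nudge mu toward the
# target surprise after each draw. Illustrative only.
import numpy as np

def feedback_topk_decode(next_probs, steps, target_surprise=3.0, lr=1.0):
    mu = 2 * target_surprise                      # initial threshold
    trace = []
    for _ in range(steps):
        p = np.sort(next_probs())[::-1]           # next-token probs, desc.
        k = max(1, int(np.sum(-np.log2(p) < mu))) # adaptive truncation
        topk = p[:k] / p[:k].sum()
        idx = np.random.choice(k, p=topk)
        surprise = -np.log2(topk[idx])
        mu -= lr * (surprise - target_surprise)   # feedback update
        trace.append((k, surprise))
    return trace

# toy "language model": a fixed Zipfian next-token distribution
zipf = 1.0 / np.arange(1, 1001)
zipf /= zipf.sum()
print(feedback_topk_decode(lambda: zipf, steps=5))
```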
An Experimental Study of The Effects of Position Bias on Emotion
Cause Extraction | Emotion Cause Extraction (ECE) aims to identify emotion causes from a
document after annotating the emotion keywords. Some baselines have been
proposed to address this problem, such as rule-based, commonsense-based, and
machine learning methods. We show, however, that a simple random selection
approach toward ECE that does not require observing the text achieves similar
performance compared to the baselines. We utilized only position information
relative to the emotion cause to accomplish this goal. Since position
information alone without observing the text resulted in higher F-measure, we
therefore uncovered a bias in the ECE single genre Sina-news benchmark. Further
analysis showed that an imbalance of emotional cause location exists in the
benchmark, with a majority of cause clauses immediately preceding the central
emotion clause. We examine the bias from a linguistic perspective, and show
that the high accuracy of current state-of-the-art deep learning models that
utilize location information is only evident on datasets that contain such
position biases. Accuracy drops drastically when a dataset with a balanced
location distribution is introduced. We therefore conclude that it is the
innate bias in this benchmark that caused the high accuracy of these deep
learning models on ECE. We hope that the case study in this paper presents both
a cautionary lesson, as well as a template for further studies, in interpreting
the superior fit of deep learning models without checking for bias.
| 2020 | Computation and Language |
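The position-only baseline discussed above is trivially simple, which is exactly the point. A sketch, assuming clause segmentation and the emotion clause index are given by the dataset:

```python
# Position-only ECE baseline: never look at the text; predict the clause
# immediately preceding the annotated emotion clause (falling back to the
# emotion clause itself at the document start).
def position_baseline(num_clauses, emotion_clause_idx):
    return max(0, emotion_clause_idx - 1)

# toy document with 5 clauses, emotion annotated in clause 3
predicted_cause = position_baseline(num_clauses=5, emotion_clause_idx=3)
print(predicted_cause)  # -> 2, the clause right before the emotion clause
```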
Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text
Classification | In cross-lingual text classification, one seeks to exploit labeled data from
one language to train a text classification model that can then be applied to a
completely different language. Recent multilingual representation models have
made it much easier to achieve this. However, there may still be subtle
differences between languages that are neglected when doing so. To address
this, we present a semi-supervised adversarial training process that minimizes
the maximal loss for label-preserving input perturbations. The resulting model
then serves as a teacher to induce labels for unlabeled target language samples
that can be used during further adversarial training, allowing us to gradually
adapt our model to the target language. Compared with a number of strong
baselines, we observe significant gains in effectiveness on document and intent
classification for a diverse set of languages.
| 2020 | Computation and Language |
Exploiting stance hierarchies for cost-sensitive stance detection of Web
documents | Fact checking is an essential challenge when combating fake news. Identifying
documents that agree or disagree with a particular statement (claim) is a core
task in this process. In this context, stance detection aims at identifying the
position (stance) of a document towards a claim. Most approaches address this
task through a 4-class classification model where the class distribution is
highly imbalanced. Therefore, they are particularly ineffective in detecting
the minority classes (for instance, 'disagree'), even though such instances are
crucial for tasks such as fact-checking by providing evidence for detecting
false claims. In this paper, we exploit the hierarchical nature of stance
classes, which allows us to propose a modular pipeline of cascading binary
classifiers, enabling performance tuning on a per-step and per-class basis. We
implement our approach through a combination of neural and traditional
classification models that highlight the misclassification costs of minority
classes. Evaluation results demonstrate state-of-the-art performance of our
approach and its ability to significantly improve the classification
performance of the important 'disagree' class.
| 2021 | Computation and Language |
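A minimal sketch of such a cascade of cost-sensitive binary classifiers follows. The stage order (related? → takes a stance? → agree/disagree), the class weights, and the TF-IDF + logistic regression stages are illustrative assumptions; the paper combines neural and traditional models.

```python
# Cascade of binary stance classifiers with per-stage class weights:
# each stage either emits its own label or defers to the next stage.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def make_stage(defer_weight):
    # cost-sensitive stage: up-weight the "defer to next stage" class
    return make_pipeline(
        TfidfVectorizer(),
        LogisticRegression(class_weight={0: 1.0, 1: defer_weight}))

stages = [make_stage(1.0), make_stage(2.0), make_stage(5.0)]

def predict_stance(text):
    for stage, label in zip(stages, ["unrelated", "discuss", "agree"]):
        if stage.predict([text])[0] == 0:
            return label
    return "disagree"   # minority class, reached only by the last stage

docs = ["weather report today", "the claim is discussed at length",
        "we fully support the claim", "the claim is false and misleading"]
stages[0].fit(docs, [0, 1, 1, 1])        # unrelated vs. related
stages[1].fit(docs[1:], [0, 1, 1])       # discuss vs. stance-taking
stages[2].fit(docs[2:], [0, 1])          # agree vs. disagree
print(predict_stance("this claim is false"))
```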
The Return of Lexical Dependencies: Neural Lexicalized PCFGs | In this paper we demonstrate that *context-free grammar (CFG) based
methods for grammar induction benefit from modeling lexical dependencies*.
This contrasts with the most popular current methods for grammar induction,
which focus on discovering *either* constituents *or* dependencies.
Previous approaches to marry these two disparate syntactic formalisms (e.g.
lexicalized PCFGs) have been plagued by sparsity, making them unsuitable for
unsupervised grammar induction. However, in this work, we present novel neural
models of lexicalized PCFGs which allow us to overcome sparsity problems and
effectively induce both constituents and dependencies within a single model.
Experiments demonstrate that this unified framework yields stronger results on
both representations than modeling either formalism alone.
Code is available at https://github.com/neulab/neural-lpcfg.
| 2020 | Computation and Language |
MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain
Question Answering | Progress in cross-lingual modeling depends on challenging, realistic, and
diverse evaluation sets. We introduce Multilingual Knowledge Questions and
Answers (MKQA), an open-domain question answering evaluation set comprising 10k
question-answer pairs aligned across 26 typologically diverse languages (260k
question-answer pairs in total). Answers are based on a heavily curated,
language-independent data representation, making results comparable across
languages and independent of language-specific passages. With 26 languages,
this dataset supplies the widest range of languages to date for evaluating
question answering. We benchmark a variety of state-of-the-art methods and
baselines for generative and extractive question answering, trained on Natural
Questions, in zero shot and translation settings. Results indicate this dataset
is challenging even in English, but especially so in low-resource languages.
| 2021 | Computation and Language |
NeuralQA: A Usable Library for Question Answering (Contextual Query
Expansion + BERT) on Large Datasets | Existing tools for Question Answering (QA) have challenges that limit their
use in practice. They can be complex to set up or integrate with existing
infrastructure, do not offer configurable interactive interfaces, and do not
cover the full set of subtasks that frequently comprise the QA pipeline (query
expansion, retrieval, reading, and explanation/sensemaking). To help address
these issues, we introduce NeuralQA - a usable library for QA on large
datasets. NeuralQA integrates well with existing infrastructure (e.g.,
ElasticSearch instances and reader models trained with the HuggingFace
Transformers API) and offers helpful defaults for QA subtasks. It introduces
and implements contextual query expansion (CQE) using a masked language model
(MLM) as well as relevant snippets (RelSnip) - a method for condensing large
documents into smaller passages that can be speedily processed by a document
reader model. Finally, it offers a flexible user interface to support workflows
for research explorations (e.g., visualization of gradient-based explanations
to support qualitative inspection of model behaviour) and large scale search
deployment. Code and documentation for NeuralQA are available as open source on
GitHub (https://github.com/victordibia/neuralqa).
| 2020 | Computation and Language |
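A generic sketch of contextual query expansion (CQE) with a masked language model follows, in the spirit of what NeuralQA implements; this is not NeuralQA's own API, and it assumes a recent version of the Hugging Face transformers library.

```python
# Contextual query expansion with an MLM: mask each query term in turn and
# collect the model's top predictions as expansion candidates.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def expand_query(query, top_k=3):
    terms = query.split()
    expansions = set(terms)
    for i in range(len(terms)):
        masked = " ".join(
            terms[:i] + [fill.tokenizer.mask_token] + terms[i + 1:])
        for cand in fill(masked, top_k=top_k):
            expansions.add(cand["token_str"].strip())
    return sorted(expansions)

print(expand_query("symptoms of viral infection"))
```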
Photon: A Robust Cross-Domain Text-to-SQL System | Natural language interfaces to databases (NLIDB) democratize end user access
to relational data. Due to fundamental differences between natural language
communication and programming, it is common for end users to issue questions
that are ambiguous to the system or fall outside the semantic scope of its
underlying query language. We present Photon, a robust, modular, cross-domain
NLIDB that can flag natural language input to which a SQL mapping cannot be
immediately determined. Photon consists of a strong neural semantic parser
(63.2% structure accuracy on the Spider dev benchmark), a human-in-the-loop
question corrector, a SQL executor and a response generator. The question
corrector is a discriminative neural sequence editor which detects confusion
span(s) in the input question and suggests rephrasing until a translatable
input is given by the user or a maximum number of iterations is reached.
Experiments on simulated data show that the proposed method effectively
improves the robustness of the text-to-SQL system against untranslatable user
input. The live demo of our system is available at http://naturalsql.com.
| 2020 | Computation and Language |
Leverage Unlabeled Data for Abstractive Speech Summarization with
Self-Supervised Learning and Back-Summarization | Supervised approaches for Neural Abstractive Summarization require large
annotated corpora that are costly to build. We present a French meeting
summarization task where reports are predicted based on the automatic
transcription of the meeting audio recordings. In order to build a corpus for
this task, it is necessary to obtain the (automatic or manual) transcription of
each meeting, and then to segment and align it with the corresponding manual
report to produce examples suitable for training. On the other hand,
we have access to a very large amount of unaligned data, in particular reports
without corresponding transcription. Reports are professionally written and
well formatted making pre-processing straightforward. In this context, we study
how to take advantage of this massive amount of unaligned data using two
approaches (i) self-supervised pre-training using a target-side denoising
encoder-decoder model; (ii) back-summarization, i.e., reversing the summarization
process by learning to predict the transcription given the report, in order to
align single reports with generated transcription, and use this synthetic
dataset for further training. We report large improvements compared to the
previous baseline (trained on aligned data only) for both approaches on two
evaluation sets. Moreover, combining the two gives even better results,
outperforming the baseline by a large margin of +6 ROUGE-1 and ROUGE-L and +5
ROUGE-2 on two evaluation sets.
| 2020 | Computation and Language |
The optimality of syntactic dependency distances | It is often stated that human languages, like other biological systems, are
shaped by cost-cutting pressures, but to what extent? Attempts to quantify the
degree of optimality of languages by means of an optimality score have been
scarce and focused mostly on English. Here we recast the problem of the
optimality of the word order of a sentence as an optimization problem on a
spatial network where the vertices are words, arcs indicate syntactic
dependencies and the space is defined by the linear order of the words in the
sentence. We introduce a new score to quantify the cognitive pressure to reduce
the distance between linked words in a sentence. The analysis of sentences from
93 languages representing 19 linguistic families reveals that half of the
languages are optimized to 70% or more. The score indicates that distances are not
significantly reduced in a few languages and confirms two theoretical
predictions, i.e. that longer sentences are more optimized and that distances
are more likely to be longer than expected by chance in short sentences. We
present a new hierarchical ranking of languages by their degree of
optimization. The new score has implications for various fields of language
research (dependency linguistics, typology, historical linguistics, clinical
linguistics and cognitive science). Finally, the principles behind the design
of the score have implications for network science.
| 2022 | Computation and Language |
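To make the optimization target concrete, here is a worked sketch of the summed dependency distance D of a sentence and its expectation under random word orders (estimated by sampling; the closed form for the expectation is a known result). The paper's score further normalizes by the minimum attainable D, which this sketch omits, so treat it as a partial illustration.

```python
# Summed dependency distance of a linear arrangement, plus its expectation
# under random word orders, for a 4-word toy sentence.
import random

def total_distance(heads, order):
    # heads[i] = index of the head of word i (None for the root);
    # order = a linear arrangement of the word indices
    pos = {w: p for p, w in enumerate(order)}
    return sum(abs(pos[i] - pos[h])
               for i, h in enumerate(heads) if h is not None)

# "big dogs chase cats": big->dogs, dogs->chase (root), cats->chase
heads = [1, 2, None, 2]
observed = total_distance(heads, order=[0, 1, 2, 3])
random_baseline = sum(
    total_distance(heads, random.sample(range(4), 4))
    for _ in range(20000)) / 20000
print(observed)         # 3
print(random_baseline)  # ~5, matching the closed form (n^2 - 1)/3 for n = 4
```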
Neural Modeling for Named Entities and Morphology (NEMO^2) | Named Entity Recognition (NER) is a fundamental NLP task, commonly formulated
as classification over a sequence of tokens. Morphologically-Rich Languages
(MRLs) pose a challenge to this basic formulation, as the boundaries of Named
Entities do not necessarily coincide with token boundaries, rather, they
respect morphological boundaries. To address NER in MRLs we then need to answer
two fundamental questions, namely, what are the basic units to be labeled, and
how can these units be detected and classified in realistic settings, i.e.,
where no gold morphology is available. We empirically investigate these
questions on a novel NER benchmark, with parallel token-level and morpheme-level
NER annotations, which we develop for Modern Hebrew, a morphologically
rich-and-ambiguous language. Our results show that explicitly modeling
morphological boundaries leads to improved NER performance, and that a novel
hybrid architecture, in which NER precedes and prunes morphological
decomposition, greatly outperforms the standard pipeline, where morphological
decomposition strictly precedes NER, setting a new performance bar for both
Hebrew NER and Hebrew morphological decomposition tasks.
| 2021 | Computation and Language |
COVID-19 therapy target discovery with context-aware literature mining | The abundance of literature related to the widespread COVID-19 pandemic is
beyond the manual inspection capacity of a single expert. Developing systems
capable of automatically processing tens of thousands of scientific
publications, with the aim of enriching existing empirical evidence with
literature-based associations, is challenging and relevant. We propose a system for contextualization of
empirical expression data by approximating relations between entities, for
which representations were learned from one of the largest COVID-19-related
literature corpora. In order to exploit a larger scientific context by transfer
learning, we propose a novel embedding generation technique that leverages
the SciBERT language model, pretrained on a large multi-domain corpus of
scientific publications and fine-tuned for domain adaptation on the CORD-19
dataset. Manual evaluation by a medical expert and quantitative evaluation
based on therapy targets identified in related work suggest that
the proposed method can be successfully employed for COVID-19 therapy target
discovery and that it outperforms the baseline FastText method by a large
margin.
| 2020 | Computation and Language |
The Unreasonable Effectiveness of Machine Learning in Moldavian versus
Romanian Dialect Identification | Motivated by the seemingly high accuracy levels of machine learning models in
Moldavian versus Romanian dialect identification and the increasing research
interest on this topic, we provide a follow-up on the Moldavian versus Romanian
Cross-Dialect Topic Identification (MRC) shared task of the VarDial 2019
Evaluation Campaign. The shared task included two sub-task types: one that
consisted in discriminating between the Moldavian and Romanian dialects and one
that consisted in classifying documents by topic across the two dialects of
Romanian. Participants achieved impressive scores, e.g. the top model for
Moldavian versus Romanian dialect identification obtained a macro F1 score of
0.895. We conduct a subjective evaluation by human annotators, showing that
humans attain much lower accuracy rates compared to machine learning (ML)
models. Hence, it remains unclear why the methods proposed by participants
attain such high accuracy rates. Our goal is to understand (i) why the proposed
methods work so well (by visualizing the discriminative features) and (ii) to
what extent these methods can keep their high accuracy levels, e.g. when we
shorten the text samples to single sentences or when we use tweets at inference
time. A secondary goal of our work is to propose an improved ML model using
ensemble learning. Our experiments show that ML models can accurately identify
the dialects, even at the sentence level and across different domains (news
articles versus tweets). We also analyze the most discriminative features of
the best performing models, providing some explanations behind the decisions
taken by these models. Interestingly, we learn new dialectal patterns
previously unknown to us or to our human annotators. Furthermore, we conduct
experiments showing that the machine learning performance on the MRC shared
task can be improved through an ensemble based on stacking.
| 2021 | Computation and Language |
Domain-Specific Language Model Pretraining for Biomedical Natural
Language Processing | Pretraining large neural language models, such as BERT, has led to impressive
gains on many natural language processing (NLP) tasks. However, most
pretraining efforts focus on general-domain corpora, such as newswire and the Web.
A prevailing assumption is that even domain-specific pretraining can benefit by
starting from general-domain language models. In this paper, we challenge this
assumption by showing that for domains with abundant unlabeled text, such as
biomedicine, pretraining language models from scratch results in substantial
gains over continual pretraining of general-domain language models. To
facilitate this investigation, we compile a comprehensive biomedical NLP
benchmark from publicly-available datasets. Our experiments show that
domain-specific pretraining serves as a solid foundation for a wide range of
biomedical NLP tasks, leading to new state-of-the-art results across the board.
Further, in conducting a thorough evaluation of modeling choices, both for
pretraining and task-specific fine-tuning, we discover that some common
practices are unnecessary with BERT models, such as using complex tagging
schemes in named entity recognition (NER). To help accelerate research in
biomedical NLP, we have released our state-of-the-art pretrained and
task-specific models for the community, and created a leaderboard featuring our
BLURB benchmark (short for Biomedical Language Understanding & Reasoning
Benchmark) at https://aka.ms/BLURB.
| 2021 | Computation and Language |
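A loading sketch for the released model through Hugging Face Transformers; the checkpoint name below is the one the authors published at the time of writing and should be verified before use.

```python
# Load the domain-specific pretrained encoder and embed a biomedical
# sentence; the checkpoint name is an assumption to verify.
from transformers import AutoModel, AutoTokenizer

name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("EGFR mutations predict response to gefitinib.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (1, seq_len, 768)
```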
Neural Language Generation: Formulation, Methods, and Evaluation | Recent advances in neural network-based generative modeling have reignited
hopes of computer systems capable of seamlessly conversing with humans and
understanding natural language. Neural architectures have been
employed to generate text excerpts to various degrees of success, in a
multitude of contexts and tasks that fulfil various user needs. Notably, high
capacity deep learning models trained on large scale datasets demonstrate
unparalleled abilities to learn patterns in the data even in the absence of
explicit supervision signals, opening up a plethora of new possibilities
regarding producing realistic and coherent texts. While the field of natural
language generation is evolving rapidly, there are still many open challenges
to address. In this survey we formally define and categorize the problem of
natural language generation. We review particular application tasks that are
instantiations of these general formulations, in which generating natural
language is of practical importance. Next we include a comprehensive outline of
methods and neural architectures employed for generating diverse texts.
Nevertheless, there is no standard way to assess the quality of text produced
by these generative models, which constitutes a serious bottleneck towards the
progress of the field. To this end, we also review current approaches to
evaluating natural language generation systems. We hope this survey will
provide an informative overview of formulations, methods, and assessments of
neural natural language generation.
| 2020 | Computation and Language |
Explainable Prediction of Text Complexity: The Missing Preliminaries for
Text Simplification | Text simplification reduces the language complexity of professional content
for accessibility purposes. End-to-end neural network models have been widely
adopted to directly generate the simplified version of input text, usually
functioning as a blackbox. We show that text simplification can be decomposed
into a compact pipeline of tasks to ensure the transparency and explainability
of the process. The first two steps in this pipeline are often neglected: 1) to
predict whether a given piece of text needs to be simplified, and 2) if yes, to
identify complex parts of the text. The two tasks can be solved separately
using either lexical or deep learning methods, or solved jointly. By simply
applying explainable complexity prediction as a preliminary step, the
out-of-sample text simplification performance of state-of-the-art
black-box simplification models can be improved by a large margin.
| 2021 | Computation and Language |
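A minimal sketch of the two neglected preliminary steps, using a toy lexical signal (word rarity against a tiny frequency list); the paper also explores deep learning variants, which this sketch does not cover.

```python
# Step 1: decide whether a text needs simplification; step 2: surface the
# complex parts as the explanation. The COMMON word list is a toy stand-in
# for a real frequency lexicon.
COMMON = {"the", "a", "is", "are", "dog", "runs", "fast", "of", "and"}

def complexity_score(sentence):
    words = sentence.lower().split()
    rare = [w for w in words if w not in COMMON]
    return len(rare) / max(1, len(words)), rare

def needs_simplification(sentence, threshold=0.3):
    score, rare = complexity_score(sentence)
    return score > threshold, rare   # (binary decision, complex parts)

print(needs_simplification("The dog runs fast"))
print(needs_simplification("Canine locomotion exhibits rapid velocity"))
```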
Improving NER's Performance with Massive financial corpus | Training large deep neural networks requires massive amounts of high-quality
annotated data, but the time and labor costs are too expensive for small
businesses. We start from a company-name recognition task with small-scale,
low-quality training data, then apply several techniques to improve model
training speed and prediction performance at minimal labor cost. The methods we
use include pre-training a lite language model, such as ALBERT-small or
ELECTRA-small, on a financial corpus, knowledge distillation, and multi-stage
learning. As a result, we raised the recall rate by nearly 20 points and
achieved inference 4 times faster than a BERT-CRF model.
| 2020 | Computation and Language |
Evaluating Automatically Generated Phoneme Captions for Images | Image2Speech is the relatively new task of generating a spoken description of
an image. This paper presents an investigation into the evaluation of this
task. For this, first an Image2Speech system was implemented which generates
image captions consisting of phoneme sequences. This system outperformed the
original Image2Speech system on the Flickr8k corpus. Subsequently, these
phoneme captions were converted into sentences of words. The captions were
rated by human evaluators for their goodness of describing the image. Finally,
several objective metric scores of the results were correlated with these human
ratings. Although BLEU4 does not perfectly correlate with human ratings, it
obtained the highest correlation among the investigated metrics, and is the
best currently existing metric for the Image2Speech task. Current metrics are
limited by the fact that they assume their input to be words. A more
appropriate metric for the Image2Speech task should assume its input to be
parts of words, i.e. phonemes, instead.
| 2020 | Computation and Language |
On Learning Universal Representations Across Languages | Recent studies have demonstrated the overwhelming advantage of cross-lingual
pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual
NLP tasks. However, existing approaches essentially capture the co-occurrence
among tokens through the masked language model (MLM) objective with
token-level cross entropy. In this work, we extend these approaches to learn
sentence-level representations and show the effectiveness on cross-lingual
understanding and generation. Specifically, we propose a Hierarchical
Contrastive Learning (HiCTL) method to (1) learn universal representations for
parallel sentences distributed in one or multiple languages and (2) distinguish
the semantically-related words from a shared cross-lingual vocabulary for each
sentence. We conduct evaluations on two challenging cross-lingual tasks, XTREME
and machine translation. Experimental results show that the HiCTL outperforms
the state-of-the-art XLM-R by an absolute gain of 4.2% accuracy on the XTREME
benchmark and achieves substantial improvements on both high-resource and
low-resource English-to-X translation tasks over strong
baselines.
| 2021 | Computation and Language |
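A sketch of the sentence-level contrastive term (an InfoNCE-style loss over parallel sentence pairs) illustrates the idea behind HiCTL; the paper's full method adds a word-level contrastive term not shown here.

```python
# InfoNCE over a batch of parallel sentence embeddings: the i-th pair is
# the positive, all other in-batch pairings serve as negatives.
import torch
import torch.nn.functional as F

def sentence_contrastive_loss(src_emb, tgt_emb, temperature=0.1):
    # src_emb, tgt_emb: (batch, dim) embeddings of parallel sentences
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature     # (batch, batch) similarities
    labels = torch.arange(src.size(0))       # diagonal = positives
    return F.cross_entropy(logits, labels)

loss = sentence_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```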
Word Embeddings: Stability and Semantic Change | Word embeddings are computed by a class of techniques within natural language
processing (NLP) that create continuous vector representations of words in a
language from a large text corpus. The stochastic nature of the training
process of most embedding techniques can lead to surprisingly strong
instability, i.e., subsequently applying the same technique to the same data
twice can produce entirely different results. In this work, we present an
experimental study on the instability of the training process of three of the
most influential embedding techniques of the last decade: word2vec, GloVe and
fastText. Based on the experimental results, we propose a statistical model to
describe the instability of embedding techniques and introduce a novel metric
to measure the instability of the representation of an individual word.
Finally, we propose a method to minimize the instability - by computing a
modified average over multiple runs - and apply it to a specific linguistic
problem: The detection and quantification of semantic change, i.e. measuring
changes in the meaning and usage of words over time.
| 2020 | Computation and Language |
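One simple way to quantify per-word instability across training runs is the overlap of a word's k nearest neighbors between two embedding runs; this is a common proxy, used here as an assumption, not the paper's own model-based metric.

```python
# Per-word instability as 1 - nearest-neighbor overlap between two runs:
# 0 means the neighborhoods agree perfectly, 1 means they are disjoint.
import numpy as np

def nearest_neighbors(emb, word_idx, k=10):
    v = emb[word_idx]
    sims = emb @ v / (np.linalg.norm(emb, axis=1) * np.linalg.norm(v))
    sims[word_idx] = -np.inf                  # exclude the word itself
    return set(np.argsort(sims)[-k:])

def instability(emb_a, emb_b, word_idx, k=10):
    nn_a = nearest_neighbors(emb_a, word_idx, k)
    nn_b = nearest_neighbors(emb_b, word_idx, k)
    return 1.0 - len(nn_a & nn_b) / k

rng = np.random.default_rng(0)
run1 = rng.normal(size=(1000, 50))            # stand-ins for two training runs
run2 = rng.normal(size=(1000, 50))
print(instability(run1, run2, word_idx=42))
```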
Exploring Swedish & English fastText Embeddings for NER with the
Transformer | In this paper, our main contributions are showing that embeddings from
relatively small corpora can outperform ones from larger corpora, and making a
new Swedish analogy test set publicly available. To achieve good network
performance in natural language processing (NLP) downstream tasks, several
factors play important roles: dataset size, the right hyper-parameters, and
well-trained embeddings. We show that, with the right set of hyper-parameters,
good network performance can be reached even on smaller datasets. We evaluate
the embeddings at both the intrinsic and extrinsic levels. The embeddings are
deployed with the Transformer in a named entity recognition (NER) task, and
significance tests are conducted. This is done for both Swedish and English. We
obtain better performance in both languages on the downstream task with smaller
training data, compared to the recently released Common Crawl versions, and
character n-grams appear useful for Swedish, a morphologically rich language.
| 2021 | Computation and Language |
Multi-task learning for natural language processing in the 2020s: where
are we going? | Multi-task learning (MTL) significantly pre-dates the deep learning era, and
it has seen a resurgence in the past few years as researchers have been
applying MTL to deep learning solutions for natural language tasks. While
steady MTL research has always been present, there is a growing interest driven
by the impressive successes published in the related fields of transfer
learning and pre-training, such as BERT, and the release of new challenge
problems, such as GLUE and the NLP Decathlon (decaNLP). These efforts place
more focus on how weights are shared across networks, evaluate the re-usability
of network components and identify use cases where MTL can significantly
outperform single-task solutions. This paper strives to provide a comprehensive
survey of the numerous recent MTL contributions to the field of natural
language processing and provide a forum to focus efforts on the hardest
unsolved problems in the next decade. While novel models that improve
performance on NLP benchmarks are continually produced, lasting MTL challenges
remain unsolved which could hold the key to better language understanding,
knowledge discovery and natural language interfaces.
| 2020 | Computation and Language |
Toward Givenness Hierarchy Theoretic Natural Language Generation | Language-capable interactive robots participating in dialogues with human
interlocutors must be able to naturally and efficiently communicate about the
entities in their environment. A key aspect of such communication is the use of
anaphoric language. The linguistic theory of the Givenness Hierarchy (GH)
suggests that humans use anaphora based on the cognitive statuses their
referents have in the minds of their interlocutors. In previous work,
researchers presented GH-theoretic approaches to robot anaphora understanding.
In this paper we describe how the GH might need to be used quite differently to
facilitate robot anaphora generation.
| 2020 | Computation and Language |
Exclusion and Inclusion -- A model agnostic approach to feature
importance in DNNs | Deep Neural Networks in NLP have enabled systems to learn complex non-linear
relationships. One of the major bottlenecks towards being able to use DNNs for
real world applications is their characterization as black boxes. To solve this
problem, we introduce a model agnostic algorithm which calculates phrase-wise
importance of input features. We contend that our method is generalizable to a
diverse set of tasks, by carrying out experiments for both Regression and
Classification. We also observe that our approach is robust to outliers,
implying that it only captures the essential aspects of the input.
| 2021 | Computation and Language |
Neural Machine Translation model for University Email Application | Machine translation has many applications, such as news translation, email
translation, and official letter translation. Commercial translators, e.g.,
Google Translate, lag in regional vocabulary and are unable to learn the
bilingual text in the source and target languages within the input. In this
paper, a regional vocabulary-based application-oriented Neural Machine
Translation (NMT) model is proposed over the data set of emails used at the
University for communication over a period of three years. A state-of-the-art
Sequence-to-Sequence Neural Network for ML -> EN and EN -> ML translations is
compared with Google Translate using a Gated Recurrent Unit Recurrent Neural
Network machine translation model with an attention decoder. The low BLEU score
of Google Translate in comparison to our model indicates that application-based
regional models are better. The low BLEU scores of both our model and Google
Translate on EN -> ML indicate that the Malay language has complex language
features relative to English.
| 2020 | Computation and Language |
Neural Composition: Learning to Generate from Multiple Models | Decomposing models into multiple components is critically important in many
applications such as language modeling (LM) as it enables adapting individual
components separately and biasing of some components to the user's personal
preferences. Conventionally, contextual and personalized adaptation of
language models is achieved through class-based factorization, which requires
class-annotated data, or through biasing toward individual phrases, which is
limited in scale. In this paper, we propose a system that combines model-defined
components, by learning when to activate the generation process from each
individual component, and how to combine probability distributions from each
component, directly from unlabeled text data.
| 2020 | Computation and Language |
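An illustrative sketch of combining probability distributions from multiple component LMs with a learned, context-dependent gate; the paper learns both when to activate each component and how to mix them from unlabeled text, which this toy module only gestures at.

```python
# Learned mixture over component LM distributions: a gate maps the context
# state to mixture weights, yielding one combined next-token distribution.
import torch
import torch.nn as nn

class MixtureLM(nn.Module):
    def __init__(self, hidden_dim, num_components=2):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_components)

    def forward(self, context_state, component_probs):
        # context_state: (batch, hidden); component_probs: (batch, C, vocab)
        weights = torch.softmax(self.gate(context_state), dim=-1)
        return (weights.unsqueeze(-1) * component_probs).sum(dim=1)

mix = MixtureLM(hidden_dim=16)
probs = torch.softmax(torch.randn(4, 2, 100), dim=-1)  # two component LMs
out = mix(torch.randn(4, 16), probs)
print(out.sum(dim=-1))   # each row sums to 1: a valid mixed distribution
```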
Robust Benchmarking for Machine Learning of Clinical Entity Extraction | Clinical studies often require understanding elements of a patient's
narrative that exist only in free text clinical notes. To transform notes into
structured data for downstream use, these elements are commonly extracted and
normalized to medical vocabularies. In this work, we audit the performance of
and indicate areas of improvement for state-of-the-art systems. We find that
high task accuracies for clinical entity normalization systems on the 2019 n2c2
Shared Task are misleading, and underlying performance is still brittle.
Normalization accuracy is high for common concepts (95.3%), but much lower for
concepts unseen in training data (69.3%). We demonstrate that current
approaches are hindered in part by inconsistencies in medical vocabularies,
limitations of existing labeling schemas, and narrow evaluation techniques. We
reformulate the annotation framework for clinical entity extraction to factor
in these issues to allow for robust end-to-end system benchmarking. We evaluate
concordance of annotations from our new framework between two annotators and
achieve a Jaccard similarity of 0.73 for entity recognition and an agreement of
0.83 for entity normalization. We propose a path forward to address the
demonstrated need for the creation of a reference standard to spur method
development in entity recognition and normalization.
| 2020 | Computation and Language |
Paying Per-label Attention for Multi-label Extraction from Radiology
Reports | Training medical image analysis models requires large amounts of expertly
annotated data which is time-consuming and expensive to obtain. Images are
often accompanied by free-text radiology reports which are a rich source of
information. In this paper, we tackle the automated extraction of structured
labels from head CT reports for imaging of suspected stroke patients, using
deep learning. Firstly, we propose a set of 31 labels which correspond to
radiographic findings (e.g. hyperdensity) and clinical impressions (e.g.
haemorrhage) related to neurological abnormalities. Secondly, inspired by
previous work, we extend existing state-of-the-art neural network models with a
label-dependent attention mechanism. Using this mechanism and simple synthetic
data augmentation, we are able to robustly extract many labels with a single
model, classified according to the radiologist's reporting (positive,
uncertain, negative). This approach can be used in further research to
effectively extract many labels from medical text.
| 2020 | Computation and Language |
SimulEval: An Evaluation Toolkit for Simultaneous Translation | Simultaneous translation on both text and speech focuses on a real-time and
low-latency scenario where the model starts translating before reading the
complete source input. Evaluating simultaneous translation models is more
complex than offline models because the latency is another factor to consider
in addition to translation quality. The research community, despite its growing
focus on novel modeling approaches to simultaneous translation, currently lacks
a universal evaluation procedure. Therefore, we present SimulEval, an
easy-to-use and general evaluation toolkit for both simultaneous text and
speech translation. A server-client scheme is introduced to create a
simultaneous translation scenario, where the server sends source input and
receives predictions for evaluation and the client executes customized
policies. Given a policy, it automatically performs simultaneous decoding and
collectively reports several popular latency metrics. We also adapt latency
metrics from text simultaneous translation to the speech task. Additionally,
SimulEval is equipped with a visualization interface to provide better
understanding of the simultaneous decoding process of a system. SimulEval has
already been extensively used for the IWSLT 2020 shared task on simultaneous
speech translation. Code will be released upon publication.
| 2020 | Computation and Language |
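One of the popular latency metrics such a toolkit reports is Average Lagging (AL); the sketch below follows AL's standard definition from the simultaneous translation literature and is not SimulEval's own code.

```python
# Average Lagging: mean gap between the number of source tokens read and
# the number an ideal, perfectly paced translator would have read.
def average_lagging(delays, src_len, tgt_len):
    # delays[t] = number of source tokens read before emitting target t+1
    gamma = tgt_len / src_len
    tau = next((t + 1 for t, d in enumerate(delays) if d >= src_len),
               tgt_len)                       # first step with full source
    return sum(delays[t] - t / gamma for t in range(tau)) / tau

# wait-3 policy on a 10-token source, 10-token target
delays = [min(3 + t, 10) for t in range(10)]
print(average_lagging(delays, src_len=10, tgt_len=10))   # -> 3.0
```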
Sentiment Analysis based Multi-person Multi-criteria Decision Making
Methodology using Natural Language Processing and Deep Learning for Smarter
Decision Aid. Case study of restaurant choice using TripAdvisor reviews | Decision making models are constrained by taking the expert evaluations with
pre-defined numerical or linguistic terms. We claim that the use of sentiment
analysis will allow decision making models to consider expert evaluations in
natural language. Accordingly, we propose the Sentiment Analysis based
Multi-person Multi-criteria Decision Making (SA-MpMcDM) methodology for smarter
decision aid, which builds the expert evaluations from their natural language
reviews, and even from their numerical ratings if they are available. The
SA-MpMcDM methodology incorporates an end-to-end multi-task deep learning model
for aspect-based sentiment analysis, named DOC-ABSADeepL, which is able to
identify the aspect categories mentioned in an expert review and to distill
their opinions and criteria. The individual evaluations are aggregated via the
procedure named criteria weighting through the attention of the experts. We
evaluate the methodology in a case study of restaurant choice using TripAdvisor
reviews, hence we build, manually annotate, and release the TripR-2020 dataset
of restaurant reviews. We analyze the SA-MpMcDM methodology in different
scenarios using and not using natural language and numerical evaluations. The
analysis shows that the combination of both sources of information results in a
higher quality preference vector.
| 2020 | Computation and Language |
TweepFake: about Detecting Deepfake Tweets | The recent advances in language modeling significantly improved the
generative capabilities of deep neural models: in 2019 OpenAI released GPT-2, a
pre-trained language model that can autonomously generate coherent, non-trivial
and human-like text samples. Since then, ever more powerful text generative
models have been developed. Adversaries can exploit these tremendous generative
capabilities to enhance social bots that will have the ability to write
plausible deepfake messages, hoping to contaminate public debate. To prevent
this, it is crucial to develop deepfake social media messages detection
systems. However, to the best of our knowledge no one has ever addressed the
detection of machine-generated texts on social networks like Twitter or
Facebook. With the aim of helping the research in this detection field, we
collected the first dataset of real deepfake tweets, TweepFake. It is real in
the sense that each deepfake tweet was actually posted on Twitter. We collected
tweets from a total of 23 bots, imitating 17 human accounts. The bots are based
on various generation techniques, i.e., Markov Chains, RNN, RNN+Markov, LSTM,
GPT-2. We also randomly selected tweets from the humans imitated by the bots to
have an overall balanced dataset of 25,572 tweets (half human and half bot
generated). The dataset is publicly available on Kaggle. Lastly, we evaluated
13 deepfake text detection methods (based on various state-of-the-art
approaches) to both demonstrate the challenges that Tweepfake poses and create
a solid baseline of detection techniques. We hope that TweepFake can offer the
opportunity to tackle the deepfake detection on social media messages as well.
| 2021 | Computation and Language |
The test set for the TransCoder system | The TransCoder system translates source code between Java, C++, and Python 3.
The test set that was used to evaluate its quality is missing important
features of Java, including the ability to define and use classes and the
ability to call user-defined functions other than recursively. Therefore, the
accuracy of TransCoder over programs with those features remains unknown.
| 2020 | Computation and Language |
SemEval-2020 Task 7: Assessing Humor in Edited News Headlines | This paper describes the SemEval-2020 shared task "Assessing Humor in Edited
News Headlines." The task's dataset contains news headlines in which short
edits were applied to make them funny, and the funniness of these edited
headlines was rated using crowdsourcing. This task includes two subtasks, the
first of which is to estimate the funniness of headlines on a humor scale in
the interval 0-3. The second subtask is to predict, for a pair of edited
versions of the same original headline, which is the funnier version. To date,
this task is the most popular shared computational humor task, attracting 48
teams for the first subtask and 31 teams for the second.
| 2020 | Computation and Language |
Extracting actionable information from microtexts | Microblogs such as Twitter represent a powerful source of information. Part
of this information can be aggregated beyond the level of individual posts.
Some of this aggregated information is referring to events that could or should
be acted upon in the interest of e-governance, public safety, or other levels
of public interest. Moreover, a significant amount of this information, if
aggregated, could complement existing information networks in a non-trivial
way. This dissertation proposes a semi-automatic method for extracting
actionable information that serves this purpose. First, we show that predicting
time to event is possible for both in-domain and cross-domain scenarios.
Second, we suggest a method which facilitates the definition of relevance for
an analyst's context and the use of this definition to analyze new data.
Finally, we propose a method to integrate the machine learning based relevant
information classification method with a rule-based information classification
technique to classify microtexts. Fully automating microtext analysis has
been our goal since the first day of this research project. Our efforts in this
direction informed us about the extent to which this automation can be
realized. We first developed an automated approach, then extended and improved it
by integrating human intervention at various steps of the automated approach.
Our experience confirms previous work that states that a well-designed human
intervention or contribution in design, realization, or evaluation of an
information system either improves its performance or enables its realization.
As our studies and results directed us toward its necessity and value, we were
inspired by previous studies in designing human involvement and customized
our approaches to benefit from human input.
| 2020 | Computation and Language |
Overview of CLEF 2019 Lab ProtestNews: Extracting Protests from News in
a Cross-context Setting | We present an overview of the CLEF-2019 Lab ProtestNews on Extracting
Protests from News in the context of generalizable natural language processing.
The lab consists of document, sentence, and token level information
classification and extraction tasks that were referred to as task 1, task 2, and
task 3 respectively in the scope of this lab. The tasks required the
participants to identify protest relevant information from English local news
at one or more aforementioned levels in a cross-context setting, which is
cross-country in the scope of this lab. The training and development data were
collected from India, and test data was collected from India and China. The lab
attracted 58 teams, 12 of which submitted results and 9 of which submitted
working notes. We observed that neural networks yield the best results and
that performance drops significantly for the majority of submissions in the
cross-country setting, i.e., on the Chinese test data.
| 2020 | Computation and Language |
Cross-context News Corpus for Protest Events related Knowledge Base
Construction | We describe a gold standard corpus of protest events that comprises
English-language news from various local and international sources in multiple
countries. The corpus
contains document, sentence, and token level annotations. This corpus
facilitates creating machine learning models that automatically classify news
articles and extract protest event-related information, supporting the
construction of knowledge bases that enable comparative social and political
science studies. For each
news source, the annotation starts on random samples of news articles and
continues with samples that are drawn using active learning. Each batch of
samples was annotated by two social and political scientists, adjudicated by an
annotation supervisor, and was improved by identifying annotation errors
semi-automatically. We found that the corpus has the variety and quality to
develop and benchmark text classification and event extraction systems in a
cross-context setting, which contributes to the generalizability and robustness
of automated text processing systems. This corpus and the reported results will
establish the currently missing common ground in automated protest event
collection studies.
| 2020 | Computation and Language |
A Survey on Text Classification: From Shallow to Deep Learning | Text classification is one of the most fundamental and essential tasks in natural language processing. The last decade has seen a surge of research in this area due to the unprecedented success of deep learning. Numerous methods, datasets, and evaluation metrics have been proposed in the literature, raising the need for a comprehensive and updated survey. This paper fills the gap by reviewing state-of-the-art approaches from 1961 to 2021, covering models from traditional approaches to deep learning. We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification. We then discuss each of these categories in detail, dealing with both the technical developments and the benchmark datasets that support tests of predictions. A comprehensive comparison of different techniques, together with the pros and cons of various evaluation metrics, is also provided. Finally, we conclude by summarizing key implications, future research directions, and the challenges facing the research area.
| 2,021 | Computation and Language |
Multilingual Translation with Extensible Multilingual Pretraining and
Finetuning | Recent work demonstrates the potential of multilingual pretraining to create one model that can be used for various tasks in different languages.
Previous work in multilingual pretraining has demonstrated that machine
translation systems can be created by finetuning on bitext. In this work, we
show that multilingual translation models can be created through multilingual
finetuning. Instead of finetuning on one direction, a pretrained model is
finetuned on many directions at the same time. Compared to multilingual models
trained from scratch, starting from pretrained models incorporates the benefits
of large quantities of unlabeled monolingual data, which is particularly
important for low resource languages where bitext is not available. We
demonstrate that pretrained models can be extended to incorporate additional
languages without loss of performance. We double the number of languages in
mBART to support multilingual machine translation models of 50 languages.
Finally, we create the ML50 benchmark, covering low, mid, and high resource
languages, to facilitate reproducible research by standardizing training and
evaluation data. On ML50, we demonstrate that multilingual finetuning improves by 1 BLEU on average over the strongest baselines (either multilingual from scratch or bilingual finetuning), and by 9.3 BLEU on average over bilingual baselines trained from scratch.
| 2,020 | Computation and Language |
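
A minimal usage sketch for the mBART-50 release described above (our own illustration, not the authors' training code): the publicly released many-to-many checkpoint can be queried through HuggingFace Transformers, with source and target languages selected via language codes.

```python
# Minimal translation sketch with the released mBART-50 checkpoint (illustrative only).
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50-many-to-many-mmt"
model = MBartForConditionalGeneration.from_pretrained(model_name)
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)

tokenizer.src_lang = "en_XX"                      # source language code
batch = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **batch,
    forced_bos_token_id=tokenizer.lang_code_to_id["de_DE"],  # target language
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```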
Relation Extraction with Self-determined Graph Convolutional Network | Relation Extraction is a way of obtaining the semantic relationship between
entities in text. The state-of-the-art methods use linguistic tools to build a
graph for the text in which the entities appear and then a Graph Convolutional
Network (GCN) is employed to encode the pre-built graphs. Although their
performance is promising, the reliance on linguistic tools results in a non-end-to-end process. In this work, we propose a novel model, the Self-determined Graph Convolutional Network (SGCN), which determines a weighted graph using a self-attention mechanism rather than any linguistic tool. Then, the
self-determined graph is encoded using a GCN. We test our model on the TACRED
dataset and achieve the state-of-the-art result. Our experiments show that SGCN
outperforms the traditional GCN, which uses dependency parsing tools to build
the graph.
| 2,020 | Computation and Language |
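
A minimal sketch of the "self-determined graph" idea as we read the abstract (an assumption about the architecture, not the authors' code): token representations attend to one another to produce a weighted adjacency matrix, which a single GCN layer then uses in place of a dependency parse.

```python
# Self-attention builds a soft adjacency matrix; one GCN layer consumes it (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDeterminedGCN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.gcn = nn.Linear(dim, dim)              # GCN weight W

    def forward(self, h):                           # h: (batch, seq_len, dim)
        q, k = self.query(h), self.key(h)
        scores = q @ k.transpose(-2, -1) / h.size(-1) ** 0.5
        adj = torch.softmax(scores, dim=-1)          # weighted graph over tokens
        return F.relu(self.gcn(adj @ h))             # A.H.W: one GCN layer

x = torch.randn(2, 10, 64)
print(SelfDeterminedGCN(64)(x).shape)  # torch.Size([2, 10, 64])
```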
Investigating the Effect of Emoji in Opinion Classification of Uzbek
Movie Review Comments | Opinion mining on social media posts has become increasingly popular. Users often express their opinion on a topic not only with words but also with image symbols such as emoticons and emoji. In this paper, we investigate the
effect of emoji-based features in opinion classification of Uzbek texts, and
more specifically movie review comments from YouTube. Several classification
algorithms are tested, and feature ranking is performed to evaluate the
discriminative ability of the emoji-based features.
| 2,020 | Computation and Language |
Video Question Answering on Screencast Tutorials | This paper presents a new video question answering task on screencast
tutorials. We introduce a dataset including question, answer and context
triples from the tutorial videos for a software product. Unlike other video question answering work, all the answers in our dataset are grounded in the domain knowledge base. A one-shot recognition algorithm is designed to extract the visual cues, which helps enhance the performance of video question answering. We also propose several baseline neural network architectures based on various aspects of video contexts from the dataset. The experimental results demonstrate that our proposed models significantly improve question answering performance by incorporating multi-modal contexts and domain knowledge.
| 2,020 | Computation and Language |
SemEval-2020 Task 5: Counterfactual Recognition | We present a counterfactual recognition (CR) task, the shared Task 5 of
SemEval-2020. Counterfactuals describe potential outcomes (consequents)
produced by actions or circumstances that did not happen or cannot happen and
are counter to the facts (antecedent). Counterfactual thinking is an important
characteristic of the human cognitive system; it connects antecedents and
consequents with causal relations. Our task provides a benchmark for
counterfactual recognition in natural language with two subtasks. Subtask-1
aims to determine whether a given sentence is a counterfactual statement or
not. Subtask-2 requires the participating systems to extract the antecedent and
consequent in a given counterfactual statement. During the SemEval-2020
official evaluation period, we received 27 submissions to Subtask-1 and 11 to
Subtask-2. The data, baseline code, and leaderboard can be found at
https://competitions.codalab.org/competitions/21691. The data and baseline code
are also available at https://zenodo.org/record/3932442.
| 2,020 | Computation and Language |
Deep Learning based Topic Analysis on Financial Emerging Event Tweets | Financial analyses of stock markets rely heavily on quantitative approaches in an attempt to predict subsequent market movements based on historical prices and other measurable metrics. These quantitative analyses may miss un-quantifiable aspects, like sentiment and speculation, that also impact the market. Analyzing vast amounts of qualitative text data to understand public opinion on social media platforms is one approach to address this gap. This work carried out topic analysis on 28,264 financial tweets [1] via clustering to discover emerging events in the stock market. Three main topics were found to be discussed frequently within the period. First, the financial ratio EPS was discussed frequently by investors. Second, short selling of shares was discussed heavily, often together with Morgan Stanley. Third, the oil and energy sectors were often discussed together with policy. The tweets were semantically clustered by a method that uses the word2vec algorithm to obtain word embeddings mapping words to vectors, from which semantic word clusters were formed. Each tweet was then vectorized using the Term Frequency-Inverse Document Frequency (TF-IDF) values of its words and the clusters those words belonged to. Tweet vectors were then converted to compressed representations by training a deep autoencoder, and K-means clusters were formed. This method reduces dimensionality and produces dense vectors, in contrast to the usual Vector Space Model. Topic modelling with Latent Dirichlet Allocation (LDA) and top frequent words were used to analyze the clusters and reveal emerging events.
| 2,020 | Computation and Language |
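
A schematic sketch of the clustering pipeline described above (dimensions, data, and hyper-parameters are illustrative, not those of the paper): sparse tweet vectors are compressed by a small deep autoencoder, and K-means is run on the dense codes.

```python
# Toy pipeline: TF-IDF tweet vectors -> autoencoder compression -> K-means (sketch).
import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["EPS beat expectations this quarter",
          "short selling pressure on Morgan Stanley",
          "oil and energy policy moves markets"]

X = TfidfVectorizer().fit_transform(tweets).toarray()
X = torch.tensor(X, dtype=torch.float32)
d_in, d_code = X.shape[1], 2

encoder = nn.Sequential(nn.Linear(d_in, 8), nn.ReLU(), nn.Linear(8, d_code))
decoder = nn.Sequential(nn.Linear(d_code, 8), nn.ReLU(), nn.Linear(8, d_in))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

for _ in range(200):                       # train the autoencoder to reconstruct X
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

codes = encoder(X).detach().numpy()        # compressed tweet representations
print(KMeans(n_clusters=2, n_init=10).fit_predict(codes))
```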
Elsevier OA CC-By Corpus | We introduce the Elsevier OA CC-BY corpus. This is the first open corpus of scientific research papers with a representative sample from across scientific disciplines. The corpus includes not only the full text of each article but also document metadata, along with the bibliographic information for each reference.
| 2,020 | Computation and Language |
LT@Helsinki at SemEval-2020 Task 12: Multilingual or language-specific
BERT? | This paper presents the different models submitted by the LT@Helsinki team
for the SemEval 2020 Shared Task 12. Our team participated in sub-tasks A and C, titled offensive language identification and offense target identification, respectively. In both cases we used Bidirectional Encoder Representations from Transformers (BERT), a model pre-trained by Google and fine-tuned by us on the OLID and SOLID datasets. The results show that
offensive tweet classification is one of several language-based tasks where
BERT can achieve state-of-the-art results.
| 2,020 | Computation and Language |
Predicting the Humorousness of Tweets Using Gaussian Process Preference
Learning | Most humour processing systems to date make at best discrete, coarse-grained
distinctions between the comical and the conventional, yet such notions are
better conceptualized as a broad spectrum. In this paper, we present a
probabilistic approach, a variant of Gaussian process preference learning
(GPPL), that learns to rank and rate the humorousness of short texts by
exploiting human preference judgments and automatically sourced linguistic
annotations. We apply our system, which is similar to one that had previously
shown good performance on English-language one-liners annotated with pairwise
humorousness annotations, to the Spanish-language data set of the
HAHA@IberLEF2019 evaluation campaign. We report system performance for the
campaign's two subtasks, humour detection and funniness score prediction, and
discuss some issues arising from the conversion between the numeric scores used
in the HAHA@IberLEF2019 data and the pairwise judgment annotations required for
our method.
| 2,020 | Computation and Language |
Interactive Text Graph Mining with a Prolog-based Dialog Engine | On top of a neural network-based dependency parser and a graph-based natural
language processing module, we design a Prolog-based dialog engine that interactively explores a ranked fact database extracted from a text document.
We reorganize dependency graphs to focus on the most relevant content
elements of a sentence and integrate sentence identifiers as graph nodes.
Additionally, after ranking the graph we take advantage of the implicit
semantic information that dependency links and WordNet bring in the form of
subject-verb-object, is-a and part-of relations.
Working on the Prolog facts and their inferred consequences, the dialog
engine specializes the text graph with respect to a query and reveals
interactively the document's most relevant content elements.
The open-source code of the integrated system is available at
https://github.com/ptarau/DeepRank .
Under consideration in Theory and Practice of Logic Programming (TPLP).
| 2,021 | Computation and Language |
An improved Bayesian TRIE based model for SMS text normalization | Normalization of SMS text, commonly known as texting language, has been pursued for more than a decade. A probabilistic approach based on the Trie data structure was proposed in the literature and found to outperform earlier HMM-based approaches in predicting the correct alternative for an out-of-lexicon word. However, the success of the Trie-based approach depends largely on how correctly the underlying probabilities of word occurrences are estimated. In this work we propose a structural modification to the existing Trie-based model along with a novel training algorithm and probability generation scheme. We prove two theorems on the statistical properties of the proposed Trie and use them to show that it is an unbiased and consistent estimator of the occurrence probabilities of the words. We further fuse our model into the paradigm of noisy-channel-based error correction and provide a heuristic to go beyond a Damerau-Levenshtein distance of one. We also run simulations to support our claims and show the superiority of the proposed scheme over previous work.
| 2,020 | Computation and Language |
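
A minimal sketch of a count-based Trie for word-occurrence probabilities (the paper's training algorithm and probability generation scheme are more elaborate): each node accumulates counts, from which relative-frequency estimates are derived.

```python
# Trie with per-node counts; probability() returns a relative-frequency estimate.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.count = 0            # how many inserted words end at this node

class Trie:
    def __init__(self):
        self.root = TrieNode()
        self.total = 0

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.count += 1
        self.total += 1

    def probability(self, word):
        node = self.root
        for ch in word:
            if ch not in node.children:
                return 0.0
            node = node.children[ch]
        return node.count / self.total

trie = Trie()
for w in ["good", "good", "gud", "night"]:
    trie.insert(w)
print(trie.probability("good"))  # 0.5
```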
NLPDove at SemEval-2020 Task 12: Improving Offensive Language Detection
with Cross-lingual Transfer | This paper describes our approach to the task of identifying offensive language in a multilingual setting. We investigate two data augmentation
strategies: using additional semi-supervised labels with different thresholds
and cross-lingual transfer with data selection. Leveraging the semi-supervised
dataset resulted in performance improvements compared to the baseline trained
solely with the manually-annotated dataset. We propose a new metric,
Translation Embedding Distance, to measure the transferability of instances for
cross-lingual data selection. We also introduce various preprocessing steps
tailored for social media text along with methods to fine-tune the pre-trained
multilingual BERT (mBERT) for offensive language identification. Our
multilingual systems achieved competitive results in Greek, Danish, and Turkish
at OffensEval 2020.
| 2,020 | Computation and Language |
Reliable Part-of-Speech Tagging of Historical Corpora through Set-Valued
Prediction | Syntactic annotation of corpora in the form of part-of-speech (POS) tags is a
key requirement for both linguistic research and subsequent automated natural
language processing (NLP) tasks. This problem is commonly tackled using machine
learning methods, i.e., by training a POS tagger on a sufficiently large corpus
of labeled data. While the problem of POS tagging can essentially be considered
as solved for modern languages, historical corpora turn out to be much more
difficult, especially due to the lack of native speakers and sparsity of
training data. Moreover, most historical texts lack sentence boundaries as we know them today, as well as a common orthography. These irregularities render the task of automated POS
tagging more difficult and error-prone. Under these circumstances, instead of
forcing the POS tagger to predict and commit to a single tag, it should be
enabled to express its uncertainty. In this paper, we consider POS tagging
within the framework of set-valued prediction, which allows the POS tagger to
express its uncertainty via predicting a set of candidate POS tags instead of
guessing a single one. The goal is to guarantee a high confidence that the
correct POS tag is included while keeping the number of candidates small. In
our experimental study, we find that extending state-of-the-art POS taggers to
set-valued prediction yields more precise and robust taggings, especially for
unknown words, i.e., words not occurring in the training data.
| 2,021 | Computation and Language |
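
A minimal sketch of one natural set-valued prediction rule (an illustration, not necessarily the exact rule of the paper): return the smallest set of candidate tags whose cumulative posterior probability reaches a confidence threshold, so the tagger expresses uncertainty instead of committing to a single tag.

```python
# Smallest tag set whose cumulative posterior mass reaches the threshold (sketch).
def set_valued_tags(posterior, threshold=0.9):
    """posterior: dict mapping POS tag -> probability (summing to 1)."""
    ranked = sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)
    chosen, mass = [], 0.0
    for tag, p in ranked:
        chosen.append(tag)
        mass += p
        if mass >= threshold:
            break
    return chosen

print(set_valued_tags({"NOUN": 0.55, "VERB": 0.30, "ADJ": 0.10, "ADV": 0.05}))
# ['NOUN', 'VERB', 'ADJ'] -> cumulative mass 0.95 >= 0.9
```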
A Survey of Orthographic Information in Machine Translation | Machine translation is one of the applications of natural language processing
which has been explored in different languages. Recently, researchers have started paying attention to machine translation for resource-poor languages and
closely related languages. A widespread and underlying problem for these
machine translation systems is the variation in orthographic conventions which
causes many issues for traditional approaches. Two languages written in two
different orthographies are not easily comparable, but orthographic information
can also be used to improve the machine translation system. This article offers
a survey of research regarding orthography's influence on machine translation
of under-resourced languages. It introduces under-resourced languages in terms
of machine translation and how orthographic information can be utilised to
improve machine translation. We describe previous work in this area, discussing
what underlying assumptions were made, and showing how orthographic knowledge
improves the performance of machine translation of under-resourced languages.
We discuss different types of machine translation and demonstrate a recent
trend that seeks to link orthographic information with well-established machine
translation methods. Considerable attention is given to current efforts to use cognate information at different levels of machine translation and the lessons
that can be drawn from this. Additionally, multilingual neural machine
translation of closely related languages is given a particular focus in this
survey. This article ends with a discussion of the way forward in machine
translation with orthographic information, focusing on multilingual settings
and bilingual lexicon induction.
| 2,021 | Computation and Language |
Prompt Agnostic Essay Scorer: A Domain Generalization Approach to
Cross-prompt Automated Essay Scoring | Cross-prompt automated essay scoring (AES) requires the system to use non-target-prompt essays to award scores to a target-prompt essay. Since obtaining a large quantity of pre-graded essays for a particular prompt is often difficult
and unrealistic, the task of cross-prompt AES is vital for the development of
real-world AES systems, yet it remains an under-explored area of research.
Models designed for prompt-specific AES rely heavily on prompt-specific
knowledge and perform poorly in the cross-prompt setting, whereas current
approaches to cross-prompt AES either require a certain quantity of labelled
target-prompt essays or require a large quantity of unlabelled target-prompt
essays to perform transfer learning in a multi-step manner. To address these
issues, we introduce Prompt Agnostic Essay Scorer (PAES) for cross-prompt AES.
Our method requires no access to labelled or unlabelled target-prompt data
during training and is a single-stage approach. PAES is easy to apply in
practice and achieves state-of-the-art performance on the Automated Student
Assessment Prize (ASAP) dataset.
| 2,020 | Computation and Language |
Taking Notes on the Fly Helps BERT Pre-training | How to make unsupervised language pre-training more efficient and less
resource-intensive is an important research direction in NLP. In this paper, we
focus on improving the efficiency of language pre-training methods through
providing better data utilization. It is well-known that in language data
corpus, words follow a heavy-tailed distribution. A large proportion of words appear only a few times, and the embeddings of rare words are usually poorly
optimized. We argue that such embeddings carry inadequate semantic signals,
which could make the data utilization inefficient and slow down the
pre-training of the entire model. To mitigate this problem, we propose Taking
Notes on the Fly (TNF), which takes notes for rare words on the fly during
pre-training to help the model understand them when they occur next time.
Specifically, TNF maintains a note dictionary and saves a rare word's
contextual information in it as notes when the rare word occurs in a sentence.
When the same rare word occurs again during training, the note information
saved beforehand can be employed to enhance the semantics of the current
sentence. By doing so, TNF provides better data utilization since
cross-sentence information is employed to cover the inadequate semantics caused
by rare words in the sentences. We implement TNF on both BERT and ELECTRA to
check its efficiency and effectiveness. Experimental results show that TNF's training time is $60\%$ less than that of its backbone pre-training models when reaching the same performance. When trained with the same number of iterations, TNF outperforms its backbone methods on most downstream tasks and on the
average GLUE score. Source code is attached in the supplementary material.
| 2,021 | Computation and Language |
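
A minimal sketch of TNF's note dictionary as we read the abstract (the exact update rule is our assumption): when a rare word occurs, its note is refreshed with a moving average of the surrounding context embedding, and the saved note can enrich later occurrences.

```python
# Note dictionary for rare words: exponential-moving-average context notes (sketch).
import torch

class NoteDictionary:
    def __init__(self, dim, gamma=0.9):
        self.notes = {}           # rare word -> note vector
        self.dim, self.gamma = dim, gamma

    def update(self, word, context_embedding):
        old = self.notes.get(word, torch.zeros(self.dim))
        self.notes[word] = self.gamma * old + (1 - self.gamma) * context_embedding

    def lookup(self, word):
        return self.notes.get(word)

notes = NoteDictionary(dim=4)
notes.update("mistral", torch.tensor([1.0, 0.0, 0.0, 0.0]))  # first occurrence
notes.update("mistral", torch.tensor([0.0, 1.0, 0.0, 0.0]))  # second occurrence
print(notes.lookup("mistral"))   # blended contextual note for the rare word
```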
SARG: A Novel Semi Autoregressive Generator for Multi-turn Incomplete
Utterance Restoration | Open-domain dialogue systems have achieved great success due to easily obtained single-turn corpora and the development of deep learning, but the
multi-turn scenario is still a challenge because of the frequent coreference
and information omission. In this paper, we investigate the incomplete
utterance restoration which has brought general improvement over multi-turn
dialogue systems in recent studies. Meanwhile, jointly inspired by the
autoregression for text generation and the sequence labeling for text editing,
we propose a novel semi-autoregressive generator (SARG) with high efficiency and flexibility. Moreover, experiments on two benchmarks show that
our proposed model significantly outperforms the state-of-the-art models in
terms of quality and inference speed.
| 2,020 | Computation and Language |
Predicting Multiple ICD-10 Codes from Brazilian-Portuguese Clinical
Notes | ICD coding from electronic clinical records is a manual, time-consuming and
expensive process. Code assignment is, however, an important task for billing
purposes and database organization. While many works have studied the problem
of automated ICD coding from free text using machine learning techniques, most
use records in the English language, especially from the MIMIC-III public
dataset. This work presents results for a dataset with Brazilian Portuguese
clinical notes. We develop and optimize a Logistic Regression model, a
Convolutional Neural Network (CNN), a Gated Recurrent Unit Neural Network and a
CNN with Attention (CNN-Att) for prediction of diagnosis ICD codes. We also
report our results for the MIMIC-III dataset, which outperform previous work
among models of the same families, as well as the state of the art. Compared to
MIMIC-III, the Brazilian Portuguese dataset contains far fewer words per
document when only discharge summaries are used. We experiment with concatenating additional documents available in this dataset, achieving a large boost in
performance. The CNN-Att model achieves the best results on both datasets, with
micro-averaged F1 score of 0.537 on MIMIC-III and 0.485 on our dataset with
additional documents.
| 2,020 | Computation and Language |
A System for Worldwide COVID-19 Information Aggregation | The global pandemic of COVID-19 has made the public pay close attention to
related news, covering various domains, such as sanitation, treatment, and
effects on education. Meanwhile, the COVID-19 situation differs greatly among countries (e.g., in policies and the development of the epidemic), and thus citizens are also interested in news from foreign countries. We build a system
for worldwide COVID-19 information aggregation containing reliable articles
from 10 regions in 7 languages sorted by topics. Our reliable COVID-19 related
website dataset collected through crowdsourcing ensures the quality of the
articles. A neural machine translation module translates articles in other
languages into Japanese and English. A BERT-based topic classifier trained on our article-topic pair dataset helps users efficiently find the information they are interested in by sorting articles into different categories.
| 2,020 | Computation and Language |
A Study on Effects of Implicit and Explicit Language Model Information
for DBLSTM-CTC Based Handwriting Recognition | Deep Bidirectional Long Short-Term Memory (DBLSTM) with a Connectionist
Temporal Classification (CTC) output layer has been established as one of the
state-of-the-art solutions for handwriting recognition. It is well known that
the DBLSTM trained by using a CTC objective function will learn both local
character image dependency for character modeling and long-range contextual
dependency for implicit language modeling. In this paper, we study the effects
of implicit and explicit language model information for DBLSTM-CTC based
handwriting recognition by comparing the performance of decoding with and without an explicit language model. It is observed that even when using one
million lines of training sentences to train the DBLSTM, using an explicit
language model is still helpful. To deal with such a large-scale training
problem, a GPU-based training tool has been developed for CTC training of
DBLSTM by using a mini-batch based epochwise Back Propagation Through Time
(BPTT) algorithm.
| 2,020 | Computation and Language |
Writer Identification Using Microblogging Texts for Social Media
Forensics | Establishing the authorship of online texts is fundamental to combating cybercrime. Unfortunately, text length is limited on some platforms, making the challenge harder. We aim to identify the authorship of Twitter messages limited to 140
characters. We evaluate popular stylometric features, widely used in literary
analysis, and specific Twitter features like URLs, hashtags, replies or quotes.
We use two databases with 93 and 3957 authors, respectively. We test varying
sized author sets and varying amounts of training/test texts per author.
Performance is further improved by feature combination via automatic selection.
With a large number of training Tweets (>500), a good accuracy (Rank-5>80%) is
achievable with only a few dozen test Tweets, even with several thousand authors. With smaller sample sizes (10-20 training Tweets), the search space
can be diminished by 9-15% while keeping a high chance that the correct author
is retrieved among the candidates. In such cases, automatic attribution can
provide significant time savings to experts in suspect search. For
completeness, we report verification results. With few training/test Tweets,
the EER is above 20-25%, which is reduced to < 15% if hundreds of training
Tweets are available. We also quantify the computational complexity and time
permanence of the employed features.
| 2,021 | Computation and Language |
Weighted Accuracy Algorithmic Approach In Counteracting Fake News And
Disinformation | As the world is becoming more dependent on the internet for information
exchange, some overzealous journalists, hackers, bloggers, individuals and
organizations tend to abuse the gift of a free information environment by
polluting it with fake news, disinformation and pretentious content for their
own agenda. Hence, there is the need to address the issue of fake news and
disinformation with utmost seriousness. This paper proposes a methodology for
fake news detection and reporting through a constraint mechanism that utilizes
the combined weighted accuracies of four machine learning algorithms.
| 2,021 | Computation and Language |
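
A minimal sketch of a weighted-accuracy ensemble in the spirit of the abstract above (the paper's constraint mechanism is not specified here): each classifier votes with a weight proportional to its validation accuracy, and the weighted vote decides fake vs. real.

```python
# Weighted voting: each model's vote counts in proportion to its accuracy (sketch).
def weighted_vote(predictions, accuracies):
    """predictions: list of 0/1 labels; accuracies: validation accuracy per model."""
    total = sum(accuracies)
    score = sum(a / total for p, a in zip(predictions, accuracies) if p == 1)
    return 1 if score >= 0.5 else 0

preds = [1, 0, 1, 1]               # four models: 1 = fake, 0 = real
accs = [0.91, 0.84, 0.88, 0.79]
print(weighted_vote(preds, accs))  # 1 -> flagged as fake
```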
Text-based classification of interviews for mental health -- juxtaposing
the state of the art | Currently, the state of the art for classification of psychiatric illness is based on audio. This thesis aims to design and evaluate a state-of-the-art text classification network on this challenge. The hypothesis is that a well-designed text-based approach poses strong competition to the state-of-the-art audio-based approaches. Dutch natural language models are limited by the scarcity of pre-trained monolingual NLP models and, as a result, capture long-range semantic dependencies across sentences poorly. To address this issue, this thesis presents belabBERT, a new Dutch language model extending the RoBERTa [15] architecture. belabBERT is trained on a large Dutch corpus (+32GB) of web-crawled texts. After evaluating the strength of text-based classification, this thesis briefly explores extending the framework to a hybrid text- and audio-based classification. The goal of this hybrid framework is to show the principle of hybridisation with a very basic audio classification network. The overall goal is to create the foundations for hybrid psychiatric illness classification, by proving that the new text-based classification is already a strong stand-alone solution.
| 2,020 | Computation and Language |
Deep Learning Brasil -- NLP at SemEval-2020 Task 9: Overview of
Sentiment Analysis of Code-Mixed Tweets | In this paper, we describe a methodology to predict sentiment in code-mixed tweets (Hindi-English). Our team, called verissimo.manoel on CodaLab, developed an approach based on an ensemble of four models (MultiFiT, BERT, ALBERT, and XLNET). The final classification algorithm was an ensemble over the softmax predictions of these four models. This architecture was used and evaluated in the context of the SemEval 2020 challenge (Task 9), where our system achieved an F1 score of 72.7%.
| 2,020 | Computation and Language |
ULD@NUIG at SemEval-2020 Task 9: Generative Morphemes with an Attention
Model for Sentiment Analysis in Code-Mixed Text | Code mixing is a common phenomenon in multilingual societies, where people switch from one language to another for various reasons. Recent advances in public communication over different social media sites have led to an increase in the frequency of code-mixed usage in written language. In this paper, we present the Generative Morphemes with Attention (GenMA) Model sentiment analysis system contributed to SemEval 2020 Task 9 SentiMix. The system aims to predict the sentiments of given English-Hindi code-mixed tweets without using word-level language tags, instead inferring them automatically using a morphological model. The system is based on a novel deep neural network (DNN) architecture, which outperforms the baseline F1 score on the test dataset as well as the validation dataset. Our results can be found under the user name "koustava" on the "Sentimix Hindi English" page.
| 2,020 | Computation and Language |
Next word prediction based on the N-gram model for Kurdish Sorani and
Kurmanji | Next word prediction is an input technology that simplifies the process of typing by suggesting the next word for a user to select, as typing in a conversation consumes time. A few previous studies have focused on the Kurdish language, including the use of next word prediction, but the lack of a Kurdish text corpus presents a challenge. Moreover, the lack of a sufficient number of N-grams for the Kurdish language (for instance, five-grams) is the reason next word prediction has rarely been applied to Kurdish. Furthermore, the improper display of several Kurdish letters in the RStudio software is another problem. This paper provides a Kurdish corpus, builds N-gram models up to five-grams, and presents a unique research work on next word prediction for Kurdish Sorani and Kurmanji. The N-gram model is used for next word prediction to reduce the time spent typing in the Kurdish language. In addition, since little work has been conducted on next word prediction for Kurdish, the N-gram model is utilized to suggest text accurately. To do so, R programming and RStudio are used to build the application. The model is 96.3% accurate.
| 2,020 | Computation and Language |
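
A minimal sketch of N-gram next-word prediction as described above (illustrative English data and Python in place of the paper's Kurdish corpus and R implementation): count (n-1)-word contexts and suggest the most frequent continuations.

```python
# Count-based N-gram model; predict() suggests the k most frequent next words.
from collections import Counter, defaultdict

def train_ngrams(tokens, n=3):
    model = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context, nxt = tuple(tokens[i:i + n - 1]), tokens[i + n - 1]
        model[context][nxt] += 1
    return model

def predict(model, context, k=3):
    return [w for w, _ in model[tuple(context)].most_common(k)]

corpus = "the cat sat on the mat and the cat slept on the mat".split()
model = train_ngrams(corpus, n=3)
print(predict(model, ["the", "cat"]))  # ['sat', 'slept']
```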
TensorCoder: Dimension-Wise Attention via Tensor Representation for
Natural Language Modeling | Transformer has been widely used in many Natural Language Processing (NLP) tasks, and the scaled dot-product attention between tokens is a core module of Transformer. This attention is a token-wise design whose complexity is quadratic in the sequence length, limiting its application potential for long-sequence tasks. In this paper, we propose a dimension-wise attention mechanism based on which a novel language modeling approach (namely TensorCoder) can be developed. The dimension-wise attention can reduce the attention complexity from the original $O(N^2d)$ to $O(Nd^2)$, where $N$ is the length of the sequence and $d$ is the dimensionality of each head. We verify TensorCoder on two tasks, including masked language modeling and neural machine translation. Compared with the original Transformer, TensorCoder not only greatly reduces the computation of the original model but also obtains improved performance on the masked language modeling task (on the PTB dataset) and comparable performance on machine translation tasks.
| 2,020 | Computation and Language |
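
A minimal sketch contrasting the two attention costs; the exact TensorCoder formulation is not reproduced here, this only illustrates how attending over the $d$ feature dimensions (a $d \times d$ map) instead of the $N$ tokens (an $N \times N$ map) turns the $O(N^2d)$ product into an $O(Nd^2)$ one.

```python
# Token-wise (N x N map) vs. dimension-wise (d x d map) attention cost (sketch).
import torch

N, d = 512, 64
Q, K, V = (torch.randn(N, d) for _ in range(3))

token_wise = torch.softmax(Q @ K.T / d ** 0.5, dim=-1) @ V   # (N,N) map: O(N^2 d)
dim_wise = V @ torch.softmax(Q.T @ K / N ** 0.5, dim=-1)     # (d,d) map: O(N d^2)
print(token_wise.shape, dim_wise.shape)                      # both (512, 64)
```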
Defining and Evaluating Fair Natural Language Generation | Our work focuses on the biases that emerge in the natural language generation
(NLG) task of sentence completion. In this paper, we introduce a framework of
fairness for NLG followed by an evaluation of gender biases in two
state-of-the-art language models. Our analysis provides a theoretical
formulation for biases in NLG and empirical evidence that existing language
generation models embed gender bias.
| 2,020 | Computation and Language |
To BERT or Not To BERT: Comparing Speech and Language-based Approaches
for Alzheimer's Disease Detection | Research related to automatically detecting Alzheimer's disease (AD) is
important, given the high prevalence of AD and the high cost of traditional
methods. Since AD significantly affects the content and acoustics of
spontaneous speech, natural language processing and machine learning provide
promising techniques for reliably detecting AD. We compare and contrast the
performance of two such approaches for AD detection on the recent ADReSS
challenge dataset: 1) using domain knowledge-based hand-crafted features that
capture linguistic and acoustic phenomena, and 2) fine-tuning Bidirectional
Encoder Representations from Transformer (BERT)-based sequence classification
models. We also compare multiple feature-based regression models for a
neuropsychological score task in the challenge. We observe that fine-tuned BERT
models, given the relative importance of linguistics in cognitive impairment
detection, outperform feature-based approaches on the AD detection task.
| 2,020 | Computation and Language |
Tense, aspect and mood based event extraction for situation analysis and
crisis management | Nowadays, event extraction systems mainly deal with a relatively small amount
of information about temporal and modal qualifications of situations, primarily
processing assertive sentences in the past tense. However, systems with a wider
coverage of tense, aspect and mood can provide better analyses and can be used
in a wider range of text analysis applications. This thesis develops such a system for the Turkish language. This is accomplished by extending the Open Source
Information Mining and Analysis (OPTIMA) research group's event extraction
software, by implementing appropriate extensions in the semantic representation
format, by adding a partial grammar which improves the TAM (Tense, Aspect and
Mood) marker, adverb analysis and matching functions of ExPRESS, and by
constructing an appropriate lexicon in the standard of CORLEONE. These
extensions are based on the theory of anchoring relations (Temürcü, 2007, 2011), which is a cross-linguistically applicable semantic framework for analyzing tense, aspect and mood related categories. The result is a system
which can, in addition to extracting basic event structures, classify sentences
given in news reports according to their temporal, modal and
volitional/illocutionary values. Although the focus is on news reports of
natural disasters, disease outbreaks and man-made disasters in the Turkish language, the approach can be adapted to other languages, domains and genres.
This event extraction and classification system, with further developments, can
provide a basis for automated browsing systems for preventing environmental and
humanitarian risk.
| 2,020 | Computation and Language |
LXPER Index: a curriculum-specific text readability assessment model for
EFL students in Korea | Automatic readability assessment is one of the most important applications of
Natural Language Processing (NLP) in education. Since automatic readability
assessment allows the fast selection of appropriate reading material for
readers at all levels of proficiency, it can be particularly useful for the English education of English as a Foreign Language (EFL) students around the world. Most readability assessment models are developed for native readers
of English and have low accuracy for texts in the non-native English Language
Training (ELT) curriculum. We introduce LXPER Index, which is a readability
assessment model for non-native EFL readers in the ELT curriculum of Korea. Our
experiments show that our new model, trained with CoKEC-text (Text Corpus of
the Korean ELT Curriculum), significantly improves the accuracy of automatic
readability assessment for texts in the Korean ELT curriculum.
| 2,020 | Computation and Language |
Model Reduction of Shallow CNN Model for Reliable Deployment of
Information Extraction from Medical Reports | The shallow Convolutional Neural Network (CNN) is a time-tested tool for information extraction from cancer pathology reports. The shallow CNN performs competitively on this task with other deep learning models, including BERT, which holds the state of the art for many NLP tasks. The main insight behind this eccentric phenomenon is that information extraction from cancer pathology reports requires only a small number of domain-specific text segments, making most of the text and context excessive for the task. The shallow CNN model is well suited to identify these key short text segments from the labeled training set; however, the identified text segments remain obscure to humans. In this study, we fill this gap by developing a model reduction tool that makes a reliable connection between CNN filters and relevant text segments by discarding the spurious connections. We reduce the complexity of the shallow CNN representation by approximating it with a linear transformation of an n-gram presence representation, with a non-negativity and sparsity prior on the transformation weights, to obtain an interpretable model. By model reduction, our approach bridges the conventionally perceived trade-off between accuracy on the one side and explainability on the other.
| 2,020 | Computation and Language |
Efficient Urdu Caption Generation using Attention based LSTM | Recent advancements in deep learning have created many opportunities to solve
real-world problems that remained unsolved for more than a decade. Automatic
caption generation is a major research field, and the research community has
done a lot of work on it in common languages like English. Urdu is the national language of Pakistan and is also widely spoken and understood in the subcontinent region of Pakistan and India, yet no work has been done on Urdu-language caption generation. Our research aims to fill this gap by developing
an attention-based deep learning model using techniques of sequence modeling
specialized for the Urdu language. We have prepared a dataset in the Urdu
language by translating a subset of the "Flickr8k" dataset containing 700 'man'
images. We evaluate our proposed technique on this dataset and show that it can
achieve a BLEU score of 0.83 in the Urdu language. We improve on the previous
state-of-the-art by using better CNN architectures and optimization techniques.
Furthermore, we discuss how the generated captions can be made grammatically correct.
| 2,021 | Computation and Language |
Select, Extract and Generate: Neural Keyphrase Generation with
Layer-wise Coverage Attention | Natural language processing techniques have demonstrated promising results in
keyphrase generation. However, one of the major challenges in \emph{neural}
keyphrase generation is processing long documents using deep neural networks.
Generally, documents are truncated before being given as inputs to neural networks.
Consequently, the models may miss essential points conveyed in the target
document. To overcome this limitation, we propose \emph{SEG-Net}, a neural
keyphrase generation model that is composed of two major components, (1) a
selector that selects the salient sentences in a document and (2) an
extractor-generator that jointly extracts and generates keyphrases from the
selected sentences. SEG-Net uses Transformer, a self-attentive architecture, as
the basic building block with a novel \emph{layer-wise} coverage attention to
summarize most of the points discussed in the document. The experimental
results on seven keyphrase generation benchmarks from scientific and web
documents demonstrate that SEG-Net outperforms the state-of-the-art neural
generative methods by a large margin.
| 2,021 | Computation and Language |
Word meaning in minds and machines | Machines have achieved a broad and growing set of linguistic competencies,
thanks to recent progress in Natural Language Processing (NLP). Psychologists
have shown increasing interest in such models, comparing their output to
psychological judgments such as similarity, association, priming, and
comprehension, raising the question of whether the models could serve as
psychological theories. In this article, we compare how humans and machines
represent the meaning of words. We argue that contemporary NLP systems are
fairly successful models of human word similarity, but they fall short in many
other respects. Current models are too strongly linked to the text-based
patterns in large corpora, and too weakly linked to the desires, goals, and
beliefs that people express through words. Word meanings must also be grounded
in perception and action and be capable of flexible combinations in ways that
current systems are not. We discuss more promising approaches to grounding NLP
systems and argue that they will be more successful with a more human-like,
conceptual basis for word meaning.
| 2,021 | Computation and Language |
Automated Topical Component Extraction Using Neural Network Attention
Scores from Source-based Essay Scoring | While automated essay scoring (AES) can reliably grade essays at scale,
automated writing evaluation (AWE) additionally provides formative feedback to
guide essay revision. However, a neural AES typically does not provide useful
feature representations for supporting AWE. This paper presents a method for
linking AWE and neural AES, by extracting Topical Components (TCs) representing
evidence from a source text using the intermediate output of attention layers.
We evaluate performance using a feature-based AES requiring TCs. Results show
that performance is comparable whether using automatically or manually constructed TCs for 1) representing essays as rubric-based features and 2) grading essays.
| 2,020 | Computation and Language |
Antibody Watch: Text Mining Antibody Specificity from the Literature | Antibodies are widely used reagents to test for expression of proteins and
other antigens. However, they might not always reliably produce results when
they do not specifically bind to the target proteins that their providers
designed them for, leading to unreliable research results. While many proposals
have been developed to deal with the problem of antibody specificity, it is
still challenging to cover the millions of antibodies that are available to
researchers. In this study, we investigate the feasibility of automatically
generating alerts to users of problematic antibodies by extracting statements
about antibody specificity reported in the literature. The extracted alerts can
be used to construct an "Antibody Watch" knowledge base containing supporting
statements of problematic antibodies. We developed a deep neural network system
and tested its performance with a corpus of more than two thousand articles
that reported uses of antibodies. We divided the problem into two tasks. Given
an input article, the first task is to identify snippets about antibody
specificity and classify if the snippets report that any antibody exhibits
non-specificity, and thus is problematic. The second task is to link each of
these snippets to one or more antibodies mentioned in the snippet. The
experimental evaluation shows that our system can accurately perform both
classification and linking tasks with weighted F-scores over 0.925 and 0.923,
respectively, and 0.914 overall when combined to complete the joint task. We
leveraged Research Resource Identifiers (RRID) to precisely identify antibodies
linked to the extracted specificity snippets. The result shows that it is
feasible to construct a reliable knowledge base about problematic antibodies by
text mining.
| 2,021 | Computation and Language |
Designing the Business Conversation Corpus | While the progress of machine translation of written text has come far in the
past several years thanks to the increasing availability of parallel corpora
and corpora-based training technologies, automatic translation of spoken text
and dialogues remains challenging even for modern systems. In this paper, we
aim to boost the machine translation quality of conversational texts by
introducing a newly constructed Japanese-English business conversation parallel
corpus. A detailed analysis of the corpus is provided along with challenging
examples for automatic translation. We also experiment with adding the corpus
in a machine translation training scenario and show how the resulting system
benefits from its use.
| 2,019 | Computation and Language |
An exploration of the encoding of grammatical gender in word embeddings | The vector representation of words, known as word embeddings, has opened a
new research approach in linguistic studies. These representations can capture
different types of information about words. The grammatical gender of nouns is
a typical classification of nouns based on their formal and semantic
properties. The study of grammatical gender based on word embeddings can give
insight into discussions on how grammatical genders are determined. In this
study, we compare different sets of word embeddings according to the accuracy
of a neural classifier determining the grammatical gender of nouns. It is found
that there is an overlap in how grammatical gender is encoded in Swedish,
Danish, and Dutch embeddings. Our experimental results on contextualized embeddings showed that adding more contextual information to embeddings is detrimental to the classifier's performance. We also observed that removing
morpho-syntactic features such as articles from the training corpora of
embeddings decreases the classification performance dramatically, indicating a
large portion of the information is encoded in the relationship between nouns
and articles.
| 2,020 | Computation and Language |
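
A minimal sketch of the experimental setup described above (toy random vectors stand in for real Swedish embeddings): a small neural classifier is trained to predict a noun's grammatical gender from its word embedding, and its test accuracy is the quantity used to compare embedding sets.

```python
# Train a small MLP to predict grammatical gender from word embeddings (sketch).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))       # embeddings of 200 nouns (50-dim, toy data)
y = rng.integers(0, 2, size=200)     # 0 = common (en), 1 = neuter (ett)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
# ~0.5 on random vectors; substantially higher on real embeddings that encode gender
print("gender accuracy:", clf.score(X_te, y_te))
```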
Ontology-driven weak supervision for clinical entity classification in
electronic health records | In the electronic health record, using clinical notes to identify entities
such as disorders and their temporality (e.g. the order of an event relative to
a time index) can inform many important analyses. However, creating training
data for clinical entity tasks is time consuming and sharing labeled data is
challenging due to privacy concerns. The information needs of the COVID-19
pandemic highlight the need for agile methods of training machine learning
models for clinical notes. We present Trove, a framework for weakly supervised
entity classification using medical ontologies and expert-generated rules. Our
approach, unlike hand-labeled notes, is easy to share and modify, while
offering performance comparable to learning from manually labeled training
data. In this work, we validate our framework on six benchmark tasks and
demonstrate Trove's ability to analyze the records of patients visiting the
emergency department at Stanford Health Care for COVID-19 presenting symptoms
and risk factors.
| 2,021 | Computation and Language |
Improving End-to-End Speech-to-Intent Classification with Reptile | End-to-end spoken language understanding (SLU) systems have many advantages
over conventional pipeline systems, but collecting in-domain speech data to
train an end-to-end system is costly and time consuming. One question arises
from this: how to train an end-to-end SLU with limited amounts of data? Many
researchers have explored approaches that make use of other related data
resources, typically by pre-training parts of the model on high-resource speech
recognition. In this paper, we suggest improving the generalization performance
of SLU models with a non-standard learning algorithm, Reptile. Though Reptile
was originally proposed for model-agnostic meta learning, we argue that it can
also be used to directly learn a target task and result in better
generalization than conventional gradient descent. In this work, we apply Reptile to the task of end-to-end spoken intent classification. Experiments on
four datasets of different languages and domains show improvement of intent
prediction accuracy, both when Reptile is used alone and used in addition to
pre-training.
| 2,020 | Computation and Language |
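
A minimal sketch of the Reptile update used as a learning algorithm (generic supervised data here, not the paper's speech-to-intent setup): take k inner SGD steps from the current weights, then move the weights a fraction of the way toward the adapted ones.

```python
# One Reptile step: inner-loop adaptation, then w <- w + eps * (w_k - w).
import copy
import torch
import torch.nn as nn

def reptile_step(model, batch, inner_steps=5, inner_lr=1e-2, outer_eps=0.1):
    inner = copy.deepcopy(model)
    opt = torch.optim.SGD(inner.parameters(), lr=inner_lr)
    x, y = batch
    for _ in range(inner_steps):                 # inner-loop SGD on the task
        opt.zero_grad()
        nn.functional.cross_entropy(inner(x), y).backward()
        opt.step()
    with torch.no_grad():                        # outer step toward adapted weights
        for p, q in zip(model.parameters(), inner.parameters()):
            p.add_(outer_eps * (q - p))

model = nn.Linear(16, 4)
batch = (torch.randn(32, 16), torch.randint(0, 4, (32,)))
reptile_step(model, batch)
```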
Multiple Texts as a Limiting Factor in Online Learning: Quantifying
(Dis-)similarities of Knowledge Networks across Languages | We test the hypothesis that the extent to which one obtains information on a
given topic through Wikipedia depends on the language in which it is consulted.
Controlling for the size factor, we investigate this hypothesis for 25 subject areas. Since Wikipedia is a central part of the web-based information
landscape, this indicates a language-related, linguistic bias. The article
therefore deals with the question of whether Wikipedia exhibits this kind of
linguistic relativity or not. From the perspective of educational science, the
article develops a computational model of the information landscape from which
multiple texts are drawn as typical input of web-based reading. For this
purpose, it develops a hybrid model of intra- and intertextual similarity of
different parts of the information landscape and tests this model on the
example of 35 languages and corresponding Wikipedias. In this way the article
builds a bridge between reading research, educational science, Wikipedia
research and computational linguistics.
| 2,021 | Computation and Language |
Computational linguistic assessment of textbook and online learning
media by means of threshold concepts in business education | Threshold concepts are key terms in domain-based knowledge acquisition. They
are regarded as building blocks of the conceptual development of domain
knowledge within particular learners. From a linguistic perspective, however,
threshold concepts are instances of specialized vocabularies, exhibiting
particular linguistic features. Threshold concepts are typically used in
specialized texts such as textbooks -- that is, within a formal learning
environment. However, they also occur in informal learning environments like
newspapers. In this article, a first approach is taken to combine both lines
into an overarching research program - that is, to provide a computational
linguistic assessment of different resources, including in particular online
resources, by means of threshold concepts. To this end, the distributive profiles of 63 threshold concepts from business education (which have been collected from threshold concept research) have been investigated in three kinds of (German) resources, namely textbooks, newspapers, and Wikipedia. Wikipedia is (one of) the largest and most widely used online resources. We looked at the threshold concepts' frequency distribution, their compound distribution, and their network structure within the three kinds of resources. The two main
findings can be summarized as follows: Firstly, the three kinds of resources
can indeed be distinguished in terms of their threshold concepts' profiles.
Secondly, Wikipedia definitely appears to be a formal learning resource.
| 2,020 | Computation and Language |
Generalized Word Shift Graphs: A Method for Visualizing and Explaining
Pairwise Comparisons Between Texts | A common task in computational text analyses is to quantify how two corpora
differ according to a measurement like word frequency, sentiment, or
information content. However, collapsing the texts' rich stories into a single
number is often conceptually perilous, and it is difficult to confidently
interpret interesting or unexpected textual patterns without looming concerns
about data artifacts or measurement validity. To better capture fine-grained
differences between texts, we introduce generalized word shift graphs,
visualizations which yield a meaningful and interpretable summary of how
individual words contribute to the variation between two texts for any measure
that can be formulated as a weighted average. We show that this framework
naturally encompasses many of the most commonly used approaches for comparing
texts, including relative frequencies, dictionary scores, and entropy-based
measures like the Kullback-Leibler and Jensen-Shannon divergences. Through
several case studies, we demonstrate how generalized word shift graphs can be
flexibly applied across domains for diagnostic investigation, hypothesis
generation, and substantive interpretation. By providing a detailed lens into
textual shifts between corpora, generalized word shift graphs help
computational social scientists, digital humanists, and other text analysis
practitioners fashion more robust scientific narratives.
| 2,021 | Computation and Language |
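
A minimal sketch of the quantity a word shift graph visualizes, in a deliberately simplified form (the generalized framework of the paper also handles reference scores and divergence-based measures): for a measure that is a weighted average, here a toy dictionary sentiment score, each word's contribution to the difference between two texts is its score times its change in relative frequency.

```python
# Per-word contributions to the difference of two weighted-average text scores.
from collections import Counter

def relative_freqs(tokens):
    counts = Counter(tokens)
    n = sum(counts.values())
    return {w: c / n for w, c in counts.items()}

def word_contributions(text1, text2, scores):
    p1, p2 = relative_freqs(text1), relative_freqs(text2)
    words = set(p1) | set(p2)
    return sorted(
        ((w, (p2.get(w, 0) - p1.get(w, 0)) * scores.get(w, 0.0)) for w in words),
        key=lambda kv: abs(kv[1]), reverse=True)

scores = {"good": 1.0, "bad": -1.0, "awful": -2.0}
t1 = "good good bad".split()
t2 = "bad awful awful".split()
print(word_contributions(t1, t2, scores))  # largest absolute shifts first
```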
Contextualized Translation of Automatically Segmented Speech | Direct speech-to-text translation (ST) models are usually trained on corpora
segmented at sentence level, but at inference time they are commonly fed with
audio split by a voice activity detector (VAD). Since VAD segmentation is not
syntax-informed, the resulting segments do not necessarily correspond to
well-formed sentences uttered by the speaker but, most likely, to fragments of
one or more sentences. This segmentation mismatch degrades considerably the
quality of ST models' output. So far, researchers have focused on improving
audio segmentation towards producing sentence-like splits. In this paper,
instead, we address the issue in the model, making it more robust to a
different, potentially sub-optimal segmentation. To this aim, we train our
models on randomly segmented data and compare two approaches: fine-tuning and
adding the previous segment as context. We show that our context-aware solution
is more robust to VAD-segmented input, outperforming a strong base model and
the fine-tuning on different VAD segmentations of an English-German test set by
up to 4.25 BLEU points.
| 2,020 | Computation and Language |
An Interpretable Deep Learning System for Automatically Scoring Request
for Proposals | The Managed Care system within Medicaid (US Healthcare) uses Request For
Proposals (RFP) to award contracts for various healthcare and related services.
RFP responses are very detailed documents (hundreds of pages) submitted by
competing organisations to win contracts. Subject matter expertise and domain
knowledge play an important role in preparing RFP responses along with analysis
of historical submissions. Automated analysis of these responses through
Natural Language Processing (NLP) systems can reduce the time and effort needed to explore historical responses and assist in writing better responses. Our
work draws parallels between scoring RFPs and essay scoring models, while
highlighting new challenges and the need for interpretability. Typical scoring
models focus on word level impacts to grade essays and other short write-ups.
We propose a novel Bi-LSTM based regression model, and provide deeper insight
into phrases which latently impact the scoring of responses. We support the merits of our proposed methodology with extensive quantitative experiments. We also qualitatively assess the impact of important phrases using human evaluators.
Finally, we introduce a novel problem statement that can be used to further
improve the state of the art in NLP based automatic scoring systems.
| 2,021 | Computation and Language |
Efficient MDI Adaptation for n-gram Language Models | This paper presents an efficient algorithm for n-gram language model
adaptation under the minimum discrimination information (MDI) principle, where
an out-of-domain language model is adapted to satisfy the constraints of
marginal probabilities of the in-domain data. The challenge for MDI language
model adaptation is its computational complexity. By taking advantage of the
backoff structure of the n-gram model and the idea of the hierarchical training method, originally proposed for maximum entropy (ME) language models, we show that MDI adaptation can be computed in time linear in the size of the inputs in each iteration. The complexity remains the same as for ME models, although MDI is more general than ME. This makes MDI adaptation practical for large corpora and vocabularies. Experimental results confirm the scalability of our algorithm on
very large datasets, while MDI adaptation gets slightly worse perplexity but
better word error rate results compared to simple linear interpolation.
| 2,020 | Computation and Language |
ConvBERT: Improving BERT with Span-based Dynamic Convolution | Pre-trained language models like BERT and its variants have recently achieved
impressive performance in various natural language understanding tasks.
However, BERT relies heavily on the global self-attention block and thus suffers from a large memory footprint and computation cost. Although all its attention
heads query on the whole input sequence for generating the attention map from a
global perspective, we observe some heads only need to learn local
dependencies, which means the existence of computation redundancy. We therefore
propose a novel span-based dynamic convolution to replace these self-attention
heads to directly model local dependencies. The novel convolution heads,
together with the rest self-attention heads, form a new mixed attention block
that is more efficient at both global and local context learning. We equip BERT
with this mixed attention design and build a ConvBERT model. Experiments have
shown that ConvBERT significantly outperforms BERT and its variants in various
downstream tasks, with lower training cost and fewer model parameters.
Remarkably, the ConvBERT-base model achieves an 86.4 GLUE score, 0.7 points
higher than ELECTRA-base, while using less than 1/4 of the training cost. Code
and pre-trained models will be released.
| 2,021 | Computation and Language |
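A rough sketch of the idea behind a span-based dynamic convolution head
follows: the depthwise kernel applied at each position is generated from a
local span of the input rather than from a single token. The span summarizer,
kernel generator, and all dimensions are simplified assumptions, not the
released ConvBERT code.

```python
# Sketch: per-position convolution kernels predicted from a local span.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanDynamicConv(nn.Module):
    def __init__(self, dim, kernel_size=7):
        super().__init__()
        self.k = kernel_size
        # summarize each position's span with a fixed depthwise conv,
        # then predict a position-specific kernel from that summary
        self.span_proj = nn.Conv1d(dim, dim, kernel_size,
                                   padding=kernel_size // 2, groups=dim)
        self.kernel_gen = nn.Linear(dim, kernel_size)

    def forward(self, x):                                         # x: (B, T, D)
        span = self.span_proj(x.transpose(1, 2)).transpose(1, 2)  # (B, T, D)
        kernels = torch.softmax(self.kernel_gen(span), dim=-1)    # (B, T, K)
        pad = self.k // 2
        xp = F.pad(x, (0, 0, pad, pad))                           # pad time axis
        windows = xp.unfold(1, self.k, 1).permute(0, 1, 3, 2)     # (B, T, K, D)
        return torch.einsum('btk,btkd->btd', kernels, windows)

out = SpanDynamicConv(dim=64)(torch.randn(2, 10, 64))  # (2, 10, 64)
```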
Question and Answer Test-Train Overlap in Open-Domain Question Answering
Datasets | Ideally, Open-Domain Question Answering models should exhibit a number of
competencies, ranging from simply memorizing questions seen at training time,
to answering novel question formulations with answers seen during training, to
generalizing to completely novel questions with novel answers. However, single
aggregated test set scores do not show the full picture of what capabilities
models truly have. In this work, we perform a detailed study of the test sets
of three popular open-domain benchmark datasets with respect to these
competencies. We find that 60-70% of test-time answers are also present
somewhere in the training sets. We also find that 30% of test-set questions
have a near-duplicate paraphrase in their corresponding training sets. Using
these findings, we evaluate a variety of popular open-domain models to obtain
greater insight into the extent to which they can actually generalize, and what
drives
their overall performance. We find that all models perform dramatically worse
on questions that cannot be memorized from training sets, with a mean absolute
performance difference of 63% between repeated and non-repeated data. Finally,
we show that simple nearest-neighbor models outperform a BART closed-book QA
model, further highlighting the role that training-set memorization plays in
these benchmarks.
| 2,020 | Computation and Language |
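The two overlap statistics described above are straightforward to compute; the
sketch below approximates answer overlap with normalized string matching and
question near-duplicates with token-Jaccard similarity. The paper's actual
matching procedure may differ; the threshold is an assumption.

```python
# Sketch: test-train answer overlap and near-duplicate question rates.
def normalize(s):
    return " ".join(s.lower().split())

def answer_overlap(train_answers, test_answers):
    seen = {normalize(a) for a in train_answers}
    return sum(normalize(a) in seen for a in test_answers) / len(test_answers)

def jaccard(q1, q2):
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b)

def near_duplicate_rate(train_qs, test_qs, threshold=0.8):
    hits = sum(any(jaccard(tq, trq) >= threshold for trq in train_qs)
               for tq in test_qs)
    return hits / len(test_qs)
```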
Compositional Networks Enable Systematic Generalization for Grounded
Language Understanding | Humans are remarkably flexible when understanding new sentences that include
combinations of concepts they have never encountered before. Recent work has
shown that while deep networks can mimic some human language abilities when
presented with novel sentences, systematic variation uncovers the limitations
in the language-understanding abilities of networks. We demonstrate that these
limitations can be overcome by addressing the generalization challenges in the
gSCAN dataset, which explicitly measures how well an agent is able to interpret
novel linguistic commands grounded in vision, e.g., novel pairings of
adjectives and nouns. The key principle we employ is compositionality: that the
compositional structure of networks should reflect the compositional structure
of the problem domain they address, while allowing other parameters to be
learned end-to-end. We build a general-purpose mechanism that enables agents to
generalize their language understanding to compositional domains. Crucially,
our network matches the state-of-the-art performance of prior work while
generalizing its knowledge where prior work does not. Our network also provides
a level of interpretability that enables users to inspect what each part of
the network learns. Robust grounded language understanding without dramatic
failures and without corner cases is critical to building safe and fair robots;
we demonstrate the significant role that compositionality can play in achieving
that goal.
| 2,021 | Computation and Language |
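The compositional principle can be illustrated with a toy sketch: one small
module per concept, composed to mirror the structure of the command, so that
novel adjective-noun pairings reuse modules learned separately. Everything
here (module design, composition order, feature shapes) is invented for
illustration and is not the paper's architecture.

```python
# Toy sketch: network structure mirrors the command's compositional structure.
import torch
import torch.nn as nn

class Concept(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.f = nn.Linear(dim, dim)
    def forward(self, x):
        return torch.relu(self.f(x))

modules = nn.ModuleDict({w: Concept() for w in
                         ["red", "blue", "square", "circle"]})

def ground(phrase, scene_features):
    # apply modules right-to-left: noun first, then the adjective refines it
    x = scene_features
    for word in reversed(phrase.split()):
        x = modules[word](x)
    return x

out = ground("red square", torch.randn(1, 32))  # novel pairing, reused modules
```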
Discovering and Categorising Language Biases in Reddit | We present a data-driven approach using word embeddings to discover and
categorise language biases on the discussion platform Reddit. As spaces for
isolated user communities, platforms such as Reddit are increasingly connected
to issues of racism, sexism and other forms of discrimination. Hence, there is
a need to monitor the language of these groups. One of the most promising AI
approaches to trace linguistic biases in large textual datasets involves word
embeddings, which transform text into high-dimensional dense vectors and
capture semantic relations between words. Yet, previous studies require
predefined sets of potential biases to study, e.g., whether gender is more or
less associated with particular types of jobs. This makes these approaches
unfit to deal with smaller and community-centric datasets such as those on
Reddit, which contain smaller vocabularies and slang, as well as biases that
may be particular to that community. This paper proposes a data-driven approach
to automatically discover language biases encoded in the vocabulary of online
discourse communities on Reddit. In our approach, protected attributes are
connected to evaluative words found in the data, which are then categorised
through a semantic analysis system. We verify the effectiveness of our method
by comparing the biases we discover in the Google News dataset with those found
in previous literature. We then successfully discover gender bias, religion
bias, and ethnic bias in different Reddit communities. We conclude by
discussing potential application scenarios and limitations of this data-driven
bias discovery method.
| 2,021 | Computation and Language |
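A hedged sketch of the discovery step follows: embed a community's comments,
build centroids for two protected-attribute word sets, and rank the remaining
vocabulary by which centroid each word sits closer to. The attribute lists,
hyperparameters, and the downstream semantic categorisation are placeholders,
not the paper's exact pipeline.

```python
# Sketch: data-driven bias discovery via embedding-association ranking.
import numpy as np
from gensim.models import Word2Vec

def bias_ranking(sentences, set_a, set_b, topn=20):
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=10)
    wv = model.wv

    def centroid(words):
        vecs = [wv[w] for w in words if w in wv]
        v = np.mean(vecs, axis=0)
        return v / np.linalg.norm(v)

    ca, cb = centroid(set_a), centroid(set_b)
    scores = {}
    for w in wv.index_to_key:
        v = wv[w] / np.linalg.norm(wv[w])
        scores[w] = float(v @ ca - v @ cb)   # > 0: closer to set_a
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:topn], ranked[-topn:]     # most set_a- / set_b-associated
```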
aschern at SemEval-2020 Task 11: It Takes Three to Tango: RoBERTa, CRF,
and Transfer Learning | We describe our system for SemEval-2020 Task 11 on Detection of Propaganda
Techniques in News Articles. We developed ensemble models using RoBERTa-based
neural architectures, additional CRF layers, transfer learning between the two
subtasks, and advanced post-processing to handle the multi-label nature of the
task, the consistency between nested spans, repetitions, and labels from
similar spans in training. We achieved sizable improvements over baseline
fine-tuned RoBERTa models, and the official evaluation ranked our system 3rd
(almost tied with the 2nd) out of 36 teams on the span identification subtask
with an F1 score of 0.491, and 2nd (almost tied with the 1st) out of 31 teams
on the technique classification subtask with an F1 score of 0.62.
| 2,020 | Computation and Language |
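A minimal sketch of the RoBERTa-plus-CRF token tagger for the span
identification subtask is shown below, assuming the `transformers` and
`pytorch-crf` packages; the ensembling, transfer learning between subtasks,
and post-processing described above are omitted.

```python
# Sketch: transformer encoder emissions scored by a CRF layer.
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF

class RobertaCRFTagger(nn.Module):
    def __init__(self, num_tags, model_name="roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.emissions = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        h = self.encoder(input_ids,
                         attention_mask=attention_mask).last_hidden_state
        e = self.emissions(h)
        mask = attention_mask.bool()
        if tags is not None:                  # training: negative log-likelihood
            return -self.crf(e, tags, mask=mask, reduction="mean")
        return self.crf.decode(e, mask=mask)  # inference: best tag sequence
```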
Semantic Complexity in End-to-End Spoken Language Understanding | End-to-end spoken language understanding (SLU) models are a class of model
architectures that predict semantics directly from speech. Because of their
input and output types, we refer to them as speech-to-interpretation (STI)
models. Previous works have successfully applied STI models to targeted use
cases, such as recognizing home-automation commands; however, no study has yet
addressed how these models generalize to broader use cases. In this work, we
analyze the relationship between the performance of STI models and the
difficulty of the use case to which they are applied. We introduce empirical
measures of dataset semantic complexity to quantify the difficulty of the SLU
tasks. We show that near-perfect performance metrics for STI models reported in
the literature were obtained with datasets that have low semantic complexity
values. We perform experiments where we vary the semantic complexity of a
large, proprietary dataset and show that STI model performance correlates with
our semantic complexity measures, such that performance increases as complexity
values decrease. Our results show that it is important to contextualize an STI
model's performance with the complexity values of its training dataset to
reveal the scope of its applicability.
| 2,020 | Computation and Language |
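The abstract does not spell out its complexity measures, so the sketch below
uses two plausible stand-ins: entropy of the semantic-label distribution and
the average number of distinct utterances per label. Treat both as
illustrative assumptions rather than the paper's metrics.

```python
# Sketch: simple stand-in measures of dataset semantic complexity.
import math
from collections import Counter

def label_entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def distinct_utterances_per_label(utterances, labels):
    by_label = {}
    for u, l in zip(utterances, labels):
        by_label.setdefault(l, set()).add(u)
    return sum(len(s) for s in by_label.values()) / len(by_label)

labels = ["lights_on", "lights_on", "play_music", "play_music", "play_music"]
utts = ["turn on", "lights on", "play jazz", "play rock", "some music please"]
print(label_entropy(labels), distinct_utterances_per_label(utts, labels))
```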
A Multilingual Neural Machine Translation Model for Biomedical Data | We release a multilingual neural machine translation model, which can be used
to translate text in the biomedical domain. The model can translate from 5
languages (French, German, Italian, Korean and Spanish) into English. It is
trained with large amounts of generic and biomedical data, using domain tags.
Our benchmarks show that it performs near the state of the art on both news
(generic-domain) and biomedical test sets, and that it outperforms the existing
publicly released models. We believe that this release will help the
large-scale multilingual analysis of the digital content of the COVID-19 crisis
and of its effects on society, economy, and healthcare policies.
We also release a test set of biomedical text for Korean-English. It consists
of 758 sentences from official guidelines and recent papers, all about
COVID-19.
| 2,020 | Computation and Language |
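Domain-tag decoding is typically just a matter of prepending a tag token to
the source sentence before translation; the sketch below shows the pattern.
The exact tag tokens and model interface are assumptions here, and the
released model's documentation is authoritative.

```python
# Sketch: prepend a domain tag to steer a tag-trained multilingual NMT model.
def tag_source(sentence, domain="biomedical"):
    # domain tags are prepended to the source side, per the abstract
    return f"<{domain}> {sentence}"

src = tag_source("Le patient présente une fièvre persistante.", "biomedical")
# src -> "<biomedical> Le patient présente une fièvre persistante."
# feed `src` to the released model's translate/decode entry point
```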