Titles (string, 6–220 chars) | Abstracts (string, 37–3.26k chars) | Years (int64, 1.99k–2.02k) | Categories (1 class)
---|---|---|---|
LIMSI_UPV at SemEval-2020 Task 9: Recurrent Convolutional Neural Network
for Code-mixed Sentiment Analysis | This paper describes the participation of the
LIMSI UPV team in SemEval-2020 Task 9: Sentiment Analysis for Code-Mixed Social
Media Text. The proposed approach competed in the SentiMix Hindi-English subtask,
which addresses the problem of predicting the sentiment of a given Hindi-English
code-mixed tweet. For code-mixed sentiment analysis, we propose a Recurrent
Convolutional Neural Network that combines a recurrent neural network and a
convolutional network to better capture the semantics of the text. The proposed
system obtained 0.69
(best run) in terms of F1 score on the given test data and achieved the 9th
place (Codalab username: somban) in the SentiMix Hindi-English subtask.
| 2,020 | Computation and Language |
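The abstract above names a recurrent convolutional architecture but gives no implementation detail. Below is a minimal sketch of that general idea in PyTorch (a BiLSTM followed by a 1-D convolution and max-pooling over time); the layer sizes, three-class output and pooling choice are assumptions, not the authors' exact model.

```python
# Illustrative recurrent convolutional classifier: BiLSTM -> 1-D conv ->
# global max-pooling -> class logits. Hyperparameters are assumptions.
import torch
import torch.nn as nn

class RCNNClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=128, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, hidden, kernel_size=3, padding=1)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)                     # (batch, seq, emb)
        h, _ = self.bilstm(x)                         # (batch, seq, 2*hidden)
        c = torch.relu(self.conv(h.transpose(1, 2)))  # (batch, hidden, seq)
        pooled = c.max(dim=2).values                  # max-pool over time
        return self.fc(pooled)                        # sentiment logits

model = RCNNClassifier(vocab_size=20000)
logits = model(torch.randint(1, 20000, (4, 32)))      # dummy batch of 4 tweets
print(logits.shape)                                   # torch.Size([4, 3])
```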
SEEC: Semantic Vector Federation across Edge Computing Environments | Semantic vector embedding techniques have proven useful in learning semantic
representations of data across multiple domains. A key application enabled by
such techniques is the ability to measure semantic similarity between given
data samples and find data most similar to a given sample. State-of-the-art
embedding approaches assume all data is available on a single site. However, in
many business settings, data is distributed across multiple edge locations and
cannot be aggregated due to a variety of constraints. Hence, the applicability
of state-of-the-art embedding approaches is limited to freely shared datasets,
leaving out applications with sensitive or mission-critical data. This paper
addresses this gap by proposing novel unsupervised algorithms called
\emph{SEEC} for learning and applying semantic vector embedding in a variety of
distributed settings. Specifically, for scenarios where multiple edge locations
can engage in joint learning, we adapt the recently proposed federated learning
techniques for semantic vector embedding. Where joint learning is not possible,
we propose novel semantic vector translation algorithms to enable semantic
query across multiple edge locations, each with its own semantic vector-space.
Experimental results on natural language as well as graph datasets suggest that
this is a promising new direction.
| 2,020 | Computation and Language |
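The SEEC abstract mentions adapting federated learning to embedding training but does not spell out the aggregation step. The sketch below shows plain federated averaging of locally trained embedding matrices, assuming every edge site shares one vocabulary ordering; it illustrates the general mechanism only and is not the SEEC algorithm itself.

```python
# FedAvg-style aggregation of per-site embedding matrices, weighted by the
# number of samples at each site. A shared vocabulary ordering is assumed.
import numpy as np

def federated_average(local_embeddings, sample_counts):
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(local_embeddings)           # (n_sites, vocab, dim)
    return np.tensordot(weights, stacked, axes=1)  # (vocab, dim)

rng = np.random.default_rng(0)
sites = [rng.normal(size=(5, 4)) for _ in range(3)]   # 3 edge sites
global_emb = federated_average(sites, sample_counts=[1000, 250, 4000])
print(global_emb.shape)                               # (5, 4)
```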
A Bidirectional Tree Tagging Scheme for Joint Medical Relation
Extraction | Joint medical relation extraction refers to extracting triples, composed of
entities and relations, from the medical text with a single model. One of the
solutions is to convert this task into a sequential tagging task. However, in
the existing works, the methods that represent and tag the triples in a linear
way fail to handle overlapping triples, and the methods that organize the
triples as a graph face the challenge of large computational effort. In this
paper, inspired by the tree-like relation structures in medical text, we
propose a novel scheme called Bidirectional Tree Tagging (BiTT) that forms the
medical relation triples into two binary trees and converts the trees into a
word-level tag sequence. Based on the BiTT scheme, we develop a joint relation
extraction model to predict the BiTT tags and further extract medical triples
efficiently. Our model outperforms the best baselines by 2.0\% and 2.5\% in F1
score on two medical datasets. What's more, the models with our BiTT scheme
also obtain promising results in three public datasets of other domains.
| 2,022 | Computation and Language |
Discovering Bilingual Lexicons in Polyglot Word Embeddings | Bilingual lexicons and phrase tables are critical resources for modern
Machine Translation systems. Although recent results show that without any seed
lexicon or parallel data, highly accurate bilingual lexicons can be learned
using unsupervised methods, such methods rely on the existence of large, clean
monolingual corpora. In this work, we utilize a single Skip-gram model trained
on a multilingual corpus yielding polyglot word embeddings, and present a novel
finding that a surprisingly simple constrained nearest-neighbor sampling
technique in this embedding space can retrieve bilingual lexicons, even in
harsh social media data sets predominantly written in English and Romanized
Hindi and often exhibiting code switching. Our method does not require
monolingual corpora, seed lexicons, or any other such resources. Additionally,
across three European language pairs, we observe that polyglot word embeddings
indeed learn a rich semantic representation of words and substantial bilingual
lexicons can be retrieved using our constrained nearest neighbor sampling. We
investigate potential reasons and downstream applications in settings spanning
both clean texts and noisy social media data sets, and in both resource-rich
and under-resourced language pairs.
| 2,020 | Computation and Language |
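A minimal sketch of constrained nearest-neighbour sampling in a single polyglot embedding space: candidates are filtered to the other language before ranking by cosine similarity. The language-tagging dictionary and toy vectors are assumptions for illustration; the paper's exact constraints may differ.

```python
# Constrained nearest-neighbour lookup: rank only tokens tagged with the
# target language by cosine similarity to the query word.
import numpy as np

def retrieve_translations(word, vectors, lang_of, target_lang, k=3):
    q = vectors[word] / np.linalg.norm(vectors[word])
    scored = []
    for w, v in vectors.items():
        if w == word or lang_of.get(w) != target_lang:
            continue                       # the language constraint
        scored.append((float(q @ (v / np.linalg.norm(v))), w))
    return [w for _, w in sorted(scored, reverse=True)[:k]]

rng = np.random.default_rng(1)
vectors = {w: rng.normal(size=50) for w in ["dog", "cat", "kutta", "billi"]}
lang_of = {"dog": "en", "cat": "en", "kutta": "hi", "billi": "hi"}
print(retrieve_translations("dog", vectors, lang_of, target_lang="hi"))
```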
Generative Models are Unsupervised Predictors of Page Quality: A
Colossal-Scale Study | Large generative language models such as GPT-2 are well-known for their
ability to generate text as well as their utility in supervised downstream
tasks via fine-tuning. Our work is twofold: firstly we demonstrate via human
evaluation that classifiers trained to discriminate between human and
machine-generated text emerge as unsupervised predictors of "page quality",
able to detect low quality content without any training. This enables fast
bootstrapping of quality indicators in a low-resource setting. Secondly,
curious to understand the prevalence and nature of low quality pages in the
wild, we conduct extensive qualitative and quantitative analysis over 500
million web articles, making this the largest-scale study ever conducted on the
topic.
| 2,020 | Computation and Language |
I-AID: Identifying Actionable Information from Disaster-related Tweets | Social media plays a significant role in disaster management by providing
valuable data about affected people, donations and help requests. Recent
studies highlight the need to filter information on social media into
fine-grained content labels. However, identifying useful information from
massive amounts of social media posts during a crisis is a challenging task. In
this paper, we propose I-AID, a multimodel approach to automatically categorize
tweets into multi-label information types and filter critical information from
the enormous volume of social media data. I-AID incorporates three main
components: i) a BERT-based encoder to capture the semantics of a tweet and
represent it as a low-dimensional vector, ii) a graph attention network (GAT) to
apprehend correlations between tweets' words/entities and the corresponding
information types, and iii) a Relation Network as a learnable distance metric
to compute the similarity between tweets and their corresponding information
types in a supervised way. We conducted several experiments on two real-world,
publicly available datasets. Our results indicate that I-AID outperforms
state-of-the-art approaches in terms of weighted average F1 score by +6% and
+4% on the TREC-IS dataset and COVID-19 Tweets, respectively.
| 2,021 | Computation and Language |
C1 at SemEval-2020 Task 9: SentiMix: Sentiment Analysis for Code-Mixed
Social Media Text using Feature Engineering | In today's interconnected and multilingual world, code-mixing of languages on
social media is a common occurrence. While many Natural Language Processing
(NLP) tasks like sentiment analysis are mature and well designed for
monolingual text, techniques to apply these tasks to code-mixed text still
warrant exploration. This paper describes our feature engineering approach to
sentiment analysis in code-mixed social media text for SemEval-2020 Task 9:
SentiMix. We tackle this problem by leveraging a set of hand-engineered
lexical, sentiment, and metadata features to design a classifier that can
disambiguate between "positive", "negative" and "neutral" sentiment. With this
model, we are able to obtain a weighted F1 score of 0.65 for the "Hinglish"
task and 0.63 for the "Spanglish" task.
| 2,020 | Computation and Language |
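A toy version of a feature-engineering pipeline like the one described: word n-grams plus one hand-crafted lexicon count feeding a linear classifier. The lexicon, feature set and examples are placeholders, not the C1 team's actual features.

```python
# Hand-engineered features (n-grams + a sentiment-lexicon count) combined
# with scikit-learn; the lexicon and training examples are placeholders.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer

POSITIVE_WORDS = {"love", "great", "accha", "badhiya"}   # hypothetical lexicon

def positive_counts(texts):
    return np.array([[sum(w in POSITIVE_WORDS for w in t.lower().split())]
                     for t in texts])

features = FeatureUnion([
    ("ngrams", CountVectorizer(ngram_range=(1, 2))),
    ("lexicon", FunctionTransformer(positive_counts)),
])
clf = Pipeline([("features", features),
                ("clf", LogisticRegression(max_iter=1000))])

X = ["movie accha tha love it", "bohot bura was terrible", "theek thaak hai"]
y = ["positive", "negative", "neutral"]
clf.fit(X, y)
print(clf.predict(["kitna great movie"]))
```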
Classifier Combination Approach for Question Classification for Bengali
Question Answering System | Question classification (QC) is a prime constituent
of an automated question answering system. The work presented here demonstrates
that the combination of multiple models achieves better classification
performance than that obtained
with existing individual models for the question classification task in
Bengali. We have exploited state-of-the-art multiple model combination
techniques, i.e., ensemble, stacking and voting, to increase QC accuracy.
Lexical, syntactic and semantic features of Bengali questions are used for four
well-known classifiers, namely Na\"{\i}ve Bayes, kernel Na\"{\i}ve Bayes, Rule
Induction, and Decision Tree, which serve as our base learners. Single-layer
question-class taxonomy with 8 coarse-grained classes is extended to two-layer
taxonomy by adding 69 fine-grained classes. We carried out the experiments both
on single-layer and two-layer taxonomies. Experimental results confirmed that
classifier combination approaches outperform single classifier classification
approaches by 4.02% for coarse-grained question classes. Overall, the stacking
approach produces the best results for fine-grained classification and achieves
an accuracy of 87.79%. The approach presented here could be used in other
Indo-Aryan or Indic languages to develop a question answering system.
| 2,019 | Computation and Language |
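A compact illustration of the classifier-combination idea using scikit-learn's stacking and voting wrappers. The base learners and the tiny question set are stand-ins; the paper combines Naive Bayes variants, Rule Induction and Decision Tree over lexical, syntactic and semantic features of Bengali questions.

```python
# Stacking and voting over two base learners for toy question classification.
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

questions = [
    "Who wrote Gitanjali?", "Who painted the Mona Lisa?",
    "Where is the Sundarbans located?", "Where was the treaty signed?",
    "When did the Bengal famine occur?", "When does the festival start?",
]
labels = ["PERSON", "PERSON", "LOCATION", "LOCATION", "TIME", "TIME"]

base = [("nb", MultinomialNB()), ("dt", DecisionTreeClassifier())]
stack = make_pipeline(TfidfVectorizer(),
                      StackingClassifier(estimators=base,
                                         final_estimator=LogisticRegression(),
                                         cv=2))
vote = make_pipeline(TfidfVectorizer(),
                     VotingClassifier(estimators=base, voting="hard"))
stack.fit(questions, labels)
vote.fit(questions, labels)
print(stack.predict(["Who discovered radium?"]),
      vote.predict(["Who discovered radium?"]))
```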
Detecting Generic Music Features with Single Layer Feedforward Network
using Unsupervised Hebbian Computation | With the ever-increasing amount of digital music and the vast number of track
features available through popular online music streaming software and apps,
neural-network-based feature recognition has recently been used to produce a
wide range of experimental results. In this work, the authors extract
information on such features from a popular open-source music corpus and
explore new recognition techniques by applying unsupervised Hebbian learning to
a single-layer neural network on the same dataset. The authors present detailed
empirical findings showing how such an algorithm can help a single-layer
feedforward network learn music features as patterns. The unsupervised training
algorithm enables the proposed neural network to achieve an accuracy of 90.36%
for music feature detection. For comparative analysis, the authors compare
their results with several previous benchmark works. They further discuss the
limitations of their work and provide a thorough error analysis. The authors
hope to discover and gather
new information about this particular classification technique and its
performance, and further understand future potential directions and prospects
that could improve the art of computational music feature recognition.
| 2,020 | Computation and Language |
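The core of unsupervised Hebbian computation is the weight update w ← w + η·y·x together with a normalisation step that keeps the weights bounded. The sketch below applies that rule to a single-layer feedforward network on random vectors standing in for extracted music features; it illustrates the learning rule only, not the authors' network or dataset.

```python
# Unsupervised Hebbian update for a single-layer feedforward network, with
# row normalisation; random vectors stand in for music track features.
import numpy as np

rng = np.random.default_rng(42)
n_features, n_units, lr = 16, 4, 0.01
W = rng.normal(scale=0.1, size=(n_units, n_features))

for _ in range(100):
    x = rng.normal(size=n_features)        # one "track feature" vector
    y = W @ x                              # linear unit activations
    W += lr * np.outer(y, x)               # Hebbian update: co-activation
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # keep weights bounded

print(W.shape)   # (4, 16): each row acts as a learned feature detector
```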
SemEval-2020 Task 6: Definition extraction from free text with the DEFT
corpus | Research on definition extraction has been conducted for well over a decade,
largely with significant constraints on the type of definitions considered. In
this work, we present DeftEval, a SemEval shared task in which participants
must extract definitions from free text using a term-definition pair corpus
that reflects the complex reality of definitions in natural language.
Definitions and glosses in free text often appear without explicit indicators,
across sentence boundaries, or in an otherwise complex linguistic manner.
DeftEval involved three distinct subtasks: 1) sentence classification, 2) sequence
labeling, and 3) relation extraction.
| 2,020 | Computation and Language |
PNEL: Pointer Network based End-To-End Entity Linking over Knowledge
Graphs | Question Answering systems are generally modelled as a pipeline consisting of
a sequence of steps. In such a pipeline, Entity Linking (EL) is often the first
step. Several EL models first perform span detection and then entity
disambiguation. In such models, errors from the span detection phase cascade to
later steps and result in a drop in overall accuracy. Moreover, the lack of gold
entity spans in training data is a limiting factor for span detector training.
Hence, a movement towards end-to-end EL models, in which no separate span
detection step is involved, has begun. In this work we present a novel approach to
end-to-end EL by applying the popular Pointer Network model, which achieves
competitive performance. We demonstrate this in our evaluation over three
datasets on the Wikidata Knowledge Graph.
| 2,020 | Computation and Language |
Semantic Sentiment Analysis Based on Probabilistic Graphical Models and
Recurrent Neural Network | Sentiment Analysis is the task of classifying documents based on the
sentiments expressed in textual form; this can be achieved by using lexical and
semantic methods. The purpose of this study is to investigate the use of
semantics to perform sentiment analysis based on probabilistic graphical models
and recurrent neural networks. In the empirical evaluation, the classification
performance of the graphical models was compared with some traditional machine
learning classifiers and a recurrent neural network. The datasets used for the
experiments were IMDB movie reviews, Amazon Consumer Product reviews, and
Twitter Review datasets. After this empirical study, we conclude that the
inclusion of semantics for sentiment analysis tasks can greatly improve the
performance of a classifier, as the semantic feature extraction methods reduce
uncertainties in classification resulting in more accurate predictions.
| 2,020 | Computation and Language |
Extracting Semantic Concepts and Relations from Scientific Publications
by Using Deep Learning | With the constantly increasing volume of unstructured data on the web, the
motivation to represent the knowledge in this data in a machine-understandable
form has grown. Ontology is one of the major cornerstones of representing
information in a more meaningful way on the Semantic Web. Current ontology
repositories are quite limited in either scope or currency. In addition,
current ontology extraction systems have many shortcomings and drawbacks, such
as using small datasets, depending on a large number of predefined patterns to
extract semantic relations, and extracting very few types of relations. The aim
of this paper is to introduce a proposal for automatically extracting semantic
concepts and relations from scientific publications. The paper suggests new
types of semantic relations and proposes the use of deep learning (DL) models
for semantic relation extraction.
| 2,021 | Computation and Language |
Hearings and mishearings: decrypting the spoken word | We propose a model of the speech perception of individual words in the
presence of mishearings. This phenomenological approach is based on concepts
used in linguistics, and provides a formalism that is universal across
languages. We put forward an efficient two-parameter form for the word length
distribution, and introduce a simple representation of mishearings, which we
use in our subsequent modelling of word recognition. In a context-free
scenario, word recognition often occurs via anticipation when, part-way into a
word, we can correctly guess its full form. We give a quantitative estimate of
this anticipation threshold when no mishearings occur, in terms of model
parameters. As might be expected, the whole anticipation effect disappears when
there are sufficiently many mishearings. Our global approach to the problem of
speech perception is in the spirit of an optimisation problem. We show for
instance that speech perception is easy when the word length is less than a
threshold, to be identified with a static transition, and hard otherwise. We
extend this to the dynamics of word recognition, proposing an intuitive
approach highlighting the distinction between individual, isolated mishearings
and clusters of contiguous mishearings. At least in some parameter range, a
dynamical transition is manifest well before the static transition is reached,
as is the case for many other examples of complex systems.
| 2,020 | Computation and Language |
Summary-Source Proposition-level Alignment: Task, Datasets and
Supervised Baseline | Aligning sentences in a reference summary with their counterparts in source
documents was shown as a useful auxiliary summarization task, notably for
generating training data for salience detection. Despite its assessed utility,
the alignment step was mostly approached with heuristic unsupervised methods,
typically ROUGE-based, and was never independently optimized or evaluated. In
this paper, we propose establishing summary-source alignment as an explicit
task, while introducing two major novelties: (1) applying it at the more
accurate proposition span level, and (2) approaching it as a supervised
classification task. To that end, we created a novel training dataset for
proposition-level alignment, derived automatically from available summarization
evaluation data. In addition, we crowdsourced dev and test datasets, enabling
model development and proper evaluation. Utilizing these data, we present a
supervised proposition alignment baseline model, showing improved
alignment-quality over the unsupervised approach.
| 2,021 | Computation and Language |
Document Similarity from Vector Space Densities | We propose a computationally light method for estimating similarities between
text documents, which we call the density similarity (DS) method. The method is
based on a word embedding in a high-dimensional Euclidean space and on kernel
regression, and takes into account semantic relations among words. We find that
the accuracy of this method is virtually the same as that of a state-of-the-art
method, while the gain in speed is very substantial. Additionally, we introduce
generalized versions of the top-k accuracy metric and of the Jaccard metric of
agreement between similarity models.
| 2,020 | Computation and Language |
Automatic Assignment of Radiology Examination Protocols Using
Pre-trained Language Models with Knowledge Distillation | Selecting a radiology examination protocol is a repetitive and time-consuming
process. In this paper, we present a deep learning approach to automatically
assign protocols to computed tomography examinations by pre-training a
domain-specific BERT model ($BERT_{rad}$). To handle the high data imbalance
across exam protocols, we used a knowledge distillation approach that
up-sampled the minority classes through data augmentation. We compared
classification performance of the described approach with the statistical
n-gram models using Support Vector Machine (SVM), Gradient Boosting Machine
(GBM), and Random Forest (RF) classifiers, as well as Google's
$BERT_{base}$ model. SVM, GBM and RF achieved macro-averaged F1 scores of 0.45,
0.45, and 0.6 while $BERT_{base}$ and $BERT_{rad}$ achieved 0.61 and 0.63.
Knowledge distillation improved overall performance on the minority classes,
achieving an F1 score of 0.66.
| 2,021 | Computation and Language |
Text Modular Networks: Learning to Decompose Tasks in the Language of
Existing Models | We propose a general framework called Text Modular Networks (TMNs) for
building interpretable systems that learn to solve complex tasks by decomposing
them into simpler ones solvable by existing models. To ensure solvability of
simpler tasks, TMNs learn the textual input-output behavior (i.e., language) of
existing models through their datasets. This differs from prior
decomposition-based approaches which, besides being designed specifically for
each complex task, produce decompositions independent of existing sub-models.
Specifically, we focus on Question Answering (QA) and show how to train a
next-question generator to sequentially produce sub-questions targeting
appropriate sub-models, without additional human annotation. These
sub-questions and answers provide a faithful natural language explanation of
the model's reasoning. We use this framework to build ModularQA, a system that
can answer multi-hop reasoning questions by decomposing them into sub-questions
answerable by a neural factoid single-span QA model and a symbolic calculator.
Our experiments show that ModularQA is more versatile than existing explainable
systems for DROP and HotpotQA datasets, is more robust than state-of-the-art
blackbox (uninterpretable) systems, and generates more understandable and
trustworthy explanations compared to prior work.
| 2,021 | Computation and Language |
Automated Storytelling via Causal, Commonsense Plot Ordering | Automated story plot generation is the task of generating a coherent sequence
of plot events. Causal relations between plot events are believed to increase
the perception of story and plot coherence. In this work, we introduce the
concept of soft causal relations as causal relations inferred from commonsense
reasoning. We demonstrate C2PO, an approach to narrative generation that
operationalizes this concept through Causal, Commonsense Plot Ordering. Using
human-participant protocols, we evaluate our system against baseline systems
with different commonsense reasoning and inductive biases to determine the role
of soft causal relations in perceived story quality. Through these studies we
also probe how changes in commonsense norms
across storytelling genres affect perceptions of story quality.
| 2,021 | Computation and Language |
A Practical Chinese Dependency Parser Based on A Large-scale Dataset | Dependency parsing is a longstanding natural language processing task, with
its outputs crucial to various downstream tasks. Recently, neural network based
(NN-based) dependency parsing has achieved significant progress and obtained
state-of-the-art results. However, NN-based approaches require massive amounts
of labeled training data, which is very expensive because it requires human
annotation by experts. Thus, few industry-oriented dependency parsing tools are
publicly available. In this report, we present Baidu
Dependency Parser (DDParser), a new Chinese dependency parser trained on a
large-scale manually labeled dataset called Baidu Chinese Treebank (DuCTB).
DuCTB consists of about one million annotated sentences from multiple sources
including search logs, Chinese newswire, various forum discourses, and
conversation programs. DDParser extends the graph-based biaffine parser to
accommodate the characteristics of the Chinese dataset. We conduct experiments
on two test sets: a standard test set with the same distribution as the
training set and a random test set sampled from other sources, on which the
labeled attachment scores (LAS) are 92.9% and 86.9%, respectively. DDParser
achieves state-of-the-art results and is released at
https://github.com/baidu/DDParser.
| 2,020 | Computation and Language |
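Since the parser is released as a Python package, basic usage follows the pattern in the project README (pip install ddparser); the exact API and output fields may differ between releases, so treat this as a hedged sketch rather than authoritative documentation.

```python
# Basic DDParser usage following the project README; API details may vary
# between releases.
from ddparser import DDParser

ddp = DDParser()
# The README shows parse() returning, per sentence, the segmented words
# together with head indices and dependency relation labels.
print(ddp.parse("百度是一家高科技公司"))
```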
Revisiting the Open-Domain Question Answering Pipeline | Open-domain question answering (QA) is the task of identifying answers to
natural questions from a large corpus of documents. The typical open-domain QA
system starts with information retrieval to select a subset of documents from
the corpus, which are then processed by a machine reader to select the answer
spans. This paper describes Mindstone, an open-domain QA system that consists
of a new multi-stage pipeline that employs a traditional BM25-based information
retriever, RM3-based neural relevance feedback, neural ranker, and a machine
reading comprehension stage. This paper establishes a new baseline for
end-to-end performance on question answering for the Wikipedia/SQuAD dataset
(EM=58.1, F1=65.8), with substantial gains over the previous state of the art
(Yang et al., 2019b). We also show how the new pipeline enables the use of
low-resolution labels, and can be easily tuned to meet various timing
requirements.
| 2,020 | Computation and Language |
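A minimal retrieve-then-read sketch in the spirit of the pipeline described, using BM25 retrieval and an off-the-shelf extractive reader; it omits the RM3-based relevance feedback and neural ranking stages, and the three-document corpus is purely illustrative.

```python
# BM25 retrieval followed by an extractive reader (default QA model from
# the transformers pipeline). Mindstone's feedback/ranking stages are omitted.
from rank_bm25 import BM25Okapi
from transformers import pipeline

corpus = [
    "Mount Everest is Earth's highest mountain above sea level.",
    "The Amazon is the largest rainforest in the world.",
    "Paris is the capital and most populous city of France.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

question = "What is the capital of France?"
top_docs = bm25.get_top_n(question.lower().split(), corpus, n=1)

reader = pipeline("question-answering")
print(reader(question=question, context=top_docs[0])["answer"])
```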
Variational Inference-Based Dropout in Recurrent Neural Networks for
Slot Filling in Spoken Language Understanding | This paper proposes to generalize the variational recurrent neural network
(RNN) with variational inference (VI)-based dropout regularization employed for
the long short-term memory (LSTM) cells to more advanced RNN architectures like
gated recurrent unit (GRU) and bi-directional LSTM/GRU. The new variational
RNNs are employed for slot filling, which is an intriguing but challenging task
in spoken language understanding. The experiments on the ATIS dataset suggest
that the variational RNNs with the VI-based dropout regularization can
significantly improve over baseline RNN systems that use naive dropout
regularization in terms of F-measure. In particular, the variational RNN with
bi-directional LSTM/GRU obtains the best F-measure score.
| 2,020 | Computation and Language |
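VI-based dropout for RNNs is commonly realised by sampling one dropout mask per sequence and reusing it at every timestep ("locked" dropout). The sketch below shows that mask-sharing trick applied to LSTM outputs; it is an illustration of the idea only, not the paper's full variational formulation.

```python
# Locked dropout: one Bernoulli mask per sequence, shared across timesteps.
import torch
import torch.nn as nn

class LockedDropout(nn.Module):
    def __init__(self, p=0.3):
        super().__init__()
        self.p = p

    def forward(self, x):                    # x: (batch, seq, features)
        if not self.training or self.p == 0.0:
            return x
        mask = x.new_empty(x.size(0), 1, x.size(2)).bernoulli_(1 - self.p)
        return x * mask / (1 - self.p)       # same mask at every timestep

lstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
h, _ = lstm(torch.randn(8, 20, 32))
print(LockedDropout(0.3)(h).shape)           # torch.Size([8, 20, 128])
```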
FAT ALBERT: Finding Answers in Large Texts using Semantic Similarity
Attention Layer based on BERT | Machine based text comprehension has always been a significant research field
in natural language processing. Once a full understanding of the text context
and semantics is achieved, a deep learning model can be trained to solve a
large subset of tasks, e.g. text summarization, classification and question
answering. In this paper we focus on the question answering problem,
specifically the multiple choice type of questions. We develop a model based on
BERT, a state-of-the-art transformer network. Moreover, we extend the ability
of BERT to handle large text corpora by extracting the most influential
sentences through a semantic similarity model. Evaluations of our
proposed model demonstrate that it outperforms the leading models in the
MovieQA challenge and we are currently ranked first on the leaderboard with
test accuracy of 87.79%. Finally, we discuss the model shortcomings and suggest
possible improvements to overcome these limitations.
| 2,020 | Computation and Language |
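The sentence-selection step described, scoring candidate sentences by semantic similarity and keeping only the most influential ones for the reader, can be sketched as below. The embedding model name is an assumption (and a recent sentence-transformers release is assumed), not the similarity model used in the paper.

```python
# Rank sentences by cosine similarity to the question and keep the top ones.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model

question = "Why did the detective return to the hotel?"
sentences = [
    "The detective had forgotten his notebook in room 12.",
    "The weather that evening was unusually cold.",
    "The hotel bar served drinks until midnight.",
]

scores = util.cos_sim(model.encode(question), model.encode(sentences))[0]
top_k = scores.argsort(descending=True)[:2]
selected = [sentences[int(i)] for i in top_k]
print(selected)   # highest-similarity sentences passed on to the QA model
```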
Cross-Utterance Language Models with Acoustic Error Sampling | The effective exploitation of richer contextual information in language
models (LMs) is a long-standing research problem for automatic speech
recognition (ASR). A cross-utterance LM (CULM) is proposed in this paper, which
augments the input to a standard long short-term memory (LSTM) LM with a
context vector derived from past and future utterances using an extraction
network. The extraction network uses another LSTM to encode surrounding
utterances into vectors which are integrated into a context vector using either
a projection of LSTM final hidden states, or a multi-head self-attentive layer.
In addition, an acoustic error sampling technique is proposed to reduce the
mismatch between training and test time. This is achieved by incorporating
possible ASR errors into the model training procedure, and can therefore
improve the word error rate (WER). Experiments performed on both AMI and
Switchboard datasets show that CULMs outperform the LSTM LM baseline in terms of WER. In
particular, the CULM with a self-attentive layer-based extraction network and
acoustic error sampling achieves 0.6% absolute WER reduction on AMI, 0.3% WER
reduction on the Switchboard part and 0.9% WER reduction on the Callhome part
of Eval2000 test set over the respective baselines.
| 2,020 | Computation and Language |
PGST: a Polyglot Gender Style Transfer method | Recent developments in Text Style Transfer have led this field to be more
highlighted than ever. The task of transferring an input's style to another is
accompanied by plenty of challenges (e.g., fluency and content preservation)
that need to be taken care of. In this research, we introduce PGST, a novel
polyglot text style transfer approach in the gender domain, composed of
different constitutive elements. In contrast to prior studies, our method can
be applied to multiple languages by fulfilling its predefined elements. We use
a pre-trained word embedding for token replacement, a character-based token
classifier for gender exchange, and a beam search algorithm for extracting the
most fluent combination. Since different approaches are introduced in our
research, we determine a trade-off value for evaluating different models'
success in fooling our gender identification model with transferred text. To
demonstrate our method's multilingual applicability, we applied it to both
English and Persian corpora and ended up defeating our proposed gender
identification model by 45.6% and 39.2%, respectively. While this research's
focus is not limited to a specific language, the evaluation results we obtained
are highly competitive in comparison with English state-of-the-art methods.
| 2,021 | Computation and Language |
ASTRAL: Adversarial Trained LSTM-CNN for Named Entity Recognition | Named Entity Recognition (NER) is a challenging task that extracts named
entities from unstructured text data, including news, articles, social
comments, etc. The NER system has been studied for decades. Recently, the
development of Deep Neural Networks and the progress of pre-trained word
embedding have become a driving force for NER. Under such circumstances, how to
make full use of the information extracted by word embedding requires more
in-depth research. In this paper, we propose an Adversarial Trained LSTM-CNN
(ASTRAL) system to improve the current NER method from both the model structure
and the training process. In order to make use of the spatial information
between adjacent words, Gated-CNN is introduced to fuse the information of
adjacent words. In addition, a specific adversarial training method is proposed
to deal with the overfitting problem in NER. We add perturbations to variables in
the network during the training process, making the variables more diverse,
improving the generalization and robustness of the model. Our model is
evaluated on three benchmarks, CoNLL-03, OntoNotes 5.0, and WNUT-17, achieving
state-of-the-art results. Ablation study and case study also show that our
system can converge faster and is less prone to overfitting.
| 2,020 | Computation and Language |
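One common way to realise the perturbation idea described, adding noise to network variables during training, is FGM-style adversarial training on the embedding weights: perturb along the gradient direction, run a second forward/backward pass, then restore. The sketch below shows that recipe on a toy model; it is not necessarily the exact ASTRAL perturbation scheme.

```python
# FGM-style adversarial step on the embedding weights of a toy classifier.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, vocab=100, dim=16, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.fc = nn.Linear(dim, n_classes)

    def forward(self, x):
        return self.fc(self.embed(x).mean(dim=1))   # mean-pooled tokens

def adversarial_step(model, loss_fn, batch, labels, epsilon=1e-2):
    emb = model.embed.weight
    loss = loss_fn(model(batch), labels)
    loss.backward()                                  # clean gradients
    backup = emb.data.clone()
    norm = emb.grad.norm()
    if norm > 0:
        emb.data.add_(epsilon * emb.grad / norm)     # perturb along gradient
    adv_loss = loss_fn(model(batch), labels)
    adv_loss.backward()                              # accumulate adv. grads
    emb.data.copy_(backup)                           # restore clean weights
    return loss.item(), adv_loss.item()

model = TinyClassifier()
batch = torch.randint(0, 100, (4, 12))
labels = torch.randint(0, 5, (4,))
print(adversarial_step(model, nn.CrossEntropyLoss(), batch, labels))
```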
Generalisation of Cyberbullying Detection | Cyberbullying is a problem in today's ubiquitous online communities.
Filtering it out of online conversations has proven a challenge, and efforts
have led to the creation of many different datasets, all offered as resources
to train classifiers. Through these datasets, we will explore the variety of
definitions of cyberbullying behaviors and the impact of these differences on
the portability of one classifier to another community. By analyzing the
similarities between datasets, we also gain insight on the generalization power
of the classifiers trained from them. A study of ensemble models combining
these classifiers will help us understand how they interact with each other.
| 2,020 | Computation and Language |
Sentimental LIAR: Extended Corpus and Deep Learning Models for Fake
Claim Classification | The rampant integration of social media into our everyday lives and culture
has given rise to fast and easier access to the flow of information than ever
in human history. However, the inherently unsupervised nature of social media
platforms has also made it easier to spread false information and fake news.
Furthermore, the high volume and velocity of information flow in such platforms
make manual supervision and control of information propagation infeasible. This
paper aims to address this issue by proposing a novel deep learning approach
for automated detection of false short-text claims on social media. We first
introduce Sentimental LIAR, which extends the LIAR dataset of short claims by
adding features based on sentiment and emotion analysis of claims. Furthermore,
we propose a novel deep learning architecture based on the BERT-Base language
model for classification of claims as genuine or fake. Our results demonstrate
that the proposed architecture trained on Sentimental LIAR can achieve an
accuracy of 70%, which is an improvement of ~30% over previously reported
results for the LIAR benchmark.
| 2,020 | Computation and Language |
MALCOM: Generating Malicious Comments to Attack Neural Fake News
Detection Models | In recent years, the proliferation of so-called "fake news" has caused much
disruption in society and weakened the news ecosystem. Therefore, to mitigate
such problems, researchers have developed state-of-the-art models to
auto-detect fake news on social media using sophisticated data science and
machine learning techniques. In this work, then, we ask "what if adversaries
attempt to attack such detection models?" and investigate related issues by (i)
proposing a novel threat model against fake news detectors, in which
adversaries can post malicious comments toward news articles to mislead fake
news detectors, and (ii) developing MALCOM, an end-to-end adversarial comment
generation framework to achieve such an attack. Through a comprehensive
evaluation, we demonstrate that about 94% and 93.5% of the time on average
MALCOM can successfully mislead five of the latest neural detection models to
always output targeted real and fake news labels. Furthermore, MALCOM can also
fool black box fake news detectors to always output real news labels 90% of the
time on average. We also compare our attack model with four baselines across
two real-world datasets, not only on attack performance but also on generation
quality, coherency, transferability, and robustness.
| 2,020 | Computation and Language |
Too good to be true? Predicting author profiles from abusive language | The problem of online threats and abuse could potentially be mitigated with a
computational approach, where sources of abuse are better understood or
identified through author profiling. However, abusive language constitutes a
specific domain of language for which it has not yet been tested whether
differences emerge based on a text author's personality, age, or gender. This
study examines statistical relationships between author demographics and
abusive vs normal language, and performs prediction experiments for
personality, age, and gender. Although some statistical relationships were
established between author characteristics and language use, these patterns did
not translate to high prediction performance. Personality traits were predicted
within 15% of their actual value, age was predicted with an error margin of 10
years, and gender was classified correctly in 70% of the cases. These results
are poor when compared to previous research on author profiling, therefore we
urge caution in applying this within the context of abusive language and threat
assessment.
| 2,020 | Computation and Language |
An exploratory study of L1-specific non-words | In this paper, we explore L1-specific non-words, i.e. non-words in a target
language (in this case Swedish) that are re-ranked by a different-language
language model. We surmise that speakers of a certain L1 will react differently
to L1-specific non-words than to general non-words. We present the results from
two small case studies exploring whether re-ranking non-words with different
language models leads to a perceived difference in `Swedishness' (pilot study
1) and whether German and English native speakers have longer reaction times in
a lexical decision task when presented with their respective L1-specific
non-words (pilot study 2). Tentative results seem to indicate that L1-specific
non-words are processed second-slowest, after purely Swedish-looking non-words.
| 2,020 | Computation and Language |
Garain at SemEval-2020 Task 12: Sequence based Deep Learning for
Categorizing Offensive Language in Social Media | SemEval-2020 Task 12 was OffenseEval: Multilingual Offensive Language
Identification in Social Media (Zampieri et al., 2020). The task was subdivided
into multiple languages and datasets were provided for each one. The task was
further divided into three sub-tasks: offensive language identification,
automatic categorization of offense types, and offense target identification. I
have participated in the task-C, that is, offense target identification. For
preparing the proposed system, I have made use of Deep Learning networks like
LSTMs and frameworks like Keras which combine the bag of words model with
automatically generated sequence based features and manually extracted features
from the given dataset. My system on training on 25% of the whole dataset
achieves macro averaged f1 score of 47.763%.
| 2,020 | Computation and Language |
Comparative Evaluation of Pretrained Transfer Learning Models on
Automatic Short Answer Grading | Automatic Short Answer Grading (ASAG) is the process of grading the student
answers by computational approaches given a question and the desired answer.
Previous works implemented the methods of concept mapping, facet mapping, and
some used the conventional word embeddings for extracting semantic features.
They extracted multiple features manually to train on the corresponding
datasets. We use pretrained embeddings of the transfer learning models, ELMo,
BERT, GPT, and GPT-2 to assess their efficiency on this task. We train with a
single feature, cosine similarity, extracted from the embeddings of these
models. We compare the RMSE scores and correlation measurements of the four
models with previous works on Mohler dataset. Our work demonstrates that ELMo
outperformed the other three models. We also briefly describe the four
transfer learning models and conclude with the possible causes of poor results
of transfer learning models.
| 2,020 | Computation and Language |
A Simple Global Neural Discourse Parser | Discourse parsing is largely dominated by greedy parsers with
manually-designed features, while global parsing is rare due to its
computational expense. In this paper, we propose a simple chart-based neural
discourse parser that does not require any manually-crafted features and is
based on learned span representations only. To overcome the computational
challenge, we propose an independence assumption between the label assigned to
a node in the tree and the splitting point that separates its children, which
results in tractable decoding. We empirically demonstrate that our model
achieves the best performance among global parsers, and comparable performance
to state-of-art greedy parsers, using only learned span representations.
| 2,020 | Computation and Language |
Learning to summarize from human feedback | As language models become more powerful, training and evaluation are
increasingly bottlenecked by the data and metrics used for a particular task.
For example, summarization models are often trained to predict human reference
summaries and evaluated using ROUGE, but both of these metrics are rough
proxies for what we really care about -- summary quality. In this work, we show
that it is possible to significantly improve summary quality by training a
model to optimize for human preferences. We collect a large, high-quality
dataset of human comparisons between summaries, train a model to predict the
human-preferred summary, and use that model as a reward function to fine-tune a
summarization policy using reinforcement learning. We apply our method to a
version of the TL;DR dataset of Reddit posts and find that our models
significantly outperform both human reference summaries and much larger models
fine-tuned with supervised learning alone. Our models also transfer to CNN/DM
news articles, producing summaries nearly as good as the human reference
without any news-specific fine-tuning. We conduct extensive analyses to
understand our human feedback dataset and fine-tuned models. We establish that
our reward model generalizes to new datasets, and that optimizing our reward
model results in better summaries than optimizing ROUGE according to humans. We
hope the evidence from our paper motivates machine learning researchers to pay
closer attention to how their training loss affects the model behavior they
actually want.
| 2,022 | Computation and Language |
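The reward model described is trained on pairwise human comparisons; the standard objective maximises log σ(r_preferred − r_rejected). A minimal sketch of that loss follows, with a small feed-forward network and random features standing in for the actual summary encoder.

```python
# Pairwise preference loss for a reward model: -log sigmoid(r_pref - r_rej).
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 1))

def preference_loss(preferred_feats, rejected_feats):
    r_pref = reward_model(preferred_feats)       # (batch, 1) scalar rewards
    r_rej = reward_model(rejected_feats)
    return -F.logsigmoid(r_pref - r_rej).mean()

# Random 768-d vectors stand in for encoded summaries of the same post.
loss = preference_loss(torch.randn(16, 768), torch.randn(16, 768))
loss.backward()
print(float(loss))
```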
orgFAQ: A New Dataset and Analysis on Organizational FAQs and User
Questions | Frequently Asked Questions (FAQ) webpages are created by organizations for
their users. FAQs are used in several scenarios, e.g., to answer user
questions. On the other hand, the content of FAQs is affected by user questions
by definition. In order to promote research in this field, several FAQ datasets
exist. However, we claim that being collected from community websites, they do
not correctly represent challenges associated with FAQs in an organizational
context. Thus, we release orgFAQ, a new dataset composed of $6988$ user
questions and $1579$ corresponding FAQs that were extracted from organizations'
FAQ webpages in the Jobs domain. In this paper, we provide an analysis of the
properties of such FAQs, and demonstrate the usefulness of our new dataset by
utilizing it in a relevant task from the Jobs domain. We also show the value of
the orgFAQ dataset in a task from a different domain: the COVID-19 pandemic.
| 2,020 | Computation and Language |
Biomedical named entity recognition using BERT in the machine reading
comprehension framework | Recognition of biomedical entities from literature is a challenging research
focus, which is the foundation for extracting a large amount of biomedical
knowledge existing in unstructured texts into structured formats. Using the
sequence labeling framework to implement biomedical named entity recognition
(BioNER) is currently a conventional method. This method, however, often cannot
take full advantage of the semantic information in the dataset, and the
performance is not always satisfactory. In this work, instead of treating the
BioNER task as a sequence labeling problem, we formulate it as a machine
reading comprehension (MRC) problem. This formulation can introduce more prior
knowledge by utilizing well-designed queries, and no longer needs decoding
processes such as conditional random fields (CRF). We conduct experiments on
six BioNER datasets, and the experimental results demonstrate the effectiveness
of our method. Our method achieves state-of-the-art (SOTA) performance on the
BC4CHEMD, BC5CDR-Chem, BC5CDR-Disease, NCBI-Disease, BC2GM and JNLPBA datasets,
achieving F1-scores of 92.92%, 94.19%, 87.83%, 90.04%, 85.48% and 78.93%,
respectively.
| 2,021 | Computation and Language |
SRQA: Synthetic Reader for Factoid Question Answering | The question answering system can answer questions from various fields and
forms with deep neural networks, but it still lacks effective ways of handling
multiple evidences. We introduce a new model called SRQA, which stands for Synthetic
Reader for Factoid Question Answering. This model enhances the question
answering system in the multi-document scenario from three aspects: model
structure, optimization goal, and training method, corresponding to Multilayer
Attention (MA), Cross Evidence (CE), and Adversarial Training (AT)
respectively. First, we propose a multilayer attention network to obtain a
better representation of the evidences. The multilayer attention mechanism
conducts interaction between the question and the passage within each layer,
making the token representation of evidences in each layer take the
requirement of the question into account. Second, we design a cross evidence
strategy to choose the answer span within more evidences. We improve the
optimization goal, considering all the answers' locations in multiple evidences
as training targets, which leads the model to reason among multiple evidences.
Third, adversarial training is applied to high-level variables besides the
word embedding in our model. A new normalization method is also proposed for
adversarial perturbations so that we can jointly add perturbations to several
target variables. As an effective regularization method, adversarial training
enhances the model's ability to process noisy data. Combining these three
strategies, we enhance the contextual representation and locating ability of
our model, which could synthetically extract the answer span from several
evidences. We perform SRQA on the WebQA dataset, and experiments show that our
model outperforms the state-of-the-art models (the best fuzzy score of our
model is up to 78.56%, with an improvement of about 2%).
| 2,020 | Computation and Language |
The ADAPT Enhanced Dependency Parser at the IWPT 2020 Shared Task | We describe the ADAPT system for the 2020 IWPT Shared Task on parsing
enhanced Universal Dependencies in 17 languages. We implement a pipeline
approach using UDPipe and UDPipe-future to provide initial levels of
annotation. The enhanced dependency graph is either produced by a graph-based
semantic dependency parser or is built from the basic tree using a small set of
heuristics. Our results show that, for the majority of languages, a semantic
dependency parser can be successfully applied to the task of parsing enhanced
dependencies.
Unfortunately, we did not ensure a connected graph as part of our pipeline
approach and our competition submission relied on a last-minute fix to pass the
validation script which harmed our official evaluation scores significantly.
Our submission ranked eighth in the official evaluation with a macro-averaged
coarse ELAS F1 of 67.23 and a treebank average of 67.49. We later implemented
our own graph-connecting fix which resulted in a score of 79.53 (language
average) or 79.76 (treebank average), which would have placed fourth in the
competition evaluation.
| 2,020 | Computation and Language |
Grounded Language Learning Fast and Slow | Recent work has shown that large text-based neural language models, trained
with conventional supervised learning objectives, acquire a surprising
propensity for few- and one-shot learning. Here, we show that an embodied agent
situated in a simulated 3D world, and endowed with a novel dual-coding external
memory, can exhibit similar one-shot word learning when trained with
conventional reinforcement learning algorithms. After a single introduction to
a novel object via continuous visual perception and a language prompt ("This is
a dax"), the agent can re-identify the object and manipulate it as instructed
("Put the dax on the bed"). In doing so, it seamlessly integrates short-term,
within-episode knowledge of the appropriate referent for the word "dax" with
long-term lexical and motor knowledge acquired across episodes (i.e. "bed" and
"putting"). We find that, under certain training conditions and with a
particular memory writing mechanism, the agent's one-shot word-object binding
generalizes to novel exemplars within the same ShapeNet category, and is
effective in settings with unfamiliar numbers of objects. We further show how
dual-coding memory can be exploited as a signal for intrinsic motivation,
stimulating the agent to seek names for objects that may be useful for later
executing instructions. Together, the results demonstrate that deep neural
networks can exploit meta-learning, episodic memory and an explicitly
multi-modal environment to account for 'fast-mapping', a fundamental pillar of
human cognitive development and a potentially transformative capacity for
agents that interact with human users.
| 2,020 | Computation and Language |
A Python Library for Exploratory Data Analysis on Twitter Data based on
Tokens and Aggregated Origin-Destination Information | Twitter is perhaps the social media platform most amenable to research. It requires
only a few steps to obtain information, and there are plenty of libraries that
can help in this regard. Nonetheless, knowing whether a particular event is
expressed on Twitter is a challenging task that requires a considerable
collection of tweets. This proposal aims to facilitate, for interested
researchers, the process of mining events on Twitter by opening up a collection of
processed information taken from Twitter since December 2015. The events could
be related to natural disasters, health issues, and people's mobility, among
other studies that can be pursued with the library proposed. Different
applications are presented in this contribution to illustrate the library's
capabilities: an exploratory analysis of the topics discovered in tweets, a
study on similarity among dialects of the Spanish language, and a mobility
report on different countries. In summary, the Python library presented is
applied to different domains and retrieves a plethora of information, namely
daily frequencies of words and word bi-grams for the Arabic, English, Spanish,
and Russian languages, as well as mobility information related to the number of
trips between locations for more than 200 countries or territories.
| 2,021 | Computation and Language |
A Comprehensive Analysis of Information Leakage in Deep Transfer
Learning | Transfer learning is widely used for transferring knowledge from a source
domain to the target domain where the labeled data is scarce. Recently, deep
transfer learning has achieved remarkable progress in various applications.
However, in many real-world scenarios the source and target datasets usually
belong to two different organizations, which poses potential privacy issues in
deep transfer learning. In this study, to thoroughly analyze the potential
privacy leakage in deep transfer learning, we first divide previous methods
into three categories. Based on that, we demonstrate specific threats that lead
to unintentional privacy leakage in each category. Additionally, we also
provide some solutions to prevent these threats. To the best of our knowledge,
our study is the first to provide a thorough analysis of the information
leakage issues in deep transfer learning methods and provide potential
solutions to the issue. Extensive experiments on two public datasets and an
industry dataset are conducted to show the privacy leakage under different deep
transfer learning settings and the effectiveness of the defense solutions.
| 2,020 | Computation and Language |
Dynamic Context-guided Capsule Network for Multimodal Machine
Translation | Multimodal machine translation (MMT), which mainly focuses on enhancing
text-only translation with visual features, has attracted considerable
attention from both computer vision and natural language processing
communities. Most current MMT models resort to attention mechanism, global
context modeling or multimodal joint representation learning to utilize visual
features. However, the attention mechanism lacks sufficient semantic
interactions between modalities while the other two provide fixed visual
context, which is unsuitable for modeling the observed variability when
generating translation. To address the above issues, in this paper, we propose
a novel Dynamic Context-guided Capsule Network (DCCN) for MMT. Specifically, at
each timestep of decoding, we first employ the conventional source-target
attention to produce a timestep-specific source-side context vector. Next, DCCN
takes this vector as input and uses it to guide the iterative extraction of
related visual features via a context-guided dynamic routing mechanism.
Particularly, as we represent the input image with both global and regional
visual features, we introduce two parallel DCCNs to model multimodal context vectors
with visual features at different granularities. Finally, we obtain two
multimodal context vectors, which are fused and incorporated into the decoder
for the prediction of the target word. Experimental results on the Multi30K
dataset of English-to-German and English-to-French translation demonstrate the
superiority of DCCN. Our code is available on
https://github.com/DeepLearnXMU/MM-DCCN.
| 2,020 | Computation and Language |
AutoTrans: Automating Transformer Design via Reinforced Architecture
Search | Though the transformer architectures have shown dominance in many natural
language understanding tasks, there are still unsolved issues for the training
of transformer models, especially the need for a principled way of warm-up
which has proven important for stable training of a transformer, as well as
whether the task at hand prefers to scale the attention product or not. In this
paper, we empirically explore automating the design choices in the transformer
model, i.e., how to set layer-norm, whether to scale, number of layers, number
of heads, activation function, etc, so that one can obtain a transformer
architecture that better suits the tasks at hand. RL is employed to navigate
the search space, and special parameter sharing strategies are designed to
accelerate the search. It is shown that sampling a proportion of training data
per epoch during search helps to improve the search quality. Experiments on
CoNLL03, Multi-30k, IWSLT14 and WMT-14 show that the searched transformer
model can outperform the standard transformers. In particular, we show that our
learned model can be trained more robustly with large learning rates without
warm-up.
| 2,021 | Computation and Language |
Linguistically inspired morphological inflection with a sequence to
sequence model | Inflection is an essential part of every human language's morphology, yet
little effort has been made to unify linguistic theory and computational
methods in recent years. Methods of string manipulation are used to infer
inflectional changes; our research question is whether a neural network would
be capable of learning inflectional morphemes for inflection production in a
similar way to a human in early stages of language acquisition. We are using an
inflectional corpus (Metheniti and Neumann, 2020) and a single layer seq2seq
model to test this hypothesis, in which the inflectional affixes are learned
and predicted as a block and the word stem is modelled as a character sequence
to account for infixation. Our character-morpheme-based model creates
inflection by predicting the stem character-to-character and the inflectional
affixes as character blocks. We conducted three experiments on creating an
inflected form of a word given the lemma and a set of input and target
features, comparing our architecture to a mainstream character-based model with
the same hyperparameters, training and test sets. Overall for 17 languages, we
noticed small improvements on inflecting known lemmas (+0.68%) but steadily
better performance of our model in predicting inflected forms of unknown words
(+3.7%) and small improvements on predicting in a low-resource scenario
(+1.09%).
| 2,020 | Computation and Language |
Going Beyond T-SNE: Exposing \texttt{whatlies} in Text Embeddings | We introduce whatlies, an open source toolkit for visually inspecting word
and sentence embeddings. The project offers a unified and extensible API with
current support for a range of popular embedding backends including spaCy,
tfhub, huggingface transformers, gensim, fastText and BytePair embeddings. The
package combines a domain specific language for vector arithmetic with
visualisation tools that make exploring word embeddings more intuitive and
concise. It offers support for many popular dimensionality reduction techniques
as well as many interactive visualisations that can either be statically
exported or shared via Jupyter notebooks. The project documentation is
available from https://rasahq.github.io/whatlies/.
| 2,020 | Computation and Language |
KILT: a Benchmark for Knowledge Intensive Language Tasks | Challenging problems such as open-domain question answering, fact checking,
slot filling and entity linking require access to large, external knowledge
sources. While some models do well on individual tasks, developing general
models is difficult as each task might require computationally expensive
indexing of custom knowledge sources, in addition to dedicated infrastructure.
To catalyze research on models that condition on specific information in large
textual resources, we present a benchmark for knowledge-intensive language
tasks (KILT). All tasks in KILT are grounded in the same snapshot of Wikipedia,
reducing engineering turnaround through the re-use of components, as well as
accelerating research into task-agnostic memory architectures. We test both
task-specific and general baselines, evaluating downstream performance in
addition to the ability of the models to provide provenance. We find that a
shared dense vector index coupled with a seq2seq model is a strong baseline,
outperforming more tailor-made approaches for fact checking, open-domain
question answering and dialogue, and yielding competitive results on entity
linking and slot filling, by generating disambiguated text. KILT data and code
are available at https://github.com/facebookresearch/KILT.
| 2,021 | Computation and Language |
Recent Trends in the Use of Deep Learning Models for Grammar Error
Handling | Grammar error handling (GEH) is an important topic in natural language
processing (NLP). GEH includes both grammar error detection and grammar error
correction. Recent advances in computation systems have promoted the use of
deep learning (DL) models for NLP problems such as GEH. In this survey we focus
on two main DL approaches for GEH: neural machine translation models and editor
models. We describe the three main stages of the pipeline for these models:
data preparation, training, and inference. Additionally, we discuss different
techniques to improve the performance of these models at each stage of the
pipeline. We compare the performance of different models and conclude with
proposed future directions.
| 2,020 | Computation and Language |
Accenture at CheckThat! 2020: If you say so: Post-hoc fact-checking of
claims using transformer-based models | We introduce the strategies used by the Accenture Team for the CLEF2020
CheckThat! Lab, Task 1, on English and Arabic. This shared task evaluated
whether a claim in social media text should be professionally fact checked. To
a journalist, a statement presented as fact, which would be of interest to a
large audience, requires professional fact-checking before dissemination. We
utilized BERT and RoBERTa models to identify claims in social media text a
professional fact-checker should review, and rank these in priority order for
the fact-checker. For the English challenge, we fine-tuned a RoBERTa model and
added an extra mean pooling layer and a dropout layer to enhance
generalizability to unseen text. For the Arabic task, we fine-tuned
Arabic-language BERT models and demonstrate the use of back-translation to
amplify the minority class and balance the dataset. The work presented here was
scored 1st place in the English track, and 1st, 2nd, 3rd, and 4th place in the
Arabic track.
| 2,020 | Computation and Language |
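The English system described above fine-tunes RoBERTa with an extra mean-pooling layer and a dropout layer before the classifier. Below is a hedged sketch of such a head using the Hugging Face `transformers` library; the model name, head sizes, dropout rate, and example tweet are assumptions, not the team's released code.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

class CheckWorthyClassifier(nn.Module):
    """RoBERTa encoder + mean pooling + dropout + linear classifier (sketch)."""
    def __init__(self, encoder, num_labels=2, dropout=0.1):
        super().__init__()
        self.encoder = encoder
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Mean-pool token vectors, ignoring padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.classifier(self.dropout(pooled))

batch = tokenizer(["This claim should probably be fact-checked."],
                  return_tensors="pt", padding=True)
model = CheckWorthyClassifier(encoder)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # (1, 2)
```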
Bio-inspired Structure Identification in Language Embeddings | Word embeddings are a popular way to improve downstream performances in
contemporary language modeling. However, the underlying geometric structure of
the embedding space is not well understood. We present a series of explorations
using bio-inspired methodology to traverse and visualize word embeddings,
demonstrating evidence of discernible structure. Moreover, our model also
produces word similarity rankings that are plausible yet very different from
common similarity metrics, mainly cosine similarity and Euclidean distance. We
show that our bio-inspired model can be used to investigate how different word
embedding techniques result in different semantic outputs, which can emphasize
or obscure particular interpretations in textual data.
| 2,020 | Computation and Language |
MIDAS at SemEval-2020 Task 10: Emphasis Selection using Label
Distribution Learning and Contextual Embeddings | This paper presents our submission to the SemEval 2020 - Task 10 on emphasis
selection in written text. We approach this emphasis selection problem as a
sequence labeling task where we represent the underlying text with various
contextual embedding models. We also employ label distribution learning to
account for annotator disagreements. We experiment with the choice of model
architectures, trainability of layers, and different contextual embeddings. Our
best performing architecture is an ensemble of different models, which achieved
an overall matching score of 0.783, placing us 15th out of 31 participating
teams. Lastly, we analyze the results in terms of parts of speech tags,
sentence lengths, and word ordering.
| 2,020 | Computation and Language |
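Label distribution learning, as used above to account for annotator disagreement, trains the token classifier against the distribution of annotator votes rather than a single hard label. A minimal sketch with a KL-divergence loss follows; the shapes and vote fractions are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

# (batch=1, tokens=5): fraction of annotators that emphasised each token.
emph_prob = torch.tensor([[0.2, 0.8, 0.6, 0.0, 0.4]])
target = torch.stack([1.0 - emph_prob, emph_prob], dim=-1)   # (1, 5, 2)

logits = torch.randn(1, 5, 2, requires_grad=True)            # model outputs
log_pred = F.log_softmax(logits, dim=-1)

# KL divergence between the annotator distribution and the prediction.
loss = F.kl_div(log_pred, target, reduction="batchmean")
loss.backward()
print(float(loss))
```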
QiaoNing at SemEval-2020 Task 4: Commonsense Validation and Explanation
system based on ensemble of language model | In this paper, we present the language model system we submitted to the
SemEval-2020 Task 4 competition: "Commonsense Validation and Explanation". We
participated in both subtasks, subtask A (Validation) and subtask B
(Explanation). We applied transfer learning with pretrained language models
(BERT, XLNet, RoBERTa, and ALBERT) and fine-tuned them on this task. We then
compared their characteristics on this task to help future researchers
understand and use these models more appropriately. The ensembled model solves
the problem better, reaching an accuracy of 95.9% on subtask A, only 3% below
human accuracy.
| 2,020 | Computation and Language |
Once Upon A Time In Visualization: Understanding the Use of Textual
Narratives for Causality | Causality visualization can help people understand temporal chains of events,
such as messages sent in a distributed system, cause and effect in a historical
conflict, or the interplay between political actors over time. However, as the
scale and complexity of these event sequences grows, even these visualizations
can become overwhelming to use. In this paper, we propose the use of textual
narratives as a data-driven storytelling method to augment causality
visualization. We first propose a design space for how textual narratives can
be used to describe causal data. We then present results from a crowdsourced
user study where participants were asked to recover causality information from
two causality visualizations--causal graphs and Hasse diagrams--with and
without an associated textual narrative. Finally, we describe CAUSEWORKS, a
causality visualization system for understanding how specific interventions
influence a causal model. The system incorporates an automatic textual
narrative mechanism based on our design space. We validate CAUSEWORKS through
interviews with experts who used the system for understanding complex events.
| 2,020 | Computation and Language |
BANANA at WNUT-2020 Task 2: Identifying COVID-19 Information on Twitter
by Combining Deep Learning and Transfer Learning Models | The outbreak of the COVID-19 virus has had a significant impact on the health
of people all over the world. Therefore, it is essential that everyone has
access to constant and accurate information about the disease. This paper
describes our prediction system for WNUT-2020 Task 2: Identification of
Informative COVID-19 English Tweets. The dataset for this task contains 10,000
English tweets labeled by humans. An ensemble of our three transformer and deep
learning models is used for the final prediction. The experimental results
indicate that our system achieved an F1 score of 88.81% for the INFORMATIVE
label on the test set.
| 2,021 | Computation and Language |
Automatic Dialect Adaptation in Finnish and its Effect on Perceived
Creativity | We present a novel approach for adapting text written in standard Finnish to
different dialects. We experiment with character-level NMT models using both
multi-dialectal and transfer learning approaches. The models are tested on
over 20 different dialects. The results seem to favor transfer learning,
although not strongly over the multi-dialectal approach. We study the influence
that dialectal adaptation has on the perceived creativity of computer-generated poetry.
Our results suggest that the more the dialect deviates from the standard
Finnish, the lower scores people tend to give on an existing evaluation metric.
However, on a word association test, people associate creativity and
originality more with dialect and fluency more with standard Finnish.
| 2,020 | Computation and Language |
SemEval-2020 Task 11: Detection of Propaganda Techniques in News
Articles | We present the results and the main findings of SemEval-2020 Task 11 on
Detection of Propaganda Techniques in News Articles. The task featured two
subtasks. Subtask SI is about Span Identification: given a plain-text document,
spot the specific text fragments containing propaganda. Subtask TC is about
Technique Classification: given a specific text fragment, in the context of a
full document, determine the propaganda technique it uses, choosing from an
inventory of 14 possible propaganda techniques. The task attracted a large
number of participants: 250 teams signed up to participate and 44 made a
submission on the test set. In this paper, we present the task, analyze the
results, and discuss the system submissions and the methods they used. For both
subtasks, the best systems used pre-trained Transformers and ensembles.
| 2,020 | Computation and Language |
Romanian Diacritics Restoration Using Recurrent Neural Networks | Diacritics restoration is a mandatory step for adequately processing Romanian
texts, and not a trivial one, as context is generally needed in order to
properly restore a character. Most previous methods that have been tried for
restoring diacritics in Romanian do not use neural networks. Among those that
do, there are no solutions specifically optimized for this particular language
(i.e., they were generally designed to work on many different languages).
We therefore propose a novel neural architecture based on recurrent neural
networks that can attend to information at different levels of abstraction in
order to restore diacritics.
| 2,020 | Computation and Language |
UPB at SemEval-2020 Task 8: Joint Textual and Visual Modeling in a
Multi-Task Learning Architecture for Memotion Analysis | Users from the online environment can create different ways of expressing
their thoughts, opinions, or conception of amusement. Internet memes were
created specifically for these situations. Their main purpose is to transmit
ideas by using combinations of images and texts such that they will create a
certain state for the receptor, depending on the message the meme has to send.
These posts can be related to various situations or events, thus adding a funny
side to any circumstance our world is situated in. In this paper, we describe
the system developed by our team for SemEval-2020 Task 8: Memotion Analysis.
More specifically, we introduce a novel system to analyze these posts, a
multimodal multi-task learning architecture that combines ALBERT for text
encoding with VGG-16 for image representation. In this manner, we show that the
information behind them can be properly revealed. Our approach achieves good
performance on each of the three subtasks of the current competition, ranking
11th for Subtask A (0.3453 macro F1-score), 1st for Subtask B (0.5183 macro
F1-score), and 3rd for Subtask C (0.3171 macro F1-score) while exceeding the
official baseline results by high margins.
| 2,020 | Computation and Language |
UPB at SemEval-2020 Task 9: Identifying Sentiment in Code-Mixed Social
Media Texts using Transformers and Multi-Task Learning | Sentiment analysis is a process widely used in opinion mining campaigns
conducted today. This phenomenon presents applications in a variety of fields,
especially in collecting information related to the attitude or satisfaction of
users concerning a particular subject. However, the task of managing such a
process becomes noticeably more difficult when it is applied in cultures that
tend to combine two languages in order to express ideas and thoughts. By
interleaving words from two languages, users can express themselves with ease,
but at the cost of making the text far less intelligible, both for those who
are not familiar with this technique and for standard opinion mining algorithms.
In this paper, we describe the systems developed by our team for SemEval-2020
Task 9 that aims to cover two well-known code-mixed languages: Hindi-English
and Spanish-English.
We intend to solve this issue by introducing a solution that takes advantage
of several neural network approaches, as well as pre-trained word embeddings.
Our approach (multilingual BERT) achieves promising performance on the
Hindi-English task, with an average F1-score of 0.6850, registered on the
competition leaderboard, ranking our team 16th out of 62 participants. For the
Spanish-English task, we obtained an average F1-score of 0.7064 ranking our
team 17th out of 29 participants by using another multilingual
Transformer-based model, XLM-RoBERTa.
| 2,020 | Computation and Language |
Duluth at SemEval-2020 Task 7: Using Surprise as a Key to Unlock
Humorous Headlines | We use pretrained transformer-based language models in SemEval-2020 Task 7:
Assessing the Funniness of Edited News Headlines. Inspired by the incongruity
theory of humor, we use a contrastive approach to capture the surprise in the
edited headlines. In the official evaluation, our system gets 0.531 RMSE in
Subtask 1, 11th among 49 submissions. In Subtask 2, our system gets 0.632
accuracy, 9th among 32 submissions.
| 2,020 | Computation and Language |
E-BERT: A Phrase and Product Knowledge Enhanced Language Model for
E-commerce | Pre-trained language models such as BERT have achieved great success in a
broad range of natural language processing tasks. However, BERT cannot well
support E-commerce related tasks due to the lack of two levels of domain
knowledge, i.e., phrase-level and product-level. On one hand, many E-commerce
tasks require an accurate understanding of domain phrases, whereas such
fine-grained phrase-level knowledge is not explicitly modeled by BERT's
training objective. On the other hand, product-level knowledge like product
associations can enhance the language modeling of E-commerce, but they are not
factual knowledge thus using them indiscriminately may introduce noise. To
tackle the problem, we propose a unified pre-training framework, namely,
E-BERT. Specifically, to preserve phrase-level knowledge, we introduce Adaptive
Hybrid Masking, which allows the model to adaptively switch from learning
preliminary word knowledge to learning complex phrases, based on the fitting
progress of two modes. To utilize product-level knowledge, we introduce
Neighbor Product Reconstruction, which trains E-BERT to predict a product's
associated neighbors with a denoising cross attention layer. Our investigation
reveals promising results in four downstream tasks, i.e., review-based question
answering, aspect extraction, aspect sentiment classification, and product
classification.
| 2,021 | Computation and Language |
TransModality: An End2End Fusion Method with Transformer for Multimodal
Sentiment Analysis | Multimodal sentiment analysis is an important research area that predicts
speaker's sentiment tendency through features extracted from textual, visual
and acoustic modalities. The central challenge is the fusion method of the
multimodal information. A variety of fusion methods have been proposed, but few
of them adopt end-to-end translation models to mine the subtle correlation
between modalities. Enlightened by recent success of Transformer in the area of
machine translation, we propose a new fusion method, TransModality, to address
the task of multimodal sentiment analysis. We assume that translation between
modalities contributes to a better joint representation of speaker's utterance.
With Transformer, the learned features embody the information both from the
source modality and the target modality. We validate our model on multiple
multimodal datasets: CMU-MOSI, MELD, IEMOCAP. The experiments show that our
proposed method achieves the state-of-the-art performance.
| 2,020 | Computation and Language |
Team Alex at CLEF CheckThat! 2020: Identifying Check-Worthy Tweets With
Transformer Models | While misinformation and disinformation have been thriving in social media
for years, with the emergence of the COVID-19 pandemic, the political and the
health misinformation merged, thus elevating the problem to a whole new level
and giving rise to the first global infodemic. The fight against this infodemic
has many aspects, with fact-checking and debunking false and misleading claims
being among the most important ones. Unfortunately, manual fact-checking is
time-consuming and automatic fact-checking is resource-intense, which means
that we need to pre-filter the input social media posts and to throw out those
that do not appear to be check-worthy. With this in mind, here we propose a
model for detecting check-worthy tweets about COVID-19, which combines deep
contextualized text representations with modeling the social context of the
tweet. We further describe a number of additional experiments and comparisons,
which we believe should be useful for future research as they provide some
indication about what techniques are effective for the task. Our official
submission to the English version of CLEF-2020 CheckThat! Task 1, system
Team_Alex, was ranked second with a MAP score of 0.8034, which is almost tied
with the winning system, lagging behind by just 0.003 MAP points absolute.
| 2,020 | Computation and Language |
UIT-HSE at WNUT-2020 Task 2: Exploiting CT-BERT for Identifying COVID-19
Information on the Twitter Social Network | Recently, COVID-19 has affected a variety of real-life aspects of the world
and led to dreadful consequences. More and more tweets about COVID-19 have been
shared publicly on Twitter. However, the majority of those tweets are
uninformative, which makes it challenging to build automatic systems that
detect the informative ones for useful AI applications. In this paper, we present our
results at the W-NUT 2020 Shared Task 2: Identification of Informative COVID-19
English Tweets. In particular, we propose our simple but effective approach
using the transformer-based models based on COVID-Twitter-BERT (CT-BERT) with
different fine-tuning techniques. As a result, we achieve the F1-Score of
90.94\% with the third place on the leaderboard of this task which attracted 56
submitted teams in total.
| 2,020 | Computation and Language |
TorchKGE: Knowledge Graph Embedding in Python and PyTorch | TorchKGE is a Python module for knowledge graph (KG) embedding relying solely
on PyTorch. This package provides researchers and engineers with a clean and
efficient API to design and test new models. It features a KG data structure,
simple model interfaces and modules for negative sampling and model evaluation.
Its main strength is a very fast evaluation module for the link prediction
task, a central application of KG embedding. Various KG embedding models are
also already implemented. Special attention has been paid to code efficiency
and simplicity, documentation and API consistency. It is distributed using PyPI
under BSD license. Source code and pointers to documentation and deployment can
be found at https://github.com/torchkge-team/torchkge.
| 2,020 | Computation and Language |
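For readers unfamiliar with knowledge graph embedding, the sketch below shows the kind of model a package like TorchKGE implements and evaluates: a generic TransE scoring function written in plain PyTorch. This is only an illustration of the technique, not the TorchKGE API itself.

```python
import torch

# Generic TransE: an entity and relation embedding table plus a score that
# rewards triples where head + relation lands close to tail.
num_entities, num_relations, dim = 1000, 50, 100
ent = torch.nn.Embedding(num_entities, dim)
rel = torch.nn.Embedding(num_relations, dim)

def transe_score(h, r, t):
    """Higher is better: negative L2 distance between h + r and t."""
    return -(ent(h) + rel(r) - ent(t)).norm(p=2, dim=-1)

heads = torch.tensor([0, 1])
rels = torch.tensor([3, 3])
tails = torch.tensor([42, 7])
print(transe_score(heads, rels, tails))
```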
Uncovering the Corona Virus Map Using Deep Entities and Relationship
Models | We extract entities and relationships related to COVID-19 from a corpus of
articles related to Corona virus by employing a novel entities and relationship
model. The entity recognition and relationship discovery models are trained
with a multi-task learning objective on a large annotated corpus. We employ a
concept masking paradigm to prevent the evolution of neural networks
functioning as an associative memory and to induce the right inductive bias,
guiding the network to make inferences using only the context. We uncover
several important subnetworks, highlight important terms and concepts, and elucidate several
treatment modalities employed in related ailments in the past.
| 2,020 | Computation and Language |
Robust Spoken Language Understanding with RL-based Value Error Recovery | Spoken Language Understanding (SLU) aims to extract structured semantic
representations (e.g., slot-value pairs) from speech recognized texts, which
suffers from errors of Automatic Speech Recognition (ASR). To alleviate the
problem caused by ASR-errors, previous works may apply input adaptations to the
speech recognized texts, or correct ASR errors in predicted values by searching
the most similar candidates in pronunciation. However, these two methods are
applied separately and independently. In this work, we propose a new robust SLU
framework to guide the SLU input adaptation with a rule-based value error
recovery module. The framework consists of a slot tagging model and a
rule-based value error recovery module. We pursue an adapted slot tagging
model that can extract potential slot-value pairs mentioned in ASR hypotheses
and is suitable for the existing value error recovery module. After the value
error recovery, we obtain a supervision signal (reward) by comparing
refined slot-value pairs with annotations. Since operations of the value error
recovery are non-differentiable, we exploit policy gradient based Reinforcement
Learning (RL) to optimize the SLU model. Extensive experiments on the public
CATSLU dataset show the effectiveness of our proposed approach, which can
improve the robustness of SLU and outperform the baselines by significant
margins.
| 2,020 | Computation and Language |
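Because the value error recovery module is non-differentiable, the supervision signal above has to be fed back with policy-gradient RL. Below is a hedged sketch of the REINFORCE-style update, with placeholder logits and a hand-set reward standing in for the comparison against annotations.

```python
import torch

# Sketch: the slot tagger samples a tag sequence, the (non-differentiable)
# value error recovery module turns it into slot-value pairs, and the match
# against annotations becomes a reward weighting the sampled log-likelihood.
logits = torch.randn(6, 5, requires_grad=True)           # (tokens, tags)
log_probs = torch.log_softmax(logits, dim=-1)
sampled_tags = torch.distributions.Categorical(logits=log_probs).sample()
sampled_logp = log_probs[torch.arange(6), sampled_tags].sum()

reward = 0.7    # e.g. F1 of recovered slot-value pairs vs. the annotation
baseline = 0.5  # running-average reward, reduces gradient variance

loss = -(reward - baseline) * sampled_logp   # REINFORCE objective
loss.backward()
```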
Why Not Simply Translate? A First Swedish Evaluation Benchmark for
Semantic Similarity | This paper presents the first Swedish evaluation benchmark for textual
semantic similarity. The benchmark is compiled by simply running the English
STS-B dataset through the Google machine translation API. This paper discusses
potential problems with using such a simple approach to compile a Swedish
evaluation benchmark, including translation errors, vocabulary variation, and
productive compounding. Despite some obvious problems with the resulting
dataset, we use the benchmark to compare the majority of the currently existing
Swedish text representations, demonstrating that native models outperform
multilingual ones, and that simple bag of words performs remarkably well.
| 2,020 | Computation and Language |
COVCOR20 at WNUT-2020 Task 2: An Attempt to Combine Deep Learning and
Expert rules | In the scope of WNUT-2020 Task 2, we developed various text classification
systems, using deep learning models and one using linguistically informed
rules. While both of the deep learning systems outperformed the system using
the linguistically informed rules, we found that through the integration of
(the output of) the three systems a better performance could be achieved than
the standalone performance of each approach in a cross-validation setting.
However, on the test data the performance of the integration was slightly lower
than our best performing deep learning model. These results hardly indicate any
progress along the line of integrating machine learning and expert-rule-driven
systems. We expect that the release of the annotation manuals and gold labels
of the test data after this workshop will shed light on these perplexing
results.
| 2,020 | Computation and Language |
NLP-CIC at SemEval-2020 Task 9: Analysing sentiment in code-switching
language using a simple deep-learning classifier | Code-switching is a phenomenon in which two or more languages are used in the
same message. Nowadays, it is quite common to find messages with languages
mixed in social media. This phenomenon presents a challenge for sentiment
analysis. In this paper, we use a standard convolutional neural network model
to predict the sentiment of tweets in a blend of Spanish and English languages.
Our simple approach achieved an F1-score of 0.71 on the competition test set.
We analyze our best model capabilities and perform error analysis to expose
important difficulties for classifying sentiment in a code-switching setting.
| 2,020 | Computation and Language |
Is Everything Fine, Grandma? Acoustic and Linguistic Modeling for Robust
Elderly Speech Emotion Recognition | Acoustic and linguistic analysis for elderly emotion recognition is an
under-studied and challenging research direction, but essential for the
creation of digital assistants for the elderly, as well as unobtrusive
telemonitoring of elderly in their residences for mental healthcare purposes.
This paper presents our contribution to the INTERSPEECH 2020 Computational
Paralinguistics Challenge (ComParE) - Elderly Emotion Sub-Challenge, which is
comprised of two ternary classification tasks for arousal and valence
recognition. We propose a bi-modal framework, where these tasks are modeled
using state-of-the-art acoustic and linguistic features, respectively. In this
study, we demonstrate that exploiting task-specific dictionaries and resources
can boost the performance of linguistic models, when the amount of labeled data
is small. Observing a high mismatch between development and test set
performances of various models, we also propose alternative training and
decision fusion strategies to better estimate and improve the generalization
performance.
| 2,020 | Computation and Language |
kk2018 at SemEval-2020 Task 9: Adversarial Training for Code-Mixing
Sentiment Classification | Code switching is a linguistic phenomenon that may occur within a
multilingual setting where speakers share more than one language. With the
increasing communication between groups with different languages, this
phenomenon is becoming more and more common. However, there is little research
and data in this area, especially in code-mixing sentiment classification. In this
work, the domain transfer learning from state-of-the-art uni-language model
ERNIE is tested on the code-mixing dataset, and surprisingly, a strong baseline
is achieved. Furthermore, the adversarial training with a multi-lingual model
is used to achieve 1st place of SemEval-2020 Task 9 Hindi-English sentiment
classification competition.
| 2,020 | Computation and Language |
Simple is Better! Lightweight Data Augmentation for Low Resource Slot
Filling and Intent Classification | Neural-based models have achieved outstanding performance on slot filling and
intent classification, when fairly large in-domain training data are available.
However, as new domains are frequently added, creating sizeable data is
expensive. We show that lightweight augmentation, a set of augmentation methods
involving word span and sentence level operations, alleviates data scarcity
problems. Our experiments on limited data settings show that lightweight
augmentation yields significant performance improvement on slot filling on the
ATIS and SNIPS datasets, and achieves competitive performance with respect to
more complex, state-of-the-art, augmentation approaches. Furthermore,
lightweight augmentation is also beneficial when combined with pre-trained
LM-based models, as it improves BERT-based joint intent and slot filling
models.
| 2,020 | Computation and Language |
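One example of a word-span-level operation in the spirit of the lightweight augmentation described above is substituting a slot's value with another value of the same slot type. The sketch below is illustrative only; the slot inventory and BIO conventions are assumptions, not the paper's exact operations.

```python
import random

# Hypothetical slot inventory collected from the training data.
slot_values = {"city": [["new", "york"], ["san", "francisco"], ["boston"]]}

def substitute_slot(tokens, tags, slot="city"):
    """tokens/tags follow BIO tagging, e.g. tags ['O', 'B-city', 'I-city']."""
    span = [i for i, t in enumerate(tags) if t.endswith(slot)]
    if not span:
        return tokens, tags
    new_value = random.choice(slot_values[slot])
    new_tags = ["B-" + slot] + ["I-" + slot] * (len(new_value) - 1)
    i, j = span[0], span[-1] + 1
    return tokens[:i] + new_value + tokens[j:], tags[:i] + new_tags + tags[j:]

print(substitute_slot(["fly", "to", "boston"], ["O", "O", "B-city"]))
```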
ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by
Pre-trained Language Model | This paper describes the system designed by ERNIE Team which achieved the
first place in SemEval-2020 Task 10: Emphasis Selection For Written Text in
Visual Media. Given a sentence, we are asked to find out the most important
words as the suggestion for automated design. We leverage the unsupervised
pre-training model and finetune these models on our task. After our
investigation, we found that the following models achieved an excellent
performance in this task: ERNIE 2.0, XLM-ROBERTA, ROBERTA and ALBERT. We
combine a pointwise regression loss and a pairwise ranking loss, which is
closer to the final Match_m metric, to finetune our models. We also find that
additional feature engineering and data augmentation can help improve the
performance. Our best model achieves the highest score of 0.823 and ranks first
on all metrics.
| 2,020 | Computation and Language |
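The combination of a pointwise regression loss with a pairwise ranking loss over emphasis scores can be sketched as below; the margin and mixing weight are assumptions, not the ERNIE team's values.

```python
import torch
import torch.nn.functional as F

pred = torch.tensor([0.9, 0.2, 0.6], requires_grad=True)   # predicted emphasis
gold = torch.tensor([0.8, 0.1, 0.7])                       # annotator scores

pointwise = F.mse_loss(pred, gold)

# Pairwise: for every token pair, the token with the higher gold score should
# also receive the higher predicted score (margin ranking loss).
i, j = torch.triu_indices(3, 3, offset=1)
sign = torch.sign(gold[i] - gold[j])
pairwise = F.margin_ranking_loss(pred[i], pred[j], sign, margin=0.1)

loss = pointwise + 0.5 * pairwise   # mixing weight is an assumption
loss.backward()
```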
LynyrdSkynyrd at WNUT-2020 Task 2: Semi-Supervised Learning for
Identification of Informative COVID-19 English Tweets | We describe our system for WNUT-2020 shared task on the identification of
informative COVID-19 English tweets. Our system is an ensemble of various
machine learning methods, leveraging both traditional feature-based classifiers
as well as recent advances in pre-trained language models that help in
capturing the syntactic, semantic, and contextual features from the tweets. We
further employ pseudo-labelling to incorporate the unlabelled Twitter data
released on the pandemic. Our best performing model achieves an F1-score of
0.9179 on the provided validation set and 0.8805 on the blind test-set.
| 2,020 | Computation and Language |
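Pseudo-labelling, as used above to exploit the unlabelled pandemic tweets, can be sketched as a simple self-training loop. The `model` object and its sklearn-style `fit`/`predict_proba` interface are hypothetical placeholders, not the team's system.

```python
CONFIDENCE = 0.9  # only keep predictions the model is confident about

def pseudo_label(model, labelled, unlabelled, rounds=2):
    """labelled: list of (text, label) pairs; unlabelled: list of texts."""
    for _ in range(rounds):
        texts, labels = zip(*labelled)
        model.fit(list(texts), list(labels))        # train on current labels
        confident = []
        for text in unlabelled:
            probs = model.predict_proba([text])[0]
            label = int(probs.argmax())
            if probs[label] >= CONFIDENCE:
                confident.append((text, label))     # add as pseudo-label
        labelled = labelled + confident
    return model, labelled
```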
Quantifying the Causal Effects of Conversational Tendencies | Understanding what leads to effective conversations can aid the design of
better computer-mediated communication platforms. In particular, prior
observational work has sought to identify behaviors of individuals that
correlate to their conversational efficiency. However, translating such
correlations to causal interpretations is a necessary step in using them in a
prescriptive fashion to guide better designs and policies.
In this work, we formally describe the problem of drawing causal links
between conversational behaviors and outcomes. We focus on the task of
determining a particular type of policy for a text-based crisis counseling
platform: how best to allocate counselors based on their behavioral tendencies
exhibited in their past conversations. We apply arguments derived from causal
inference to underline key challenges that arise in conversational settings
where randomized trials are hard to implement. Finally, we show how to
circumvent these inference challenges in our particular domain, and illustrate
the potential benefits of an allocation policy informed by the resulting
prescriptive information.
| 2,020 | Computation and Language |
Covid-Transformer: Detecting COVID-19 Trending Topics on Twitter Using
Universal Sentence Encoder | The novel corona-virus disease (also known as COVID-19) has led to a
pandemic, impacting more than 200 countries across the globe. With its global
impact, COVID-19 has become a major concern of people almost everywhere, and
therefore there are a large number of tweets coming out from every corner of
the world, about COVID-19 related topics. In this work, we try to analyze the
tweets and detect the trending topics and major concerns of people on Twitter,
which can enable us to better understand the situation, and devise better
planning. More specifically we propose a model based on the universal sentence
encoder to detect the main topics of tweets in recent months. We used the
universal sentence encoder to derive the semantic representations and the
similarity of tweets. We then used the sentence similarities and their
embeddings, and fed them to the K-means clustering algorithm to group
semantically similar tweets. After that, the cluster summary is obtained using a
text summarization algorithm based on deep learning, which can uncover the
underlying topics of each cluster. Through experimental results, we show that
our model can detect very informative topics, by processing a large number of
tweets on sentence level (which can preserve the overall meaning of the
tweets). Since this framework has no restriction on specific data distribution,
it can be used to detect trending topics from any other social media and any
other context than COVID-19. Experimental results show the superiority of
our proposed approach over other baselines, including TF-IDF and latent
Dirichlet allocation (LDA).
| 2,020 | Computation and Language |
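A minimal sketch of the clustering step: tweet embeddings (assumed to be precomputed with the Universal Sentence Encoder, which outputs 512-dimensional vectors; random vectors stand in for them here) are grouped with K-means, and each cluster would then be passed to a summarizer.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
tweet_embeddings = rng.normal(size=(500, 512))   # placeholder USE vectors

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(tweet_embeddings)

for c in range(8):
    members = np.where(cluster_ids == c)[0]
    # In the full pipeline, the tweets in `members` would be summarised
    # to expose the cluster's underlying topic.
    print(f"cluster {c}: {len(members)} tweets")
```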
Probabilistic Predictions of People Perusing: Evaluating Metrics of
Language Model Performance for Psycholinguistic Modeling | By positing a relationship between naturalistic reading times and
information-theoretic surprisal, surprisal theory (Hale, 2001; Levy, 2008)
provides a natural interface between language models and psycholinguistic
models. This paper re-evaluates a claim due to Goodkind and Bicknell (2018)
that a language model's ability to model reading times is a linear function of
its perplexity. By extending Goodkind and Bicknell's analysis to modern neural
architectures, we show that the proposed relation does not always hold for Long
Short-Term Memory networks, Transformers, and pre-trained models. We introduce
an alternate measure of language modeling performance called predictability
norm correlation based on Cloze probabilities measured from human subjects. Our
new metric yields a more robust relationship between language model quality and
psycholinguistic modeling performance that allows for comparison between models
with different training configurations.
| 2,021 | Computation and Language |
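Surprisal, the quantity that links language models to reading times in this line of work, is the negative log probability a model assigns to each token given its preceding context. Below is a hedged sketch of computing it with GPT-2 via the `transformers` library; this is an illustration, not the paper's evaluation pipeline.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentence = "The old man the boats."
ids = tokenizer(sentence, return_tensors="pt")["input_ids"]

with torch.no_grad():
    logits = model(ids).logits                 # (1, seq_len, vocab)

log_probs = torch.log_softmax(logits, dim=-1)
# The prediction for token t comes from position t-1.
token_logp = log_probs[0, :-1].gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
surprisal_bits = -token_logp / torch.log(torch.tensor(2.0))   # nats -> bits

for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist()),
                  surprisal_bits):
    print(f"{tok:>12s}  {s.item():5.2f} bits")
```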
Revisiting LSTM Networks for Semi-Supervised Text Classification via
Mixed Objective Function | In this paper, we study bidirectional LSTM network for the task of text
classification using both supervised and semi-supervised approaches. Several
prior works have suggested that either complex pretraining schemes using
unsupervised methods such as language modeling (Dai and Le 2015; Miyato, Dai,
and Goodfellow 2016) or complicated models (Johnson and Zhang 2017) are
necessary to achieve a high classification accuracy. However, we develop a
training strategy that allows even a simple BiLSTM model, when trained with
cross-entropy loss, to achieve competitive results compared with more complex
approaches. Furthermore, in addition to cross-entropy loss, by using a
combination of entropy minimization, adversarial, and virtual adversarial
losses for both labeled and unlabeled data, we report state-of-the-art results
for text classification task on several benchmark datasets. In particular, on
the ACL-IMDB sentiment analysis and AG-News topic classification datasets, our
method outperforms current approaches by a substantial margin. We also show the
generality of the mixed objective function by improving the performance on
relation extraction task.
| 2,020 | Computation and Language |
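One component of the mixed objective, entropy minimization on unlabelled data, encourages confident predictions and can be sketched as below; the weighting of the terms and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

# Unlabelled batch: push the model towards confident (low-entropy) outputs.
unlabelled_logits = torch.randn(4, 2, requires_grad=True)   # (batch, classes)
probs = F.softmax(unlabelled_logits, dim=-1)
log_probs = F.log_softmax(unlabelled_logits, dim=-1)
entropy_loss = -(probs * log_probs).sum(dim=-1).mean()

# Labelled batch: usual cross-entropy.
labelled_logits = torch.randn(4, 2, requires_grad=True)
labels = torch.tensor([0, 1, 1, 0])

loss = F.cross_entropy(labelled_logits, labels) + 0.1 * entropy_loss
loss.backward()
```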
Quantifying the Effects of COVID-19 on Mental Health Support Forums | The COVID-19 pandemic, like many of the disease outbreaks that have preceded
it, is likely to have a profound effect on mental health. Understanding its
impact can inform strategies for mitigating negative consequences. In this
work, we seek to better understand the effects of COVID-19 on mental health by
examining discussions within mental health support communities on Reddit.
First, we quantify the rate at which COVID-19 is discussed in each community,
or subreddit, in order to understand levels of preoccupation with the pandemic.
Next, we examine the volume of activity in order to determine whether the
quantity of people seeking online mental health support has risen. Finally, we
analyze how COVID-19 has influenced language use and topics of discussion
within each subreddit.
| 2,020 | Computation and Language |
Central Yup'ik and Machine Translation of Low-Resource Polysynthetic
Languages | Machine translation tools do not yet exist for the Yup'ik language, a
polysynthetic language spoken by around 8,000 people who live primarily in
Southwest Alaska. We compiled a parallel text corpus for Yup'ik and English and
developed a morphological parser for Yup'ik based on grammar rules. We trained
a seq2seq neural machine translation model with attention to translate Yup'ik
input into English. We then compared the influence of different tokenization
methods, namely rule-based, unsupervised (byte pair encoding), and unsupervised
morphological (Morfessor) parsing, on BLEU score accuracy for Yup'ik to English
translation. We find that using tokenized input increases the translation
accuracy compared to that of unparsed input. Although overall Morfessor did
best with a vocabulary size of 30k, our first experiments show that BPE
performed best with a reduced vocabulary size.
| 2,020 | Computation and Language |
Comparative Study of Language Models on Cross-Domain Data with Model
Agnostic Explainability | With the recent influx of bidirectional contextualized transformer language
models in NLP, it has become necessary to conduct a systematic comparative
study of these models on a variety of datasets. Also, the performance of these
language models has not been explored on non-GLUE datasets. The study presented
in this paper compares the state-of-the-art language models - BERT, ELECTRA and
its derivatives, which include RoBERTa, ALBERT and DistilBERT. We conducted
experiments by finetuning these models on cross-domain and disparate data and
provide an in-depth analysis of the models' performance. Moreover, an
explainability analysis of the language models, coherent with their pretraining, is presented, which
verifies the context capturing capabilities of these models through a model
agnostic approach. The experimental results establish new state-of-the-art for
Yelp 2013 rating classification task and Financial Phrasebank sentiment
detection task with 69% accuracy and 88.2% accuracy respectively. Finally, the
study presented here can greatly assist industry researchers in choosing a
language model effectively in terms of performance or compute efficiency.
| 2,020 | Computation and Language |
Impact of News on the Commodity Market: Dataset and Results | Over the last few years, machine learning based methods have been applied to
extract information from news flow in the financial domain. However, this
information has mostly been in the form of the financial sentiments contained
in the news headlines, primarily for the stock prices. In our current work, we
propose that various other dimensions of information can be extracted from news
headlines, which will be of interest to investors, policy-makers and other
practitioners. We propose a framework that extracts information such as past
movements and expected directionality in prices, asset comparison and other
general information that the news is referring to. We apply this framework to
the commodity "Gold" and train the machine learning models using a dataset of
11,412 human-annotated news headlines (released with this study), collected
from the period 2000-2019. We experiment to validate the causal effect of news
flow on gold prices and observe that the information produced from our
framework significantly impacts the future gold price.
| 2,020 | Computation and Language |
On SkipGram Word Embedding Models with Negative Sampling: Unified
Framework and Impact of Noise Distributions | SkipGram word embedding models with negative sampling, or SGN for short, are an
elegant family of word embedding models. In this paper, we formulate a
framework for word embedding, referred to as Word-Context Classification (WCC),
that generalizes SGN to a wide family of models. The framework, utilizing some
"noise examples", is justified through a theoretical analysis. The impact of
noise distribution on the learning of the WCC embedding models is studied
experimentally, suggesting that the best noise distribution is in fact the data
distribution, in terms of both the embedding performance and the speed of
convergence during training. Along our way, we discover several novel embedding
models that outperform the existing WCC models.
| 2,020 | Computation and Language |
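The word-context classification view generalizes the familiar SGN objective, in which an observed (word, context) pair is pushed to score high and k noise contexts drawn from a noise distribution are pushed to score low. A minimal sketch of that objective follows (uniform noise is used only for brevity; the paper's point is precisely that the choice of noise distribution matters).

```python
import torch
import torch.nn.functional as F

vocab, dim, k = 5000, 100, 5
word_emb = torch.nn.Embedding(vocab, dim)   # centre-word vectors
ctx_emb = torch.nn.Embedding(vocab, dim)    # context-word vectors

word = torch.tensor([17])                    # centre word
pos_ctx = torch.tensor([42])                 # observed context word
neg_ctx = torch.randint(0, vocab, (k,))      # k noise samples

pos_score = (word_emb(word) * ctx_emb(pos_ctx)).sum(-1)
neg_score = (word_emb(word) * ctx_emb(neg_ctx)).sum(-1)

# Classify the true pair as positive, the noise pairs as negative.
loss = -(F.logsigmoid(pos_score).sum() + F.logsigmoid(-neg_score).sum())
loss.backward()
```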
Aspect Classification for Legal Depositions | Attorneys and others have a strong interest in having a digital library with
suitable services (e.g., summarizing, searching, and browsing) to help them
work with large corpora of legal depositions. Their needs often involve
understanding the semantics of such documents. That depends in part on the role
of the deponent, e.g., plaintiff, defendant, law enforcement personnel, expert,
etc. In the case of tort litigation associated with property and casualty
insurance claims, such as relating to an injury, it is important to know not
only about liability, but also about events, accidents, physical conditions,
and treatments.
We hypothesize that a legal deposition consists of various aspects that are
discussed as part of the deponent testimony. Accordingly, we developed an
ontology of aspects in a legal deposition for accident and injury cases. Using
that, we have developed a classifier that can identify portions of text for
each of the aspects of interest. Doing so was complicated by the peculiarities
of this genre, e.g., that deposition transcripts generally consist of data in
the form of question-answer (QA) pairs. Accordingly, our automated system
starts with pre-processing, and then transforms the QA pairs into a canonical
form made up of declarative sentences. Classifying the declarative sentences
that are generated, according to the aspect, can then help with downstream
tasks such as summarization, segmentation, question-answering, and information
retrieval.
Our methods have achieved a classification F1 score of 0.83. Having the
aspects classified with a good accuracy will help in choosing QA pairs that can
be used as candidate summary sentences, and to generate an informative summary
for legal professionals or insurance claim agents. Our methodology could be
extended to legal depositions of other kinds, and to aid services like
searching.
| 2,020 | Computation and Language |
Discovering Textual Structures: Generative Grammar Induction using
Template Trees | Natural language generation provides designers with methods for automatically
generating text, e.g. for creating summaries, chatbots and game content. In
practice, text generators are often either learned and hard to interpret, or
created by hand using techniques such as grammars and templates. In this paper,
we introduce a novel grammar induction algorithm for learning interpretable
grammars for generative purposes, called Gitta. We also introduce the novel
notion of template trees to discover latent templates in corpora to derive
these generative grammars. By using existing human-created grammars, we found
that the algorithm can reasonably approximate these grammars using only a few
examples. These results indicate that Gitta could be used to automatically
learn interpretable and easily modifiable grammars, and thus provide a stepping
stone for human-machine co-creation of generative models.
| 2,020 | Computation and Language |
Emora: An Inquisitive Social Chatbot Who Cares For You | Inspired by studies on the overwhelming presence of experience-sharing in
human-human conversations, Emora, the social chatbot developed by Emory
University, aims to bring such experience-focused interaction to the current
field of conversational AI. The traditional approach of information-sharing
topic handlers is balanced with a focus on opinion-oriented exchanges that
Emora delivers, and new conversational abilities are developed that support
dialogues that consist of a collaborative understanding and learning process of
the partner's life experiences. We present a curated dialogue system that
leverages highly expressive natural language templates, powerful intent
classification, and ontology resources to provide an engaging and interesting
conversational experience to every user.
| 2,020 | Computation and Language |
Improving Coreference Resolution by Leveraging Entity-Centric Features
with Graph Neural Networks and Second-order Inference | One of the major challenges in coreference resolution is how to make use of
entity-level features defined over clusters of mentions rather than mention
pairs. However, coreferent mentions usually spread far apart in an entire text,
which makes it extremely difficult to incorporate entity-level features. We
propose a graph neural network-based coreference resolution method that can
capture the entity-centric information by encouraging the sharing of features
across all mentions that probably refer to the same real-world entity. Mentions
are linked to each other via the edges modeling how likely two linked mentions
point to the same entity. Modeling by such graphs, the features between
mentions can be shared by message passing operations in an entity-centric
manner. A global inference algorithm up to second-order features is also
presented to optimally cluster mentions into consistent groups. Experimental
results show that our graph neural network-based method combined with the
second-order decoding algorithm (named GNNCR) achieved close to
state-of-the-art performance on the English CoNLL-2012 Shared Task dataset.
| 2,023 | Computation and Language |
Learning Universal Representations from Word to Sentence | Despite well-developed cutting-edge representation learning for language,
most language representation models usually focus on a specific level of
linguistic unit, which causes great inconvenience when handling multiple
layers of linguistic objects in a unified way. Thus, this work
introduces and explores the universal representation learning, i.e., embeddings
of different levels of linguistic unit in a uniform vector space through a
task-independent evaluation. We present our approach of constructing analogy
datasets in terms of words, phrases and sentences and experiment with multiple
representation models to examine geometric properties of the learned vector
space. Then we empirically verify that well pre-trained Transformer models
incorporated with appropriate training settings may effectively yield universal
representation. Especially, our implementation of fine-tuning ALBERT on NLI and
PPDB datasets achieves the highest accuracy on analogy tasks in different
language levels. Further experiments on the insurance FAQ task show
effectiveness of universal representation models in real-world applications.
| 2,020 | Computation and Language |
Do Response Selection Models Really Know What's Next? Utterance
Manipulation Strategies for Multi-turn Response Selection | In this paper, we study the task of selecting the optimal response given a
user and system utterance history in retrieval-based multi-turn dialog systems.
Recently, pre-trained language models (e.g., BERT, RoBERTa, and ELECTRA) showed
significant improvements in various natural language processing tasks. This and
similar response selection tasks can also be solved using such language models
by formulating the tasks as dialog--response binary classification tasks.
Although existing works using this approach successfully obtained
state-of-the-art results, we observe that language models trained in this
manner tend to make predictions based on the relatedness of history and
candidates, ignoring the sequential nature of multi-turn dialog systems. This
suggests that the response selection task alone is insufficient for learning
temporal dependencies between utterances. To this end, we propose utterance
manipulation strategies (UMS) to address this problem. Specifically, UMS
consist of several strategies (i.e., insertion, deletion, and search), which
aid the response selection model towards maintaining dialog coherence. Further,
UMS are self-supervised methods that do not require additional annotation and
thus can be easily incorporated into existing approaches. Extensive evaluation
across multiple languages and models shows that UMS are highly effective in
teaching dialog consistency, which leads to models pushing the state-of-the-art
with significant margins on multiple public benchmark datasets.
| 2,020 | Computation and Language |
On Target Segmentation for Direct Speech Translation | Recent studies on direct speech translation show continuous improvements by
means of data augmentation techniques and bigger deep learning models. While
these methods are helping to close the gap between this new approach and the
more traditional cascaded one, there are many incongruities among different
studies that make it difficult to assess the state of the art. Surprisingly,
one point of discussion is the segmentation of the target text. Character-level
segmentation has been initially proposed to obtain an open vocabulary, but it
results in long sequences and long training times. Then, subword-level
segmentation became the state of the art in neural machine translation as it
produces shorter sequences that reduce the training time, while being superior
to word-level models. As such, recent works on speech translation started using
target subwords despite the initial use of characters and some recent claims of
better results at the character level. In this work, we perform an extensive
comparison of the two methods on three benchmarks covering 8 language
directions and multilingual training. Subword-level segmentation compares
favorably in all settings, outperforming its character-level counterpart in a
range of 1 to 3 BLEU points.
| 2,020 | Computation and Language |
Analyze the Effects of Weighting Functions on Cost Function in the Glove
Model | When dealing with the large vocabulary size and corpus size, the run-time for
training Glove model is long, it can even be up to several dozen hours for
data, which is approximately 500MB in size. As a result, finding and selecting
the optimal parameters for the weighting function create many difficulties for
weak hardware. Of course, to get the best results, we need to test benchmarks
many times. In order to solve this problem, we derive a weighting function,
which can save time for choosing parameters and making benchmarks. It also
allows one to obtain nearly similar accuracy at the same given time without
concern for experimentation.
| 2,020 | Computation and Language |
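For reference, the weighting function being tuned is the standard GloVe f(x) = (x/x_max)^alpha for x < x_max and 1 otherwise, which enters a weighted least-squares cost over co-occurrence counts. A small sketch with the commonly used default parameters follows; the random inputs are placeholders.

```python
import numpy as np

def weight(x, x_max=100.0, alpha=0.75):
    """Standard GloVe weighting function; x_max and alpha are the defaults."""
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def glove_cost(W, W_tilde, b, b_tilde, X):
    """J = sum over X_ij > 0 of f(X_ij) (w_i . w~_j + b_i + b~_j - log X_ij)^2."""
    i, j = np.nonzero(X)
    pred = (W[i] * W_tilde[j]).sum(axis=1) + b[i] + b_tilde[j]
    return (weight(X[i, j]) * (pred - np.log(X[i, j])) ** 2).sum()

rng = np.random.default_rng(0)
V, d = 50, 8
X = rng.poisson(1.0, size=(V, V)).astype(float)       # toy co-occurrence counts
print(glove_cost(rng.normal(size=(V, d)), rng.normal(size=(V, d)),
                 rng.normal(size=V), rng.normal(size=V), X))
```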
Brain2Word: Decoding Brain Activity for Language Generation | Brain decoding, understood as the process of mapping brain activities to the
stimuli that generated them, has been an active research area in the last
years. In the case of language stimuli, recent studies have shown that it is
possible to decode fMRI scans into an embedding of the word a subject is
reading. However, such word embeddings are designed for natural language
processing tasks rather than for brain decoding. Therefore, they limit our
ability to recover the precise stimulus. In this work, we propose to directly
classify an fMRI scan, mapping it to the corresponding word within a fixed
vocabulary. Unlike existing work, we evaluate on scans from previously unseen
subjects. We argue that this is a more realistic setup and we present a model
that can decode fMRI data from unseen subjects. Our model achieves 5.22% Top-1
and 13.59% Top-5 accuracy in this challenging task, significantly outperforming
all the considered competitive baselines. Furthermore, we use the decoded words
to guide language generation with the GPT-2 model. This way, we advance the
quest for a system that translates brain activities into coherent text.
| 2,020 | Computation and Language |
The Grievance Dictionary: Understanding Threatening Language Use | This paper introduces the Grievance Dictionary, a psycholinguistic dictionary
which can be used to automatically understand language use in the context of
grievance-fuelled violence threat assessment. We describe the development of the
dictionary, which was informed by suggestions from experienced threat
assessment practitioners. These suggestions and subsequent human and
computational word list generation resulted in a dictionary of 20,502 words
annotated by 2,318 participants. The dictionary was validated by applying it to
texts written by violent and non-violent individuals, showing strong evidence
for a difference between populations in several dictionary categories. Further
classification tasks showed promising performance, but future improvements are
still needed. Finally, we provide instructions and suggestions for the use of
the Grievance Dictionary by security professionals and (violence) researchers.
| 2,020 | Computation and Language |
Meta-Learning with Sparse Experience Replay for Lifelong Language
Learning | Lifelong learning requires models that can continuously learn from sequential
streams of data without suffering catastrophic forgetting due to shifts in data
distributions. Deep learning models have thrived in the non-sequential learning
paradigm; however, when used to learn a sequence of tasks, they fail to retain
past knowledge and learn incrementally. We propose a novel approach to lifelong
learning of language tasks based on meta-learning with sparse experience replay
that directly optimizes to prevent forgetting. We show that under the realistic
setting of performing a single pass on a stream of tasks and without any task
identifiers, our method obtains state-of-the-art results on lifelong text
classification and relation extraction. We analyze the effectiveness of our
approach and further demonstrate its low computational and space complexity.
| 2,021 | Computation and Language |
Classification of descriptions and summary using multiple passes of
statistical and natural language toolkits | This document describes a possible approach that can be used to check the
relevance of a summary / definition of an entity with respect to its name. This
classifier focuses on the relevancy of an entity's name to its summary /
definition, in other words, it is a name relevance check. The percentage score
obtained from this approach can be used either on its own or used to supplement
scores obtained from other metrics to arrive upon a final classification; at
the end of the document, potential improvements are also outlined. The
dataset used to obtain an objective score is a list of package names and their
respective summaries (sourced from pypi.org).
| 2,020 | Computation and Language |
Modern Methods for Text Generation | Synthetic text generation is challenging and has limited success. Recently, a
new architecture, called the Transformer, allows machine learning models to
better understand sequential data, as required for tasks such as translation or
summarization. BERT and GPT-2, which use Transformers at their core, have shown
great performance in tasks such as text classification, translation and NLI. In this article,
we analyse both algorithms and compare their output quality in text generation
tasks.
| 2,020 | Computation and Language |
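A minimal sketch of the kind of GPT-2 generation compared in the article, using the `transformers` pipeline API; the prompt and sampling settings are arbitrary choices, not the article's configuration.

```python
from transformers import pipeline

# Text generation with a pre-trained GPT-2 model.
generator = pipeline("text-generation", model="gpt2")
out = generator("Synthetic text generation is",
                max_length=40, do_sample=True, top_k=50,
                num_return_sequences=2)
for sample in out:
    print(sample["generated_text"])
```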
Dialogue-adaptive Language Model Pre-training From Quality Estimation | Pre-trained language models (PrLMs) have achieved great success on a wide
range of natural language processing tasks by virtue of the universal language
representation ability obtained by self-supervised learning on a large corpus.
These models are pre-trained on standard plain texts with general language
model (LM) training objectives, which would be insufficient to model
dialogue-exclusive attributes like specificity and informativeness reflected in
these tasks that are not explicitly captured by the pre-trained universal
language representations. In this work, we propose dialogue-adaptive
pre-training objectives (DAPO) derived from quality estimation to simulate
dialogue-specific features, namely coherence, specificity, and informativeness.
As the foundation for model pre-training, we synthesize a new dialogue corpus
and build our training set with two unsupervised methods: 1) coherence-oriented
context corruption, including utterance ordering, insertion, and replacement,
to help the model capture the coherence inside the dialogue contexts; and 2)
specificity-oriented automatic rescoring, which encourages the model to measure
the quality of the synthesized data for dialogue-adaptive pre-training by
considering specificity and informativeness. Experimental results on widely
used open-domain response selection and quality estimation benchmarks show that
DAPO significantly improves the baseline models and achieves state-of-the-art
performance on the MuTual leaderboard, verifying the effectiveness of
estimating quality evaluation factors into pre-training.
| 2,022 | Computation and Language |
Multi-modal embeddings using multi-task learning for emotion recognition | General embeddings like word2vec, GloVe and ELMo have shown a lot of success
in natural language tasks. The embeddings are typically extracted from models
that are built on general tasks such as skip-gram models and natural language
generation. In this paper, we extend the work from natural language
understanding to multi-modal architectures that use audio, visual and textual
information for machine learning tasks. The embeddings in our network are
extracted using the encoder of a transformer model trained using multi-task
training. We use person identification and automatic speech recognition as the
tasks in our embedding generation framework. We tune and evaluate the
embeddings on the downstream task of emotion recognition and demonstrate that
on the CMU-MOSEI dataset, the embeddings can be used to improve over previous
state of the art results.
| 2,020 | Computation and Language |
Investigating Gender Bias in BERT | Contextual language models (CLMs) have pushed the NLP benchmarks to a new
height. It has become a new norm to utilize CLM provided word embeddings in
downstream tasks such as text classification. However, unless addressed, CLMs
are prone to learn intrinsic gender-bias in the dataset. As a result,
predictions of downstream NLP models can vary noticeably by varying gender
words, such as replacing "he" with "she", or even gender-neutral words. In this
paper, we focus our analysis on a popular CLM, i.e., BERT. We analyse the
gender-bias it induces in five downstream tasks related to emotion and
sentiment intensity prediction. For each task, we train a simple regressor
utilizing BERT's word embeddings. We then evaluate the gender-bias in
regressors using an equity evaluation corpus. Ideally and from the specific
design, the models should discard gender informative features from the input.
However, the results show a significant dependence of the system's predictions
on gender-particular words and phrases. We claim that such biases can be
reduced by removing gender-specific features from the word embeddings. Hence, for
each layer in BERT, we identify directions that primarily encode gender
information. The space formed by such directions is referred to as the gender
subspace in the semantic space of word embeddings. We propose an algorithm that
finds fine-grained gender directions, i.e., one primary direction for each BERT
layer. This obviates the need of realizing gender subspace in multiple
dimensions and prevents other crucial information from being omitted.
Experiments show that removing embedding components in such directions achieves
great success in reducing BERT-induced bias in the downstream tasks.
| 2,020 | Computation and Language |
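Removing the component of an embedding that lies along a gender direction amounts to a simple projection. A hedged sketch follows; the direction below is a random placeholder, whereas the paper estimates one fine-grained direction per BERT layer.

```python
import numpy as np

rng = np.random.default_rng(1)
gender_dir = rng.normal(size=768)            # placeholder gender direction
gender_dir /= np.linalg.norm(gender_dir)     # unit norm

def debias(vec, direction):
    """Subtract the component of vec that lies along the (unit) direction."""
    return vec - np.dot(vec, direction) * direction

word_vec = rng.normal(size=768)
debiased = debias(word_vec, gender_dir)
print(np.dot(debiased, gender_dir))          # ~0: nothing left along the direction
```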