Titles
stringlengths 6
220
| Abstracts
stringlengths 37
3.26k
| Years
int64 1.99k
2.02k
| Categories
stringclasses 1
value |
---|---|---|---|
Neural language models for text classification in evidence-based
medicine
|
The COVID-19 has brought about a significant challenge to the whole of
humanity, but with a special burden upon the medical community. Clinicians must
keep updated continuously about symptoms, diagnoses, and effectiveness of
emergent treatments under a never-ending flood of scientific literature. In
this context, the role of evidence-based medicine (EBM) for curating the most
substantial evidence to support public health and clinical practice turns
essential but is being challenged as never before due to the high volume of
research articles published and pre-prints posted daily. Artificial
Intelligence can have a crucial role in this situation. In this article, we
report the results of an applied research project to classify scientific
articles to support Epistemonikos, one of the most active foundations worldwide
conducting EBM. We test several methods, and the best one, based on the XLNet
neural language model, improves the current approach by 93\% on average
F1-score, saving valuable time from physicians who volunteer to curate COVID-19
research articles manually.
| 2,020 |
Computation and Language
|
Extracting Synonyms from Bilingual Dictionaries
|
We present our progress in developing a novel algorithm to extract synonyms
from bilingual dictionaries. Identification and usage of synonyms play a
significant role in improving the performance of information access
applications. The idea is to construct a translation graph from translation
pairs, then to extract and consolidate cyclic paths to form bilingual sets of
synonyms. The initial evaluation of this algorithm illustrates promising
results in extracting Arabic-English bilingual synonyms. In the evaluation, we
first converted the synsets in the Arabic WordNet into translation pairs (i.e.,
losing word-sense memberships). Next, we applied our algorithm to rebuild these
synsets. We compared the original and extracted synsets obtaining an F-Measure
of 82.3% and 82.1% for Arabic and English synsets extraction, respectively.
| 2,021 |
Computation and Language
|
CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims
|
We introduce CLIMATE-FEVER, a new publicly available dataset for verification
of climate change-related claims. By providing a dataset for the research
community, we aim to facilitate and encourage work on improving algorithms for
retrieving evidential support for climate-specific claims, addressing the
underlying language understanding challenges, and ultimately help alleviate the
impact of misinformation on climate change. We adapt the methodology of FEVER
[1], the largest dataset of artificially designed claims, to real-life claims
collected from the Internet. While during this process, we could rely on the
expertise of renowned climate scientists, it turned out to be no easy task. We
discuss the surprising, subtle complexity of modeling real-world
climate-related claims within the \textsc{fever} framework, which we believe
provides a valuable challenge for general natural language understanding. We
hope that our work will mark the beginning of a new exciting long-term joint
effort by the climate science and AI community.
| 2,021 |
Computation and Language
|
Meta-Embeddings for Natural Language Inference and Semantic Similarity
tasks
|
Word Representations form the core component for almost all advanced Natural
Language Processing (NLP) applications such as text mining, question-answering,
and text summarization, etc. Over the last two decades, immense research is
conducted to come up with one single model to solve all major NLP tasks. The
major problem currently is that there are a plethora of choices for different
NLP tasks. Thus for NLP practitioners, the task of choosing the right model to
be used itself becomes a challenge. Thus combining multiple pre-trained word
embeddings and forming meta embeddings has become a viable approach to improve
tackle NLP tasks. Meta embedding learning is a process of producing a single
word embedding from a given set of pre-trained input word embeddings. In this
paper, we propose to use Meta Embedding derived from few State-of-the-Art
(SOTA) models to efficiently tackle mainstream NLP tasks like classification,
semantic relatedness, and text similarity. We have compared both ensemble and
dynamic variants to identify an efficient approach. The results obtained show
that even the best State-of-the-Art models can be bettered. Thus showing us
that meta-embeddings can be used for several NLP tasks by harnessing the power
of several individual representations.
| 2,020 |
Computation and Language
|
Intrinsic analysis for dual word embedding space models
|
Recent word embeddings techniques represent words in a continuous vector
space, moving away from the atomic and sparse representations of the past. Each
such technique can further create multiple varieties of embeddings based on
different settings of hyper-parameters like embedding dimension size, context
window size and training method. One additional variety appears when we
especially consider the Dual embedding space techniques which generate not one
but two-word embeddings as output. This gives rise to an interesting question -
"is there one or a combination of the two word embeddings variety, which works
better for a specific task?". This paper tries to answer this question by
considering all of these variations. Herein, we compare two classical embedding
methods belonging to two different methodologies - Word2Vec from window-based
and Glove from count-based. For an extensive evaluation after considering all
variations, a total of 84 different models were compared against semantic,
association and analogy evaluations tasks which are made up of 9 open-source
linguistics datasets. The final Word2vec reports showcase the preference of
non-default model for 2 out of 3 tasks. In case of Glove, non-default models
outperform in all 3 evaluation tasks.
| 2,020 |
Computation and Language
|
StructFormer: Joint Unsupervised Induction of Dependency and
Constituency Structure from Masked Language Modeling
|
There are two major classes of natural language grammar -- the dependency
grammar that models one-to-one correspondences between words and the
constituency grammar that models the assembly of one or several corresponded
words. While previous unsupervised parsing methods mostly focus on only
inducing one class of grammars, we introduce a novel model, StructFormer, that
can simultaneously induce dependency and constituency structure. To achieve
this, we propose a new parsing framework that can jointly generate a
constituency tree and dependency graph. Then we integrate the induced
dependency relations into the transformer, in a differentiable manner, through
a novel dependency-constrained self-attention mechanism. Experimental results
show that our model can achieve strong results on unsupervised constituency
parsing, unsupervised dependency parsing, and masked language modeling at the
same time.
| 2,021 |
Computation and Language
|
Automatically Identifying Language Family from Acoustic Examples in Low
Resource Scenarios
|
Existing multilingual speech NLP works focus on a relatively small subset of
languages, and thus current linguistic understanding of languages predominantly
stems from classical approaches. In this work, we propose a method to analyze
language similarity using deep learning. Namely, we train a model on the
Wilderness dataset and investigate how its latent space compares with classical
language family findings. Our approach provides a new direction for
cross-lingual data augmentation in any speech-based NLP task.
| 2,021 |
Computation and Language
|
Evaluating Explanations: How much do explanations from the teacher aid
students?
|
While many methods purport to explain predictions by highlighting salient
features, what aims these explanations serve and how they ought to be evaluated
often go unstated. In this work, we introduce a framework to quantify the value
of explanations via the accuracy gains that they confer on a student model
trained to simulate a teacher model. Crucially, the explanations are available
to the student during training, but are not available at test time. Compared to
prior proposals, our approach is less easily gamed, enabling principled,
automatic, model-agnostic evaluation of attributions. Using our framework, we
compare numerous attribution methods for text classification and question
answering, and observe quantitative differences that are consistent (to a
moderate to high degree) across different student model architectures and
learning strategies.
| 2,021 |
Computation and Language
|
Federated Marginal Personalization for ASR Rescoring
|
We introduce federated marginal personalization (FMP), a novel method for
continuously updating personalized neural network language models (NNLMs) on
private devices using federated learning (FL). Instead of fine-tuning the
parameters of NNLMs on personal data, FMP regularly estimates global and
personalized marginal distributions of words, and adjusts the probabilities
from NNLMs by an adaptation factor that is specific to each word. Our presented
approach can overcome the limitations of federated fine-tuning and efficiently
learn personalized NNLMs on devices. We study the application of FMP on
second-pass ASR rescoring tasks. Experiments on two speech evaluation datasets
show modest word error rate (WER) reductions. We also demonstrate that FMP
could offer reasonable privacy with only a negligible cost in speech
recognition accuracy.
| 2,020 |
Computation and Language
|
Automatic Extraction of Ranked SNP-Phenotype Associations from
Literature through Detecting Neural Candidates, Negation and Modality Markers
|
Genome-wide association (GWA) constitutes a prominent portion of studies
which have been conducted on personalized medicine and pharmacogenomics.
Recently, very few methods have been developed for extracting mutation-diseases
associations. However, there is no available method for extracting the
association of SNP-phenotype from text which considers degree of confidence in
associations. In this study, first a relation extraction method relying on
linguistic-based negation detection and neutral candidates is proposed. The
experiments show that negation cues and scope as well as detecting neutral
candidates can be employed for implementing a superior relation extraction
method which outperforms the kernel-based counterparts due to a uniform innate
polarity of sentences and small number of complex sentences in the corpus.
Moreover, a modality based approach is proposed to estimate the confidence
level of the extracted association which can be used to assess the reliability
of the reported association. Keywords: SNP, Phenotype, Biomedical Relation
Extraction, Negation Detection.
| 2,020 |
Computation and Language
|
How Can We Know When Language Models Know? On the Calibration of
Language Models for Question Answering
|
Recent works have shown that language models (LM) capture different types of
knowledge regarding facts or common sense. However, because no model is
perfect, they still fail to provide appropriate answers in many cases. In this
paper, we ask the question "how can we know when language models know, with
confidence, the answer to a particular query?" We examine this question from
the point of view of calibration, the property of a probabilistic model's
predicted probabilities actually being well correlated with the probabilities
of correctness. We examine three strong generative models -- T5, BART, and
GPT-2 -- and study whether their probabilities on QA tasks are well calibrated,
finding the answer is a relatively emphatic no. We then examine methods to
calibrate such models to make their confidence scores correlate better with the
likelihood of correctness through fine-tuning, post-hoc probability
modification, or adjustment of the predicted outputs or inputs. Experiments on
a diverse range of datasets demonstrate the effectiveness of our methods. We
also perform analysis to study the strengths and limitations of these methods,
shedding light on further improvements that may be made in methods for
calibrating LMs. We have released the code at
https://github.com/jzbjyb/lm-calibration.
| 2,021 |
Computation and Language
|
Interactive Teaching for Conversational AI
|
Current conversational AI systems aim to understand a set of pre-designed
requests and execute related actions, which limits them to evolve naturally and
adapt based on human interactions. Motivated by how children learn their first
language interacting with adults, this paper describes a new Teachable AI
system that is capable of learning new language nuggets called concepts,
directly from end users using live interactive teaching sessions. The proposed
setup uses three models to: a) Identify gaps in understanding automatically
during live conversational interactions, b) Learn the respective
interpretations of such unknown concepts from live interactions with users, and
c) Manage a classroom sub-dialogue specifically tailored for interactive
teaching sessions. We propose state-of-the-art transformer based neural
architectures of models, fine-tuned on top of pre-trained models, and show
accuracy improvements on the respective components. We demonstrate that this
method is very promising in leading way to build more adaptive and personalized
language understanding models.
| 2,020 |
Computation and Language
|
Extracting COVID-19 Diagnoses and Symptoms From Clinical Text: A New
Annotated Corpus and Neural Event Extraction Framework
|
Coronavirus disease 2019 (COVID-19) is a global pandemic. Although much has
been learned about the novel coronavirus since its emergence, there are many
open questions related to tracking its spread, describing symptomology,
predicting the severity of infection, and forecasting healthcare utilization.
Free-text clinical notes contain critical information for resolving these
questions. Data-driven, automatic information extraction models are needed to
use this text-encoded information in large-scale studies. This work presents a
new clinical corpus, referred to as the COVID-19 Annotated Clinical Text (CACT)
Corpus, which comprises 1,472 notes with detailed annotations characterizing
COVID-19 diagnoses, testing, and clinical presentation. We introduce a
span-based event extraction model that jointly extracts all annotated
phenomena, achieving high performance in identifying COVID-19 and symptom
events with associated assertion values (0.83-0.97 F1 for events and 0.73-0.79
F1 for assertions). In a secondary use application, we explored the prediction
of COVID-19 test results using structured patient data (e.g. vital signs and
laboratory results) and automatically extracted symptom information. The
automatically extracted symptoms improve prediction performance, beyond
structured data alone.
| 2,021 |
Computation and Language
|
Classification of Multimodal Hate Speech -- The Winning Solution of
Hateful Memes Challenge
|
Hateful Memes is a new challenge set for multimodal classification, focusing
on detecting hate speech in multimodal memes. Difficult examples are added to
the dataset to make it hard to rely on unimodal signals, which means only
multimodal models can succeed. According to Kiela,the state-of-the-art methods
perform poorly compared to humans (64.73% vs. 84.7% accuracy) on Hateful Memes.
I propose a new model that combined multimodal with rules, which achieve the
first ranking of accuracy and AUROC of 86.8% and 0.923 respectively. These
rules are extracted from training set, and focus on improving the
classification accuracy of difficult samples.
| 2,020 |
Computation and Language
|
It's a Thin Line Between Love and Hate: Using the Echo in Modeling
Dynamics of Racist Online Communities
|
The (((echo))) symbol -- triple parenthesis surrounding a name, made it to
mainstream social networks in early 2016, with the intensification of the U.S.
Presidential race. It was used by members of the alt-right, white supremacists
and internet trolls to tag people of Jewish heritage -- a modern incarnation of
the infamous yellow badge (Judenstern) used in Nazi-Germany. Tracking this
trending meme, its meaning, and its function has proved elusive for its
semantic ambiguity (e.g., a symbol for a virtual hug).
In this paper we report of the construction of an appropriate dataset
allowing the reconstruction of networks of racist communities and the way they
are embedded in the broader community. We combine natural language processing
and structural network analysis to study communities promoting hate. In order
to overcome dog-whistling and linguistic ambiguity, we propose a multi-modal
neural architecture based on a BERT transformer and a BiLSTM network on the
tweet level, while also taking into account the users ego-network and meta
features. Our multi-modal neural architecture outperforms a set of strong
baselines. We further show how the the use of language and network structure in
tandem allows the detection of the leaders of the hate communities. We further
study the ``intersectionality'' of hate and show that the antisemitic echo
correlates with hate speech that targets other minority and protected groups.
Finally, we analyze the role IRA trolls assumed in this network as part of the
Russian interference campaign. Our findings allow a better understanding of
recent manifestations of racism and the dynamics that facilitate it.
| 2,020 |
Computation and Language
|
Retrieving and ranking short medical questions with two stages neural
matching model
|
Internet hospital is a rising business thanks to recent advances in mobile
web technology and high demand of health care services. Online medical services
become increasingly popular and active. According to US data in 2018, 80
percent of internet users have asked health-related questions online. Numerous
data is generated in unprecedented speed and scale. Those representative
questions and answers in medical fields are valuable raw data sources for
medical data mining. Automated machine interpretation on those sheer amount of
data gives an opportunity to assist doctors to answer frequently asked
medical-related questions from the perspective of information retrieval and
machine learning approaches. In this work, we propose a novel two-stage
framework for the semantic matching of query-level medical questions.
| 2,020 |
Computation and Language
|
Meta-KD: A Meta Knowledge Distillation Framework for Language Model
Compression across Domains
|
Pre-trained language models have been applied to various NLP tasks with
considerable performance gains. However, the large model sizes, together with
the long inference time, limit the deployment of such models in real-time
applications. One line of model compression approaches considers knowledge
distillation to distill large teacher models into small student models. Most of
these studies focus on single-domain only, which ignores the transferable
knowledge from other domains. We notice that training a teacher with
transferable knowledge digested across domains can achieve better
generalization capability to help knowledge distillation. Hence we propose a
Meta-Knowledge Distillation (Meta-KD) framework to build a meta-teacher model
that captures transferable knowledge across domains and passes such knowledge
to students. Specifically, we explicitly force the meta-teacher to capture
transferable knowledge at both instance-level and feature-level from multiple
domains, and then propose a meta-distillation algorithm to learn single-domain
student models with guidance from the meta-teacher. Experiments on public
multi-domain NLP tasks show the effectiveness and superiority of the proposed
Meta-KD framework. Further, we also demonstrate the capability of Meta-KD in
the settings where the training data is scarce.
| 2,022 |
Computation and Language
|
Supertagging the Long Tail with Tree-Structured Decoding of Complex
Categories
|
Although current CCG supertaggers achieve high accuracy on the standard WSJ
test set, few systems make use of the categories' internal structure that will
drive the syntactic derivation during parsing. The tagset is traditionally
truncated, discarding the many rare and complex category types in the long
tail. However, supertags are themselves trees. Rather than give up on rare
tags, we investigate constructive models that account for their internal
structure, including novel methods for tree-structured prediction. Our best
tagger is capable of recovering a sizeable fraction of the long-tail supertags
and even generates CCG categories that have never been seen in training, while
approximating the prior state of the art in overall tag accuracy with fewer
parameters. We further investigate how well different approaches generalize to
out-of-domain evaluation sets.
| 2,020 |
Computation and Language
|
A Computational Approach to Measuring the Semantic Divergence of
Cognates
|
Meaning is the foundation stone of intercultural communication. Languages are
continuously changing, and words shift their meanings for various reasons.
Semantic divergence in related languages is a key concern of historical
linguistics. In this paper we investigate semantic divergence across languages
by measuring the semantic similarity of cognate sets in multiple languages. The
method that we propose is based on cross-lingual word embeddings. In this paper
we implement and evaluate our method on English and five Romance languages, but
it can be extended easily to any language pair, requiring only large
monolingual corpora for the involved languages and a small bilingual dictionary
for the pair. This language-agnostic method facilitates a quantitative analysis
of cognates divergence -- by computing degrees of semantic similarity between
cognate pairs -- and provides insights for identifying false friends. As a
second contribution, we formulate a straightforward method for detecting false
friends, and introduce the notion of "soft false friend" and "hard false
friend", as well as a measure of the degree of "falseness" of a false friends
pair. Additionally, we propose an algorithm that can output suggestions for
correcting false friends, which could result in a very helpful tool for
language learning or translation.
| 2,019 |
Computation and Language
|
Generating Descriptions for Sequential Images with Local-Object
Attention and Global Semantic Context Modelling
|
In this paper, we propose an end-to-end CNN-LSTM model for generating
descriptions for sequential images with a local-object attention mechanism. To
generate coherent descriptions, we capture global semantic context using a
multi-layer perceptron, which learns the dependencies between sequential
images. A paralleled LSTM network is exploited for decoding the sequence
descriptions. Experimental results show that our model outperforms the baseline
across three different evaluation metrics on the datasets published by
Microsoft.
| 2,020 |
Computation and Language
|
Learning from others' mistakes: Avoiding dataset biases without modeling
them
|
State-of-the-art natural language processing (NLP) models often learn to
model dataset biases and surface form correlations instead of features that
target the intended underlying task. Previous work has demonstrated effective
methods to circumvent these issues when knowledge of the bias is available. We
consider cases where the bias issues may not be explicitly identified, and show
a method for training models that learn to ignore these problematic
correlations. Our approach relies on the observation that models with limited
capacity primarily learn to exploit biases in the dataset. We can leverage the
errors of such limited capacity models to train a more robust model in a
product of experts, thus bypassing the need to hand-craft a biased model. We
show the effectiveness of this method to retain improvements in
out-of-distribution settings even if no particular bias is targeted by the
biased model.
| 2,020 |
Computation and Language
|
Analyzing Stylistic Variation across Different Political Regimes
|
In this article we propose a stylistic analysis of texts written across two
different periods, which differ not only temporally, but politically and
culturally: communism and democracy in Romania. We aim to analyze the stylistic
variation between texts written during these two periods, and determine at what
levels the variation is more apparent (if any): at the stylistic level, at the
topic level etc. We take a look at the stylistic profile of these texts
comparatively, by performing clustering and classification experiments on the
texts, using traditional authorship attribution methods and features. To
confirm the stylistic variation is indeed an effect of the change in political
and cultural environment, and not merely reflective of a natural change in the
author's style with time, we look at various stylistic metrics over time and
show that the change in style between the two periods is statistically
significant. We also perform an analysis of the variation in topic between the
two epochs, to compare with the variation at the style level. These analyses
show that texts from the two periods can indeed be distinguished, both from the
point of view of style and from that of semantic content (topic).
| 2,018 |
Computation and Language
|
End-to-End QA on COVID-19: Domain Adaptation with Synthetic Training
|
End-to-end question answering (QA) requires both information retrieval (IR)
over a large document collection and machine reading comprehension (MRC) on the
retrieved passages. Recent work has successfully trained neural IR systems
using only supervised question answering (QA) examples from open-domain
datasets. However, despite impressive performance on Wikipedia, neural IR lags
behind traditional term matching approaches such as BM25 in more specific and
specialized target domains such as COVID-19. Furthermore, given little or no
labeled data, effective adaptation of QA systems can also be challenging in
such target domains. In this work, we explore the application of synthetically
generated QA examples to improve performance on closed-domain retrieval and
MRC. We combine our neural IR and MRC systems and show significant improvements
in end-to-end QA on the CORD-19 collection over a state-of-the-art open-domain
QA baseline.
| 2,020 |
Computation and Language
|
ArCorona: Analyzing Arabic Tweets in the Early Days of Coronavirus
(COVID-19) Pandemic
|
Over the past few months, there were huge numbers of circulating tweets and
discussions about Coronavirus (COVID-19) in the Arab region. It is important
for policy makers and many people to identify types of shared tweets to better
understand public behavior, topics of interest, requests from governments,
sources of tweets, etc. It is also crucial to prevent spreading of rumors and
misinformation about the virus or bad cures. To this end, we present the
largest manually annotated dataset of Arabic tweets related to COVID-19. We
describe annotation guidelines, analyze our dataset and build effective machine
learning and transformer based models for classification.
| 2,021 |
Computation and Language
|
TAN-NTM: Topic Attention Networks for Neural Topic Modeling
|
Topic models have been widely used to learn text representations and gain
insight into document corpora. To perform topic discovery, most existing neural
models either take document bag-of-words (BoW) or sequence of tokens as input
followed by variational inference and BoW reconstruction to learn topic-word
distribution. However, leveraging topic-word distribution for learning better
features during document encoding has not been explored much. To this end, we
develop a framework TAN-NTM, which processes document as a sequence of tokens
through a LSTM whose contextual outputs are attended in a topic-aware manner.
We propose a novel attention mechanism which factors in topic-word distribution
to enable the model to attend on relevant words that convey topic related cues.
The output of topic attention module is then used to carry out variational
inference. We perform extensive ablations and experiments resulting in ~9-15
percentage improvement over score of existing SOTA topic models in NPMI
coherence on several benchmark datasets - 20Newsgroups, Yelp Review Polarity
and AGNews. Further, we show that our method learns better latent
document-topic features compared to existing topic models through improvement
on two downstream tasks: document classification and topic guided keyphrase
generation.
| 2,021 |
Computation and Language
|
SChME at SemEval-2020 Task 1: A Model Ensemble for Detecting Lexical
Semantic Change
|
This paper describes SChME (Semantic Change Detection with Model Ensemble), a
method usedin SemEval-2020 Task 1 on unsupervised detection of lexical semantic
change. SChME usesa model ensemble combining signals of distributional models
(word embeddings) and wordfrequency models where each model casts a vote
indicating the probability that a word sufferedsemantic change according to
that feature. More specifically, we combine cosine distance of wordvectors
combined with a neighborhood-based metric we named Mapped Neighborhood
Distance(MAP), and a word frequency differential metric as input signals to our
model. Additionally,we explore alignment-based methods to investigate the
importance of the landmarks used in thisprocess. Our results show evidence that
the number of landmarks used for alignment has a directimpact on the predictive
performance of the model. Moreover, we show that languages that sufferless
semantic change tend to benefit from using a large number of landmarks, whereas
languageswith more semantic change benefit from a more careful choice of
landmark number for alignment.
| 2,020 |
Computation and Language
|
Circles are like Ellipses, or Ellipses are like Circles? Measuring the
Degree of Asymmetry of Static and Contextual Embeddings and the Implications
to Representation Learning
|
Human judgments of word similarity have been a popular method of evaluating
the quality of word embedding. But it fails to measure the geometry properties
such as asymmetry. For example, it is more natural to say "Ellipses are like
Circles" than "Circles are like Ellipses". Such asymmetry has been observed
from a psychoanalysis test called word evocation experiment, where one word is
used to recall another. Although useful, such experimental data have been
significantly understudied for measuring embedding quality. In this paper, we
use three well-known evocation datasets to gain insights into asymmetry
encoding of embedding. We study both static embedding as well as contextual
embedding, such as BERT. Evaluating asymmetry for BERT is generally hard due to
the dynamic nature of embedding. Thus, we probe BERT's conditional
probabilities (as a language model) using a large number of Wikipedia contexts
to derive a theoretically justifiable Bayesian asymmetry score. The result
shows that contextual embedding shows randomness than static embedding on
similarity judgments while performing well on asymmetry judgment, which aligns
with its strong performance on "extrinsic evaluations" such as text
classification. The asymmetry judgment and the Bayesian approach provides a new
perspective to evaluate contextual embedding on intrinsic evaluation, and its
comparison to similarity evaluation concludes our work with a discussion on the
current state and the future of representation learning.
| 2,020 |
Computation and Language
|
Federated Learning for Personalized Humor Recognition
|
Computational understanding of humor is an important topic under creative
language understanding and modeling. It can play a key role in complex human-AI
interactions. The challenge here is that human perception of humorous content
is highly subjective. The same joke may receive different funniness ratings
from different readers. This makes it highly challenging for humor recognition
models to achieve personalization in practical scenarios. Existing approaches
are generally designed based on the assumption that users have a consensus on
whether a given text is humorous or not. Thus, they cannot handle diverse humor
preferences well. In this paper, we propose the FedHumor approach for the
recognition of humorous content in a personalized manner through Federated
Learning (FL). Extending a pre-trained language model, FedHumor guides the
fine-tuning process by considering diverse distributions of humor preferences
from individuals. It incorporates a diversity adaptation strategy into the FL
paradigm to train a personalized humor recognition model. To the best of our
knowledge, FedHumor is the first text-based personalized humor recognition
model through federated learning. Extensive experiments demonstrate the
advantage of FedHumor in recognizing humorous texts compared to nine
state-of-the-art humor recognition approaches with superior capability for
handling the diversity in humor labels produced by users with diverse
preferences.
| 2,022 |
Computation and Language
|
Adapt-and-Adjust: Overcoming the Long-Tail Problem of Multilingual
Speech Recognition
|
One crucial challenge of real-world multilingual speech recognition is the
long-tailed distribution problem, where some resource-rich languages like
English have abundant training data, but a long tail of low-resource languages
have varying amounts of limited training data. To overcome the long-tail
problem, in this paper, we propose Adapt-and-Adjust (A2), a transformer-based
multi-task learning framework for end-to-end multilingual speech recognition.
The A2 framework overcomes the long-tail problem via three techniques: (1)
exploiting a pretrained multilingual language model (mBERT) to improve the
performance of low-resource languages; (2) proposing dual adapters consisting
of both language-specific and language-agnostic adaptation with minimal
additional parameters; and (3) overcoming the class imbalance, either by
imposing class priors in the loss during training or adjusting the logits of
the softmax output during inference. Extensive experiments on the CommonVoice
corpus show that A2 significantly outperforms conventional approaches.
| 2,020 |
Computation and Language
|
Multilingual Neural RST Discourse Parsing
|
Text discourse parsing plays an important role in understanding information
flow and argumentative structure in natural language. Previous research under
the Rhetorical Structure Theory (RST) has mostly focused on inducing and
evaluating models from the English treebank. However, the parsing tasks for
other languages such as German, Dutch, and Portuguese are still challenging due
to the shortage of annotated data. In this work, we investigate two approaches
to establish a neural, cross-lingual discourse parser via: (1) utilizing
multilingual vector representations; and (2) adopting segment-level translation
of the source content. Experiment results show that both methods are effective
even with limited training data, and achieve state-of-the-art performance on
cross-lingual, document-level discourse parsing on all sub-tasks.
| 2,020 |
Computation and Language
|
Leveraging Abstract Meaning Representation for Knowledge Base Question
Answering
|
Knowledge base question answering (KBQA)is an important task in Natural
Language Processing. Existing approaches face significant challenges including
complex question understanding, necessity for reasoning, and lack of large
end-to-end training datasets. In this work, we propose Neuro-Symbolic Question
Answering (NSQA), a modular KBQA system, that leverages (1) Abstract Meaning
Representation (AMR) parses for task-independent question understanding; (2) a
simple yet effective graph transformation approach to convert AMR parses into
candidate logical queries that are aligned to the KB; (3) a pipeline-based
approach which integrates multiple, reusable modules that are trained
specifically for their individual tasks (semantic parser, entity
andrelationship linkers, and neuro-symbolic reasoner) and do not require
end-to-end training data. NSQA achieves state-of-the-art performance on two
prominent KBQA datasets based on DBpedia (QALD-9 and LC-QuAD1.0). Furthermore,
our analysis emphasizes that AMR is a powerful tool for KBQA systems.
| 2,021 |
Computation and Language
|
Learning Class-Transductive Intent Representations for Zero-shot Intent
Detection
|
Zero-shot intent detection (ZSID) aims to deal with the continuously emerging
intents without annotated training data. However, existing ZSID systems suffer
from two limitations: 1) They are not good at modeling the relationship between
seen and unseen intents. 2) They cannot effectively recognize unseen intents
under the generalized intent detection (GZSID) setting. A critical problem
behind these limitations is that the representations of unseen intents cannot
be learned in the training stage. To address this problem, we propose a novel
framework that utilizes unseen class labels to learn Class-Transductive Intent
Representations (CTIR). Specifically, we allow the model to predict unseen
intents during training, with the corresponding label names serving as input
utterances. On this basis, we introduce a multi-task learning objective, which
encourages the model to learn the distinctions among intents, and a similarity
scorer, which estimates the connections among intents more accurately. CTIR is
easy to implement and can be integrated with existing methods. Experiments on
two real-world datasets show that CTIR brings considerable improvement to the
baseline systems.
| 2,021 |
Computation and Language
|
Bengali Abstractive News Summarization(BANS): A Neural Attention
Approach
|
Abstractive summarization is the process of generating novel sentences based
on the information extracted from the original text document while retaining
the context. Due to abstractive summarization's underlying complexities, most
of the past research work has been done on the extractive summarization
approach. Nevertheless, with the triumph of the sequence-to-sequence (seq2seq)
model, abstractive summarization becomes more viable. Although a significant
number of notable research has been done in the English language based on
abstractive summarization, only a couple of works have been done on Bengali
abstractive news summarization (BANS). In this article, we presented a seq2seq
based Long Short-Term Memory (LSTM) network model with attention at
encoder-decoder. Our proposed system deploys a local attention-based model that
produces a long sequence of words with lucid and human-like generated sentences
with noteworthy information of the original document. We also prepared a
dataset of more than 19k articles and corresponding human-written summaries
collected from bangla.bdnews24.com1 which is till now the most extensive
dataset for Bengali news document summarization and publicly published in
Kaggle2. We evaluated our model qualitatively and quantitatively and compared
it with other published results. It showed significant improvement in terms of
human evaluation scores with state-of-the-art approaches for BANS.
| 2,020 |
Computation and Language
|
DialogBERT: Discourse-Aware Response Generation via Learning to Recover
and Rank Utterances
|
Recent advances in pre-trained language models have significantly improved
neural response generation. However, existing methods usually view the dialogue
context as a linear sequence of tokens and learn to generate the next word
through token-level self-attention. Such token-level encoding hinders the
exploration of discourse-level coherence among utterances. This paper presents
DialogBERT, a novel conversational response generation model that enhances
previous PLM-based dialogue models. DialogBERT employs a hierarchical
Transformer architecture. To efficiently capture the discourse-level coherence
among utterances, we propose two training objectives, including masked
utterance regression and distributed utterance order ranking in analogy to the
original BERT training. Experiments on three multi-turn conversation datasets
show that our approach remarkably outperforms the baselines, such as BART and
DialoGPT, in terms of quantitative evaluation. The human evaluation suggests
that DialogBERT generates more coherent, informative, and human-like responses
than the baselines with significant margins.
| 2,021 |
Computation and Language
|
Self-Explaining Structures Improve NLP Models
|
Existing approaches to explaining deep learning models in NLP usually suffer
from two major drawbacks: (1) the main model and the explaining model are
decoupled: an additional probing or surrogate model is used to interpret an
existing model, and thus existing explaining tools are not self-explainable;
(2) the probing model is only able to explain a model's predictions by
operating on low-level features by computing saliency scores for individual
words but are clumsy at high-level text units such as phrases, sentences, or
paragraphs. To deal with these two issues, in this paper, we propose a simple
yet general and effective self-explaining framework for deep learning models in
NLP. The key point of the proposed framework is to put an additional layer, as
is called by the interpretation layer, on top of any existing NLP model. This
layer aggregates the information for each text span, which is then associated
with a specific weight, and their weighted combination is fed to the softmax
function for the final prediction. The proposed model comes with the following
merits: (1) span weights make the model self-explainable and do not require an
additional probing model for interpretation; (2) the proposed model is general
and can be adapted to any existing deep learning structures in NLP; (3) the
weight associated with each text span provides direct importance scores for
higher-level text units such as phrases and sentences. We for the first time
show that interpretability does not come at the cost of performance: a neural
model of self-explaining features obtains better performances than its
counterpart without the self-explaining nature, achieving a new SOTA
performance of 59.1 on SST-5 and a new SOTA performance of 92.3 on SNLI.
| 2,020 |
Computation and Language
|
Saying No is An Art: Contextualized Fallback Responses for Unanswerable
Dialogue Queries
|
Despite end-to-end neural systems making significant progress in the last
decade for task-oriented as well as chit-chat based dialogue systems, most
dialogue systems rely on hybrid approaches which use a combination of
rule-based, retrieval and generative approaches for generating a set of ranked
responses. Such dialogue systems need to rely on a fallback mechanism to
respond to out-of-domain or novel user queries which are not answerable within
the scope of the dialog system. While, dialog systems today rely on static and
unnatural responses like "I don't know the answer to that question" or "I'm not
sure about that", we design a neural approach which generates responses which
are contextually aware with the user query as well as say no to the user. Such
customized responses provide paraphrasing ability and contextualization as well
as improve the interaction with the user and reduce dialogue monotonicity. Our
simple approach makes use of rules over dependency parses and a text-to-text
transformer fine-tuned on synthetic data of question-response pairs generating
highly relevant, grammatical as well as diverse questions. We perform automatic
and manual evaluations to demonstrate the efficacy of the system.
| 2,021 |
Computation and Language
|
Label Enhanced Event Detection with Heterogeneous Graph Attention
Networks
|
Event Detection (ED) aims to recognize instances of specified types of event
triggers in text. Different from English ED, Chinese ED suffers from the
problem of word-trigger mismatch due to the uncertain word boundaries. Existing
approaches injecting word information into character-level models have achieved
promising progress to alleviate this problem, but they are limited by two
issues. First, the interaction between characters and lexicon words is not
fully exploited. Second, they ignore the semantic information provided by event
labels. We thus propose a novel architecture named Label enhanced Heterogeneous
Graph Attention Networks (L-HGAT). Specifically, we transform each sentence
into a graph, where character nodes and word nodes are connected with different
types of edges, so that the interaction between words and characters is fully
reserved. A heterogeneous graph attention networks is then introduced to
propagate relational message and enrich information interaction. Furthermore,
we convert each label into a trigger-prototype-based embedding, and design a
margin loss to guide the model distinguish confusing event labels. Experiments
on two benchmark datasets show that our model achieves significant improvement
over a range of competitive baseline methods.
| 2,023 |
Computation and Language
|
CUT: Controllable Unsupervised Text Simplification
|
In this paper, we focus on the challenge of learning controllable text
simplifications in unsupervised settings. While this problem has been
previously discussed for supervised learning algorithms, the literature on the
analogies in unsupervised methods is scarse. We propose two unsupervised
mechanisms for controlling the output complexity of the generated texts,
namely, back translation with control tokens (a learning-based approach) and
simplicity-aware beam search (decoding-based approach). We show that by nudging
a back-translation algorithm to understand the relative simplicity of a text in
comparison to its noisy translation, the algorithm self-supervises itself to
produce the output of the desired complexity. This approach achieves
competitive performance on well-established benchmarks: SARI score of 46.88%
and FKGL of 3.65% on the Newsela dataset.
| 2,020 |
Computation and Language
|
On Extending NLP Techniques from the Categorical to the Latent Space: KL
Divergence, Zipf's Law, and Similarity Search
|
Despite the recent successes of deep learning in natural language processing
(NLP), there remains widespread usage of and demand for techniques that do not
rely on machine learning. The advantage of these techniques is their
interpretability and low cost when compared to frequently opaque and expensive
machine learning models. Although they may not be be as performant in all
cases, they are often sufficient for common and relatively simple problems. In
this paper, we aim to modernize these older methods while retaining their
advantages by extending approaches from categorical or bag-of-words
representations to word embeddings representations in the latent space. First,
we show that entropy and Kullback-Leibler divergence can be efficiently
estimated using word embeddings and use this estimation to compare text across
several categories. Next, we recast the heavy-tailed distribution known as
Zipf's law that is frequently observed in the categorical space to the latent
space. Finally, we look to improve the Jaccard similarity measure for sentence
suggestion by introducing a new method of identifying similar sentences based
on the set cover problem. We compare the performance of this algorithm against
several baselines including Word Mover's Distance and the Levenshtein distance.
| 2,020 |
Computation and Language
|
Clustering-based Automatic Construction of Legal Entity Knowledge Base
from Contracts
|
In contract analysis and contract automation, a knowledge base (KB) of legal
entities is fundamental for performing tasks such as contract verification,
contract generation and contract analytic. However, such a KB does not always
exist nor can be produced in a short time. In this paper, we propose a
clustering-based approach to automatically generate a reliable knowledge base
of legal entities from given contracts without any supplemental references. The
proposed method is robust to different types of errors brought by
pre-processing such as Optical Character Recognition (OCR) and Named Entity
Recognition (NER), as well as editing errors such as typos. We evaluate our
method on a dataset that consists of 800 real contracts with various qualities
from 15 clients. Compared to the collected ground-truth data, our method is
able to recall 84\% of the knowledge.
| 2,021 |
Computation and Language
|
Drugs4Covid: Drug-driven Knowledge Exploitation based on Scientific
Publications
|
In the absence of sufficient medication for COVID patients due to the
increased demand, disused drugs have been employed or the doses of those
available were modified by hospital pharmacists. Some evidences for the use of
alternative drugs can be found in the existing scientific literature that could
assist in such decisions. However, exploiting large corpus of documents in an
efficient manner is not easy, since drugs may not appear explicitly related in
the texts and could be mentioned under different brand names. Drugs4Covid
combines word embedding techniques and semantic web technologies to enable a
drug-oriented exploration of large medical literature. Drugs and diseases are
identified according to the ATC classification and MeSH categories
respectively. More than 60K articles and 2M paragraphs have been processed from
the CORD-19 corpus with information of COVID-19, SARS, and other related
coronaviruses. An open catalogue of drugs has been created and results are
publicly available through a drug browser, a keyword-guided text explorer, and
a knowledge graph.
| 2,020 |
Computation and Language
|
End to End ASR System with Automatic Punctuation Insertion
|
Recent Automatic Speech Recognition systems have been moving towards
end-to-end systems that can be trained together. Numerous techniques that have
been proposed recently enabled this trend, including feature extraction with
CNNs, context capturing and acoustic feature modeling with RNNs, automatic
alignment of input and output sequences using Connectionist Temporal
Classifications, as well as replacing traditional n-gram language models with
RNN Language Models. Historically, there has been a lot of interest in
automatic punctuation in textual or speech to text context. However, there
seems to be little interest in incorporating automatic punctuation into the
emerging neural network based end-to-end speech recognition systems, partially
due to the lack of English speech corpus with punctuated transcripts. In this
study, we propose a method to generate punctuated transcript for the TEDLIUM
dataset using transcripts available from ted.com. We also propose an end-to-end
ASR system that outputs words and punctuations concurrently from speech
signals. Combining Damerau Levenshtein Distance and slot error rate into
DLev-SER, we enable measurement of punctuation error rate when the hypothesis
text is not perfectly aligned with the reference. Compared with previous
methods, our model reduces slot error rate from 0.497 to 0.341.
| 2,020 |
Computation and Language
|
Context in Informational Bias Detection
|
Informational bias is bias conveyed through sentences or clauses that provide
tangential, speculative or background information that can sway readers'
opinions towards entities. By nature, informational bias is context-dependent,
but previous work on informational bias detection has not explored the role of
context beyond the sentence. In this paper, we explore four kinds of context
for informational bias in English news articles: neighboring sentences, the
full article, articles on the same event from other news publishers, and
articles from the same domain (but potentially different events). We find that
integrating event context improves classification performance over a very
strong baseline. In addition, we perform the first error analysis of models on
this task. We find that the best-performing context-inclusive model outperforms
the baseline on longer sentences, and sentences from politically centrist
articles.
| 2,020 |
Computation and Language
|
Ontology-based and User-focused Automatic Text Summarization (OATS):
Using COVID-19 Risk Factors as an Example
|
This paper proposes a novel Ontology-based and user-focused Automatic Text
Summarization (OATS) system, in the setting where the goal is to automatically
generate text summarization from unstructured text by extracting sentences
containing the information that aligns to the user's focus. OATS consists of
two modules: ontology-based topic identification and user-focused text
summarization; it first utilizes an ontology-based approach to identify
relevant documents to user's interest, and then takes advantage of the answers
extracted from a question answering model using questions specified from users
for the generation of text summarization. To support the fight against the
COVID-19 pandemic, we used COVID-19 risk factors as an example to demonstrate
the proposed OATS system with the aim of helping the medical community
accurately identify relevant scientific literature and efficiently review the
information that addresses risk factors related to COVID-19.
| 2,020 |
Computation and Language
|
Predicting Early Indicators of Cognitive Decline from Verbal Utterances
|
Dementia is a group of irreversible, chronic, and progressive
neurodegenerative disorders resulting in impaired memory, communication, and
thought processes. In recent years, clinical research advances in brain aging
have focused on the earliest clinically detectable stage of incipient dementia,
commonly known as mild cognitive impairment (MCI). Currently, these disorders
are diagnosed using a manual analysis of neuropsychological examinations. We
measure the feasibility of using the linguistic characteristics of verbal
utterances elicited during neuropsychological exams of elderly subjects to
distinguish between elderly control groups, people with MCI, people diagnosed
with possible Alzheimer's disease (AD), and probable AD. We investigated the
performance of both theory-driven psycholinguistic features and data-driven
contextual language embeddings in identifying different clinically diagnosed
groups. Our experiments show that a combination of contextual and
psycholinguistic features extracted by a Support Vector Machine improved
distinguishing the verbal utterances of elderly controls, people with MCI,
possible AD, and probable AD. This is the first work to identify four clinical
diagnosis groups of dementia in a highly imbalanced dataset. Our work shows
that machine learning algorithms built on contextual and psycholinguistic
features can learn the linguistic biomarkers from verbal utterances and assist
clinical diagnosis of different stages and types of dementia, even with limited
data.
| 2,021 |
Computation and Language
|
Data-Informed Global Sparseness in Attention Mechanisms for Deep Neural
Networks
|
The attention mechanism is a key component of the neural revolution in
Natural Language Processing (NLP). As the size of attention-based models has
been scaling with the available computational resources, a number of pruning
techniques have been developed to detect and to exploit sparseness in such
models in order to make them more efficient. The majority of such efforts have
focused on looking for attention patterns and then hard-coding them to achieve
sparseness, or pruning the weights of the attention mechanisms based on
statistical information from the training data. Here, we marry these two lines
of research by proposing Attention Pruning (AP): a novel pruning framework that
collects observations about the attention patterns in a fixed dataset and then
induces a global sparseness mask for the model. This can save 90% of the
attention computation for language modelling and about 50% for machine
translation and for solving GLUE tasks, while maintaining the quality of the
results. Moreover, using our method, we discovered important distinctions
between self- and cross-attention patterns, which could guide future NLP
research in attention-based modelling. Our framework can in principle speed up
any model that uses attention mechanism, thus helping develop better models for
existing or for new NLP applications. Our implementation is available at
https://github.com/irugina/AP.
| 2,021 |
Computation and Language
|
Modelling Compositionality and Structure Dependence in Natural Language
|
Human beings possess the most sophisticated computational machinery in the
known universe. We can understand language of rich descriptive power, and
communicate in the same environment with astonishing clarity. Two of the many
contributors to the interest in natural language - the properties of
Compositionality and Structure Dependence, are well documented, and offer a
vast space to ask interesting modelling questions. The first step to begin
answering these questions is to ground verbal theory in formal terms. Drawing
on linguistics and set theory, a formalisation of these ideas is presented in
the first half of this thesis. We see how cognitive systems that process
language need to have certain functional constraints, viz. time based,
incremental operations that rely on a structurally defined domain. The
observations that result from analysing this formal setup are examined as part
of a modelling exercise. Using the advances of word embedding techniques, a
model of relational learning is simulated with a custom dataset to demonstrate
how a time based role-filler binding mechanism satisfies some of the
constraints described in the first section. The model's ability to map
structure, along with its symbolic-connectionist architecture makes for a
cognitively plausible implementation. The formalisation and simulation are
together an attempt to recognise the constraints imposed by linguistic theory,
and explore the opportunities presented by a cognitive model of relation
learning to realise these constraints.
| 2,021 |
Computation and Language
|
GottBERT: a pure German Language Model
|
Lately, pre-trained language models advanced the field of natural language
processing (NLP). The introduction of Bidirectional Encoders for Transformers
(BERT) and its optimized version RoBERTa have had significant impact and
increased the relevance of pre-trained models. First, research in this field
mainly started on English data followed by models trained with multilingual
text corpora. However, current research shows that multilingual models are
inferior to monolingual models. Currently, no German single language RoBERTa
model is yet published, which we introduce in this work (GottBERT). The German
portion of the OSCAR data set was used as text corpus. In an evaluation we
compare its performance on the two Named Entity Recognition (NER) tasks Conll
2003 and GermEval 2014 as well as on the text classification tasks GermEval
2018 (fine and coarse) and GNAD with existing German single language BERT
models and two multilingual ones. GottBERT was pre-trained related to the
original RoBERTa model using fairseq. All downstream tasks were trained using
hyperparameter presets taken from the benchmark of German BERT. The experiments
were setup utilizing FARM. Performance was measured by the $F_{1}$ score.
GottBERT was successfully pre-trained on a 256 core TPU pod using the RoBERTa
BASE architecture. Even without extensive hyper-parameter optimization, in all
NER and one text classification task, GottBERT already outperformed all other
tested German and multilingual models. In order to support the German NLP
field, we publish GottBERT under the AGPLv3 license.
| 2,020 |
Computation and Language
|
BERT-hLSTMs: BERT and Hierarchical LSTMs for Visual Storytelling
|
Visual storytelling is a creative and challenging task, aiming to
automatically generate a story-like description for a sequence of images. The
descriptions generated by previous visual storytelling approaches lack
coherence because they use word-level sequence generation methods and do not
adequately consider sentence-level dependencies. To tackle this problem, we
propose a novel hierarchical visual storytelling framework which separately
models sentence-level and word-level semantics. We use the transformer-based
BERT to obtain embeddings for sentences and words. We then employ a
hierarchical LSTM network: the bottom LSTM receives as input the sentence
vector representation from BERT, to learn the dependencies between the
sentences corresponding to images, and the top LSTM is responsible for
generating the corresponding word vector representations, taking input from the
bottom LSTM. Experimental results demonstrate that our model outperforms most
closely related baselines under automatic evaluation metrics BLEU and CIDEr,
and also show the effectiveness of our method with human evaluation.
| 2,020 |
Computation and Language
|
Do We Really Need That Many Parameters In Transformer For Extractive
Summarization? Discourse Can Help !
|
The multi-head self-attention of popular transformer models is widely used
within Natural Language Processing (NLP), including for the task of extractive
summarization. With the goal of analyzing and pruning the parameter-heavy
self-attention mechanism, there are multiple approaches proposing more
parameter-light self-attention alternatives. In this paper, we present a novel
parameter-lean self-attention mechanism using discourse priors. Our new tree
self-attention is based on document-level discourse information, extending the
recently proposed "Synthesizer" framework with another lightweight alternative.
We show empirical results that our tree self-attention approach achieves
competitive ROUGE-scores on the task of extractive summarization. When compared
to the original single-head transformer model, the tree attention approach
reaches similar performance on both, EDU and sentence level, despite the
significant reduction of parameters in the attention component. We further
significantly outperform the 8-head transformer model on sentence level when
applying a more balanced hyper-parameter setting, requiring an order of
magnitude less parameters.
| 2,020 |
Computation and Language
|
Evolving Character-level Convolutional Neural Networks for Text
Classification
|
Character-level convolutional neural networks (char-CNN) require no knowledge
of the semantic or syntactic structure of the language they classify. This
property simplifies its implementation but reduces its classification accuracy.
Increasing the depth of char-CNN architectures does not result in breakthrough
accuracy improvements. Research has not established which char-CNN
architectures are optimal for text classification tasks. Manually designing and
training char-CNNs is an iterative and time-consuming process that requires
expert domain knowledge. Evolutionary deep learning (EDL) techniques, including
surrogate-based versions, have demonstrated success in automatically searching
for performant CNN architectures for image analysis tasks. Researchers have not
applied EDL techniques to search the architecture space of char-CNNs for text
classification tasks. This article demonstrates the first work in evolving
char-CNN architectures using a novel EDL algorithm based on genetic
programming, an indirect encoding and surrogate models, to search for
performant char-CNN architectures automatically. The algorithm is evaluated on
eight text classification datasets and benchmarked against five manually
designed CNN architecture and one long short-term memory (LSTM) architecture.
Experiment results indicate that the algorithm can evolve architectures that
outperform the LSTM in terms of classification accuracy and five of the
manually designed CNN architectures in terms of classification accuracy and
parameter count.
| 2,020 |
Computation and Language
|
Evolving Character-Level DenseNet Architectures using Genetic
Programming
|
DenseNet architectures have demonstrated impressive performance in image
classification tasks, but limited research has been conducted on using
character-level DenseNet (char-DenseNet) architectures for text classification
tasks. It is not clear what DenseNet architectures are optimal for text
classification tasks. The iterative task of designing, training and testing of
char-DenseNets is an NP-Hard problem that requires expert domain knowledge.
Evolutionary deep learning (EDL) has been used to automatically design CNN
architectures for the image classification domain, thereby mitigating the need
for expert domain knowledge. This study demonstrates the first work on using
EDL to evolve char-DenseNet architectures for text classification tasks. A
novel genetic programming-based algorithm (GP-Dense) coupled with an
indirect-encoding scheme, facilitates the evolution of performant char DenseNet
architectures. The algorithm is evaluated on two popular text datasets, and the
best-evolved models are benchmarked against four current state-of-the-art
character-level CNN and DenseNet models. Results indicate that the algorithm
evolves performant models for both datasets that outperform two of the
state-of-the-art models in terms of model accuracy and three of the
state-of-the-art models in terms of parameter size.
| 2,020 |
Computation and Language
|
Few-Shot Event Detection with Prototypical Amortized Conditional Random
Field
|
Event detection tends to struggle when it needs to recognize novel event
types with a few samples. The previous work attempts to solve this problem in
the identify-then-classify manner but ignores the trigger discrepancy between
event types, thus suffering from the error propagation. In this paper, we
present a novel unified model which converts the task to a few-shot tagging
problem with a double-part tagging scheme. To this end, we first propose the
Prototypical Amortized Conditional Random Field (PA-CRF) to model the label
dependency in the few-shot scenario, which approximates the transition scores
between labels based on the label prototypes. Then Gaussian distribution is
introduced for modeling of the transition scores to alleviate the uncertain
estimation resulting from insufficient data. Experimental results show that the
unified models work better than existing identify-then-classify models and our
PA-CRF further achieves the best results on the benchmark dataset FewEvent. Our
code and data are available at http://github.com/congxin95/PA-CRF.
| 2,021 |
Computation and Language
|
Benchmarking Automated Clinical Language Simplification: Dataset,
Algorithm, and Evaluation
|
Patients with low health literacy usually have difficulty understanding
medical jargon and the complex structure of professional medical language.
Although some studies are proposed to automatically translate expert language
into layperson-understandable language, only a few of them focus on both
accuracy and readability aspects simultaneously in the clinical domain. Thus,
simplification of the clinical language is still a challenging task, but
unfortunately, it is not yet fully addressed in previous work. To benchmark
this task, we construct a new dataset named MedLane to support the development
and evaluation of automated clinical language simplification approaches.
Besides, we propose a new model called DECLARE that follows the human
annotation procedure and achieves state-of-the-art performance compared with
eight strong baselines. To fairly evaluate the performance, we also propose
three specific evaluation metrics. Experimental results demonstrate the utility
of the annotated MedLane dataset and the effectiveness of the proposed model
DECLARE.
| 2,022 |
Computation and Language
|
Fine-tuning BERT for Low-Resource Natural Language Understanding via
Active Learning
|
Recently, leveraging pre-trained Transformer based language models in down
stream, task specific models has advanced state of the art results in natural
language understanding tasks. However, only a little research has explored the
suitability of this approach in low resource settings with less than 1,000
training data points. In this work, we explore fine-tuning methods of BERT -- a
pre-trained Transformer based language model -- by utilizing pool-based active
learning to speed up training while keeping the cost of labeling new data
constant. Our experimental results on the GLUE data set show an advantage in
model performance by maximizing the approximate knowledge gain of the model
when querying from the pool of unlabeled data. Finally, we demonstrate and
analyze the benefits of freezing layers of the language model during
fine-tuning to reduce the number of trainable parameters, making it more
suitable for low-resource settings.
| 2,020 |
Computation and Language
|
Data Processing and Annotation Schemes for FinCausal Shared Task
|
This document explains the annotation schemes used to label the data for the
FinCausal Shared Task (Mariko et al., 2020). This task is associated to the
Joint Workshop on Financial Narrative Processing and MultiLing Financial
Summarisation (FNP-FNS 2020), to be held at The 28th International Conference
on Computational Linguistics (COLING'2020), on December 12, 2020.
| 2,020 |
Computation and Language
|
Financial Document Causality Detection Shared Task (FinCausal 2020)
|
We present the FinCausal 2020 Shared Task on Causality Detection in Financial
Documents and the associated FinCausal dataset, and discuss the participating
systems and results. Two sub-tasks are proposed: a binary classification task
(Task 1) and a relation extraction task (Task 2). A total of 16 teams submitted
runs across the two Tasks and 13 of them contributed with a system description
paper. This workshop is associated to the Joint Workshop on Financial Narrative
Processing and MultiLing Financial Summarisation (FNP-FNS 2020), held at The
28th International Conference on Computational Linguistics (COLING'2020),
Barcelona, Spain on September 12, 2020.
| 2,020 |
Computation and Language
|
Coarse-to-Fine Entity Representations for Document-level Relation
Extraction
|
Document-level Relation Extraction (RE) requires extracting relations
expressed within and across sentences. Recent works show that graph-based
methods, usually constructing a document-level graph that captures
document-aware interactions, can obtain useful entity representations thus
helping tackle document-level RE. These methods either focus more on the entire
graph, or pay more attention to a part of the graph, e.g., paths between the
target entity pair. However, we find that document-level RE may benefit from
focusing on both of them simultaneously. Therefore, to obtain more
comprehensive entity representations, we propose the Coarse-to-Fine Entity
Representation model (CFER) that adopts a coarse-to-fine strategy involving two
phases. First, CFER uses graph neural networks to integrate global information
in the entire graph at a coarse level. Next, CFER utilizes the global
information as a guidance to selectively aggregate path information between the
target entity pair at a fine level. In classification, we combine the entity
representations from both two levels into more comprehensive representations
for relation extraction. Experimental results on two document-level RE
datasets, DocRED and CDR, show that CFER outperforms existing models and is
robust to the uneven label distribution.
| 2,021 |
Computation and Language
|
CUED_speech at TREC 2020 Podcast Summarisation Track
|
In this paper, we describe our approach for the Podcast Summarisation
challenge in TREC 2020. Given a podcast episode with its transcription, the
goal is to generate a summary that captures the most important information in
the content. Our approach consists of two steps: (1) Filtering redundant or
less informative sentences in the transcription using the attention of a
hierarchical model; (2) Applying a state-of-the-art text summarisation system
(BART) fine-tuned on the Podcast data using a sequence-level reward function.
Furthermore, we perform ensembles of three and nine models for our submission
runs. We also fine-tune the BART model on the Podcast data as our baseline. The
human evaluation by NIST shows that our best submission achieves 1.777 in the
EGFB scale, while the score of creator-provided description is 1.291. Our
system won the Spotify Podcast Summarisation Challenge in the TREC2020 Podcast
Track in both human and automatic evaluation.
| 2,021 |
Computation and Language
|
DDRel: A New Dataset for Interpersonal Relation Classification in Dyadic
Dialogues
|
Interpersonal language style shifting in dialogues is an interesting and
almost instinctive ability of human. Understanding interpersonal relationship
from language content is also a crucial step toward further understanding
dialogues. Previous work mainly focuses on relation extraction between named
entities in texts. In this paper, we propose the task of relation
classification of interlocutors based on their dialogues. We crawled movie
scripts from IMSDb, and annotated the relation labels for each session
according to 13 pre-defined relationships. The annotated dataset DDRel consists
of 6300 dyadic dialogue sessions between 694 pair of speakers with 53,126
utterances in total. We also construct session-level and pair-level relation
classification tasks with widely-accepted baselines. The experimental results
show that this task is challenging for existing models and the dataset will be
useful for future research.
| 2,020 |
Computation and Language
|
Pre-trained language models as knowledge bases for Automotive Complaint
Analysis
|
Recently it has been shown that large pre-trained language models like BERT
(Devlin et al., 2018) are able to store commonsense factual knowledge captured
in its pre-training corpus (Petroni et al., 2019). In our work we further
evaluate this ability with respect to an application from industry creating a
set of probes specifically designed to reveal technical quality issues captured
as described incidents out of unstructured customer feedback in the automotive
industry. After probing the out-of-the-box versions of the pre-trained models
with fill-in-the-mask tasks we dynamically provide it with more knowledge via
continual pre-training on the Office of Defects Investigation (ODI) Complaints
data set. In our experiments the models exhibit performance regarding queries
on domain-specific topics compared to when queried on factual knowledge itself,
as Petroni et al. (2019) have done. For most of the evaluated architectures the
correct token is predicted with a $Precision@1$ ($P@1$) of above 60\%, while
for $P@5$ and $P@10$ even values of well above 80\% and up to 90\% respectively
are reached. These results show the potential of using language models as a
knowledge base for structured analysis of customer feedback.
| 2,020 |
Computation and Language
|
Automated Detection of Cyberbullying Against Women and Immigrants and
Cross-domain Adaptability
|
Cyberbullying is a prevalent and growing social problem due to the surge of
social media technology usage. Minorities, women, and adolescents are among the
common victims of cyberbullying. Despite the advancement of NLP technologies,
the automated cyberbullying detection remains challenging. This paper focuses
on advancing the technology using state-of-the-art NLP techniques. We use a
Twitter dataset from SemEval 2019 - Task 5(HatEval) on hate speech against
women and immigrants. Our best performing ensemble model based on DistilBERT
has achieved 0.73 and 0.74 of F1 score in the task of classifying hate speech
(Task A) and aggressiveness and target (Task B) respectively. We adapt the
ensemble model developed for Task A to classify offensive language in external
datasets and achieved ~0.7 of F1 score using three benchmark datasets, enabling
promising results for cross-domain adaptability. We conduct a qualitative
analysis of misclassified tweets to provide insightful recommendations for
future cyberbullying research.
| 2,020 |
Computation and Language
|
Ve'rdd. Narrowing the Gap between Paper Dictionaries, Low-Resource NLP
and Community Involvement
|
We present an open-source online dictionary editing system, Ve'rdd, that
offers a chance to re-evaluate and edit grassroots dictionaries that have been
exposed to multiple amateur editors. The idea is to incorporate community
activities into a state-of-the-art finite-state language description of a
seriously endangered minority language, Skolt Sami. Problems involve getting
the community to take part in things above the pencil-and-paper level. At
times, it seems that the native speakers and the dictionary oriented are
lacking technical understanding to utilize the infrastructures which might make
their work more meaningful in the future, i.e. multiple reuse of all of their
input. Therefore, our system integrates with the existing tools and
infrastructures for Uralic language masking the technical complexities behind a
user-friendly UI.
| 2,020 |
Computation and Language
|
To Schedule or not to Schedule: Extracting Task Specific Temporal
Entities and Associated Negation Constraints
|
State of the art research for date-time entity extraction from text is task
agnostic. Consequently, while the methods proposed in literature perform well
for generic date-time extraction from texts, they don't fare as well on task
specific date-time entity extraction where only a subset of the date-time
entities present in the text are pertinent to solving the task. Furthermore,
some tasks require identifying negation constraints associated with the
date-time entities to correctly reason over time. We showcase a novel model for
extracting task-specific date-time entities along with their negation
constraints. We show the efficacy of our method on the task of date-time
understanding in the context of scheduling meetings for an email-based digital
AI scheduling assistant. Our method achieves an absolute gain of 19\% f-score
points compared to baseline methods in detecting the date-time entities
relevant to scheduling meetings and a 4\% improvement over baseline methods for
detecting negation constraints over date-time entities.
| 2,020 |
Computation and Language
|
FinnSentiment -- A Finnish Social Media Corpus for Sentiment Polarity
Annotation
|
Sentiment analysis and opinion mining is an important task with obvious
application areas in social media, e.g. when indicating hate speech and fake
news. In our survey of previous work, we note that there is no large-scale
social media data set with sentiment polarity annotations for Finnish. This
publications aims to remedy this shortcoming by introducing a 27,000 sentence
data set annotated independently with sentiment polarity by three native
annotators. We had the same three annotators for the whole data set, which
provides a unique opportunity for further studies of annotator behaviour over
time. We analyse their inter-annotator agreement and provide two baselines to
validate the usefulness of the data set.
| 2,020 |
Computation and Language
|
Event Guided Denoising for Multilingual Relation Learning
|
General purpose relation extraction has recently seen considerable gains in
part due to a massively data-intensive distant supervision technique from
Soares et al. (2019) that produces state-of-the-art results across many
benchmarks. In this work, we present a methodology for collecting high quality
training data for relation extraction from unlabeled text that achieves a
near-recreation of their zero-shot and few-shot results at a fraction of the
training cost. Our approach exploits the predictable distributional structure
of date-marked news articles to build a denoised corpus -- the extraction
process filters out low quality examples. We show that a smaller multilingual
encoder trained on this corpus performs comparably to the current
state-of-the-art (when both receive little to no fine-tuning) on few-shot and
standard relation benchmarks in English and Spanish despite using many fewer
examples (50k vs. 300mil+).
| 2,020 |
Computation and Language
|
Delexicalized Paraphrase Generation
|
We present a neural model for paraphrasing and train it to generate
delexicalized sentences. We achieve this by creating training data in which
each input is paired with a number of reference paraphrases. These sets of
reference paraphrases represent a weak type of semantic equivalence based on
annotated slots and intents. To understand semantics from different types of
slots, other than anonymizing slots, we apply convolutional neural networks
(CNN) prior to pooling on slot values and use pointers to locate slots in the
output. We show empirically that the generated paraphrases are of high quality,
leading to an additional 1.29% exact match on live utterances. We also show
that natural language understanding (NLU) tasks, such as intent classification
and named entity recognition, can benefit from data augmentation using
automatically generated paraphrases.
| 2,020 |
Computation and Language
|
On-Device Sentence Similarity for SMS Dataset
|
Determining the sentence similarity between Short Message Service (SMS)
texts/sentences plays a significant role in mobile device industry. Gauging the
similarity between SMS data is thus necessary for various applications like
enhanced searching and navigation, clubbing together SMS of similar type when
given a custom label or tag is provided by user irrespective of their sender
etc. The problem faced with SMS data is its incomplete structure and
grammatical inconsistencies. In this paper, we propose a unique pipeline for
evaluating the text similarity between SMS texts. We use Part of Speech (POS)
model for keyword extraction by taking advantage of the partial structure
embedded in SMS texts and similarity comparisons are carried out using
statistical methods. The proposed pipeline deals with major semantic variations
across SMS data as well as makes it effective for its application on-device
(mobile phone). To showcase the capabilities of our work, our pipeline has been
designed with an inclination towards one of the possible applications of SMS
text similarity discussed in one of the following sections but nonetheless
guarantees scalability for other applications as well.
| 2,022 |
Computation and Language
|
Inductive Bias and Language Expressivity in Emergent Communication
|
Referential games and reconstruction games are the most common game types for
studying emergent languages. We investigate how the type of the language game
affects the emergent language in terms of: i) language compositionality and ii)
transfer of an emergent language to a task different from its origin, which we
refer to as language expressivity. With empirical experiments on a handcrafted
symbolic dataset, we show that languages emerged from different games have
different compositionality and further different expressivity.
| 2,020 |
Computation and Language
|
Data-Efficient Methods for Dialogue Systems
|
Conversational User Interface (CUI) has become ubiquitous in everyday life,
in consumer-focused products like Siri and Alexa or business-oriented
solutions. Deep learning underlies many recent breakthroughs in dialogue
systems but requires very large amounts of training data, often annotated by
experts. Trained with smaller data, these methods end up severely lacking
robustness (e.g. to disfluencies and out-of-domain input), and often just have
too little generalisation power. In this thesis, we address the above issues by
introducing a series of methods for training robust dialogue systems from
minimal data. Firstly, we study two orthogonal approaches to dialogue:
linguistically informed and machine learning-based - from the data efficiency
perspective. We outline the steps to obtain data-efficient solutions with
either approach. We then introduce two data-efficient models for dialogue
response generation: the Dialogue Knowledge Transfer Network based on latent
variable dialogue representations, and the hybrid Generative-Retrieval
Transformer model (ranked first at the DSTC 8 Fast Domain Adaptation task).
Next, we address the problem of robustness given minimal data. As such, propose
a multitask LSTM-based model for domain-general disfluency detection. For the
problem of out-of-domain input, we present Turn Dropout, a data augmentation
technique for anomaly detection only using in-domain data, and introduce
autoencoder-augmented models for efficient training with Turn Dropout. Finally,
we focus on social dialogue and introduce a neural model for response ranking
in social conversation used in Alana, the 3rd place winner in the Amazon Alexa
Prize 2017 and 2018. We employ a novel technique of predicting the dialogue
length as the main ranking objective and show that this approach improves upon
the ratings-based counterpart in terms of data efficiency while matching it in
performance.
| 2,020 |
Computation and Language
|
Does Yoga Make You Happy? Analyzing Twitter User Happiness using Textual
and Temporal Information
|
Although yoga is a multi-component practice to hone the body and mind and be
known to reduce anxiety and depression, there is still a gap in understanding
people's emotional state related to yoga in social media. In this study, we
investigate the causal relationship between practicing yoga and being happy by
incorporating textual and temporal information of users using Granger
causality. To find out causal features from the text, we measure two variables
(i) Yoga activity level based on content analysis and (ii) Happiness level
based on emotional state. To understand users' yoga activity, we propose a
joint embedding model based on the fusion of neural networks with attention
mechanism by leveraging users' social and textual information. For measuring
the emotional state of yoga users (target domain), we suggest a transfer
learning approach to transfer knowledge from an attention-based neural network
model trained on a source domain. Our experiment on Twitter dataset
demonstrates that there are 1447 users where "yoga Granger-causes happiness".
| 2,021 |
Computation and Language
|
Cross-Domain Sentiment Classification with In-Domain Contrastive
Learning
|
Contrastive learning (CL) has been successful as a powerful representation
learning method. In this paper, we propose a contrastive learning framework for
cross-domain sentiment classification. We aim to induce domain invariant
optimal classifiers rather than distribution matching. To this end, we
introduce in-domain contrastive learning and entropy minimization. Also, we
find through ablation studies that these two techniques behaviour differently
in case of large label distribution shift and conclude that the best practice
is to choose one of them adaptively according to label distribution shift. The
new state-of-the-art results our model achieves on standard benchmarks show the
efficacy of the proposed method.
| 2,020 |
Computation and Language
|
Data Boost: Text Data Augmentation Through Reinforcement Learning Guided
Conditional Generation
|
Data augmentation is proven to be effective in many NLU tasks, especially for
those suffering from data scarcity. In this paper, we present a powerful and
easy to deploy text augmentation framework, Data Boost, which augments data
through reinforcement learning guided conditional generation. We evaluate Data
Boost on three diverse text classification tasks under five different
classifier architectures. The result shows that Data Boost can boost the
performance of classifiers especially in low-resource data scenarios. For
instance, Data Boost improves F1 for the three tasks by 8.7% on average when
given only 10% of the whole data for training. We also compare Data Boost with
six prior text augmentation methods. Through human evaluations (N=178), we
confirm that Data Boost augmentation has comparable quality as the original
data with respect to readability and class consistency.
| 2,020 |
Computation and Language
|
Enhanced Offensive Language Detection Through Data Augmentation
|
Detecting offensive language on social media is an important task. The
ICWSM-2020 Data Challenge Task 2 is aimed at identifying offensive content
using a crowd-sourced dataset containing 100k labelled tweets. The dataset,
however, suffers from class imbalance, where certain labels are extremely rare
compared with other classes (e.g, the hateful class is only 5% of the data). In
this work, we present Dager (Data Augmenter), a generation-based data
augmentation method, that improves the performance of classification on
imbalanced and low-resource data such as the offensive language dataset. Dager
extracts the lexical features of a given class, and uses these features to
guide the generation of a conditional generator built on GPT-2. The generated
text can then be added to the training set as augmentation data. We show that
applying Dager can increase the F1 score of the data challenge by 11% when we
use 1% of the whole dataset for training (using BERT for classification);
moreover, the generated data also preserves the original labels very well. We
test Dager on four different classifiers (BERT, CNN, Bi-LSTM with attention,
and Transformer), observing universal improvement on the detection, indicating
our method is effective and classifier-agnostic.
| 2,020 |
Computation and Language
|
Leveraging Order-Free Tag Relations for Context-Aware Recommendation
|
Tag recommendation relies on either a ranking function for top-$k$ tags or an
autoregressive generation method. However, the previous methods neglect one of
two seemingly conflicting yet desirable characteristics of a tag set:
orderlessness and inter-dependency. While the ranking approach fails to address
the inter-dependency among tags when they are ranked, the autoregressive
approach fails to take orderlessness into account because it is designed to
utilize sequential relations among tokens. We propose a sequence-oblivious
generation method for tag recommendation, in which the next tag to be generated
is independent of the order of the generated tags and the order of the ground
truth tags occurring in training data. Empirical results on two different
domains, Instagram and Stack Overflow, show that our method is significantly
superior to the previous approaches.
| 2,021 |
Computation and Language
|
Reciprocal Supervised Learning Improves Neural Machine Translation
|
Despite the recent success on image classification, self-training has only
achieved limited gains on structured prediction tasks such as neural machine
translation (NMT). This is mainly due to the compositionality of the target
space, where the far-away prediction hypotheses lead to the notorious
reinforced mistake problem. In this paper, we revisit the utilization of
multiple diverse models and present a simple yet effective approach named
Reciprocal-Supervised Learning (RSL). RSL first exploits individual models to
generate pseudo parallel data, and then cooperatively trains each model on the
combined synthetic corpus. RSL leverages the fact that different parameterized
models have different inductive biases, and better predictions can be made by
jointly exploiting the agreement among each other. Unlike the previous
knowledge distillation methods built upon a much stronger teacher, RSL is
capable of boosting the accuracy of one model by introducing other comparable
or even weaker models. RSL can also be viewed as a more efficient alternative
to ensemble. Extensive experiments demonstrate the superior performance of RSL
on several benchmarks with significant margins.
| 2,020 |
Computation and Language
|
On-Device Tag Generation for Unstructured Text
|
With the overwhelming transition to smart phones, storing important
information in the form of unstructured text has become habitual to users of
mobile devices. From grocery lists to drafts of emails and important speeches,
users store a lot of data in the form of unstructured text (for eg: in the
Notes application) on their devices, leading to cluttering of data. This not
only prevents users from efficient navigation in the applications but also
precludes them from perceiving the relations that could be present across data
in those applications. This paper proposes a novel pipeline to generate a set
of tags using world knowledge based on the keywords and concepts present in
unstructured textual data. These tags can then be used to summarize, categorize
or search for the desired information thus enhancing user experience by
allowing them to have a holistic outlook of the kind of information stored in
the form of unstructured text. In the proposed system, we use an on-device
(mobile phone) efficient CNN model with pruned ConceptNet resource to achieve
our goal. The architecture also presents a novel ranking algorithm to extract
the top n tags from any given text.
| 2,022 |
Computation and Language
|
Codeswitched Sentence Creation using Dependency Parsing
|
Codeswitching has become one of the most common occurrences across
multilingual speakers of the world, especially in countries like India which
encompasses around 23 official languages with the number of bilingual speakers
being around 300 million. The scarcity of Codeswitched data becomes a
bottleneck in the exploration of this domain with respect to various Natural
Language Processing (NLP) tasks. We thus present a novel algorithm which
harnesses the syntactic structure of English grammar to develop grammatically
sensible Codeswitched versions of English-Hindi, English-Marathi and
English-Kannada data. Apart from maintaining the grammatical sanity to a great
extent, our methodology also guarantees abundant generation of data from a
minuscule snapshot of given data. We use multiple datasets to showcase the
capabilities of our algorithm while at the same time we assess the quality of
generated Codeswitched data using some qualitative metrics along with providing
baseline results for couple of NLP tasks.
| 2,022 |
Computation and Language
|
Over a Decade of Social Opinion Mining: A Systematic Review
|
Social media popularity and importance is on the increase due to people using
it for various types of social interaction across multiple channels. This
systematic review focuses on the evolving research area of Social Opinion
Mining, tasked with the identification of multiple opinion dimensions, such as
subjectivity, sentiment polarity, emotion, affect, sarcasm and irony, from
user-generated content represented across multiple social media platforms and
in various media formats, like text, image, video and audio. Through Social
Opinion Mining, natural language can be understood in terms of the different
opinion dimensions, as expressed by humans. This contributes towards the
evolution of Artificial Intelligence which in turn helps the advancement of
several real-world use cases, such as customer service and decision making. A
thorough systematic review was carried out on Social Opinion Mining research
which totals 485 published studies and spans a period of twelve years between
2007 and 2018. The in-depth analysis focuses on the social media platforms,
techniques, social datasets, language, modality, tools and technologies, and
other aspects derived. Social Opinion Mining can be utilised in many
application areas, ranging from marketing, advertising and sales for
product/service management, and in multiple domains and industries, such as
politics, technology, finance, healthcare, sports and government. The latest
developments in Social Opinion Mining beyond 2018 are also presented together
with future research directions, with the aim of leaving a wider academic and
societal impact in several real-world applications.
| 2,021 |
Computation and Language
|
Modeling and Utilizing User's Internal State in Movie Recommendation
Dialogue
|
Intelligent dialogue systems are expected as a new interface between humans
and machines. Such an intelligent dialogue system should estimate the user's
internal state (UIS) in dialogues and change its response appropriately
according to the estimation result. In this paper, we model the UIS in
dialogues, taking movie recommendation dialogues as examples, and construct a
dialogue system that changes its response based on the UIS. Based on the
dialogue data analysis, we model the UIS as three elements: knowledge,
interest, and engagement. We train the UIS estimators on a dialogue corpus with
the modeled UIS's annotations. The estimators achieved high estimation
accuracy. We also design response change rules that change the system's
responses according to each UIS. We confirmed that response changes using the
result of the UIS estimators improved the system utterances' naturalness in
both dialogue-wise evaluation and utterance-wise evaluation.
| 2,020 |
Computation and Language
|
A Two-Systems Perspective for Computational Thinking
|
Computational Thinking (CT) has emerged as one of the vital thinking skills
in recent times, especially for Science, Technology, Engineering and Management
(STEM) graduates. Educators are in search of underlying cognitive models
against which CT can be analyzed and evaluated. This paper suggests adopting
Kahneman's two-systems model as a framework to understand the computational
thought process. Kahneman's two-systems model postulates that human thinking
happens at two levels, i.e. fast and slow thinking. This paper illustrates
through examples that CT activities can be represented and analyzed using
Kahneman's two-systems model. The potential benefits of adopting Kahneman's
two-systems perspective are that it helps us to fix the biases that cause
errors in our reasoning. Further, it also provides a set of heuristics to speed
up reasoning activities.
| 2,020 |
Computation and Language
|
Competition in Cross-situational Word Learning: A Computational Study
|
Children learn word meanings by tapping into the commonalities across
different situations in which words are used and overcome the high level of
uncertainty involved in early word learning experiences. We propose a modeling
framework to investigate the role of mutual exclusivity bias - asserting
one-to-one mappings between words and their meanings - in reducing uncertainty
in word learning. In a set of computational studies, we show that to
successfully learn word meanings in the face of uncertainty, a learner needs to
use two types of competition: words competing for association to a referent
when learning from an observation and referents competing for a word when the
word is used. Our work highlights the importance of an algorithmic-level
analysis to shed light on the utility of different mechanisms that can
implement the same computational-level theory.
| 2,021 |
Computation and Language
|
From syntactic structure to semantic relationship: hypernym extraction
from definitions by recurrent neural networks using the part of speech
information
|
The hyponym-hypernym relation is an essential element in the semantic
network. Identifying the hypernym from a definition is an important task in
natural language processing and semantic analysis. While a public dictionary
such as WordNet works for common words, its application in domain-specific
scenarios is limited. Existing tools for hypernym extraction either rely on
specific semantic patterns or focus on the word representation, which all
demonstrate certain limitations.
| 2,020 |
Computation and Language
|
An Empirical Survey of Unsupervised Text Representation Methods on
Twitter Data
|
The field of NLP has seen unprecedented achievements in recent years. Most
notably, with the advent of large-scale pre-trained Transformer-based language
models, such as BERT, there has been a noticeable improvement in text
representation. It is, however, unclear whether these improvements translate to
noisy user-generated text, such as tweets. In this paper, we present an
experimental survey of a wide range of well-known text representation
techniques for the task of text clustering on noisy Twitter data. Our results
indicate that the more advanced models do not necessarily work best on tweets
and that more exploration in this area is needed.
| 2,020 |
Computation and Language
|
Document Graph for Neural Machine Translation
|
Previous works have shown that contextual information can improve the
performance of neural machine translation (NMT). However, most existing
document-level NMT methods only consider a few number of previous sentences.
How to make use of the whole document as global contexts is still a challenge.
To address this issue, we hypothesize that a document can be represented as a
graph that connects relevant contexts regardless of their distances. We employ
several types of relations, including adjacency, syntactic dependency, lexical
consistency, and coreference, to construct the document graph. Then, we
incorporate both source and target graphs into the conventional Transformer
architecture with graph convolutional networks. Experiments on various NMT
benchmarks, including IWSLT English--French, Chinese-English, WMT
English--German and Opensubtitle English--Russian, demonstrate that using
document graphs can significantly improve the translation quality. Extensive
analysis verifies that the document graph is beneficial for capturing discourse
phenomena.
| 2,021 |
Computation and Language
|
Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting
Summarization
|
Meeting summarization is a challenging task due to its dynamic interaction
nature among multiple speakers and lack of sufficient training data. Existing
methods view the meeting as a linear sequence of utterances while ignoring the
diverse relations between each utterance. Besides, the limited labeled data
further hinders the ability of data-hungry neural models. In this paper, we try
to mitigate the above challenges by introducing dialogue-discourse relations.
First, we present a Dialogue Discourse-Dware Meeting Summarizer (DDAMS) to
explicitly model the interaction between utterances in a meeting by modeling
different discourse relations. The core module is a relational graph encoder,
where the utterances and discourse relations are modeled in a graph interaction
manner. Moreover, we devise a Dialogue Discourse-Aware Data Augmentation
(DDADA) strategy to construct a pseudo-summarization corpus from existing input
meetings, which is 20 times larger than the original dataset and can be used to
pretrain DDAMS. Experimental results on AMI and ICSI meeting datasets show that
our full system can achieve SOTA performance. Our codes will be available at:
https://github.com/xcfcode/DDAMS.
| 2,021 |
Computation and Language
|
H-FND: Hierarchical False-Negative Denoising for Distant Supervision
Relation Extraction
|
Although distant supervision automatically generates training data for
relation extraction, it also introduces false-positive (FP) and false-negative
(FN) training instances to the generated datasets. Whereas both types of errors
degrade the final model performance, previous work on distant supervision
denoising focuses more on suppressing FP noise and less on resolving the FN
problem. We here propose H-FND, a hierarchical false-negative denoising
framework for robust distant supervision relation extraction, as an FN
denoising solution. H-FND uses a hierarchical policy which first determines
whether non-relation (NA) instances should be kept, discarded, or revised
during the training process. For those learning instances which are to be
revised, the policy further reassigns them appropriate relations, making them
better training inputs. Experiments on SemEval-2010 and TACRED were conducted
with controlled FN ratios that randomly turn the relations of training and
validation instances into negatives to generate FN instances. In this setting,
H-FND can revise FN instances correctly and maintains high F1 scores even when
50% of the instances have been turned into negatives. Experiment on NYT10 is
further conducted to shows that H-FND is applicable in a realistic setting.
| 2,020 |
Computation and Language
|
UBAR: Towards Fully End-to-End Task-Oriented Dialog Systems with GPT-2
|
This paper presents our task-oriented dialog system UBAR which models
task-oriented dialogs on a dialog session level. Specifically, UBAR is acquired
by fine-tuning the large pre-trained unidirectional language model GPT-2 on the
sequence of the entire dialog session which is composed of user utterance,
belief state, database result, system act, and system response of every dialog
turn. Additionally, UBAR is evaluated in a more realistic setting, where its
dialog context has access to user utterances and all content it generated such
as belief states, system acts, and system responses. Experimental results on
the MultiWOZ datasets show that UBAR achieves state-of-the-art performances in
multiple settings, improving the combined score of response generation, policy
optimization, and end-to-end modeling by 4.7, 3.5, and 9.4 points respectively.
Thorough analyses demonstrate that the session-level training sequence
formulation and the generated dialog context are essential for UBAR to operate
as a fully end-to-end task-oriented dialog system in real life. We also examine
the transfer ability of UBAR to new domains with limited data and provide
visualization and a case study to illustrate the advantages of UBAR in modeling
on a dialog session level.
| 2,021 |
Computation and Language
|
KgPLM: Knowledge-guided Language Model Pre-training via Generative and
Discriminative Learning
|
Recent studies on pre-trained language models have demonstrated their ability
to capture factual knowledge and applications in knowledge-aware downstream
tasks. In this work, we present a language model pre-training framework guided
by factual knowledge completion and verification, and use the generative and
discriminative approaches cooperatively to learn the model. Particularly, we
investigate two learning schemes, named two-tower scheme and pipeline scheme,
in training the generator and discriminator with shared parameter. Experimental
results on LAMA, a set of zero-shot cloze-style question answering tasks, show
that our model contains richer factual knowledge than the conventional
pre-trained language models. Furthermore, when fine-tuned and evaluated on the
MRQA shared tasks which consists of several machine reading comprehension
datasets, our model achieves the state-of-the-art performance, and gains large
improvements on NewsQA (+1.26 F1) and TriviaQA (+1.56 F1) over RoBERTa.
| 2,020 |
Computation and Language
|
PPKE: Knowledge Representation Learning by Path-based Pre-training
|
Entities may have complex interactions in a knowledge graph (KG), such as
multi-step relationships, which can be viewed as graph contextual information
of the entities. Traditional knowledge representation learning (KRL) methods
usually treat a single triple as a training unit, and neglect most of the graph
contextual information exists in the topological structure of KGs. In this
study, we propose a Path-based Pre-training model to learn Knowledge
Embeddings, called PPKE, which aims to integrate more graph contextual
information between entities into the KRL model. Experiments demonstrate that
our model achieves state-of-the-art results on several benchmark datasets for
link prediction and relation prediction tasks, indicating that our model
provides a feasible way to take advantage of graph contextual information in
KGs.
| 2,020 |
Computation and Language
|
Structural Text Segmentation of Legal Documents
|
The growing complexity of legal cases has lead to an increasing interest in
legal information retrieval systems that can effectively satisfy user-specific
information needs. However, such downstream systems typically require documents
to be properly formatted and segmented, which is often done with relatively
simple pre-processing steps, disregarding topical coherence of segments.
Systems generally rely on representations of individual sentences or
paragraphs, which may lack crucial context, or document-level representations,
which are too long for meaningful search results. To address this issue, we
propose a segmentation system that can predict topical coherence of sequential
text segments spanning several paragraphs, effectively segmenting a document
and providing a more balanced representation for downstream applications. We
build our model on top of popular transformer networks and formulate structural
text segmentation as topical change detection, by performing a series of
independent classifications that allow for efficient fine-tuning on
task-specific data. We crawl a novel dataset consisting of roughly $74,000$
online Terms-of-Service documents, including hierarchical topic annotations,
which we use for training. Results show that our proposed system significantly
outperforms baselines, and adapts well to structural peculiarities of legal
documents. We release both data and trained models to the research community
for future work.https://github.com/dennlinger/TopicalChange
| 2,021 |
Computation and Language
|
An Enhanced MeanSum Method For Generating Hotel Multi-Review
Summarizations
|
Multi-document summaritazion is the process of taking multiple texts as input
and producing a short summary text based on the content of input texts. Up
until recently, multi-document summarizers are mostly supervised extractive.
However, supervised methods require datasets of large, paired document-summary
examples which are rare and expensive to produce. In 2018, an unsupervised
multi-document abstractive summarization method(Meansum) was proposed by Chu
and Liu, and demonstrated competitive performances comparing to extractive
methods. Despite good evaluation results on automatic metrics, Meansum has
multiple limitations, notably the inability of dealing with multiple aspects.
The aim of this work was to use Multi-Aspect Masker(MAM) as content selector to
address the issue with multi-aspect. Moreover, we propose a regularizer to
control the length of the generated summaries. Through a series of experiments
on the hotel dataset from Trip Advisor, we validate our assumption and show
that our improved model achieves higher ROUGE, Sentiment Accuracy than the
original Meansum method and also beats/ comprarable/close to the supervised
baseline.
| 2,021 |
Computation and Language
|
Reference Knowledgeable Network for Machine Reading Comprehension
|
Multi-choice Machine Reading Comprehension (MRC) as a challenge requires
models to select the most appropriate answer from a set of candidates with a
given passage and question. Most of the existing researches focus on the
modeling of specific tasks or complex networks, without explicitly referring to
relevant and credible external knowledge sources, which are supposed to greatly
make up for the deficiency of the given passage. Thus we propose a novel
reference-based knowledge enhancement model called Reference Knowledgeable
Network (RekNet), which simulates human reading strategies to refine critical
information from the passage and quote explicit knowledge in necessity. In
detail, RekNet refines finegrained critical information and defines it as
Reference Span, then quotes explicit knowledge quadruples by the co-occurrence
information of Reference Span and candidates. The proposed RekNet is evaluated
on three multi-choice MRC benchmarks: RACE, DREAM and Cosmos QA, obtaining
consistent and remarkable performance improvement with observable statistical
significance level over strong baselines. Our code is available at
https://github.com/Yilin1111/RekNet.
| 2,022 |
Computation and Language
|
Using previous acoustic context to improve Text-to-Speech synthesis
|
Many speech synthesis datasets, especially those derived from audiobooks,
naturally comprise sequences of utterances. Nevertheless, such data are
commonly treated as individual, unordered utterances both when training a model
and at inference time. This discards important prosodic phenomena above the
utterance level. In this paper, we leverage the sequential nature of the data
using an acoustic context encoder that produces an embedding of the previous
utterance audio. This is input to the decoder in a Tacotron 2 model. The
embedding is also used for a secondary task, providing additional supervision.
We compare two secondary tasks: predicting the ordering of utterance pairs, and
predicting the embedding of the current utterance audio. Results show that the
relation between consecutive utterances is informative: our proposed model
significantly improves naturalness over a Tacotron 2 baseline.
| 2,020 |
Computation and Language
|
What Meaning-Form Correlation Has to Compose With
|
Compositionality is a widely discussed property of natural languages,
although its exact definition has been elusive. We focus on the proposal that
compositionality can be assessed by measuring meaning-form correlation. We
analyze meaning-form correlation on three sets of languages: (i) artificial toy
languages tailored to be compositional, (ii) a set of English dictionary
definitions, and (iii) a set of English sentences drawn from literature. We
find that linguistic phenomena such as synonymy and ungrounded stop-words weigh
on MFC measurements, and that straightforward methods to mitigate their effects
have widely varying results depending on the dataset they are applied to. Data
and code are made publicly available.
| 2,020 |
Computation and Language
|
Stylometry for Noisy Medieval Data: Evaluating Paul Meyer's Hagiographic
Hypothesis
|
Stylometric analysis of medieval vernacular texts is still a significant
challenge: the importance of scribal variation, be it spelling or more
substantial, as well as the variants and errors introduced in the tradition,
complicate the task of the would-be stylometrist. Basing the analysis on the
study of the copy from a single hand of several texts can partially mitigate
these issues (Camps and Cafiero, 2013), but the limited availability of
complete diplomatic transcriptions might make this difficult. In this paper, we
use a workflow combining handwritten text recognition and stylometric analysis,
applied to the case of the hagiographic works contained in MS BnF, fr. 412. We
seek to evaluate Paul Meyer's hypothesis about the constitution of groups of
hagiographic works, as well as to examine potential authorial groupings in a
vastly anonymous corpus.
| 2,020 |
Computation and Language
|
The Lab vs The Crowd: An Investigation into Data Quality for Neural
Dialogue Models
|
Challenges around collecting and processing quality data have hampered
progress in data-driven dialogue models. Previous approaches are moving away
from costly, resource-intensive lab settings, where collection is slow but
where the data is deemed of high quality. The advent of crowd-sourcing
platforms, such as Amazon Mechanical Turk, has provided researchers with an
alternative cost-effective and rapid way to collect data. However, the
collection of fluid, natural spoken or textual interaction can be challenging,
particularly between two crowd-sourced workers. In this study, we compare the
performance of dialogue models for the same interaction task but collected in
two different settings: in the lab vs. crowd-sourced. We find that fewer lab
dialogues are needed to reach similar accuracy, less than half the amount of
lab data as crowd-sourced data. We discuss the advantages and disadvantages of
each data collection method.
| 2,020 |
Computation and Language
|
Evaluating Cross-Lingual Transfer Learning Approaches in Multilingual
Conversational Agent Models
|
With the recent explosion in popularity of voice assistant devices, there is
a growing interest in making them available to user populations in additional
countries and languages. However, to provide the highest accuracy and best
performance for specific user populations, most existing voice assistant models
are developed individually for each region or language, which requires linear
investment of effort. In this paper, we propose a general multilingual model
framework for Natural Language Understanding (NLU) models, which can help
bootstrap new language models faster and reduce the amount of effort required
to develop each language separately. We explore how different deep learning
architectures affect multilingual NLU model performance. Our experimental
results show that these multilingual models can reach same or better
performance compared to monolingual models across language-specific test data
while require less effort in creating features and model maintenance.
| 2,020 |
Computation and Language
|
Benchmarking Commercial Intent Detection Services with Practice-Driven
Evaluations
|
Intent detection is a key component of modern goal-oriented dialog systems
that accomplish a user task by predicting the intent of users' text input.
There are three primary challenges in designing robust and accurate intent
detection models. First, typical intent detection models require a large amount
of labeled data to achieve high accuracy. Unfortunately, in practical scenarios
it is more common to find small, unbalanced, and noisy datasets. Secondly, even
with large training data, the intent detection models can see a different
distribution of test data when being deployed in the real world, leading to
poor accuracy. Finally, a practical intent detection model must be
computationally efficient in both training and single query inference so that
it can be used continuously and re-trained frequently. We benchmark intent
detection methods on a variety of datasets. Our results show that Watson
Assistant's intent detection model outperforms other commercial solutions and
is comparable to large pretrained language models while requiring only a
fraction of computational resources and training data. Watson Assistant
demonstrates a higher degree of robustness when the training and test
distributions differ.
| 2,021 |
Computation and Language
|
CX DB8: A queryable extractive summarizer and semantic search engine
|
Competitive Debate's increasingly technical nature has left competitors
looking for tools to accelerate evidence production. We find that the unique
type of extractive summarization performed by competitive debaters -
summarization with a bias towards a particular target meaning - can be
performed using the latest innovations in unsupervised pre-trained text
vectorization models. We introduce CX_DB8, a queryable word-level extractive
summarizer and evidence creation framework, which allows for rapid, biasable
summarization of arbitarily sized texts. CX_DB8s usage of the embedding
framework Flair means that as the underlying models improve, CX_DB8 will also
improve. We observe that CX_DB8 also functions as a semantic search engine, and
has application as a supplement to traditional "find" functionality in programs
and webpages. CX_DB8 is currently used by competitive debaters and is made
available to the public at https://github.com/Hellisotherpeople/CX_DB8
| 2,020 |
Computation and Language
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.