Titles | Abstracts | Years | Categories
---|---|---|---|
Multimodal Word Sense Disambiguation in Creative Practice | Language is ambiguous; many terms and expressions can convey the same idea.
This is especially true in creative practice, where ideas and design intents
are highly subjective. We present a dataset, Ambiguous Descriptions of Art
Images (ADARI), of contemporary workpieces, which aims to provide a
foundational resource for subjective image description and multimodal word
disambiguation in the context of creative practice. The dataset contains a
total of 240k images labeled with 260k descriptive sentences. It is
additionally organized into sub-domains of architecture, art, design, fashion,
furniture, product design and technology. In subjective image description,
labels are not deterministic: for example, the ambiguous label dynamic might
correspond to hundreds of different images. To understand this complexity, we
analyze the ambiguity and relevance of text with respect to images using the
state-of-the-art pre-trained BERT model for sentence classification. We provide
a baseline for multi-label classification tasks and demonstrate the potential
of multimodal approaches for understanding ambiguity in design intentions. We
hope that the ADARI dataset and baselines constitute a first step towards
subjective label classification.
| 2021 | Computation and Language |
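The ADARI entry above describes a BERT-based multi-label baseline, where one ambiguous label (e.g. "dynamic") can apply to many images. A minimal sketch of such a baseline follows; the checkpoint name, label count, and example sentences are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code): multi-label classification with a
# pre-trained BERT encoder. Checkpoint, label count, and sentences are placeholders.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
num_labels = 50  # e.g., ambiguous design adjectives such as "dynamic"
classifier = nn.Linear(encoder.config.hidden_size, num_labels)

sentences = ["a dynamic, fluid facade", "a calm, minimal interior"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
hidden = encoder(**batch).last_hidden_state[:, 0]   # [CLS] token representation
logits = classifier(hidden)
probs = torch.sigmoid(logits)                       # independent per-label probabilities

# Multi-label training uses binary cross-entropy against 0/1 label vectors:
targets = torch.zeros(len(sentences), num_labels)
loss = nn.BCEWithLogitsLoss()(logits, targets)
```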
AdapterHub: A Framework for Adapting Transformers | The current modus operandi in NLP involves downloading and fine-tuning
pre-trained models consisting of millions or billions of parameters. Storing
and sharing such large trained models is expensive, slow, and time-consuming,
which impedes progress towards more general and versatile NLP methods that
learn from and for many tasks. Adapters -- small learnt bottleneck layers
inserted within each layer of a pre-trained model -- ameliorate this issue by
avoiding full fine-tuning of the entire model. However, sharing and integrating
adapter layers is not straightforward. We propose AdapterHub, a framework that
allows dynamic "stitching-in" of pre-trained adapters for different tasks and
languages. The framework, built on top of the popular HuggingFace Transformers
library, enables extremely easy and quick adaptations of state-of-the-art
pre-trained models (e.g., BERT, RoBERTa, XLM-R) across tasks and languages.
Downloading, sharing, and training adapters is as seamless as possible using
minimal changes to the training scripts and a specialized infrastructure. Our
framework enables scalable and easy access to sharing of task-specific models,
particularly in low-resource scenarios. AdapterHub includes all recent adapter
architectures and can be found at https://AdapterHub.ml.
| 2020 | Computation and Language |
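The AdapterHub abstract above relies on adapters, small bottleneck layers inserted into a frozen pre-trained model. The sketch below illustrates only the bottleneck idea; it is not the AdapterHub API, and the hidden and bottleneck sizes are assumed values.

```python
# Conceptual sketch of a bottleneck adapter: down-project, non-linearity,
# up-project, residual connection. Only these small layers are trained.
import torch
from torch import nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The pre-trained model's weights stay frozen; the adapter adds a
        # task-specific shift on top of each layer's output.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

layer_output = torch.randn(2, 16, 768)     # (batch, sequence, hidden)
adapter = BottleneckAdapter(hidden_size=768)
adapted = adapter(layer_output)            # same shape as the input
```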
Fine-Tune Longformer for Jointly Predicting Rumor Stance and Veracity | The increased usage of social media has popularized news and events that are
not even verified, resulting in the spread of rumors all over the web. Because
social media platforms are widely available and heavily used, such data is
available in huge amounts. Manual methods to process such large data are costly
and time-consuming, so there has been increased attention on automatically
processing and verifying such content for the presence of rumors. Many research
studies reveal that identifying the stances of posts in the discussion thread
of such events and news is an important step that precedes identifying the
rumor veracity. In this paper, we propose a multi-task learning framework
for jointly predicting rumor stance and veracity on the dataset released at
SemEval 2019 RumorEval: Determining rumor veracity and support for rumors
(SemEval 2019 Task 7), which includes social media rumors stemming from a
variety of breaking news stories from Reddit as well as Twitter. Our framework
consists of two parts: a) The bottom part of our framework classifies the
stance for each post in the conversation thread discussing a rumor by
modelling the multi-turn conversation and making each post aware of its
neighboring posts. b) The upper part predicts the rumor veracity of the
conversation thread with stance evolution obtained from the bottom part.
Experimental results on the SemEval 2019 Task 7 dataset show that our method
outperforms previous methods on both rumor stance classification and veracity
prediction.
| 2020 | Computation and Language |
InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language
Model Pre-Training | In this work, we present an information-theoretic framework that formulates
cross-lingual language model pre-training as maximizing mutual information
between multilingual-multi-granularity texts. The unified view helps us to
better understand the existing methods for learning cross-lingual
representations. More importantly, inspired by the framework, we propose a new
pre-training task based on contrastive learning. Specifically, we regard a
bilingual sentence pair as two views of the same meaning and encourage their
encoded representations to be more similar than the negative examples. By
leveraging both monolingual and parallel corpora, we jointly train the pretext
tasks to improve the cross-lingual transferability of pre-trained models.
Experimental results on several benchmarks show that our approach achieves
considerably better performance. The code and pre-trained models are available
at https://aka.ms/infoxlm.
| 2021 | Computation and Language |
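The InfoXLM abstract above describes a contrastive pre-training task in which a bilingual sentence pair forms two views of the same meaning. Below is a hedged InfoNCE-style sketch of such an objective with in-batch negatives; the temperature and embedding sizes are assumptions, and this is not the released InfoXLM code.

```python
# Sketch of a cross-lingual contrastive (InfoNCE-style) loss: translation pairs
# are positives, all other in-batch sentences act as negatives.
import torch
import torch.nn.functional as F

def contrastive_loss(src_emb: torch.Tensor, tgt_emb: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    """src_emb[i] and tgt_emb[i] encode the same meaning in two languages."""
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature      # pairwise similarities
    labels = torch.arange(src.size(0))        # the diagonal holds the true pairs
    return F.cross_entropy(logits, labels)

loss = contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
```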
Align then Summarize: Automatic Alignment Methods for Summarization
Corpus Creation | Summarizing texts is not a straightforward task. Before even considering text
summarization, one should determine what kind of summary is expected. How much
should the information be compressed? Is it relevant to reformulate or should
the summary stick to the original phrasing? State-of-the-art on automatic text
summarization mostly revolves around news articles. We suggest that considering
a wider variety of tasks would lead to an improvement in the field, in terms of
generalization and robustness. We explore meeting summarization: generating
reports from automatic transcriptions. Our work consists of segmenting and
aligning transcriptions with respect to reports, to get a suitable dataset for
neural summarization. Using a bootstrapping approach, we provide pre-alignments
that are corrected by human annotators, making a validation set against which
we evaluate automatic models. This consistently reduces annotators' effort by
providing iteratively better pre-alignments and maximizes the corpus size by
using annotations from our automatic alignment models. Evaluation is conducted
on public_meetings, a novel corpus of aligned public meetings. We report
automatic alignment and summarization performance on this corpus and show that
automatic alignment is relevant for data annotation, since it leads to a large
improvement of almost +4 on all ROUGE scores on the summarization task.
| 2020 | Computation and Language |
Sinhala Language Corpora and Stopwords from a Decade of Sri Lankan
Facebook | This paper presents two colloquial Sinhala language corpora from the language
efforts of the Data, Analysis and Policy team of LIRNEasia, as well as a list
of algorithmically derived stopwords. The larger of the two corpora spans 2010
to 2020 and contains 28,825,820 to 29,549,672 words of multilingual text posted
by 533 Sri Lankan Facebook pages, including politics, media, celebrities, and
other categories; the smaller corpus amounts to 5,402,76 words of only Sinhala
text extracted from the larger. Both corpora have markers for their date of
creation, page of origin, and content type.
| 2020 | Computation and Language |
Overview of CheckThat! 2020: Automatic Identification and Verification
of Claims in Social Media | We present an overview of the third edition of the CheckThat! Lab at CLEF
2020. The lab featured five tasks in two different languages: English and
Arabic. The first four tasks compose the full pipeline of claim verification in
social media: Task 1 on check-worthiness estimation, Task 2 on retrieving
previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on
claim verification. The lab is completed with Task 5 on check-worthiness
estimation in political debates and speeches. A total of 67 teams registered to
participate in the lab (up from 47 at CLEF 2019), and 23 of them actually
submitted runs (compared to 14 at CLEF 2019). Most teams used deep neural
networks based on BERT, LSTMs, or CNNs, and achieved sizable improvements over
the baselines on all tasks. Here we describe the setup of the tasks, the evaluation
results, and a summary of the approaches used by the participants, and we
discuss some lessons learned. Last but not least, we release to the research
community all datasets from the lab as well as the evaluation scripts, which
should enable further research in the important tasks of check-worthiness
estimation and automatic claim verification.
| 2020 | Computation and Language |
A Survey on Computational Propaganda Detection | Propaganda campaigns aim at influencing people's mindset with the purpose of
advancing a specific agenda. They exploit the anonymity of the Internet, the
micro-profiling ability of social networks, and the ease of automatically
creating and managing coordinated networks of accounts, to reach millions of
social network users with persuasive messages, specifically targeted to topics
each individual user is sensitive to, and ultimately influencing the outcome on
a targeted issue. In this survey, we review the state of the art on
computational propaganda detection from the perspective of Natural Language
Processing and Network Analysis, arguing about the need for combined efforts
between these communities. We further discuss current challenges and future
research directions.
| 2020 | Computation and Language |
Towards Debiasing Sentence Representations | As natural language processing methods are increasingly deployed in
real-world scenarios such as healthcare, legal systems, and social science, it
becomes necessary to recognize the role they potentially play in shaping social
biases and stereotypes. Previous work has revealed the presence of social
biases in widely used word embeddings involving gender, race, religion, and
other social constructs. While some methods were proposed to debias these
word-level embeddings, there is a need to perform debiasing at the
sentence-level given the recent shift towards new contextualized sentence
representations such as ELMo and BERT. In this paper, we investigate the
presence of social biases in sentence-level representations and propose a new
method, Sent-Debias, to reduce these biases. We show that Sent-Debias is
effective in removing biases, and at the same time, preserves performance on
sentence-level downstream tasks such as sentiment analysis, linguistic
acceptability, and natural language understanding. We hope that our work will
inspire future research on characterizing and removing social biases from
widely adopted sentence representations for fairer NLP.
| 2020 | Computation and Language |
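The Sent-Debias abstract above removes bias from sentence representations. A rough sketch of the general projection-based idea is given below: estimate a bias subspace from embeddings of contrasting sentence pairs and subtract each embedding's projection onto it. The subspace size and the random inputs are placeholders, and the exact published procedure may differ.

```python
# Projection-based debiasing sketch: find the principal directions of the
# differences between paired embeddings (e.g., "he ..." vs "she ..." templates)
# and remove each sentence embedding's component along those directions.
import numpy as np

def bias_subspace(paired_a: np.ndarray, paired_b: np.ndarray, k: int = 1) -> np.ndarray:
    diffs = paired_a - paired_b                          # one difference per template pair
    diffs -= diffs.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(diffs, full_matrices=False) # principal bias directions
    return vt[:k]                                        # shape (k, dim)

def debias(embedding: np.ndarray, subspace: np.ndarray) -> np.ndarray:
    projection = (embedding @ subspace.T) @ subspace
    return embedding - projection

V = bias_subspace(np.random.randn(100, 768), np.random.randn(100, 768))
clean = debias(np.random.randn(768), V)
```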
LogiQA: A Challenge Dataset for Machine Reading Comprehension with
Logical Reasoning | Machine reading is a fundamental task for testing the capability of natural
language understanding, which is closely related to human cognition in many
aspects. With the rise of deep learning techniques, algorithmic models rival
human performance on simple QA, and thus increasingly challenging machine
reading datasets have been proposed. Though various challenges such as evidence
integration and commonsense knowledge have been addressed, one of the
fundamental capabilities in human reading, namely logical reasoning, has not been
fully investigated. We build a comprehensive dataset, named LogiQA, which is
sourced from expert-written questions for testing human logical reasoning. It
consists of 8,678 QA instances, covering multiple types of deductive reasoning.
Results show that state-of-the-art neural models perform far worse than the
human ceiling. Our dataset can also serve as a benchmark for reinvestigating
logical AI under the deep learning NLP setting. The dataset is freely available
at https://github.com/lgw863/LogiQA-dataset
| 2020 | Computation and Language |
Coupling Distant Annotation and Adversarial Training for Cross-Domain
Chinese Word Segmentation | Fully supervised neural approaches have achieved significant progress in the
task of Chinese word segmentation (CWS). Nevertheless, the performance of
supervised models tends to drop dramatically when they are applied to
out-of-domain data. Performance degradation is caused by the distribution gap
across domains and the out of vocabulary (OOV) problem. In order to
simultaneously alleviate these two issues, this paper proposes to couple
distant annotation and adversarial training for cross-domain CWS. For distant
annotation, we rethink the essence of "Chinese words" and design an automatic
distant annotation mechanism that does not need any supervision or pre-defined
dictionaries from the target domain. The approach could effectively explore
domain-specific words and distantly annotate the raw texts for the target
domain. For adversarial training, we develop a sentence-level training
procedure to perform noise reduction and maximum utilization of the source
domain information. Experiments on multiple real-world datasets across various
domains show the superiority and robustness of our model, significantly
outperforming previous state-of-the-art cross-domain CWS methods.
| 2020 | Computation and Language |
SLK-NER: Exploiting Second-order Lexicon Knowledge for Chinese NER | Although character-based models using lexicons have achieved promising results
for the Chinese named entity recognition (NER) task, some lexical words
introduce erroneous information due to wrongly matched words. Existing research
has proposed many strategies to integrate lexicon knowledge. However, these
strategies either rely on simple first-order lexicon knowledge, which provides
insufficient word information and still faces the challenge of matched-word
boundary conflicts, or explore the lexicon knowledge with a graph, where
higher-order information may introduce negative words that disturb the
identification. To alleviate the above limitations, we present new insight into
the second-order lexicon knowledge (SLK) of each character in the sentence,
which provides more lexical word information, including semantic and word
boundary features. Based on this, we propose an SLK-based model with a novel
strategy to integrate the above lexicon knowledge. The proposed model can
exploit more discernible lexical word information with the help of global
context. Experimental results on three public datasets demonstrate the validity
of SLK. The proposed model outperforms state-of-the-art comparison methods.
| 2020 | Computation and Language |
Investigating Pretrained Language Models for Graph-to-Text Generation | Graph-to-text generation aims to generate fluent texts from graph-based data.
In this paper, we investigate two recently proposed pretrained language models
(PLMs) and analyze the impact of different task-adaptive pretraining strategies
for PLMs in graph-to-text generation. We present a study across three graph
domains: meaning representations, Wikipedia knowledge graphs (KGs) and
scientific KGs. We show that the PLMs BART and T5 achieve new state-of-the-art
results and that task-adaptive pretraining strategies improve their performance
even further. In particular, we report new state-of-the-art BLEU scores of
49.72 on LDC2017T10, 59.70 on WebNLG, and 25.66 on AGENDA datasets - a relative
improvement of 31.8%, 4.5%, and 42.4%, respectively. In an extensive analysis,
we identify possible reasons for the PLMs' success on graph-to-text tasks. We
find evidence that their knowledge about true facts helps them perform well
even when the input graph representation is reduced to a simple bag of node and
edge labels.
| 2021 | Computation and Language |
Hierarchical Interaction Networks with Rethinking Mechanism for
Document-level Sentiment Analysis | Document-level Sentiment Analysis (DSA) is more challenging due to vague
semantic links and complicated sentiment information. Recent works have been
devoted to leveraging text summarization and have achieved promising results.
However, these summarization-based methods do not take full advantage of the
summary, ignoring the inherent interactions between the summary and the
document. As a result, their representations are limited in expressing the major
points of the document, which are highly indicative of the key sentiment. In this
paper, we study how to effectively generate a discriminative representation
with explicit subject patterns and sentiment contexts for DSA. A Hierarchical
Interaction Network (HIN) is proposed to explore bidirectional interactions
between the summary and document at multiple granularities and learn
subject-oriented document representations for sentiment classification.
Furthermore, we design a Sentiment-based Rethinking mechanism (SR) by refining
the HIN with sentiment label information to learn a more sentiment-aware
document representation. We extensively evaluate our proposed models on three
public datasets. The experimental results consistently demonstrate the
effectiveness of our proposed models and show that HIN-SR outperforms various
state-of-the-art methods.
| 2022 | Computation and Language |
Unsupervised Text Generation by Learning from Search | In this work, we present TGLS, a novel framework for unsupervised Text
Generation by Learning from Search. We start by applying a strong search
algorithm (in particular, simulated annealing) towards a heuristically defined
objective that (roughly) estimates the quality of sentences. Then, a
conditional generative model learns from the search results while smoothing
out the noise of the search. The alternation between search and learning can
be repeated for performance bootstrapping. We demonstrate the effectiveness of
TGLS on two real-world natural language generation tasks, paraphrase generation
and text formalization. Our model significantly outperforms unsupervised
baseline methods on both tasks. In particular, it achieves performance comparable
to state-of-the-art supervised methods in paraphrase generation.
| 2020 | Computation and Language |
A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine
Translation | Multi-modal neural machine translation (NMT) aims to translate source
sentences into a target language paired with images. However, dominant
multi-modal NMT models do not fully exploit fine-grained semantic
correspondences between semantic units of different modalities, which have
potential to refine multi-modal representation learning. To deal with this
issue, in this paper, we propose a novel graph-based multi-modal fusion encoder
for NMT. Specifically, we first represent the input sentence and image using a
unified multi-modal graph, which captures various semantic relationships
between multi-modal semantic units (words and visual objects). We then stack
multiple graph-based multi-modal fusion layers that iteratively perform
semantic interactions to learn node representations. Finally, these
representations provide an attention-based context vector for the decoder. We
evaluate our proposed encoder on the Multi30K datasets. Experimental results
and in-depth analysis show the superiority of our multi-modal NMT model.
| 2020 | Computation and Language |
Towards an Automated SOAP Note: Classifying Utterances from Medical
Conversations | Summaries generated from medical conversations can improve recall and
understanding of care plans for patients and reduce documentation burden for
doctors. Recent advancements in automatic speech recognition (ASR) and natural
language understanding (NLU) offer potential solutions to generate these
summaries automatically, but rigorous quantitative baselines for benchmarking
research in this domain are lacking. In this paper, we bridge this gap for two
tasks: classifying utterances from medical conversations according to (i) the
SOAP section and (ii) the speaker role. Both are fundamental building blocks
along the path towards an end-to-end, automated SOAP note for medical
conversations. We provide details on a dataset that contains human and ASR
transcriptions of medical conversations and corresponding machine learning
optimized SOAP notes. We then present a systematic analysis in which we adapt
an existing deep learning architecture to the two aforementioned tasks. The
results suggest that modelling context in a hierarchical manner, which captures
both word and utterance level context, yields substantial improvements on both
classification tasks. Additionally, we develop and analyze a modular method for
adapting our model to ASR output.
| 2020 | Computation and Language |
Task-Level Curriculum Learning for Non-Autoregressive Neural Machine
Translation | Non-autoregressive translation (NAT) achieves faster inference speed but at
the cost of worse accuracy compared with autoregressive translation (AT). Since
AT and NAT can share model structure and AT is an easier task than NAT due to
the explicit dependency on previous target-side tokens, a natural idea is to
gradually shift the model training from the easier AT task to the harder NAT
task. To smooth the shift from AT training to NAT training, in this paper, we
introduce semi-autoregressive translation (SAT) as intermediate tasks. SAT
contains a hyperparameter k, and each k value defines a SAT task with different
degrees of parallelism. In particular, SAT covers AT and NAT as special cases:
it reduces to AT when k = 1 and to NAT when k = N (N is the length of the target
sentence). We design curriculum schedules to gradually shift k from 1 to N,
with different pacing functions and numbers of tasks trained at the same time.
We call our method task-level curriculum learning for NAT (TCL-NAT).
Experiments on IWSLT14 De-En, IWSLT16 En-De, WMT14 En-De and De-En datasets
show that TCL-NAT achieves significant accuracy improvements over previous NAT
baselines and reduces the performance gap between NAT and AT models to 1-2 BLEU
points, demonstrating the effectiveness of our proposed method.
| 2020 | Computation and Language |
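The TCL-NAT abstract above shifts the SAT parallelism degree k from 1 (AT) to N (NAT) during training via pacing functions. The sketch below shows one possible linear pacing function; it is an assumed illustration, not the schedule used in the paper.

```python
# Illustrative linear pacing function for a task-level curriculum: k grows from
# 1 (autoregressive) to the target length N (fully non-autoregressive).
def curriculum_k(step: int, total_steps: int, target_length: int) -> int:
    """Return the SAT parallelism degree k for the current training step."""
    fraction = min(step / max(total_steps, 1), 1.0)
    k = 1 + round(fraction * (target_length - 1))
    return max(1, min(k, target_length))

schedule = [curriculum_k(s, total_steps=100, target_length=20) for s in (0, 50, 100)]
print(schedule)  # [1, 11, 20]
```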
SummPip: Unsupervised Multi-Document Summarization with Sentence Graph
Compression | Obtaining training data for multi-document summarization (MDS) is time
consuming and resource-intensive, so recent neural models can only be trained
for limited domains. In this paper, we propose SummPip: an unsupervised method
for multi-document summarization, in which we convert the original documents to
a sentence graph, taking both linguistic and deep representations into account,
then apply spectral clustering to obtain multiple clusters of sentences, and
finally compress each cluster to generate the final summary. Experiments on
Multi-News and DUC-2004 datasets show that our method is competitive with
previous unsupervised methods and is even comparable to the neural supervised
approaches. In addition, human evaluation shows our system produces consistent
and complete summaries compared to human written ones.
| 2020 | Computation and Language |
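The SummPip abstract above builds a sentence graph, clusters it spectrally, and compresses each cluster. The sketch below walks through those stages with simplified stand-ins (TF-IDF cosine similarity as graph weights, shortest-sentence selection as "compression"); it is not the paper's pipeline.

```python
# Simplified SummPip-style pipeline: sentence graph -> spectral clustering -> per-cluster compression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import SpectralClustering

sentences = [
    "The company reported record profits.",
    "Profits reached an all-time high this quarter.",
    "A new product line will launch next year.",
    "The upcoming product launch is planned for next year.",
]
tfidf = TfidfVectorizer().fit_transform(sentences)
affinity = cosine_similarity(tfidf)              # sentence-graph edge weights

labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)

# Naive "compression": keep the shortest sentence of each cluster.
summary = [
    min((s for s, l in zip(sentences, labels) if l == c), key=len)
    for c in sorted(set(labels))
]
print(summary)
```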
Compositional Generalization in Semantic Parsing: Pre-training vs.
Specialized Architectures | While mainstream machine learning methods are known to have limited ability
to compositionally generalize, new architectures and techniques continue to be
proposed to address this limitation. We investigate state-of-the-art techniques
and architectures in order to assess their effectiveness in improving
compositional generalization in semantic parsing tasks based on the SCAN and
CFQ datasets. We show that masked language model (MLM) pre-training rivals
SCAN-inspired architectures on primitive holdout splits. On a more complex
compositional task, we show that pre-training leads to significant improvements
in performance vs. comparable non-pre-trained models, whereas architectures
proposed to encourage compositional generalization on SCAN or in the area of
algorithm learning fail to lead to significant improvements. We establish a new
state of the art on the CFQ compositional generalization benchmark using MLM
pre-training together with an intermediate representation.
| 2021 | Computation and Language |
Constructing a Family Tree of Ten Indo-European Languages with
Delexicalized Cross-linguistic Transfer Patterns | It is reasonable to hypothesize that the divergence patterns formulated by
historical linguists and typologists reflect constraints on human languages,
and are thus consistent with Second Language Acquisition (SLA) in a certain
way. In this paper, we validate this hypothesis on ten Indo-European languages.
We formalize the delexicalized transfer as interpretable tree-to-string and
tree-to-tree patterns which can be automatically induced from web data by
applying neural syntactic parsing and grammar induction technologies. This
allows us to quantitatively probe cross-linguistic transfer and extend
inquiries of SLA. We extend existing works which utilize mixed features and
support the agreement between delexicalized cross-linguistic transfer and the
phylogenetic structure resulting from the historical-comparative paradigm.
| 2020 | Computation and Language |
On a Novel Application of Wasserstein-Procrustes for Unsupervised
Cross-Lingual Learning | The emergence of unsupervised word embeddings, pre-trained on very large
monolingual text corpora, is at the core of the ongoing neural revolution in
Natural Language Processing (NLP). Initially introduced for English, such
pre-trained word embeddings quickly emerged for a number of other languages.
Subsequently, there have been a number of attempts to align the embedding
spaces across languages, which could enable a number of cross-language NLP
applications. Performing the alignment using unsupervised cross-lingual
learning (UCL) is especially attractive as it requires little data and often
rivals supervised and semi-supervised approaches. Here, we analyze popular
methods for UCL and we find that often their objectives are, intrinsically,
versions of the Wasserstein-Procrustes problem. Hence, we devise an approach to
solve Wasserstein-Procrustes in a direct way, which can be used to refine and
to improve popular UCL methods such as iterative closest point (ICP),
multilingual unsupervised and supervised embeddings (MUSE) and supervised
Procrustes methods. Our evaluation experiments on standard datasets show
sizable improvements over these approaches. We believe that our rethinking of
the Wasserstein-Procrustes problem could enable further research, thus helping
to develop better algorithms for aligning word embeddings across languages. Our
code and instructions to reproduce the experiments are available at
https://github.com/guillemram97/wp-hungarian.
| 2020 | Computation and Language |
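The abstract above frames unsupervised cross-lingual alignment as a Wasserstein-Procrustes problem. For orientation, the sketch below shows the classic alternating scheme (assignment step plus orthogonal Procrustes step); the authors propose a more direct solver, so treat this purely as an illustration of the problem being solved.

```python
# Alternating Wasserstein-Procrustes sketch: (a) match source to target
# embeddings with the Hungarian algorithm, (b) fit an orthogonal map to the
# matched pairs via the Procrustes (SVD) solution, and repeat.
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_procrustes(X: np.ndarray, Y: np.ndarray, iters: int = 10) -> np.ndarray:
    """X, Y: (n, d) word-embedding matrices for two languages."""
    W = np.eye(X.shape[1])
    for _ in range(iters):
        cost = -(X @ W) @ Y.T                     # maximise similarity = minimise negative
        rows, cols = linear_sum_assignment(cost)  # 1-to-1 assignment
        U, _, Vt = np.linalg.svd(X[rows].T @ Y[cols])
        W = U @ Vt                                # orthogonal Procrustes fit
    return W

W = wasserstein_procrustes(np.random.randn(50, 16), np.random.randn(50, 16))
```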
A novel approach to sentiment analysis in Persian using discourse and
external semantic information | Sentiment analysis attempts to identify, extract and quantify affective
states and subjective information from various types of data such as text,
audio, and video. Many approaches have been proposed to extract the sentiment
of individuals from documents written in natural languages in recent years. The
majority of these approaches have focused on English, while resource-lean
languages such as Persian suffer from the lack of research work and language
resources. Due to this gap in Persian, the current work introduces new methods
for sentiment analysis that have been applied to Persian. The proposed approach
in this paper is two-fold: the first is based on classifier combination, and the
second is based on deep neural networks that benefit from word embedding
vectors. Both approaches take advantage of local discourse information and
external knowledge bases, cover several language issues such as negation and
intensification, and address different granularity levels, namely the word,
aspect, sentence, phrase and document levels. To evaluate the performance of the
proposed approach, a Persian dataset of hotel reviews is collected, referred to
as the hotel reviews dataset. The proposed approach has been compared to
counterpart methods on this benchmark dataset. The experimental results confirm
the effectiveness of the proposed approach compared to related works.
| 2020 | Computation and Language |
Feature-level Rating System using Customer Reviews and Review Votes | This work studies how we can obtain feature-level ratings of the mobile
products from the customer reviews and review votes to influence decision
making, both for new customers and manufacturers. Such a rating system gives a
more comprehensive picture of the product than what a product-level rating
system offers. While product-level ratings are too generic, feature-level
ratings are specific; we know exactly what is good or bad about the product.
There has always been a need to know which features fall short or are doing
well according to the customer's perception. It keeps both the manufacturer and
the customer well-informed in the decisions to make in improving the product
and buying, respectively. Different customers are interested in different
features. Thus, feature-level ratings can make buying decisions personalized.
We analyze the customer reviews collected on an online shopping site (Amazon)
about various mobile products and the review votes. Explicitly, we carry out a
feature-focused sentiment analysis for this purpose. Eventually, our analysis
yields ratings to 108 features for 4k+ mobiles sold online. It helps in
decision making on how to improve the product (from the manufacturer's
perspective) and in making the personalized buying decisions (from the buyer's
perspective) a possibility. Our analysis has applications in recommender
systems, consumer research, etc.
| 2020 | Computation and Language |
Hierarchical Topic Mining via Joint Spherical Tree and Text Embedding | Mining a set of meaningful topics organized into a hierarchy is intuitively
appealing since topic correlations are ubiquitous in massive text corpora. To
account for potential hierarchical topic structures, hierarchical topic models
generalize flat topic models by incorporating latent topic hierarchies into
their generative modeling process. However, due to their purely unsupervised
nature, the learned topic hierarchy often deviates from users' particular needs
or interests. To guide the hierarchical topic discovery process with minimal
user supervision, we propose a new task, Hierarchical Topic Mining, which takes
a category tree described by category names only, and aims to mine a set of
representative terms for each category from a text corpus to help a user
comprehend his/her interested topics. We develop a novel joint tree and text
embedding method along with a principled optimization procedure that allows
simultaneous modeling of the category tree structure and the corpus generative
process in the spherical space for effective category-representative term
discovery. Our comprehensive experiments show that our model, named JoSH, mines
a high-quality set of hierarchical topics with high efficiency and benefits
weakly-supervised hierarchical text classification tasks.
| 2020 | Computation and Language |
Understanding Spatial Relations through Multiple Modalities | Recognizing spatial relations and reasoning about them is essential in
multiple applications including navigation, direction giving and human-computer
interaction in general. Spatial relations between objects can either be
explicit -- expressed as spatial prepositions, or implicit -- expressed by
spatial verbs such as moving, walking, shifting, etc. Both these, but implicit
relations in particular, require significant common sense understanding. In
this paper, we introduce the task of inferring implicit and explicit spatial
relations between two entities in an image. We design a model that uses both
textual and visual information to predict the spatial relations, making use of
both positional and size information of objects and image embeddings. We
contrast our spatial model with powerful language models and show how our
modeling complements the power of these, improving prediction accuracy and
coverage and facilitating the handling of unseen subjects, objects and relations.
| 2020 | Computation and Language |
From Spatial Relations to Spatial Configurations | Spatial Reasoning from language is essential for natural language
understanding. Supporting it requires a representation scheme that can capture
spatial phenomena encountered in language as well as in images and videos.
Existing spatial representations are not sufficient for describing spatial
configurations used in complex tasks. This paper extends the capabilities of
existing spatial representation languages and increases coverage of the
semantic aspects that are needed to ground the spatial meaning of natural
language text in the world. Our spatial relation language is able to represent
a large, comprehensive set of spatial concepts crucial for reasoning and is
designed to support the composition of static and dynamic spatial
configurations. We integrate this language with the Abstract Meaning
Representation (AMR) annotation schema and present a corpus annotated by this
extended AMR. To exhibit the applicability of our representation scheme, we
annotate text taken from diverse datasets and show how we extend the
capabilities of existing spatial representation languages with the fine-grained
decomposition of semantics and blend it seamlessly with AMRs of sentences and
discourse representations as a whole.
| 2020 | Computation and Language |
Meta-learning for Few-shot Natural Language Processing: A Survey | Few-shot natural language processing (NLP) refers to NLP tasks that are
accompanied by merely a handful of labeled examples. This is a real-world
challenge that an AI system must learn to handle. Usually we rely on collecting
more auxiliary information or developing a more efficient learning algorithm.
However, the general gradient-based optimization in high capacity models, if
training from scratch, requires many parameter-updating steps over a large
number of labeled examples to perform well (Snell et al., 2017). If the target
task itself cannot provide more information, how about collecting more tasks
equipped with rich annotations to help the model learning? The goal of
meta-learning is to train a model on a variety of tasks with rich annotations,
such that it can solve a new task using only a few labeled samples. The key
idea is to train the model's initial parameters such that the model has maximal
performance on a new task after the parameters have been updated through zero
or a couple of gradient steps. There are already some surveys for
meta-learning, such as (Vilalta and Drissi, 2002; Vanschoren, 2018; Hospedales
et al., 2020). Nevertheless, this paper focuses on the NLP domain, especially
few-shot applications. We try to provide clearer definitions, a progress summary,
and some common datasets for applying meta-learning to few-shot NLP.
| 2020 | Computation and Language |
One-Shot Learning for Language Modelling | Humans can infer a great deal about the meaning of a word, using the syntax
and semantics of surrounding words even if it is their first time reading or
hearing it. We can also generalise the learned concept of the word to new
tasks. Despite great progress in achieving human-level performance in certain
tasks (Silver et al., 2016), learning from one or few examples remains a key
challenge in machine learning, and has not thoroughly been explored in Natural
Language Processing (NLP).
In this work we tackle the problem of one-shot learning for an NLP task by
employing ideas from recent developments in machine learning: embeddings,
attention mechanisms (softmax) and similarity measures (cosine, Euclidean,
Poincare, and Minkowski). We adapt the framework suggested in matching networks
(Vinyals et al., 2016), and explore the effectiveness of the aforementioned
methods in one-, two- and three-shot learning problems on the task of predicting
a missing word, explored in (Vinyals et al., 2016), using the WikiText-2
dataset. Our work contributes in two ways: Our first contribution is that we
explore the effectiveness of different distance metrics on k-shot learning, and
show that there is no single best distance metric for k-shot learning, which
challenges common belief. We found that the performance of a distance metric
depends on the number of shots used during training. The second contribution of
our work is that we establish a benchmark for one, two, and three-shot learning
on a language task with a publicly available dataset that can be used to
benchmark against in future research.
| 2020 | Computation and Language |
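The one-shot learning abstract above compares distance metrics inside a matching-network-style predictor. The sketch below shows how a query embedding can be classified by softmax-weighted similarity to a labeled support set; the metrics and tensor shapes are illustrative assumptions, not the paper's implementation.

```python
# Matching-network-style prediction: compare the query to each support example
# under a chosen metric, then weight the support labels by a softmax over scores.
import torch
import torch.nn.functional as F

def match(query: torch.Tensor, support: torch.Tensor, support_labels: torch.Tensor,
          num_classes: int, metric: str = "cosine") -> torch.Tensor:
    if metric == "cosine":
        scores = F.cosine_similarity(support, query.unsqueeze(0), dim=-1)
    elif metric == "euclidean":
        scores = -torch.cdist(query.unsqueeze(0), support).squeeze(0)
    else:
        raise ValueError(metric)
    attention = F.softmax(scores, dim=0)                  # weights over the support set
    one_hot = F.one_hot(support_labels, num_classes).float()
    return attention @ one_hot                            # class distribution for the query

probs = match(torch.randn(64), torch.randn(5, 64),
              torch.tensor([0, 1, 2, 3, 4]), num_classes=5)
```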
Mono vs Multilingual Transformer-based Models: a Comparison across
Several Language Tasks | BERT (Bidirectional Encoder Representations from Transformers) and ALBERT (A
Lite BERT) are methods for pre-training language models which can later be
fine-tuned for a variety of Natural Language Understanding tasks. These methods
have been applied to a number of such tasks (mostly in English), achieving
results that outperform the state-of-the-art. In this paper, our contribution
is twofold. First, we make available our trained BERT and ALBERT models for
Portuguese. Second, we compare our monolingual and the standard multilingual
models using experiments in semantic textual similarity, recognizing textual
entailment, textual category classification, sentiment analysis, offensive
comment detection, and fake news detection, to assess the effectiveness of the
generated language representations. The results suggest that both monolingual
and multilingual models are able to achieve state-of-the-art results, and that the advantage
of training a single language model, if any, is small.
| 2020 | Computation and Language |
An Overview of Natural Language State Representation for Reinforcement
Learning | A suitable state representation is a fundamental part of the learning process
in Reinforcement Learning. In various tasks, the state can either be described
by natural language or be natural language itself. This survey outlines the
strategies used in the literature to build natural language state
representations. We appeal for more linguistically interpretable and grounded
representations, careful justification of design decisions and evaluation of
the effectiveness of different approaches.
| 2020 | Computation and Language |
Frustratingly Hard Evidence Retrieval for QA Over Books | A lot of progress has been made to improve question answering (QA) in recent
years, but the special problem of QA over narrative book stories has not been
explored in-depth. We formulate BookQA as an open-domain QA task given its
similar dependency on evidence retrieval. We further investigate how
state-of-the-art open-domain QA approaches can help BookQA. Besides achieving
state-of-the-art on the NarrativeQA benchmark, our study also reveals the
difficulty of evidence retrieval in books with a wealth of experiments and
analysis - which necessitates future effort on novel solutions for evidence
retrieval in BookQA.
| 2020 | Computation and Language |
Multimodal Dialogue State Tracking By QA Approach with Data Augmentation | Recently, a more challenging state tracking task, Audio-Video Scene-Aware
Dialogue (AVSD), is catching an increasing amount of attention among
researchers. Different from purely text-based dialogue state tracking, the
dialogue in AVSD contains a sequence of question-answer pairs about a video and
the final answer to the given question requires additional understanding of the
video. This paper interprets the AVSD task from an open-domain Question
Answering (QA) point of view and proposes a multimodal open-domain QA system to
deal with the problem. The proposed QA system uses a common encoder-decoder
framework with multimodal fusion and attention. Teacher forcing is applied to
train a natural language generator. We also propose a new data augmentation
approach specifically under QA assumption. Our experiments show that our model
and techniques bring significant improvements over the baseline model on the
DSTC7-AVSD dataset and demonstrate the potential of our data augmentation
techniques.
| 2020 | Computation and Language |
How are you? Introducing stress-based text tailoring | Can stress affect not only your life but also how you read and interpret a
text? Healthcare has shown evidence of such dynamics and in this short paper we
discuss customising texts based on user stress level, as it could represent a
critical factor when it comes to user engagement and behavioural change. We
first show a real-world example in which user behaviour is influenced by
stress, then, after discussing which tools can be employed to assess and
measure it, we propose an initial method for tailoring the document by
exploiting complexity reduction and affect enforcement. The result is a short
and encouraging text which requires less commitment to be read and understood.
We believe this work in progress can raise some interesting questions on a
topic that is often overlooked in NLG.
| 2020 | Computation and Language |
Voice@SRIB at SemEval-2020 Task 9 and 12: Stacked Ensembling method for
Sentiment and Offensiveness detection in Social Media | In social-media platforms such as Twitter, Facebook, and Reddit, people
prefer to use code-mixed language such as Spanish-English, Hindi-English to
express their opinions. In this paper, we describe different models we used,
using the external dataset to train embeddings, ensembling methods for
Sentimix, and OffensEval tasks. The use of pre-trained embeddings usually helps
in multiple tasks such as sentence classification, and machine translation. In
this experiment, we have used our trained code-mixed embeddings and Twitter
pre-trained embeddings for the SemEval tasks. We evaluate our models on macro
F1-score, precision, accuracy, and recall on the datasets. We intend to show
that hyper-parameter tuning and data pre-processing steps help a lot in
improving the scores. In our experiments, we are able to achieve a 0.886 macro
F1-score on the OffensEval Greek language subtask post-evaluation, whereas the
highest is 0.852 during the evaluation period. We stood third in the Spanglish
competition with our best F1-score of 0.756. Our Codalab username is asking28.
| 2020 | Computation and Language |
Knowledge Graph Extraction from Videos | Nearly all existing techniques for automated video annotation (or captioning)
describe videos using natural language sentences. However, this has several
shortcomings: (i) it is very hard to then further use the generated natural
language annotations in automated data processing, (ii) generating natural
language annotations requires solving the hard subtask of generating
semantically precise and syntactically correct natural language sentences,
which is actually unrelated to the task of video annotation, (iii) it is
difficult to quantitatively measure performance, as standard metrics (e.g.,
accuracy and F1-score) are inapplicable, and (iv) annotations are
language-specific. In this paper, we propose the new task of knowledge graph
extraction from videos, i.e., producing a description in the form of a
knowledge graph of the contents of a given video. Since no datasets exist for
this task, we also include a method to automatically generate them, starting
from datasets where videos are annotated with natural language. We then
describe an initial deep-learning model for knowledge graph extraction from
videos, and report results on MSVD* and MSR-VTT*, two datasets obtained from
MSVD and MSR-VTT using our method.
| 2020 | Computation and Language |
Morphological Skip-Gram: Using morphological knowledge to improve word
representation | Natural language processing models have attracted much interest in the deep
learning community. This branch of study is composed of some applications such
as machine translation, sentiment analysis, named entity recognition, question
answering, and others. Word embeddings are continuous word representations;
they are an essential module for those applications and are generally used as
input word representation to the deep learning models. Word2Vec and GloVe are
two popular methods to learn word embeddings. They achieve good word
representations, however, they learn representations with limited information
because they ignore the morphological information of the words and consider
only one representation vector for each word. This approach implies that
Word2Vec and GloVe are unaware of the word inner structure. To mitigate this
problem, the FastText model represents each word as a bag of characters
n-grams. Hence, each n-gram has a continuous vector representation, and the
final word representation is the sum of its character n-gram vectors.
Nevertheless, using all character n-grams of a word is a poor approach,
since some n-grams have no semantic relation with their words and increase the
amount of potentially useless information. This approach also increases
training time. In this work, we propose a new method for training word
embeddings whose goal is to replace the FastText bag of character n-grams
with a bag of word morphemes obtained through morphological analysis of the word.
Thus, words with similar context and morphemes are represented by vectors close
to each other. To evaluate our new approach, we performed intrinsic evaluations
considering 15 different tasks, and the results show a competitive performance
compared to FastText.
| 2020 | Computation and Language |
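The Morphological Skip-Gram abstract above replaces FastText's character n-grams with word morphemes. The toy sketch below shows the resulting representation, a sum of the word vector and its morpheme vectors; the hard-coded segmentation and random vectors are placeholders for a real morphological analyzer and trained embeddings.

```python
# Toy "bag of morphemes" representation: a word vector is the sum of the word's
# own vector and the vectors of its morphemes (instead of all character n-grams).
import numpy as np

dim = 8
rng = np.random.default_rng(0)
vectors = {m: rng.normal(size=dim) for m in ["un", "break", "able", "unbreakable"]}

def word_vector(word: str, morphemes: list[str]) -> np.ndarray:
    # In practice the segmentation comes from a morphological analyzer.
    parts = [word] + morphemes
    return np.sum([vectors[p] for p in parts], axis=0)

vec = word_vector("unbreakable", ["un", "break", "able"])
```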
COVID-19 SignSym: a fast adaptation of a general clinical NLP tool to
identify and normalize COVID-19 signs and symptoms to OMOP common data model | The COVID-19 pandemic swept across the world rapidly, infecting millions of
people. An efficient tool that can accurately recognize important clinical
concepts of COVID-19 from free text in electronic health records (EHRs) will be
valuable to accelerate COVID-19 clinical research. To this end, this study aims
at adapting the existing CLAMP natural language processing tool to quickly
build COVID-19 SignSym, which can extract COVID-19 signs/symptoms and their 8
attributes (body location, severity, temporal expression, subject, condition,
uncertainty, negation, and course) from clinical text. The extracted
information is also mapped to standard concepts in the Observational Medical
Outcomes Partnership common data model. A hybrid approach of combining deep
learning-based models, curated lexicons, and pattern-based rules was applied to
quickly build the COVID-19 SignSym from CLAMP, with optimized performance. Our
extensive evaluation using 3 external sites with clinical notes of COVID-19
patients, as well as the online medical dialogues of COVID-19, shows COVID-19
SignSym can achieve high performance across data sources. The workflow used
for this study can be generalized to other use cases, where existing clinical
natural language processing tools need to be customized for specific
information needs within a short time. COVID-19 SignSym is freely accessible to
the research community as a downloadable package
(https://clamp.uth.edu/covid/nlp.php) and has been used by 16 healthcare
organizations to support clinical research of COVID-19.
| 2021 | Computation and Language |
CoVoST 2 and Massively Multilingual Speech-to-Text Translation | Speech translation has recently become an increasingly popular topic of
research, partly due to the development of benchmark datasets. Nevertheless,
current datasets cover a limited number of languages. With the aim to foster
research in massive multilingual speech translation and speech translation for
low resource language pairs, we release CoVoST 2, a large-scale multilingual
speech translation corpus covering translations from 21 languages into English
and from English into 15 languages. This represents the largest open dataset
available to date in terms of total volume and language coverage. Data
sanity checks provide evidence about the quality of the data, which is released
under CC0 license. We also provide extensive speech recognition, bilingual and
multilingual machine translation and speech translation baselines with
open-source implementation.
| 2020 | Computation and Language |
Check_square at CheckThat! 2020: Claim Detection in Social Media via
Fusion of Transformer and Syntactic Features | In this digital age of news consumption, a news reader has the ability to
react, express and share opinions with others in a highly interactive and fast
manner. As a consequence, fake news has made its way into our daily life
because of very limited capacity to verify news on the Internet by large
companies as well as individuals. In this paper, we focus on solving two
problems which are part of the fact-checking ecosystem that can help to
automate fact-checking of claims in an ever increasing stream of content on
social media. For the first problem, claim check-worthiness prediction, we
explore the fusion of syntactic features and deep transformer Bidirectional
Encoder Representations from Transformers (BERT) embeddings, to classify
check-worthiness of a tweet, i.e. whether it includes a claim or not. We
conduct a detailed feature analysis and present our best performing models for
English and Arabic tweets. For the second problem, claim retrieval, we explore
the pre-trained embeddings from a Siamese network transformer model
(sentence-transformers) specifically trained for semantic textual similarity,
and perform KD-search to retrieve verified claims with respect to a query
tweet.
| 2020 | Computation and Language |
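For the claim-retrieval part of the Check_square abstract above, a sentence-similarity encoder plus KD-tree search can be sketched as below. The model name, claims, and query are placeholders, and this is a generic illustration rather than the team's system.

```python
# Claim retrieval sketch: encode verified claims and the query tweet with a
# sentence-transformers model, then retrieve nearest claims with a KD-tree.
from scipy.spatial import cKDTree
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder similarity model
verified_claims = [
    "Drinking water does not cure the flu.",
    "The Eiffel Tower is in Paris.",
]
claim_embeddings = model.encode(verified_claims)

tree = cKDTree(claim_embeddings)
query_embedding = model.encode(["water cures flu, doctors say"])[0]
distance, index = tree.query(query_embedding, k=1)
print(verified_claims[int(index)])
```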
Neural Machine Translation with Error Correction | Neural machine translation (NMT) generates the next target token given the
previous ground-truth target tokens as input during training, but the
previously generated target tokens during inference, which causes a discrepancy
between training and inference as well as error propagation, and affects the
translation accuracy. In this paper, we introduce an error correction mechanism
into NMT, which corrects the error information in the previous generated tokens
to better predict the next token. Specifically, we introduce two-stream
self-attention from XLNet into NMT decoder, where the query stream is used to
predict the next token, and meanwhile the content stream is used to correct the
error information from the previously predicted tokens. We leverage scheduled
sampling to simulate the prediction errors during training. Experiments on
three IWSLT translation datasets and two WMT translation datasets demonstrate
that our method achieves improvements over Transformer baseline and scheduled
sampling. Further experimental analyses also verify the effectiveness of our
proposed error correction mechanism to improve the translation quality.
| 2020 | Computation and Language |
Human Abnormality Detection Based on Bengali Text | In the field of natural language processing and human-computer interaction,
human attitudes and sentiments have attracted researchers' interest. However, in
the field of human-computer interaction, human abnormality detection has not
been investigated extensively, and most works depend on image-based information.
In natural language processing, effective meaning can potentially be conveyed by
all words; each word may pose difficulties because of its semantic
connection with ideas or categories. In this paper, an efficient and effective
human abnormality detection model is introduced that only uses Bengali text.
This proposed model can recognize whether the person is in a normal or abnormal
state by analyzing their typed Bengali text. To the best of our knowledge, this
is the first attempt at developing a text-based human abnormality detection
system. We have created our Bengali dataset (contains 2000 sentences) that is
generated by voluntary conversations. We have performed the comparative
analysis using Naive Bayes and Support Vector Machine classifiers. Two
different feature extraction techniques, count vectors and TF-IDF, are used to
experiment on our constructed dataset. We have achieved a maximum of 89% accuracy
and 92% F1-score with our constructed dataset in our experiment.
| 2020 | Computation and Language |
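The abstract above evaluates Naive Bayes and SVM classifiers over count/TF-IDF features. A minimal scikit-learn sketch of that setup follows; the tiny English placeholder sentences stand in for the Bengali dataset and carry no real data.

```python
# Minimal TF-IDF + Naive Bayes / SVM classification sketch with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["i feel calm and fine today", "everything is hopeless and dark"]  # placeholders
labels = ["normal", "abnormal"]

for clf in (MultinomialNB(), LinearSVC()):
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    pipeline.fit(texts, labels)
    print(type(clf).__name__, pipeline.predict(["i feel dark and hopeless"]))
```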
BAKSA at SemEval-2020 Task 9: Bolstering CNN with Self-Attention for
Sentiment Analysis of Code Mixed Text | Sentiment Analysis of code-mixed text has diversified applications in opinion
mining ranging from tagging user reviews to identifying social or political
sentiments of a sub-population. In this paper, we present an ensemble
architecture of convolutional neural net (CNN) and self-attention based LSTM
for sentiment analysis of code-mixed tweets. While the CNN component helps in
the classification of positive and negative tweets, the self-attention based
LSTM, helps in the classification of neutral tweets, because of its ability to
identify correct sentiment among multiple sentiment bearing units. We achieved
F1 scores of 0.707 (ranked 5th) and 0.725 (ranked 13th) on Hindi-English
(Hinglish) and Spanish-English (Spanglish) datasets, respectively. The
submissions for Hinglish and Spanglish tasks were made under the usernames
ayushk and harsh_6 respectively.
| 2020 | Computation and Language |
IITK at SemEval-2020 Task 10: Transformers for Emphasis Selection | This paper describes the system proposed for addressing the research problem
posed in Task 10 of SemEval-2020: Emphasis Selection For Written Text in Visual
Media. We propose an end-to-end model that takes the text as input and, for
each word, gives the probability of that word being emphasized.
Our results show that transformer-based models are particularly effective in
this task. We achieved the best Match_m score (described in section 2.2) of
0.810 and were ranked third on the leaderboard.
| 2020 | Computation and Language |
IITK at SemEval-2020 Task 8: Unimodal and Bimodal Sentiment Analysis of
Internet Memes | Social media is abundant in visual and textual information presented together
or in isolation. Memes are the most popular form, belonging to the former
class. In this paper, we present our approaches for the Memotion Analysis
problem as posed in SemEval-2020 Task 8. The goal of this task is to classify
memes based on their emotional content and sentiment. We leverage techniques
from Natural Language Processing (NLP) and Computer Vision (CV) towards the
sentiment classification of internet memes (Subtask A). We consider Bimodal
(text and image) as well as Unimodal (text-only) techniques in our study
ranging from the Na\"ive Bayes classifier to Transformer-based approaches. Our
results show that a text-only approach, a simple Feed Forward Neural Network
(FFNN) with Word2vec embeddings as input, outperforms all the others.
We stand first in the Sentiment analysis task with a relative improvement of
63% over the baseline macro-F1 score. Our work is relevant to any task
concerned with the combination of different modalities.
| 2020 | Computation and Language |
newsSweeper at SemEval-2020 Task 11: Context-Aware Rich Feature
Representations For Propaganda Classification | This paper describes our submissions to SemEval 2020 Task 11: Detection of
Propaganda Techniques in News Articles for each of the two subtasks of Span
Identification and Technique Classification. We make use of a pre-trained BERT
language model enhanced with tagging techniques developed for the task of Named
Entity Recognition (NER), to develop a system for identifying propaganda spans
in the text. For the second subtask, we incorporate contextual features in a
pre-trained RoBERTa model for the classification of propaganda techniques. We
were ranked 5th in the propaganda technique classification subtask.
| 2020 | Computation and Language |
CS-NET at SemEval-2020 Task 4: Siamese BERT for ComVE | In this paper, we describe our system for Task 4 of SemEval 2020, which
involves differentiating between natural language statements that conform to
common sense and those that do not. The organizers propose three subtasks -
first, selecting between two sentences, the one which is against common sense.
Second, identifying the most crucial reason why a statement does not make
sense. Third, generating novel reasons for explaining the against common sense
statement. Out of the three subtasks, this paper reports the system description
of subtask A and subtask B. This paper proposes a model based on transformer
neural network architecture for addressing the subtasks. The novelty of the work
lies in the architecture design, which handles the logical implication of
contradicting statements and simultaneous information extraction from both
sentences. We use parallel instances of the transformer, which are responsible for
a boost in performance. We achieved an accuracy of 94.8% in subtask A and
89% in subtask B on the test set.
| 2,020 | Computation and Language |
IITK-RSA at SemEval-2020 Task 5: Detecting Counterfactuals | This paper describes our efforts in tackling Task 5 of SemEval-2020. The task
involved detecting a class of textual expressions known as counterfactuals and
separating them into their constituent elements. Counterfactual statements
describe events that have not or could not have occurred and the possible
implications of such events. While counterfactual reasoning is natural for
humans, understanding these expressions is difficult for artificial agents due
to a variety of linguistic subtleties. Our final submitted approaches were an
ensemble of various fine-tuned transformer-based and CNN-based models for the
first subtask and a transformer model with dependency tree information for the
second subtask. We ranked 4th and 9th on the overall leaderboard. We also
explored various other approaches that involved the use of classical methods,
other neural architectures and the incorporation of different linguistic
features.
| 2,020 | Computation and Language |
Connecting Embeddings for Knowledge Graph Entity Typing | Knowledge graph (KG) entity typing aims at inferring possible missing entity
type instances in KG, which is a very significant but still under-explored
subtask of knowledge graph completion. In this paper, we propose a novel
approach for KG entity typing which is trained by jointly utilizing local
typing knowledge from existing entity type assertions and global triple
knowledge from KGs. Specifically, we present two distinct knowledge-driven
effective mechanisms of entity type inference. Accordingly, we build two novel
embedding models to realize the mechanisms. Afterward, a joint model with them
is used to infer missing entity type instances, which favors inferences that
agree with both entity type instances and triple knowledge in KGs. Experimental
results on two real-world datasets (Freebase and YAGO) demonstrate the
effectiveness of our proposed mechanisms and models for improving KG entity
typing. The source code and data of this paper can be obtained from:
https://github.com/Adam1679/ConnectE
| 2,020 | Computation and Language |
problemConquero at SemEval-2020 Task 12: Transformer and Soft
label-based approaches | In this paper, we present various systems submitted by our team
problemConquero for SemEval-2020 Shared Task 12 Multilingual Offensive Language
Identification in Social Media. We participated in all the three sub-tasks of
OffensEval-2020, and our final submissions during the evaluation phase included
transformer-based approaches and a soft label-based approach. We submitted
BERT-based fine-tuned models for each language of sub-task A (offensive
tweet identification) and a RoBERTa-based fine-tuned model for sub-task B
(automatic categorization of offense types). For sub-task C (offense target
identification), we submitted two models, one using soft labels and the other
a BERT-based fine-tuned model. Our ranks for sub-task A were Greek-19 out
of 37, Turkish-22 out of 46, Danish-26 out of 39, Arabic-39 out of 53, and
English-20 out of 85. We achieved a rank of 28 out of 43 for sub-task B. Our
best rank for sub-task C was 20 out of 39, using the BERT-based fine-tuned model.
| 2,020 | Computation and Language |
XD at SemEval-2020 Task 12: Ensemble Approach to Offensive Language
Identification in Social Media Using Transformer Encoders | This paper presents six document classification models using the latest
transformer encoders and a high-performing ensemble model for a task of
offensive language identification in social media. For the individual models,
deep transformer layers are applied to perform multi-head attentions. For the
ensemble model, the utterance representations taken from those individual
models are concatenated and fed into a linear decoder to make the final
decisions. Our ensemble model outperforms the individual models and shows up to
8.6% improvement over the individual models on the development set. On the test
set, it achieves a macro-F1 of 90.9%, making it one of the top-performing
systems among the 85 participants in sub-task A of this shared task. Our
analysis shows that although the ensemble model significantly improves the
accuracy on the development set, the improvement is not as evident on the test
set.
| 2,020 | Computation and Language |
Curriculum Vitae Recommendation Based on Text Mining | In recent years, developments in diverse areas related to computer science and
the internet have created new alternatives for decision making in the selection
of personnel for state and private companies. To optimize this selection
process, recommendation systems are well suited to working with explicit
information about the preferences of employers or end users, since this
information can be used to generate recommendation lists based on collaboration
or content similarity. This research therefore builds on the characteristics
contained in a database of curricula vitae and job offers from the Peruvian
labour market, which describe the experience, knowledge, and skills of each
candidate in textual terms. The research focuses on the problem of how to take
advantage of the growing amount of unstructured information about job offers
and curricula vitae on different websites for CV recommendation, using
techniques from Text Mining and Natural Language Processing. As the key
technique for the present study, we emphasize Term Frequency-Inverse Document
Frequency (TF-IDF), which identifies the CVs most relevant to a job offer
posted on a website through their average TF-IDF values. The resulting weighted
value can then be used as a qualification score of the relevant curricula vitae
for the recommendation.
| 2,020 | Computation and Language |
Book Success Prediction with Pretrained Sentence Embeddings and
Readability Scores | Predicting the potential success of a book in advance is vital in many
applications. This could help both publishers and readers in their
decision-making process whether or not a book is worth publishing and reading,
respectively. In this paper, we propose a model that leverages pretrained
sentence embeddings along with various readability scores for book success
prediction. Unlike previous methods, the proposed method requires no
count-based, lexical, or syntactic features. Instead, we use a convolutional
neural network over pretrained sentence embeddings and leverage different
readability scores through a simple concatenation operation. Our proposed model
outperforms strong baselines for this task by as much as 6.4\% F1-score
points. Moreover, our experiments show that, according to our model, the
first 1K sentences alone are sufficient to predict the potential success of books.
| 2,021 | Computation and Language |
When Classical Chinese Meets Machine Learning: Explaining the Relative
Performances of Word and Sentence Segmentation Tasks | We consider three major text sources about the Tang Dynasty of China in our
experiments that aim to segment text written in classical Chinese. These
corpora include a collection of Tang Tomb Biographies, the New Tang Book, and
the Old Tang Book. We show that it is possible to achieve satisfactory
segmentation results with the deep learning approach. More interestingly, we
found that some of the relative superiority we observed among different
experimental designs may be explainable: the relative relevance among the
training corpora provides hints and explanations for the differences in
segmentation results obtained when we employed different combinations
of corpora to train the classifiers.
| 2,020 | Computation and Language |
Exploratory Search with Sentence Embeddings | Exploratory search aims to guide users through a corpus rather than
pinpointing exact information. We propose an exploratory search system based on
hierarchical clusters and document summaries using sentence embeddings. With
sentence embeddings, we represent documents as the mean of their embedded
sentences, extract summaries containing sentences close to this document
representation and extract keyphrases close to the document representation. To
evaluate our search system, we scrape our personal search history over the past
year and report our experience with the system. We then discuss motivating use
cases of an exploratory search system of this nature and conclude with possible
directions of future work.
| 2,020 | Computation and Language |
IITK at the FinSim Task: Hypernym Detection in Financial Domain via
Context-Free and Contextualized Word Embeddings | In this paper, we present our approaches for the FinSim 2020 shared task on
"Learning Semantic Representations for the Financial Domain". The goal of this
task is to classify financial terms into the most relevant hypernym (or
top-level) concept in an external ontology. We leverage both context-dependent
and context-independent word embeddings in our analysis. Our systems deploy
Word2vec embeddings trained from scratch on the corpus (Financial Prospectus in
English) along with pre-trained BERT embeddings. We divide the test dataset
into two subsets based on a domain rule. For one subset, we use unsupervised
distance measures to classify the term. For the second subset, we use simple
supervised classifiers like Naive Bayes, on top of the embeddings, to arrive at
a final prediction. Finally, we combine both the results. Our system ranks 1st
based on both the metrics, i.e., mean rank and accuracy.
| 2,020 | Computation and Language |
Better Early than Late: Fusing Topics with Word Embeddings for Neural
Question Paraphrase Identification | Question paraphrase identification is a key task in Community Question
Answering (CQA) to determine if an incoming question has been previously asked.
Many current models use word embeddings to identify duplicate questions, but
the use of topic models in feature-engineered systems suggests that they can be
helpful for this task, too. We therefore propose two ways of merging topics
with word embeddings (early vs. late fusion) in a new neural architecture for
question paraphrase identification. Our results show that our system
outperforms neural baselines on multiple CQA datasets, while an ablation study
highlights the importance of topics and especially early topic-embedding fusion
in our architecture.
| 2,020 | Computation and Language |
Massive Multi-Document Summarization of Product Reviews with Weak
Supervision | Product reviews summarization is a type of Multi-Document Summarization (MDS)
task in which the summarized document sets are often far larger than in
traditional MDS (up to tens of thousands of reviews). We highlight this
difference and coin the term "Massive Multi-Document Summarization" (MMDS) to
denote an MDS task that involves hundreds of documents or more. Prior work on
product reviews summarization considered small samples of the reviews, mainly
due to the difficulty of handling massive document sets. We show that
summarizing small samples can result in loss of important information and
provide misleading evaluation results. We propose a schema for summarizing a
massive set of reviews on top of a standard summarization algorithm. Since
writing large volumes of reference summaries needed for advanced neural network
models is impractical, our solution relies on weak supervision. Finally, we
propose an evaluation scheme that is based on multiple crowdsourced reference
summaries and aims to capture the massive review collection. We show that an
initial implementation of our schema significantly improves over several
baselines in ROUGE scores, and exhibits strong coherence in a manual linguistic
quality assessment.
| 2,020 | Computation and Language |
To Be or Not To Be a Verbal Multiword Expression: A Quest for
Discriminating Features | Automatic identification of multiword expressions (MWEs) is a prerequisite
for semantically-oriented downstream applications. This task is challenging
because MWEs, especially verbal ones (VMWEs), exhibit surface variability.
However, this variability is usually more restricted than in regular (non-VMWE)
constructions, which leads to various variability profiles. We use this fact to
determine the optimal set of features which could be used in a supervised
classification setting to solve a subproblem of VMWE identification: the
identification of occurrences of previously seen VMWEs. Surprisingly, a simple
custom frequency-based feature selection method proves more efficient than
other standard methods such as Chi-squared test, information gain or decision
trees. An SVM classifier using the optimal set of only 6 features outperforms
the best systems from a recent shared task on the French seen data.
| 2,020 | Computation and Language |
SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection | Lexical Semantic Change detection, i.e., the task of identifying words that
change meaning over time, is a very active research area, with applications in
NLP, lexicography, and linguistics. Evaluation is currently the most pressing
problem in Lexical Semantic Change detection, as no gold standards are
available to the community, which hinders progress. We present the results of
the first shared task that addresses this gap by providing researchers with an
evaluation framework and manually annotated, high-quality datasets for English,
German, Latin, and Swedish. 33 teams submitted 186 systems, which were
evaluated on two subtasks.
| 2,020 | Computation and Language |
Effects of Language Relatedness for Cross-lingual Transfer Learning in
Character-Based Language Models | Character-based Neural Network Language Models (NNLM) have the advantage of
smaller vocabulary and thus faster training times in comparison to NNLMs based
on multi-character units. However, in low-resource scenarios, both the
character and multi-character NNLMs suffer from data sparsity. In such
scenarios, cross-lingual transfer has improved multi-character NNLM performance
by allowing information transfer from a source to the target language. In the
same vein, we propose to use cross-lingual transfer for character NNLMs applied
to low-resource Automatic Speech Recognition (ASR). However, applying
cross-lingual transfer to character NNLMs is not as straightforward. We observe
that relatedness of the source language plays an important role in
cross-lingual pretraining of character NNLMs. We evaluate this aspect on ASR
tasks for two target languages: Finnish (with English and Estonian as source)
and Swedish (with Danish, Norwegian, and English as source). Prior work has
observed no difference between using the related or unrelated language for
multi-character NNLMs. We, however, show that for character-based NNLMs, only
pretraining with a related language improves the ASR performance, and using an
unrelated language may even degrade it. We also observe that the benefits are
larger when there is much less target data than source data.
| 2,020 | Computation and Language |
Analogical Reasoning for Visually Grounded Language Acquisition | Children acquire language subconsciously by observing the surrounding world
and listening to descriptions. They can discover the meaning of words even
without explicit language knowledge, and generalize to novel compositions
effortlessly. In this paper, we bring this ability to AI, by studying the task
of Visually grounded Language Acquisition (VLA). We propose a multimodal
transformer model augmented with a novel mechanism for analogical reasoning,
which approximates novel compositions by learning semantic mapping and
reasoning operations from previously seen compositions. Our proposed method,
Analogical Reasoning Transformer Networks (ARTNet), is trained on raw
multimedia data (video frames and transcripts), and after observing a set of
compositions such as "washing apple" or "cutting carrot", it can generalize and
recognize new compositions in new video frames, such as "washing carrot" or
"cutting apple". To this end, ARTNet refers to relevant instances in the
training data and uses their visual features and captions to establish
analogies with the query image. Then it chooses the suitable verb and noun to
create a new composition that describes the new image best. Extensive
experiments on an instructional video dataset demonstrate that the proposed
method achieves significantly better generalization capability and recognition
accuracy compared to state-of-the-art transformer models.
| 2,020 | Computation and Language |
Product Title Generation for Conversational Systems using BERT | Through recent advancements in speech technology and introduction of smart
devices, such as Amazon Alexa and Google Home, an increasing number of users are
interacting with applications through voice. E-commerce companies typically
display short product titles on their webpages, either human-curated or
algorithmically generated, when brevity is required, but these titles are
dissimilar from natural spoken language. For example, "Lucky Charms Gluten Free
Break-fast Cereal, 20.5 oz a box Lucky Charms Gluten Free" is acceptable to
display on a webpage, but "a 20.5 ounce box of lucky charms gluten free cereal"
is easier to comprehend over a conversational system. As compared to display
devices, where images and detailed product information can be presented to
users, short titles for products are necessary when interfacing with voice
assistants. We propose a sequence-to-sequence approach using BERT to generate
short, natural, spoken language titles from input web titles. Our extensive
experiments on a real-world industry dataset and human evaluation of model
outputs, demonstrate that BERT summarization outperforms comparable baseline
models.
| 2,020 | Computation and Language |
Applying GPGPU to Recurrent Neural Network Language Model based Fast
Network Search in the Real-Time LVCSR | Recurrent Neural Network Language Models (RNNLMs) have started to be used in
various fields of speech recognition due to their outstanding performance.
However, the high computational complexity of RNNLMs has been a hurdle in
applying the RNNLM to a real-time Large Vocabulary Continuous Speech
Recognition (LVCSR). In order to accelerate the speed of RNNLM-based network
searches during decoding, we apply the General Purpose Graphic Processing Units
(GPGPUs). This paper proposes a novel method of applying GPGPUs to RNNLM-based
graph traversals. We have achieved our goal by reducing redundant computations
on CPUs and the amount of data transferred between GPGPUs and CPUs. The proposed
approach was evaluated on both the WSJ corpus and in-house data. Experiments show
that the proposed approach achieves real-time speed in various circumstances while
keeping the Word Error Rate (WER) relatively 10% lower than that of
n-gram models.
| 2,020 | Computation and Language |
AI4D -- African Language Dataset Challenge | As language and speech technologies become more advanced, the lack of
fundamental digital resources for African languages, such as data, spell
checkers and Part of Speech taggers, means that the digital divide between
these languages and others keeps growing. This work details the organisation of
the AI4D - African Language Dataset Challenge, an effort to incentivize the
creation, organization and discovery of African language datasets through a
competitive challenge. We particularly encouraged the submission of annotated
datasets which can be used for training task-specific supervised machine
learning models.
| 2,020 | Computation and Language |
Deep Learning based, end-to-end metaphor detection in Greek language
with Recurrent and Convolutional Neural Networks | This paper presents and benchmarks a number of end-to-end Deep Learning based
models for metaphor detection in Greek. We combine Convolutional Neural
Networks and Recurrent Neural Networks with representation learning to bear on
the metaphor detection problem for the Greek language. The models presented
achieve exceptional accuracy scores, significantly improving on the previous
state-of-the-art results, which had already reached an accuracy of 0.82. Furthermore, no
special preprocessing, feature engineering or linguistic knowledge is used in
this work. The methods presented achieve accuracy of 0.92 and F-score 0.92 with
Convolutional Neural Networks (CNNs) and bidirectional Long Short Term Memory
networks (LSTMs). Comparable results of 0.91 accuracy and 0.91 F-score are also
achieved with bidirectional Gated Recurrent Units (GRUs) and Convolutional
Recurrent Neural Nets (CRNNs). The models are trained and evaluated only on the
basis of the training tuples, the sentences and their labels. The outcome is a
state of the art collection of metaphor detection models, trained on limited
labelled resources, which can be extended to other languages and similar tasks.
| 2,020 | Computation and Language |
HCMS at SemEval-2020 Task 9: A Neural Approach to Sentiment Analysis for
Code-Mixed Texts | Problems involving code-mixed language are often plagued by a lack of
resources and an absence of materials to perform sophisticated transfer
learning with. In this paper we describe our submission to the Sentimix
Hindi-English task involving sentiment classification of code-mixed texts, and
with an F1 score of 67.1%, we demonstrate that simple convolution and attention
may well produce reasonable results.
| 2,020 | Computation and Language |
NITS-Hinglish-SentiMix at SemEval-2020 Task 9: Sentiment Analysis For
Code-Mixed Social Media Text Using an Ensemble Model | Sentiment Analysis is the process of deciphering what a sentence emotes and
classifying it as either positive, negative, or neutral. In recent times,
India has seen a huge influx in the number of active social media users and
this has led to a plethora of unstructured text data. Since the Indian
population is generally fluent in both Hindi and English, they end up
generating code-mixed Hinglish social media text, i.e., expressions of the Hindi
language written in the Roman script alongside English words. The
ability to adequately comprehend the notions in these texts is truly necessary.
Our team, rns2020 participated in Task 9 at SemEval2020 intending to design a
system to carry out the sentiment analysis of code-mixed social media text.
This work proposes a system named NITS-Hinglish-SentiMix to viably complete the
sentiment analysis of such code-mixed Hinglish text. The proposed framework has
recorded an F-Score of 0.617 on the test data.
| 2,020 | Computation and Language |
Health, Psychosocial, and Social issues emanating from COVID-19 pandemic
based on Social Media Comments using Natural Language Processing | The COVID-19 pandemic has caused a global health crisis that affects many
aspects of human lives. In the absence of vaccines and antivirals, several
behavioural change and policy initiatives, such as physical distancing, have
been implemented to control the spread of the coronavirus. Social media data
can reveal public perceptions toward how governments and health agencies across
the globe are handling the pandemic, as well as the impact of the disease on
people regardless of their geographic locations in line with various factors
that hinder or facilitate the efforts to control the spread of the pandemic
globally. This paper aims to investigate the impact of the COVID-19 pandemic on
people globally using social media data. We apply natural language processing
(NLP) and thematic analysis to understand public opinions, experiences, and
issues with respect to the COVID-19 pandemic using social media data. First, we
collect over 47 million COVID-19-related comments from Twitter, Facebook,
YouTube, and three online discussion forums. Second, we perform data
preprocessing which involves applying NLP techniques to clean and prepare the
data for automated theme extraction. Third, we apply context-aware NLP approach
to extract meaningful keyphrases or themes from over 1 million randomly
selected comments, as well as compute sentiment scores for each theme and
assign sentiment polarity based on the scores using lexicon-based technique.
Fourth, we categorize related themes into broader themes. A total of 34
negative themes emerged, out of which 15 are health-related issues,
psychosocial issues, and social issues related to the COVID-19 pandemic from
the public perspective. In addition, 20 positive themes emerged from our
results. Finally, we recommend interventions that can help address the negative
issues based on the positive themes and other remedial ideas rooted in
research.
| 2,021 | Computation and Language |
A Survey on Graph Neural Networks for Knowledge Graph Completion | Knowledge Graphs are increasingly becoming popular for a variety of
downstream tasks like Question Answering and Information Retrieval. However,
the Knowledge Graphs are often incomplete, thus leading to poor performance. As
a result, there has been a lot of interest in the task of Knowledge Base
Completion. More recently, Graph Neural Networks have been used to capture
structural information inherently stored in these Knowledge Graphs and have
been shown to achieve SOTA performance across a variety of datasets. In this
survey, we examine the strengths and weaknesses of the proposed
methodologies and try to identify exciting new research problems in this area
that require further investigation.
| 2,020 | Computation and Language |
IDS at SemEval-2020 Task 10: Does Pre-trained Language Model Know What
to Emphasize? | We propose a novel method that enables us to determine words that deserve to
be emphasized from written text in visual media, relying only on the
information from the self-attention distributions of pre-trained language
models (PLMs). With extensive experiments and analyses, we show that 1) our
zero-shot approach is superior to a reasonable baseline that adopts TF-IDF and
that 2) there exist several attention heads in PLMs specialized for emphasis
selection, confirming that PLMs are capable of recognizing important words in
sentences.
| 2,020 | Computation and Language |
MULTISEM at SemEval-2020 Task 3: Fine-tuning BERT for Lexical Meaning | We present the MULTISEM systems submitted to SemEval 2020 Task 3: Graded Word
Similarity in Context (GWSC). We experiment with injecting semantic knowledge
into pre-trained BERT models through fine-tuning on lexical semantic tasks
related to GWSC. We use existing semantically annotated datasets and propose to
approximate similarity through automatically generated lexical substitutes in
context. We participate in both GWSC subtasks and address two languages,
English and Finnish. Our best English models occupy the third and fourth
positions in the ranking for the two subtasks. Performance is lower for the
Finnish models which are mid-ranked in the respective subtasks, highlighting
the important role of data availability for fine-tuning.
| 2,020 | Computation and Language |
FiSSA at SemEval-2020 Task 9: Fine-tuned For Feelings | In this paper, we present our approach for sentiment classification on
Spanish-English code-mixed social media data in the SemEval-2020 Task 9. We
investigate performance of various pre-trained Transformer models by using
different fine-tuning strategies. We explore both monolingual and multilingual
models with the standard fine-tuning method. Additionally, we propose a custom
model that we fine-tune in two steps: once with a language modeling objective,
and once with a task-specific objective. Although two-step fine-tuning improves
sentiment classification performance over the base model, the large
multilingual XLM-RoBERTa model achieves the best weighted F1-score of 0.537 on
development data and 0.739 on test data. With this score, our team jupitter
placed tenth overall in the competition.
| 2,020 | Computation and Language |
JUNLP@SemEval-2020 Task 9: Sentiment Analysis of Hindi-English code mixed
data using Grid Search Cross Validation | Code-mixing is a phenomenon which arises mainly in multilingual societies.
Multilingual people, who are well versed in their native languages and also
English speakers, tend to code-mix using English-based phonetic typing and the
insertion of anglicisms in their main language. This linguistic phenomenon
poses a great challenge to conventional NLP domains such as Sentiment Analysis,
Machine Translation, and Text Summarization, to name a few. In this work, we
focus on working out a plausible solution to the domain of Code-Mixed Sentiment
Analysis. This work was done as participation in the SemEval-2020 Sentimix
Task, where we focused on the sentiment analysis of English-Hindi code-mixed
sentences. Our username for the submission was "sainik.mahata" and our team name
was "JUNLP". We used feature extraction algorithms in conjunction with
traditional machine learning algorithms such as SVR and Grid Search in an
attempt to solve the task. Our approach garnered an f1-score of 66.2\% when
tested using metrics prepared by the organizers of the task.
| 2,020 | Computation and Language |
Named entity recognition in chemical patents using ensemble of
contextual language models | Chemical patent documents describe a broad range of applications holding key
reaction and compound information, such as chemical structure, reaction
formulas, and molecular properties. These informational entities should be
first identified in text passages to be utilized in downstream tasks. Text
mining provides means to extract relevant information from chemical patents
through information extraction techniques. As part of the Information
Extraction task of the Cheminformatics Elsevier Melbourne University challenge,
in this work we study the effectiveness of contextualized language models to
extract reaction information in chemical patents. We assess transformer
architectures trained on generic and specialised corpora to propose a new
ensemble model. Our best model, based on a majority ensemble approach, achieves
an exact F1-score of 92.30% and a relaxed F1-score of 96.24%. The results show
that an ensemble of contextualized language models can provide an effective method
to extract information from chemical patents.
| 2,020 | Computation and Language |
SummEval: Re-evaluating Summarization Evaluation | The scarcity of comprehensive up-to-date studies on evaluation metrics for
text summarization and the lack of consensus regarding evaluation protocols
continue to inhibit progress. We address the existing shortcomings of
summarization evaluation methods along five dimensions: 1) we re-evaluate 14
automatic evaluation metrics in a comprehensive and consistent fashion using
neural summarization model outputs along with expert and crowd-sourced human
annotations, 2) we consistently benchmark 23 recent summarization models using
the aforementioned automatic evaluation metrics, 3) we assemble the largest
collection of summaries generated by models trained on the CNN/DailyMail news
dataset and share it in a unified format, 4) we implement and share a toolkit
that provides an extensible and unified API for evaluating summarization models
across a broad range of automatic metrics, 5) we assemble and share the largest
and most diverse, in terms of model types, collection of human judgments of
model-generated summaries on the CNN/Daily Mail dataset annotated by both
expert judges and crowd-source workers. We hope that this work will help
promote a more complete evaluation protocol for text summarization as well as
advance research in developing evaluation metrics that better correlate with
human judgments.
| 2,021 | Computation and Language |
MultiWOZ 2.2 : A Dialogue Dataset with Additional Annotation Corrections
and State Tracking Baselines | MultiWOZ is a well-known task-oriented dialogue dataset containing over
10,000 annotated dialogues spanning 8 domains. It is extensively used as a
benchmark for dialogue state tracking. However, recent works have reported
presence of substantial noise in the dialogue state annotations. MultiWOZ 2.1
identified and fixed many of these erroneous annotations and user utterances,
resulting in an improved version of this dataset. This work introduces MultiWOZ
2.2, which is yet another improved version of this dataset. Firstly, we
identify and fix dialogue state annotation errors across 17.3% of the
utterances on top of MultiWOZ 2.1. Secondly, we redefine the ontology by
disallowing vocabularies of slots with a large number of possible values (e.g.,
restaurant name, time of booking). In addition, we introduce slot span
annotations for these slots to standardize them across recent models, which
previously used custom string matching heuristics to generate them. We also
benchmark a few state of the art dialogue state tracking models on the
corrected dataset to facilitate comparison for future work. In the end, we
discuss best practices for dialogue data collection that can help avoid
annotation errors.
| 2,020 | Computation and Language |
IUST at SemEval-2020 Task 9: Sentiment Analysis for Code-Mixed Social
Media Text using Deep Neural Networks and Linear Baselines | Sentiment Analysis is a well-studied field of Natural Language Processing.
However, the rapid growth of social media and noisy content within them poses
significant challenges in addressing this problem with well-established methods
and tools. One of these challenges is code-mixing, which means using different
languages to convey thoughts in social media texts. Our group, with the name of
IUST (username: TAHA), participated in the SemEval-2020 shared task 9 on
Sentiment Analysis for Code-Mixed Social Media Text, and we have attempted to
develop a system to predict the sentiment of a given code-mixed tweet. We used
different preprocessing techniques and proposed to use different methods that
vary from NBSVM to more complicated deep neural network models. Our best
performing method obtains an F1 score of 0.751 for the Spanish-English sub-task
and 0.706 over the Hindi-English sub-task.
| 2,020 | Computation and Language |
Consistent Transcription and Translation of Speech | The conventional paradigm in speech translation starts with a speech
recognition step to generate transcripts, followed by a translation step with
the automatic transcripts as input. To address various shortcomings of this
paradigm, recent work explores end-to-end trainable direct models that
translate without transcribing. However, transcripts can be an indispensable
output in practical applications, which often display transcripts alongside the
translations to users.
We make this common requirement explicit and explore the task of jointly
transcribing and translating speech. While high accuracy of transcript and
translation are crucial, even highly accurate systems can suffer from
inconsistencies between both outputs that degrade the user experience. We
introduce a methodology to evaluate consistency and compare several modeling
approaches, including the traditional cascaded approach and end-to-end models.
We find that direct models are poorly suited to the joint
transcription/translation task, but that end-to-end models that feature a
coupled inference procedure are able to achieve strong consistency. We further
introduce simple techniques for directly optimizing for consistency, and
analyze the resulting trade-offs between consistency, transcription accuracy,
and translation accuracy.
| 2,020 | Computation and Language |
NoPropaganda at SemEval-2020 Task 11: A Borrowed Approach to Sequence
Tagging and Text Classification | This paper describes our contribution to SemEval-2020 Task 11: Detection Of
Propaganda Techniques In News Articles. We start with simple LSTM baselines and
move to an autoregressive transformer decoder to predict long continuous
propaganda spans for the first subtask. We also adopt an approach from relation
extraction by enveloping spans mentioned above with special tokens for the
second subtask of propaganda technique classification. Our models report an
F-score of 44.6% and a micro-averaged F-score of 58.2% for those tasks
accordingly.
| 2,020 | Computation and Language |
Bollyrics: Automatic Lyrics Generator for Romanised Hindi | Song lyrics convey a meaningful story in a creative manner with complex
rhythmic patterns. Researchers have been successful in generating and analysing
lyrics for poetry and songs in English and Chinese. But there are no works
which explore the Hindi language datasets. Given the popularity of Hindi songs
across the world and the ambiguous nature of romanized Hindi script, we propose
Bollyrics, an automatic lyric generator for romanized Hindi songs. We propose
simple techniques to capture rhyming patterns before and during the model
training process in Hindi language. The dataset and codes are available
publicly at https://github.com/lingo-iitgn/Bollyrics.
| 2,020 | Computation and Language |
Insightful Assistant: AI-compatible Operation Graph Representations for
Enhancing Industrial Conversational Agents | Advances in voice-controlled assistants paved the way into the consumer
market. For professional or industrial use, the capabilities of such assistants
are too limited or too time-consuming to implement due to the higher complexity
of data, possible AI-based operations, and requests. In the light of these
deficits, this paper presents Insightful Assistant---a pipeline concept based
on a novel operation graph representation resulting from the intents detected.
Using a predefined set of semantically annotated (executable) functions, each
node of the operation graph is assigned to a function for execution. Besides
basic operations, such functions can contain artificial intelligence (AI) based
operations (e.g., anomaly detection). The result is then visualized to the user
according to type and extracted user preferences in an automated way. We
further collected a unique crowd-sourced set of 869 requests, each with four
different variants and its expected visualization, for an industrial dataset. The
evaluation of our proof-of-concept prototype on this dataset shows its
feasibility: it achieves an accuracy of up to 95.0% (74.5%) for simple
(complex) request detection with different variants and a top3-accuracy up to
95.4% for data-/user-adaptive visualization.
| 2,020 | Computation and Language |
Duluth at SemEval-2020 Task 12: Offensive Tweet Identification in
English with Logistic Regression | This paper describes the Duluth systems that participated in SemEval--2020
Task 12, Multilingual Offensive Language Identification in Social Media
(OffensEval--2020). We participated in the three English language tasks. Our
systems provide a simple Machine Learning baseline using logistic regression.
We trained our models on the distantly supervised training data made available
by the task organizers and used no other resources. As might be expected we did
not rank highly in the comparative evaluation: 79th of 85 in Task A, 34th of 43
in Task B, and 24th of 39 in Task C. We carried out a qualitative analysis of
our results and found that the class labels in the gold standard data are
somewhat noisy. We hypothesize that the extremely high accuracy (> 90%) of the
top ranked systems may reflect methods that learn the training data very well
but may not generalize to the task of identifying offensive language in
English. This analysis includes examples of tweets that despite being mildly
redacted are still offensive.
| 2,020 | Computation and Language |
Duluth at SemEval-2019 Task 6: Lexical Approaches to Identify and
Categorize Offensive Tweets | This paper describes the Duluth systems that participated in SemEval--2019
Task 6, Identifying and Categorizing Offensive Language in Social Media
(OffensEval). For the most part these systems took traditional Machine Learning
approaches that built classifiers from lexical features found in manually
labeled training data. However, our most successful system for classifying a
tweet as offensive (or not) was a rule-based black--list approach, and we also
experimented with combining the training data from two different but related
SemEval tasks. Our best systems in each of the three OffensEval tasks placed in
the middle of the comparative evaluation, ranking 57th of 103 in task A, 39th
of 75 in task B, and 44th of 65 in task C.
| 2,020 | Computation and Language |
Constructing a Testbed for Psychometric Natural Language Processing | Psychometric measures of ability, attitudes, perceptions, and beliefs are
crucial for understanding user behaviors in various contexts including health,
security, e-commerce, and finance. Traditionally, psychometric dimensions have
been measured and collected using survey-based methods. Inferring such
constructs from user-generated text could afford opportunities for timely,
unobtrusive, collection and analysis. In this paper, we describe our efforts to
construct a corpus for psychometric natural language processing (NLP). We
discuss our multi-step process to align user text with their survey-based
response items and provide an overview of the resulting testbed which
encompasses survey-based psychometric measures and accompanying user-generated
text from over 8,500 respondents. We report preliminary results on the use of
the text to categorize/predict users' survey response labels. We also discuss
the important implications of our work and resulting testbed for future
psychometric NLP research.
| 2,020 | Computation and Language |
Effect of Text Processing Steps on Twitter Sentiment Classification
using Word Embedding | Processing of raw text is the crucial first step in text classification and
sentiment analysis. However, text processing steps are often performed using
off-the-shelf routines and pre-built word dictionaries without optimizing for
domain, application, and context. This paper investigates the effect of seven
text processing scenarios on a particular text domain (Twitter) and application
(sentiment classification). Skip gram-based word embeddings are developed to
include Twitter colloquial words, emojis, and hashtag keywords that are often
removed for being unavailable in conventional literature corpora. Our
experiments reveal negative effects on sentiment classification of two common
text processing steps: 1) stop word removal and 2) averaging of word vectors to
represent individual tweets. New effective steps for 1) including non-ASCII
emoji characters, 2) measuring word importance from word embedding, 3)
aggregating word vectors into a tweet embedding, and 4) developing linearly
separable feature space have been proposed to optimize the sentiment
classification pipeline. The best combination of text processing steps yields
the highest average area under the curve (AUC) of 88.4 (+/-0.4) in classifying
14,640 tweets with three sentiment labels. Word selection from context-driven
word embedding reveals that only the ten most important words in Tweets
cumulatively yield over 98% of the maximum accuracy. Results demonstrate a
means for data-driven selection of important words in tweet classification as
opposed to using pre-built word dictionaries. The proposed tweet embedding is
robust to and alleviates the need for several text processing steps.
| 2,020 | Computation and Language |
Reed at SemEval-2020 Task 9: Fine-Tuning and Bag-of-Words Approaches to
Code-Mixed Sentiment Analysis | We explore the task of sentiment analysis on Hinglish (code-mixed
Hindi-English) tweets as participants of Task 9 of the SemEval-2020
competition, known as the SentiMix task. We had two main approaches: 1)
applying transfer learning by fine-tuning pre-trained BERT models and 2)
training feedforward neural networks on bag-of-words representations. During
the evaluation phase of the competition, we obtained an F-score of 71.3% with
our best model, which placed $4^{th}$ out of 62 entries in the official system
rankings.
| 2,020 | Computation and Language |
A Survey on Complex Question Answering over Knowledge Base: Recent
Advances and Challenges | Question Answering (QA) over Knowledge Base (KB) aims to automatically answer
natural language questions via well-structured relation information between
entities stored in knowledge bases. In order to make KBQA more applicable in
actual scenarios, researchers have shifted their attention from simple
questions to complex questions, which require more KB triples and constraint
inference. In this paper, we introduce the recent advances in complex QA.
Besides traditional methods relying on templates and rules, the research is
categorized into a taxonomy that contains two main branches, namely Information
Retrieval-based and Neural Semantic Parsing-based. After describing the methods
of these branches, we analyze directions for future research and introduce the
models proposed by the Alime team.
| 2,020 | Computation and Language |
KUISAIL at SemEval-2020 Task 12: BERT-CNN for Offensive Speech
Identification in Social Media | In this paper, we describe our approach to utilize pre-trained BERT models
with Convolutional Neural Networks for sub-task A of the Multilingual Offensive
Language Identification shared task (OffensEval 2020), which is a part of the
SemEval 2020. We show that combining CNN with BERT is better than using BERT on
its own, and we emphasize the importance of utilizing pre-trained language
models for downstream tasks. Our system ranked 4th with a macro-averaged
F1-score of 0.897 in Arabic, 4th with a score of 0.843 in Greek, and 3rd with a
score of 0.814 in Turkish. Additionally, we present ArabicBERT, a set of
pre-trained transformer language models for Arabic that we share with the
community.
| 2,020 | Computation and Language |
Public Sentiment Toward Solar Energy: Opinion Mining of Twitter Using a
Transformer-Based Language Model | Public acceptance and support for renewable energy are important determinants
of renewable energy policies and market conditions. This paper examines public
sentiment toward solar energy in the United States using data from Twitter, a
micro-blogging platform in which people post messages, known as tweets. We
filtered tweets specific to solar energy and performed a classification task
using Robustly optimized Bidirectional Encoder Representations from
Transformers (RoBERTa). Analyzing 71,262 tweets during the period of late
January to early July 2020, we find public sentiment varies significantly
across states. Within the study period, the Northeastern U.S. region shows more
positive sentiment toward solar energy than the Southern U.S. region does. Solar
radiation does not correlate to variation in solar sentiment across states. We
also find that public sentiment toward solar correlates to renewable energy
policy and market conditions, specifically, Renewable Portfolio Standards (RPS)
targets, customer-friendly net metering policies, and a mature solar market.
| 2,020 | Computation and Language |
NAYEL at SemEval-2020 Task 12: TF/IDF-Based Approach for Automatic
Offensive Language Detection in Arabic Tweets | In this paper, we present the system submitted to "SemEval-2020 Task 12". The
proposed system aims at automatically identifying offensive language in Arabic
tweets. A machine learning based approach has been used to design our system.
We implemented a linear classifier with Stochastic Gradient Descent (SGD) as
the optimization algorithm. Our model reported F1-scores of 84.20% and 81.82%
on the development set and test set, respectively. The best-performing system
and the lowest-ranked system reported F1-scores of 90.17% and 44.51% on the
test set, respectively.
| 2,020 | Computation and Language |
Linguistic Taboos and Euphemisms in Nepali | Languages across the world have words, phrases, and behaviors -- the taboos
-- that are avoided in public communication considering them as obscene or
disturbing to the social, religious, and ethical values of society. However,
people deliberately use these linguistic taboos and other language constructs
to make hurtful, derogatory, and obscene comments. It is nearly impossible to
construct a universal set of offensive or taboo terms because offensiveness is
determined entirely by different factors such as socio-physical setting,
speaker-listener relationship, and word choices. In this paper, we present a
detailed corpus-based study of offensive language in Nepali. We identify and
describe more than 18 different categories of linguistic offenses including
politics, religion, race, and sex. We discuss 12 common euphemisms such as
synonym, metaphor and circumlocution. In addition, we introduce a manually
constructed data set of over 1000 offensive and taboo terms popular among
contemporary speakers. This in-depth study of offensive language and resource
will provide a foundation for several downstream tasks such as offensive
language detection and language learning.
| 2,020 | Computation and Language |
Large Scale Subject Category Classification of Scholarly Papers with
Deep Attentive Neural Networks | Subject categories of scholarly papers generally refer to the knowledge
domain(s) to which the papers belong, examples being computer science or
physics. Subject category information can be used for building faceted search
for digital library search engines. This can significantly assist users in
narrowing down their search space of relevant documents. Unfortunately, many
academic papers do not have such information as part of their metadata.
Existing methods for solving this task usually focus on unsupervised learning
that often relies on citation networks. However, a complete list of papers
citing the current paper may not be readily available. In particular, new
papers that have few or no citations cannot be classified using such methods.
Here, we propose a deep attentive neural network (DANN) that classifies
scholarly papers using only their abstracts. The network is trained using 9
million abstracts from Web of Science (WoS). We also use the WoS schema that
covers 104 subject categories. The proposed network consists of two
bi-directional recurrent neural networks followed by an attention layer. We
compare our model against baselines by varying the architecture and text
representation. Our best model achieves a micro-F1 measure of 0.76, with F1 for
individual subject categories ranging from 0.50 to 0.95. The results show the
importance of retraining word embedding models to maximize the vocabulary
overlap and the effectiveness of the attention mechanism. The combination of
word vectors with TFIDF outperforms character and sentence level embedding
models. We discuss imbalanced samples and overlapping categories and suggest
possible strategies for mitigation. We also determine the subject category
distribution in CiteSeerX by classifying a random sample of one million
academic papers.
| 2,020 | Computation and Language |
Characterizing the Effect of Sentence Context on Word Meanings: Mapping
Brain to Behavior | Semantic feature models have become a popular tool for prediction and
interpretation of fMRI data. In particular, prior work has shown that
differences in the fMRI patterns in sentence reading can be explained by
context-dependent changes in the semantic feature representations of the words.
However, whether the subjects are aware of such changes and agree with them has
been an open question. This paper aims to answer this question through a
human-subject study. Subjects were asked to judge how words change from
their generic meanings when they are used in specific sentences. The
judgements were consistent with the model predictions well above chance. Thus,
the results support the hypothesis that word meanings change systematically
depending on sentence context.
| 2,021 | Computation and Language |
YNU-HPCC at SemEval-2020 Task 8: Using a Parallel-Channel Model for
Memotion Analysis | In recent years, the growing ubiquity of Internet memes on social media
platforms, such as Facebook, Instagram, and Twitter, has become a topic of
immense interest. However, the classification and recognition of memes is much
more complicated than that of social text since it involves visual cues and
language understanding. To address this issue, this paper proposed a
parallel-channel model to process the textual and visual information in memes
and then analyze the sentiment polarity of memes. In the shared task of
identifying and categorizing memes, we preprocess the dataset according to the
language behaviors on social media. Then, we adapt and fine-tune the
Bidirectional Encoder Representations from Transformers (BERT) model, and use
two types of convolutional neural network models (CNNs) to extract features
from the pictures. We applied an ensemble model that combined the BiLSTM,
BIGRU, and Attention models to perform cross domain suggestion mining. The
officially released results show that our system performs better than the
baseline algorithm. Our team ranked nineteenth in subtask A (Sentiment
Classification). The code of this paper is available at:
https://github.com/YuanLi95/Semveal2020-Task8-emotion-analysis.
| 2,020 | Computation and Language |
SalamNET at SemEval-2020 Task12: Deep Learning Approach for Arabic
Offensive Language Detection | This paper describes SalamNET, an Arabic offensive language detection system
that has been submitted to SemEval 2020 shared task 12: Multilingual Offensive
Language Identification in Social Media. Our approach focuses on applying
multiple deep learning models and conducting an in-depth error analysis of the results
to provide system implications for future development considerations. To pursue
our goal, a Recurrent Neural Network (RNN), a Gated Recurrent Unit (GRU), and
Long-Short Term Memory (LSTM) models with different design architectures have
been developed and evaluated. The SalamNET, a Bi-directional Gated Recurrent
Unit (Bi-GRU) based model, reports a macro-F1 score of 0.83.
| 2,020 | Computation and Language |
Emotion Correlation Mining Through Deep Learning Models on Natural
Language Text | Emotion analysis has been attracting researchers' attention. Most previous
works in the artificial intelligence field focus on recognizing emotion rather
than mining the reason why emotions are not or wrongly recognized. Correlation
among emotions contributes to the failure of emotion recognition. In this
paper, we try to fill the gap between emotion recognition and emotion
correlation mining through natural language text from web news. Correlation
among emotions, expressed as the confusion and evolution of emotion, is
primarily caused by human emotion cognitive bias. To mine emotion correlation
from emotion recognition through text, three kinds of features and two deep
neural network models are presented. The emotion confusion law is extracted
through orthogonal basis. The emotion evolution law is evaluated from three
perspectives, one-step shift, limited-step shifts, and shortest path transfer.
The method is validated using three datasets-the titles, the bodies, and the
comments of news articles, covering both objective and subjective texts in
varying lengths (long and short). The experimental results show that, in
subjective comments, emotions are easily mistaken for anger. Comments tend to
arouse emotion circulations of love-anger and sadness-anger. In objective news,
it is easy to recognize text emotion as love and cause fear-joy circulation.
That means, journalists may try to attract attention using fear and joy words
but arouse the emotion of love instead. After a news release, netizens generate
emotional comments to express their intense emotions, i.e., anger, sadness, and
love. These findings could provide insights for applications regarding
affective interaction such as network public sentiment, social media
communication, and human-computer interaction.
| 2,020 | Computation and Language |
Preparation of Sentiment tagged Parallel Corpus and Testing its effect
on Machine Translation | In the current work, we explore the enrichment in the machine translation
output when the training parallel corpus is augmented with the introduction of
sentiment analysis. The paper discusses the preparation of the same sentiment
tagged English-Bengali parallel corpus. The preparation of raw parallel corpus,
sentiment analysis of the sentences and the training of a Character Based
Neural Machine Translation model using the same has been discussed extensively
in this paper. The output of the translation model has been compared with a
base-line translation model using automated metrics such as BLEU and TER as
well as manually.
| 2,020 | Computation and Language |
BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual
statements with deep pre-trained language representation models | This paper describes BUT-FIT's submission at SemEval-2020 Task 5: Modelling
Causal Reasoning in Language: Detecting Counterfactuals. The challenge focused
on detecting whether a given statement contains a counterfactual (Subtask 1)
and extracting both antecedent and consequent parts of the counterfactual from
the text (Subtask 2). We experimented with various state-of-the-art language
representation models (LRMs). We found RoBERTa LRM to perform the best in both
subtasks. We achieved the first place in both exact match and F1 for Subtask 2
and ranked second for Subtask 1.
| 2,020 | Computation and Language |
ECNU-SenseMaker at SemEval-2020 Task 4: Leveraging Heterogeneous
Knowledge Resources for Commonsense Validation and Explanation | This paper describes our system for SemEval-2020 Task 4: Commonsense
Validation and Explanation (Wang et al., 2020). We propose a novel
Knowledge-enhanced Graph Attention Network (KEGAT) architecture for this task,
leveraging heterogeneous knowledge from both the structured knowledge base
(i.e. ConceptNet) and unstructured text to better improve the ability of a
machine in commonsense understanding. This model has a powerful commonsense
inference capability via utilizing suitable commonsense incorporation methods
and upgraded data augmentation techniques. In addition, an internal sharing
mechanism is incorporated to prevent our model from insufficient or excessive
commonsense reasoning. As a result, this model performs quite well in both
validation and explanation. For instance, it achieves state-of-the-art accuracy
in the subtask called Commonsense Explanation (Multi-Choice). We officially
name the system as ECNU-SenseMaker. Code is publicly available at
https://github.com/ECNU-ICA/ECNU-SenseMaker.
| 2,020 | Computation and Language |