Titles | Abstracts | Years | Categories
---|---|---|---|
A Study on Efficiency, Accuracy and Document Structure for Answer
Sentence Selection | An essential task of most Question Answering (QA) systems is to re-rank the
set of answer candidates, i.e., Answer Sentence Selection (A2S). These
candidates are typically sentences either extracted from one or more documents
preserving their natural order or retrieved by a search engine. Most
state-of-the-art approaches to the task use huge neural models, such as BERT,
or complex attentive architectures. In this paper, we argue that by exploiting
the intrinsic structure of the original rank together with an effective
word-relatedness encoder, we can achieve competitive results with respect to
the state of the art while retaining high efficiency. Our model takes 9.5
seconds to train on the WikiQA dataset, which is very fast in comparison with the
$\sim 18$ minutes required by a standard BERT-base fine-tuning.
| 2020 | Computation and Language |
Kleister: A novel task for Information Extraction involving Long
Documents with Complex Layout | State-of-the-art solutions for Natural Language Processing (NLP) are able to
capture a broad range of contexts, like the sentence-level context or
document-level context for short documents. However, these solutions still
struggle with longer, real-world documents in which information is encoded in
the spatial structure of the document: page elements such as tables, forms,
headers, openings, or footers; complex page layouts; or the presence of
multiple pages.
To encourage progress on deeper and more complex Information Extraction (IE),
we introduce a new task (named Kleister) with two new datasets. Utilizing both
textual and structural layout features, an NLP system must find the most
important information, about various types of entities, in long formal
documents. We propose a Pipeline method as a text-only baseline with different
Named Entity Recognition architectures (Flair, BERT, RoBERTa). Moreover, we
evaluated the most popular PDF processing tools for text extraction (pdf2djvu,
Tesseract, and Textract) in order to analyze the behavior of the IE system in the
presence of errors introduced by these tools.
| 2020 | Computation and Language |
RecipeGPT: Generative Pre-training Based Cooking Recipe Generation and
Evaluation System | Interest in the automatic generation of cooking recipes has been growing
steadily over the past few years thanks to the large number of cooking recipes
available online. We present RecipeGPT, a novel online recipe generation and evaluation
system. The system provides two modes of text generation: (1) instruction
generation from a given recipe title and ingredients; and (2) ingredient
generation from a recipe title and cooking instructions. Its back-end text
generation module comprises a generative pre-trained language model GPT-2
fine-tuned on a large cooking recipe dataset. Moreover, the recipe evaluation
module allows the users to conveniently inspect the quality of the generated
recipe contents and store the results for future reference. RecipeGPT can be
accessed online at https://recipegpt.org/.
| 2020 | Computation and Language |
SentenceMIM: A Latent Variable Language Model | SentenceMIM is a probabilistic auto-encoder for language data, trained with
Mutual Information Machine (MIM) learning to provide a fixed-length
representation of variable-length language observations (i.e., similar to a VAE).
Previous attempts to learn VAEs for language data faced challenges due to
posterior collapse. MIM learning encourages high mutual information between
observations and latent variables, and is robust against posterior collapse. As
such, it learns informative representations whose dimension can be an order of
magnitude higher than existing language VAEs. Importantly, the SentenceMIM loss
has no hyper-parameters, simplifying optimization. We compare SentenceMIM with
VAE and AE on multiple datasets. SentenceMIM yields excellent reconstruction,
comparable to AEs, with a rich structured latent space, comparable to VAEs. The
structured latent representation is demonstrated with interpolation between
sentences of different lengths. We demonstrate the versatility of sentenceMIM
by utilizing a trained model for question-answering and transfer learning,
without fine-tuning, outperforming VAE and AE with similar architectures.
| 2021 | Computation and Language |
Claim Check-Worthiness Detection as Positive Unlabelled Learning | As the first step of automatic fact checking, claim check-worthiness
detection is a critical component of fact checking systems. There are multiple
lines of research which study this problem: check-worthiness ranking from
political speeches and debates, rumour detection on Twitter, and citation
needed detection from Wikipedia. To date, there has been no structured
comparison of these various tasks to understand their relatedness, and no
investigation into whether or not a unified approach to all of them is
achievable. In this work, we illuminate a central challenge in claim
check-worthiness detection underlying all of these tasks: they hinge
on detecting both how factual a sentence is and how likely a sentence
is to be believed without verification. As a result, annotators only mark those
instances they judge to be clear-cut check-worthy. Our best performing method
is a unified approach which automatically corrects for this using a variant of
positive unlabelled learning that finds instances which were incorrectly
labelled as not check-worthy. In applying this, we outperform the state of the
art in two of the three tasks studied for claim check-worthiness detection in
English.
| 2020 | Computation and Language |
Zero-Shot Cross-Lingual Transfer with Meta Learning | Learning what to share between tasks has been a topic of great importance
recently, as strategic sharing of knowledge has been shown to improve
downstream task performance. This is particularly important for multilingual
applications, as most languages in the world are under-resourced. Here, we
consider the setting of training models on multiple different languages at the
same time, when little or no data is available for languages other than
English. We show that this challenging setup can be approached using
meta-learning, where, in addition to training a source language model, another
model learns to select which training instances are the most beneficial to the
first. We experiment using standard supervised, zero-shot cross-lingual, as
well as few-shot cross-lingual settings for different natural language
understanding tasks (natural language inference, question answering). Our
extensive experimental setup demonstrates the consistent effectiveness of
meta-learning for a total of 15 languages. We improve upon the state-of-the-art
for zero-shot and few-shot NLI (on MultiNLI and XNLI) and QA (on the MLQA
dataset). A comprehensive error analysis indicates that the correlation of
typological features between languages can partly explain when parameter
sharing learned via meta-learning is beneficial.
| 2020 | Computation and Language |
HypoNLI: Exploring the Artificial Patterns of Hypothesis-only Bias in
Natural Language Inference | Many recent studies have shown that for models trained on datasets for
natural language inference (NLI), it is possible to make correct predictions by
merely looking at the hypothesis while completely ignoring the premise. In this
work, we derive adversarial examples in terms of the hypothesis-only
bias and explore ways to mitigate such bias. Specifically, we extract
various phrases from the hypotheses (artificial patterns) in the training sets,
and show that they are strong indicators of the specific labels. We then
identify `hard' and `easy' instances from the original test sets whose labels
are opposite to or consistent with those indications. We also set up baselines
including both pretrained models (BERT, RoBERTa, XLNet) and competitive
non-pretrained models (InferSent, DAM, ESIM). Apart from the benchmark and
baselines, we also investigate two debiasing approaches which exploit the
artificial pattern modeling to mitigate such hypothesis-only bias:
down-sampling and adversarial training. We believe those methods can be treated
as competitive baselines in NLI debiasing tasks.
| 2021 | Computation and Language |
An Empirical Accuracy Law for Sequential Machine Translation: the Case
of Google Translate | In this research, we have established, through empirical testing, a law that
relates the number of translating hops to translation accuracy in sequential
machine translation in Google Translate. Both accuracy and size decrease with
the number of hops; the former displays a decrease closely following a power
law. Such a law allows one to predict the behavior of translation chains that
may be built as society increasingly depends on automated devices.
| 2020 | Computation and Language |
Distill, Adapt, Distill: Training Small, In-Domain Models for Neural
Machine Translation | We explore best practices for training small, memory efficient machine
translation models with sequence-level knowledge distillation in the domain
adaptation setting. While both domain adaptation and knowledge distillation are
widely-used, their interaction remains little understood. Our large-scale
empirical results in machine translation (on three language pairs with three
domains each) suggest distilling twice for best performance: once using
general-domain data and again using in-domain data with an adapted teacher.
| 2020 | Computation and Language |
What the [MASK]? Making Sense of Language-Specific BERT Models | Recently, Natural Language Processing (NLP) has witnessed impressive
progress in many areas, due to the advent of novel, pretrained contextual
representation models. In particular, Devlin et al. (2019) proposed a model,
called BERT (Bidirectional Encoder Representations from Transformers), which
enables researchers to obtain state-of-the-art performance on numerous NLP
tasks by fine-tuning the representations on their data set and task, without
the need for developing and training highly-specific architectures. The authors
also released multilingual BERT (mBERT), a model trained on a corpus of 104
languages, which can serve as a universal language model. This model obtained
impressive results on a zero-shot cross-lingual natural language inference task. Driven
by the potential of BERT models, the NLP community has started to investigate
and generate an abundant number of BERT models that are trained on a particular
language, and tested on a specific data domain and task. This allows us to
evaluate the true potential of mBERT as a universal language model, by
comparing it to the performance of these more specific models. This paper
presents the current state of the art in language-specific BERT models,
providing an overall picture with respect to different dimensions (i.e.
architectures, data domains, and tasks). Our aim is to provide an immediate and
straightforward overview of the commonalities and differences between
language-specific BERT models and mBERT. We also provide an
interactive and constantly updated website that can be used to explore the
information we have collected, at https://bertlang.unibocconi.it.
| 2020 | Computation and Language |
Neural Cross-Lingual Transfer and Limited Annotated Data for Named
Entity Recognition in Danish | Named Entity Recognition (NER) has greatly advanced by the introduction of
deep neural architectures. However, the success of these methods depends on
large amounts of training data. The scarcity of publicly-available
human-labeled datasets has resulted in limited evaluation of existing NER
systems, as is the case for Danish. This paper studies the effectiveness of
cross-lingual transfer for Danish, evaluates its complementarity to limited
gold data, and sheds light on the performance of Danish NER.
| 2019 | Computation and Language |
Automatic Compilation of Resources for Academic Writing and Evaluating
with Informal Word Identification and Paraphrasing System | We present the first approach to automatically building resources for
academic writing. The aim is to build a writing aid system that automatically
edits a text so that it better adheres to the academic style of writing. On top
of existing academic resources, such as the Corpus of Contemporary American
English (COCA) Academic Word List, the New Academic Word List, and the Academic
Collocation List, we also explore how to dynamically build such resources that
would be used to automatically identify informal or non-academic words or
phrases. The resources are compiled using different generic approaches that can
be extended for different domains and languages. We describe the evaluation of
resources with a system implementation. The system consists of an informal word
identification (IWI), academic candidate paraphrase generation, and paraphrase
ranking components. To generate candidates and rank them in context, we have
used the PPDB and WordNet paraphrase resources. We use the Concepts in Context
(CoInCO) "All-Words" lexical substitution dataset both for the informal word
identification and paraphrase generation experiments. Our informal word
identification component achieves an F-1 score of 82%, significantly
outperforming a stratified classifier baseline. The main contribution of this
work is a domain-independent methodology to build targeted resources for
writing aids.
| 2020 | Computation and Language |
EmpTransfo: A Multi-head Transformer Architecture for Creating
Empathetic Dialog Systems | Understanding emotions and responding accordingly is one of the biggest
challenges of dialog systems. This paper presents EmpTransfo, a multi-head
Transformer architecture for creating an empathetic dialog system. EmpTransfo
utilizes state-of-the-art pre-trained models (e.g., OpenAI-GPT) for language
generation, though models with different sizes can be used. We show that
utilizing the history of emotions and other metadata can improve the quality of
generated conversations by the dialog system. Our experimental results using a
challenging language corpus show that the proposed approach outperforms other
models in terms of Hit@1 and PPL (Perplexity).
| 2020 | Computation and Language |
S-APIR: News-based Business Sentiment Index | This paper describes our work on developing a new business sentiment index
using daily newspaper articles. We adopt a recurrent neural network (RNN) with
Gated Recurrent Units to predict the business sentiment of a given text. An RNN
is initially trained on the Economy Watchers Survey and then fine-tuned on news
texts for domain adaptation. Also, a one-class support vector machine is
applied to filter out texts deemed irrelevant to business sentiment. Moreover,
we propose a simple approach to temporally analyzing how much and when any
given factor influences the predicted business sentiment. The validity and
utility of the proposed approaches are empirically demonstrated through a
series of experiments on Nikkei Newspaper articles published from 2013 to 2018.
| 2020 | Computation and Language |
A Framework for the Computational Linguistic Analysis of Dehumanization | Dehumanization is a pernicious psychological process that often leads to
extreme intergroup bias, hate speech, and violence aimed at targeted social
groups. Despite these serious consequences and the wealth of available data,
dehumanization has not yet been computationally studied on a large scale.
Drawing upon social psychology research, we create a computational linguistic
framework for analyzing dehumanizing language by identifying linguistic
correlates of salient components of dehumanization. We then apply this
framework to analyze discussions of LGBTQ people in the New York Times from
1986 to 2015. Overall, we find increasingly humanizing descriptions of LGBTQ
people over time. However, we find that the label homosexual has emerged to be
much more strongly associated with dehumanizing attitudes than other labels,
such as gay. Our proposed techniques highlight processes of linguistic
variation and change in discourses surrounding marginalized groups.
Furthermore, the ability to analyze dehumanizing language at a large scale has
implications for automatically detecting and understanding media bias as well
as abusive language online.
| 2020 | Computation and Language |
A Corpus for Detecting High-Context Medical Conditions in Intensive Care
Patient Notes Focusing on Frequently Readmitted Patients | A crucial step within secondary analysis of electronic health records (EHRs)
is to identify the patient cohort under investigation. While EHRs contain
medical billing codes that aim to represent the conditions and treatments
patients may have, much of the information is only present in the patient
notes. Therefore, it is critical to develop robust algorithms to infer
patients' conditions and treatments from their written notes. In this paper, we
introduce a dataset for patient phenotyping, a task that is defined as the
identification of whether a patient has a given medical condition (also
referred to as clinical indication or phenotype) based on their patient note.
Nursing Progress Notes and Discharge Summaries from the Intensive Care Unit of
a large tertiary care hospital were manually annotated for the presence of
several high-context phenotypes relevant to treatment and risk of
re-hospitalization. This dataset contains 1102 Discharge Summaries and 1000
Nursing Progress Notes. Each Discharge Summary and Progress Note has been
annotated by at least two expert human annotators (one clinical researcher and
one resident physician). Annotated phenotypes include treatment non-adherence,
chronic pain, advanced/metastatic cancer, as well as 10 other phenotypes. This
dataset can be utilized for academic and industrial research in medicine and
computer science, particularly within the field of medical natural language
processing.
| 2020 | Computation and Language |
Parsing Thai Social Data: A New Challenge for Thai NLP | Dependency parsing (DP) is a task that analyzes text for syntactic structure
and relationships between words. DP is widely used to improve natural language
processing (NLP) applications in many languages such as English. Previous works
on DP are generally applicable to formally written languages. However, they do
not apply to informal languages such as the ones used in social networks.
Therefore, DP has to be researched and explored with such social network data.
In this paper, we explore and identify a DP model that is suitable for Thai
social network data, and then identify the appropriate linguistic
unit to use as input. The results showed that the transition-based model, the
improved Elkared dependency parser, outperformed the others with a UAS of 81.42%.
| 2020 | Computation and Language |
Improving Neural Named Entity Recognition with Gazetteers | The goal of this work is to improve the performance of a neural named entity
recognition system by adding input features that indicate a word is part of a
name included in a gazetteer. This article describes how to generate gazetteers
from the Wikidata knowledge graph as well as how to integrate the information
into a neural NER system. Experiments reveal that the approach yields
performance gains in two distinct languages: a high-resource, word-based
language, English, and a high-resource, character-based language, Chinese.
Experiments were also performed on a low-resource language, Russian, using a newly
annotated Russian NER corpus from Reddit tagged with four core types and twelve
extended types. This article reports a baseline score. It is a longer version
of a paper in the 33rd FLAIRS conference (Song et al. 2020).
| 2020 | Computation and Language |
Sensitive Data Detection and Classification in Spanish Clinical Text:
Experiments with BERT | Massive digital data processing provides a wide range of opportunities and
benefits, but at the cost of endangering personal data privacy. Anonymisation
consists in removing or replacing sensitive information from data, enabling its
exploitation for different purposes while preserving the privacy of
individuals. Over the years, many automatic anonymisation systems have been
proposed; however, depending on the type of data, the target language, or the
availability of training documents, the task still remains challenging. The
emergence of novel deep-learning models during the last two years has brought
large improvements to the state of the art in the field of Natural Language
Processing. These advancements have been most noticeably led by BERT, a model
proposed by Google in 2018, and the shared language models pre-trained on
millions of documents. In this paper, we use a BERT-based sequence labelling
model to conduct a series of anonymisation experiments on several clinical
datasets in Spanish. We also compare BERT to other algorithms. The experiments
show that a simple BERT-based model with general-domain pre-training obtains
highly competitive results without any domain specific feature engineering.
| 2020 | Computation and Language |
Morfessor EM+Prune: Improved Subword Segmentation with Expectation
Maximization and Pruning | Data-driven segmentation of words into subword units has been used in various
natural language processing applications such as automatic speech recognition
and statistical machine translation for almost 20 years. Recently it has become
more widely adopted, as models based on deep neural networks often benefit from
subword units even for morphologically simpler languages. In this paper, we
discuss and compare training algorithms for a unigram subword model, based on
the Expectation Maximization algorithm and lexicon pruning. Using English,
Finnish, North Sami, and Turkish data sets, we show that this approach is able
to find better solutions to the optimization problem defined by the Morfessor
Baseline model than its original recursive training algorithm. The improved
optimization also leads to higher morphological segmentation accuracy when
compared to a linguistic gold standard. We publish implementations of the new
algorithms in the widely-used Morfessor software package.
| 2020 | Computation and Language |
Is POS Tagging Necessary or Even Helpful for Neural Dependency Parsing? | In the pre-deep-learning era, part-of-speech tags have been considered
indispensable ingredients for feature engineering in dependency parsing. But
quite a few works focus on joint tagging and parsing models to avoid error
propagation. In contrast, recent studies suggest that POS tagging becomes much
less important or even useless for neural parsing, especially when using
character-based word representations. Yet there are not enough investigations
focusing on this issue, both empirically and linguistically. To answer this, we
design and compare three typical multi-task learning frameworks, i.e.,
Share-Loose, Share-Tight, and Stack, for joint tagging and parsing based on the
state-of-the-art biaffine parser. Considering that it is much cheaper to
annotate POS tags than parse trees, we also investigate the utilization of
large-scale heterogeneous POS tag data. We conduct experiments on both English
and Chinese datasets, and the results clearly show that POS tagging (both
homogeneous and heterogeneous) can still significantly improve parsing
performance when using the Stack joint framework. We conduct detailed analysis
and gain more insights from the linguistic aspect.
| 2020 | Computation and Language |
Practical Annotation Strategies for Question Answering Datasets | Annotating datasets for question answering (QA) tasks is very costly, as it
requires intensive manual labor and often domain-specific knowledge. Yet
strategies for annotating QA datasets in a cost-effective manner are scarce. To
provide a remedy for practitioners, our objective is to develop heuristic rules
for annotating a subset of questions, so that the annotation cost is reduced
while maintaining both in- and out-of-domain performance. For this, we conduct
a large-scale analysis in order to derive practical recommendations. First, we
demonstrate experimentally that additional training samples often contribute only to
higher in-domain test-set performance, but do not help the model in
generalizing to unseen datasets. Second, we develop a model-guided annotation
strategy: it makes a recommendation with regard to which subset of samples
should be annotated. Its effectiveness is demonstrated in a case study based on
domain customization of QA to a clinical setting. Here, remarkably, annotating
a stratified subset with only 1.2% of the original training set achieves 97.7%
of the performance obtained when the complete dataset is annotated. Hence, the
labeling effort can be reduced immensely. Altogether, our work addresses a
practical need when labeling budgets are limited and recommendations are thus
needed for annotating QA datasets more cost-effectively.
| 2020 | Computation and Language |
On the Role of Conceptualization in Commonsense Knowledge Graph
Construction | Commonsense knowledge graphs (CKGs) like Atomic and ASER are substantially
different from conventional KGs, as they consist of a much larger number of nodes
formed by loosely structured text. While this enables them to handle highly
diverse commonsense-related queries in natural language, it leads to unique
challenges for automatic KG construction methods. Besides identifying relations
between nodes that are absent from the KG, such methods are also expected to explore
absent nodes represented by text, in which different real-world things, or
entities, may appear. To deal with the innumerable entities involved with
commonsense in the real world, we introduce conceptualization to CKG construction
methods, i.e., viewing entities mentioned in text as instances of
specific concepts or vice versa. We build synthetic triples by
conceptualization, and further formulate the task as triple classification,
handled by a discriminatory model with knowledge transferred from pretrained
language models and fine-tuned by negative sampling. Experiments demonstrate
that our methods can effectively identify plausible triples and expand the KG
by triples of both new nodes and edges of high diversity and novelty.
| 2020 | Computation and Language |
Quality of Word Embeddings on Sentiment Analysis Tasks | Word embeddings or distributed representations of words are being used in
various applications like machine translation, sentiment analysis, topic
identification etc. The quality of word embeddings and the performance of their
applications depend on several factors like training method, corpus size, and
relevance. In this study we compare the performance of a dozen pretrained
word embedding models on lyrics sentiment analysis and movie review polarity
tasks. According to our results, the Twitter Tweets model is the best on lyrics sentiment
analysis, whereas Google News and Common Crawl are the top performers on movie
polarity analysis. GloVe-trained models slightly outperform those trained with
skip-gram. Also, factors like topic relevance and corpus size significantly
impact the quality of the models. When medium or large-sized text sets are
available, obtaining word embeddings from the same training dataset is usually the
best choice.
| 2020 | Computation and Language |
Distributional semantic modeling: a revised technique to train term/word
vector space models applying the ontology-related approach | We design a new technique for distributional semantic modeling with a
neural network-based approach to learning distributed term representations (or
term embeddings), yielding term vector space models as a result. The technique is
inspired by the recent ontology-related approach (using different types of
contextual knowledge such as syntactic knowledge, terminological knowledge,
semantic knowledge, etc.) to the identification of terms (term extraction) and
relations between them (relation extraction), called semantic pre-processing technology (SPT). Our
method relies on automatic term extraction from the natural language texts and
subsequent formation of the problem-oriented or application-oriented (also
deeply annotated) text corpora where the fundamental entity is the term
(includes non-compositional and compositional terms). This gives us an
opportunity to changeover from distributed word representations (or word
embeddings) to distributed term representations (or term embeddings). This
transition will allow us to generate more accurate semantic maps of different
subject domains (and of relations between input terms, which is useful for
exploring clusters and oppositions, or for testing hypotheses about them). The
semantic map can be represented as a graph using Vec2graph - a Python library
for visualizing word embeddings (term embeddings in our case) as dynamic and
interactive graphs. The Vec2graph library coupled with term embeddings will not
only improve accuracy in solving standard NLP tasks, but also update the
conventional concept of automated ontology development. The main practical
result of our work is the development kit (set of toolkits represented as web
service APIs and web application), which provides all necessary routines for
the basic linguistic pre-processing and the semantic pre-processing of the
natural language texts in Ukrainian for future training of term vector space
models.
| 2020 | Computation and Language |
NYTWIT: A Dataset of Novel Words in the New York Times | We present the New York Times Word Innovation Types dataset, or NYTWIT, a
collection of over 2,500 novel English words published in the New York Times
between November 2017 and March 2019, manually annotated for their class of
novelty (such as lexical derivation, dialectal variation, blending, or
compounding). We present baseline results for both uncontextual and contextual
prediction of novelty class, showing that there is room for improvement even
for state-of-the-art NLP systems. We hope this resource will prove useful for
linguists and NLP practitioners by providing a real-world environment of novel
word appearance.
| 2020 | Computation and Language |
Natural Language QA Approaches using Reasoning with External Knowledge | Question answering (QA) in natural language (NL) has been an important aspect
of AI from its early days. Winograd's ``councilmen'' example in his 1972 paper
and McCarthy's Mr. Hug example of 1976 highlight the role of external
knowledge in NL understanding. While Machine Learning has been the go-to
approach in NL processing as well as NL question answering (NLQA) for the last
30 years, recently there has been an increasingly emphasized thread on NLQA
where external knowledge plays an important role. The challenges inspired by
Winograd's councilmen example, and recent developments such as the Rebooting AI
book, various NLQA datasets, research on knowledge acquisition in the NLQA
context, and their use in various NLQA models have brought the issue of NLQA
using ``reasoning'' with external knowledge to the forefront. In this paper, we
present a survey of the recent work on them. We believe our survey will help
establish a bridge between multiple fields of AI, especially between (a) the
traditional fields of knowledge representation and reasoning and (b) the field
of NL understanding and NLQA.
| 2020 | Computation and Language |
Synthetic Error Dataset Generation Mimicking Bengali Writing Pattern | While writing Bengali using an English keyboard, users often make spelling
mistakes. The accuracy of any Bengali spell checker or paragraph correction
module largely depends on the kind of error dataset it is based on. Manual
generation of such an error dataset is a cumbersome process. In this research, we
present an algorithm for automatically generating misspelled Bengali words from
correct words by analyzing Bengali writing patterns on a QWERTY-layout
English keyboard. As part of our analysis, we have formed a list of the most
commonly used Bengali words, phonetically similar replaceable clusters,
frequently mispressed replaceable clusters, frequently mispressed insertion-prone
clusters, and some rules for handling Juktakkhar (consonant letter clusters)
while generating errors.
| 2020 | Computation and Language |
A Post-processing Method for Detecting Unknown Intent of Dialogue System
via Pre-trained Deep Neural Network Classifier | With the maturity and popularity of dialogue systems, detecting user's
unknown intent in dialogue systems has become an important task. It is also one
of the most challenging tasks since we can hardly get examples, prior knowledge
or the exact numbers of unknown intents. In this paper, we propose SofterMax
and deep novelty detection (SMDN), a simple yet effective post-processing
method for detecting unknown intent in dialogue systems based on pre-trained
deep neural network classifiers. Our method can be flexibly applied on top of
any classifiers trained in deep neural networks without changing the model
architecture. We calibrate the confidence of the softmax outputs to compute the
calibrated confidence score (i.e., SofterMax) and use it to calculate the
decision boundary for unknown intent detection. Furthermore, we feed the
feature representations learned by the deep neural networks into a traditional
novelty detection algorithm to detect unknown intents from different
perspectives. Finally, we combine the methods above to perform the joint
prediction. Our method classifies examples that differ from known intents as
unknown and does not require any examples or prior knowledge of it. We have
conducted extensive experiments on three benchmark dialogue datasets. The
results show that our method can yield significant improvements compared with
the state-of-the-art baselines.
| 2020 | Computation and Language |
ECSP: A New Task for Emotion-Cause Span-Pair Extraction and
Classification | Emotion cause analysis such as emotion cause extraction (ECE) and
emotion-cause pair extraction (ECPE) have gradually attracted the attention of
many researchers. However, there are still two shortcomings in the existing
research: 1) in most cases, the emotion expression and its cause are not whole
clauses but spans within clauses, so extracting clause pairs rather than
span pairs greatly limits applications in real-world scenarios; 2) it is
not enough to extract the emotion expression clause without identifying the
emotion category, since the presence of an emotion clause does not necessarily convey
emotional information explicitly due to different possible causes. In this
paper, we propose a new task: Emotion-Cause Span-Pair extraction and
classification (ECSP), which aims to extract the potential span-pair of emotion
and corresponding causes in a document, and make emotion classification for
each pair. In the new ECSP task, ECE and ECPE can be regarded as two special
cases at the clause-level. We propose a span-based extract-then-classify (ETC)
model, where emotion and cause are directly extracted and paired from the
document under the supervision of target span boundaries, and corresponding
categories are then classified using their pair representations and localized
context. Experiments show that our proposed ETC model outperforms the SOTA
models of the ECE and ECPE tasks, respectively, and achieves reasonable results on the ECSP
task.
| 2020 | Computation and Language |
Automatic Recognition of the General-Purpose Communicative Functions
defined by the ISO 24617-2 Standard for Dialog Act Annotation | ISO 24617-2, the standard for dialog act annotation, defines a hierarchically
organized set of general-purpose communicative functions. The automatic
recognition of these functions, although practically unexplored, is relevant
for a dialog system, since they provide cues regarding the intention behind the
segments and how they should be interpreted. We explore the recognition of
general-purpose communicative functions in the DialogBank, which is a reference
set of dialogs annotated according to this standard. To do so, we propose
adaptations of existing approaches to flat dialog act recognition that allow
them to deal with the hierarchical classification problem. More specifically,
we propose the use of a hierarchical network with cascading outputs and maximum
a posteriori path estimation to predict the communicative function at each
level of the hierarchy, preserve the dependencies between the functions in the
path, and decide at which level to stop. Furthermore, since the amount of
dialogs in the DialogBank is reduced, we rely on transfer learning processes to
reduce overfitting and improve performance. The results of our experiments show
that the hierarchical approach outperforms a flat one and that each of its
components plays an important role towards the recognition of general-purpose
communicative functions.
| 2021 | Computation and Language |
Generating Emotionally Aligned Responses in Dialogues using Affect
Control Theory | State-of-the-art neural dialogue systems excel at syntactic and semantic
modelling of language, but often have a hard time establishing emotional
alignment with the human interactant during a conversation. In this work, we
bring Affect Control Theory (ACT), a socio-mathematical model of emotions for
human-human interactions, to the neural dialogue generation setting. ACT makes
predictions about how humans respond to emotional stimuli in social situations.
Due to this property, ACT and its derivative probabilistic models have been
successfully deployed in several applications of Human-Computer Interaction,
including empathetic tutoring systems, assistive healthcare devices and
two-person social dilemma games. We investigate how ACT can be used to develop
affect-aware neural conversational agents, which produce emotionally aligned
responses to prompts and take into consideration the affective identities of
the interactants.
| 2020 | Computation and Language |
Discovering linguistic (ir)regularities in word embeddings through
max-margin separating hyperplanes | We experiment with new methods for learning how related words are positioned
relative to each other in word embedding spaces. Previous approaches learned
constant vector offsets: vectors that point from source tokens to target tokens
with an assumption that these offsets were parallel to each other. We show that
the offsets between related tokens are closer to orthogonal than parallel, and
that they have low cosine similarities. We proceed by making a different
assumption; target tokens are linearly separable from source and un-labeled
tokens. We show that a max-margin hyperplane can separate target tokens and
that vectors orthogonal to this hyperplane represent the relationship between
source and targets. We find that this representation of the relationship
obtains the best results in discovering linguistic regularities. We experiment
with vector space models trained by a variety of algorithms (Word2vec:
CBOW/skip-gram, fastText, or GloVe), and various word context choices such as
linear word-order, syntax dependency grammars, and with and without knowledge
of word position. These experiments show that our model, SVMCos, is robust to a
range of experimental choices when training word embeddings.
| 2020 | Computation and Language |
Multi-task Learning Based Neural Bridging Reference Resolution | We propose a multi-task learning-based neural model for resolving bridging
references, tackling two key challenges. The first challenge is the lack of
large corpora annotated with bridging references. To address this, we use
multi-task learning to help bridging reference resolution with coreference
resolution. We show that substantial improvements of up to 8 p.p. can be
achieved on full bridging resolution with this architecture. The second
challenge is the different definitions of bridging used in different corpora,
meaning that hand-coded systems or systems using special features designed for
one corpus do not work well with other corpora. Our neural model only uses a
small number of corpus-independent features and thus can be applied to different
corpora. Evaluations with very different bridging corpora (ARRAU, ISNOTES,
BASHI and SCICORP) suggest that our architecture works equally well on all
corpora, and achieves the SoTA results on full bridging resolution for all
corpora, outperforming the best reported results by up to 36.3 p.p.
| 2020 | Computation and Language |
The growing amplification of social media: Measuring temporal and social
contagion dynamics for over 150 languages on Twitter for 2009-2020 | Working from a dataset of 118 billion messages running from the start of 2009
to the end of 2019, we identify and explore the relative daily use of over 150
languages on Twitter. We find that eight languages comprise 80% of all tweets,
with English, Japanese, Spanish, and Portuguese being the most dominant. To
quantify social spreading in each language over time, we compute the 'contagion
ratio': The balance of retweets to organic messages. We find that for the most
common languages on Twitter there is a growing tendency, though not universal,
to retweet rather than share new content. By the end of 2019, the contagion
ratios for half of the top 30 languages, including English and Spanish, had
reached above 1 -- the naive contagion threshold. In 2019, the top 5 languages
with the highest average daily ratios were, in order, Thai (7.3), Hindi, Tamil,
Urdu, and Catalan, while the bottom 5 were Russian, Swedish, Esperanto,
Cebuano, and Finnish (0.26). Further, we show that over time, the contagion
ratios for most common languages are growing more strongly than those of rare
languages.
| 2021 | Computation and Language |
Investigating the Decoders of Maximum Likelihood Sequence Models: A
Look-ahead Approach | We demonstrate how we can practically incorporate multi-step future
information into a decoder of maximum likelihood sequence models. We propose a
"k-step look-ahead" module to consider the likelihood information of a rollout
up to k steps. Unlike other approaches that need to train another value network
to evaluate the rollouts, we can directly apply this look-ahead module to
improve the decoding of any sequence model trained in a maximum likelihood
framework. We evaluate our look-ahead module on three datasets of varying
difficulties: IM2LATEX-100k OCR image to LaTeX, WMT16 multimodal machine
translation, and WMT14 machine translation. Our look-ahead module improves the
performance of the simpler datasets such as IM2LATEX-100k and WMT16 multimodal
machine translation. However, the improvement on the more difficult dataset
(e.g., containing longer sequences), WMT14 machine translation, becomes
marginal. Our further investigation using the k-step look-ahead suggests that
the more difficult tasks suffer from the overestimated EOS (end-of-sentence)
probability. We argue that the overestimated EOS probability also causes the
decreased performance of beam search when increasing its beam width. We tackle
the EOS problem by integrating an auxiliary EOS loss into the training to
estimate if the model should emit EOS or other words. Our experiments show that
improving EOS estimation not only increases the performance of our proposed
look-ahead module but also the robustness of the beam search.
| 2020 | Computation and Language |
Pseudo Labeling and Negative Feedback Learning for Large-scale
Multi-label Domain Classification | In large-scale domain classification, an utterance can be handled by multiple
domains with overlapped capabilities. However, only a limited number of
ground-truth domains are provided for each training utterance in practice while
knowing as many as correct target labels is helpful for improving the model
performance. In this paper, given one ground-truth domain for each training
utterance, we regard domains consistently predicted with the highest
confidences as additional pseudo labels for the training. In order to reduce
prediction errors due to incorrect pseudo labels, we leverage utterances with
negative system responses to decrease the confidences of the incorrectly
predicted domains. Evaluating on user utterances from an intelligent
conversational system, we show that the proposed approach significantly
improves the performance of domain classification with hypothesis reranking.
| 2020 | Computation and Language |
Keeping it simple: Implementation and performance of the proto-principle
of adaptation and learning in the language sciences | In this paper we present the Widrow-Hoff rule and its applications to
language data. After contextualizing the rule historically and placing it in
the chain of neurally inspired artificial learning models, we explain its
rationale and implementational considerations. Using a number of case studies
we illustrate how the Widrow-Hoff rule offers unexpected opportunities for the
computational simulation of a range of language phenomena that make it possible
to approach old problems from a novel perspective.
| 2021 | Computation and Language |
Shallow Discourse Annotation for Chinese TED Talks | Text corpora annotated with language-related properties are an important
resource for the development of Language Technology. The current work
contributes a new resource for Chinese Language Technology and for
Chinese-English translation, in the form of a set of TED talks (some originally
given in English, some in Chinese) that have been annotated with discourse
relations in the style of the Penn Discourse TreeBank, adapted to properties of
Chinese text that are not present in English. The resource is currently unique
in annotating discourse-level properties of planned spoken monologues rather
than of written text. An inter-annotator agreement study demonstrates that the
annotation scheme is able to achieve highly reliable results.
| 2020 | Computation and Language |
Sentence Analogies: Exploring Linguistic Relationships and Regularities
in Sentence Embeddings | While important properties of word vector representations have been studied
extensively, far less is known about the properties of sentence vector
representations. Word vectors are often evaluated by assessing to what degree
they exhibit regularities with regard to relationships of the sort considered
in word analogies. In this paper, we investigate to what extent commonly used
sentence vector representation spaces also reflect certain kinds of
regularities. We propose a number of schemes to induce evaluation data, based
on lexical analogy data as well as semantic relationships between sentences.
Our experiments consider a wide range of sentence embedding methods, including
ones based on BERT-style contextual embeddings. We find that different models
differ substantially in their ability to reflect such regularities.
| 2020 | Computation and Language |
A Multi-Source Entity-Level Sentiment Corpus for the Financial Domain:
The FinLin Corpus | We introduce FinLin, a novel corpus containing investor reports, company
reports, news articles, and microblogs from StockTwits, targeting multiple
entities stemming from the automobile industry and covering a 3-month period.
FinLin was annotated with a sentiment score and a relevance score in the range
[-1.0, 1.0] and [0.0, 1.0], respectively. The annotations also include the text
spans selected for the sentiment, thus providing additional insight into the
annotators' reasoning. Overall, FinLin aims to complement the current knowledge
by providing a novel and publicly available financial sentiment corpus and to
foster research on the topic of financial sentiment analysis and potential
applications in behavioural science.
| 2020 | Computation and Language |
An Empirical Investigation of Pre-Trained Transformer Language Models
for Open-Domain Dialogue Generation | We present an empirical investigation of pre-trained Transformer-based
auto-regressive language models for the task of open-domain dialogue
generation. The training paradigm of pre-training and fine-tuning is employed to
conduct the parameter learning. Corpora of News and Wikipedia in Chinese and
English are collected for the pre-training stage respectively. Dialogue context
and response are concatenated into a single sequence utilized as the input of
the models during the fine-tuning stage. A weighted joint prediction paradigm
for both context and response is designed to evaluate the performance of models
with or without the loss term for context prediction. Various decoding
strategies such as greedy search, beam search, top-k sampling, etc. are
employed to conduct the response text generation. Extensive experiments are
conducted on the typical single-turn and multi-turn dialogue corpora such as
Weibo, Douban, Reddit, DailyDialog, and Persona-Chat. Detailed numbers of
automatic evaluation metrics on relevance and diversity of the generated
results for the language models as well as the baseline approaches are
reported.
| 2020 | Computation and Language |
GenNet : Reading Comprehension with Multiple Choice Questions using
Generation and Selection model | Multiple-choice machine reading comprehension is a difficult task, as it
requires machines to select the correct option from a set of candidate
options using the given passage and question. The Reading Comprehension
with Multiple Choice Questions task requires a human (or machine) to read a
given passage-question pair and select the best option from n given
options. There are two different ways to select the correct answer from the
given passage: either by selecting the best-matching answer or by eliminating the
worst-matching answers. Here we propose GenNet, a neural network-based
model. In this model, we first generate an answer to the question from the
passage and then match the generated answer against the given options; the best-matched
option is our answer. For answer generation we use the S-net (Tan et
al., 2017) model trained on SQuAD, and to evaluate our model we use the large-scale
RACE (ReAding Comprehension Dataset From Examinations) dataset (Lai et al., 2017).
| 2020 | Computation and Language |
Combining Pretrained High-Resource Embeddings and Subword
Representations for Low-Resource Languages | The contrast between the need for large amounts of data for current Natural
Language Processing (NLP) techniques, and the lack thereof, is accentuated in
the case of African languages, most of which are considered low-resource. To
help circumvent this issue, we explore techniques exploiting the qualities of
morphologically rich languages (MRLs), while leveraging pretrained word vectors
in well-resourced languages. In our exploration, we show that a meta-embedding
approach combining both pretrained and morphologically-informed word embeddings
performs best in the downstream task of Xhosa-English translation.
| 2020 | Computation and Language |
A Framework for Evaluation of Machine Reading Comprehension Gold
Standards | Machine Reading Comprehension (MRC) is the task of answering a question over
a paragraph of text. While neural MRC systems gain popularity and achieve
noticeable performance, issues are being raised with the methodology used to
establish their performance, particularly concerning the data design of gold
standards that are used to evaluate them. There is but a limited understanding
of the challenges present in this data, which makes it hard to draw comparisons
and formulate reliable hypotheses. As a first step towards alleviating the
problem, this paper proposes a unifying framework to systematically investigate
the present linguistic features, required reasoning and background knowledge
and factual correctness on one hand, and the presence of lexical cues as a
lower bound for the requirement of understanding on the other hand. We propose
a qualitative annotation schema for the former and a set of approximative
metrics for the latter. In a first application of the framework, we analyse
modern MRC gold standards and present our findings: the absence of features
that contribute towards lexical ambiguity, the varying factual correctness of
the expected answers and the presence of lexical cues, all of which potentially
lower the reading comprehension complexity and quality of the evaluation data.
| 2020 | Computation and Language |
Learning to Respond with Stickers: A Framework of Unifying
Multi-Modality in Multi-Turn Dialog | Stickers with vivid and engaging expressions are becoming increasingly
popular in online messaging apps, and some works are dedicated to automatically
selecting sticker responses by matching the text labels of stickers with previous
utterances. However, due to their large quantities, it is impractical to
require text labels for all the stickers. Hence, in this paper, we propose to
recommend an appropriate sticker to the user based on multi-turn dialog context
history without any external labels. Two main challenges are confronted in this
task. One is to learn the semantic meaning of stickers without corresponding text
labels. Another challenge is to jointly model the candidate sticker with the
multi-turn dialog context. To tackle these challenges, we propose a sticker
response selector (SRS) model. Specifically, SRS first employs a convolutional
based sticker image encoder and a self-attention based multi-turn dialog
encoder to obtain the representations of stickers and utterances. Next, a deep
interaction network is proposed to conduct deep matching between the sticker
and each utterance in the dialog history. SRS then learns the short-term and
long-term dependencies between all interaction results by a fusion network to
output the final matching score. To evaluate our proposed method, we
collect a large-scale real-world dialog dataset with stickers from one of the
most popular online chatting platforms. Extensive experiments conducted on this
dataset show that our model achieves the state-of-the-art performance for all
commonly-used metrics. Experiments also verify the effectiveness of each
component of SRS. To facilitate further research in sticker selection field, we
release this dataset of 340K multi-turn dialog and sticker pairs.
| 2020 | Computation and Language |
Efficient Intent Detection with Dual Sentence Encoders | Building conversational systems in new domains and with added functionality
requires resource-efficient models that work under low-data regimes (i.e., in
few-shot setups). Motivated by these requirements, we introduce intent
detection methods backed by pretrained dual sentence encoders such as USE and
ConveRT. We demonstrate the usefulness and wide applicability of the proposed
intent detectors, showing that: 1) they outperform intent detectors based on
fine-tuning the full BERT-Large model or using BERT as a fixed black-box
encoder on three diverse intent detection data sets; 2) the gains are
especially pronounced in few-shot setups (i.e., with only 10 or 30 annotated
examples per intent); 3) our intent detectors can be trained in a matter of
minutes on a single CPU; and 4) they are stable across different hyperparameter
settings. In hope of facilitating and democratizing research focused on
intention detection, we release our code, as well as a new challenging
single-domain intent detection dataset comprising 13,083 annotated examples
over 77 intents.
| 2020 | Computation and Language |
Undersensitivity in Neural Reading Comprehension | Current reading comprehension models generalise well to in-distribution test
sets, yet perform poorly on adversarially selected inputs. Most prior work on
adversarial inputs studies oversensitivity: semantically invariant text
perturbations that cause a model's prediction to change when it should not. In
this work we focus on the complementary problem: excessive prediction
undersensitivity, where input text is meaningfully changed but the model's
prediction does not, even though it should. We formulate a noisy adversarial
attack which searches among semantic variations of the question for which a
model erroneously predicts the same answer, and with even higher probability.
Despite comprising unanswerable questions, both SQuAD2.0 and NewsQA models are
vulnerable to this attack. This indicates that although accurate, models tend
to rely on spurious patterns and do not fully consider the information
specified in a question. We experiment with data augmentation and adversarial
training as defences, and find that both substantially decrease vulnerability
to attacks on held out data, as well as held out attack spaces. Addressing
undersensitivity also improves results on AddSent and AddOneSent, and models
furthermore generalise better when facing train/evaluation distribution
mismatch: they are less prone to overly rely on predictive cues present only in
the training set, and outperform a conventional model by as much as 10.9% F1.
| 2,020 | Computation and Language |
Video Caption Dataset for Describing Human Actions in Japanese | In recent years, automatic video caption generation has attracted
considerable attention. This paper focuses on the generation of Japanese
captions for describing human actions. While most currently available video
caption datasets have been constructed for English, there is no equivalent
Japanese dataset. To address this, we constructed a large-scale Japanese video
caption dataset consisting of 79,822 videos and 399,233 captions. Each caption
in our dataset describes a video in the form of "who does what and where." To
describe human actions, it is important to identify the details of a person,
place, and action. Indeed, when we describe human actions, we usually mention
the scene, person, and action. In our experiments, we evaluated two caption
generation methods to obtain benchmark results. Further, we investigated
whether those generation methods could specify "who does what and where."
| 2,020 | Computation and Language |
Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual
Lexical Semantic Similarity | We introduce Multi-SimLex, a large-scale lexical resource and evaluation
benchmark covering datasets for 12 typologically diverse languages, including
major languages (e.g., Mandarin Chinese, Spanish, Russian) as well as
less-resourced ones (e.g., Welsh, Kiswahili). Each language dataset is
annotated for the lexical relation of semantic similarity and contains 1,888
semantically aligned concept pairs, providing a representative coverage of word
classes (nouns, verbs, adjectives, adverbs), frequency ranks, similarity
intervals, lexical fields, and concreteness levels. Additionally, owing to the
alignment of concepts across languages, we provide a suite of 66 cross-lingual
semantic similarity datasets. Due to its extensive size and language coverage,
Multi-SimLex provides entirely novel opportunities for experimental evaluation
and analysis. On its monolingual and cross-lingual benchmarks, we evaluate and
analyze a wide array of recent state-of-the-art monolingual and cross-lingual
representation models, including static and contextualized word embeddings
(such as fastText, M-BERT and XLM), externally informed lexical
representations, as well as fully unsupervised and (weakly) supervised
cross-lingual word embeddings. We also present a step-by-step dataset creation
protocol for creating consistent, Multi-SimLex-style resources for additional
languages. We make these contributions -- the public release of Multi-SimLex
datasets, their creation protocol, strong baseline results, and in-depth
analyses which can be helpful in guiding future developments in multilingual
lexical semantics and representation learning -- available via a website which
will encourage community effort in the further expansion of Multi-SimLex to many
more languages. Such a large-scale semantic resource could inspire significant
further advances in NLP across languages.
| 2,020 | Computation and Language |
KryptoOracle: A Real-Time Cryptocurrency Price Prediction Platform Using
Twitter Sentiments | Cryptocurrencies, such as Bitcoin, are becoming increasingly popular, having
been widely used as an exchange medium in areas such as financial transactions
and asset transfer verification. However, there has been a lack of solutions
that can support real-time price prediction to cope with high currency
volatility, handle massive heterogeneous data volumes, including social media
sentiments, while supporting fault tolerance and persistence in real time, and
provide real-time adaptation of learning algorithms to cope with new price and
sentiment data. In this paper we introduce KryptoOracle, a novel real-time and
adaptive cryptocurrency price prediction platform based on Twitter sentiments.
The integrative and modular platform is based on (i) a Spark-based architecture
which handles the large volume of incoming data in a persistent and fault
tolerant way; (ii) an approach that supports sentiment analysis which can
respond to large amounts of natural language processing queries in real time;
and (iii) a predictive method grounded on online learning in which a model
adapts its weights to cope with new prices and sentiments. Besides providing an
architectural design, the paper also describes the KryptoOracle platform
implementation and experimental evaluation. Overall, the proposed platform can
help accelerate decision-making, uncover new opportunities and provide more
timely insights based on the available and ever-larger financial data volume
and variety.
| 2,020 | Computation and Language |
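As a rough illustration of the online-learning component only, the sketch below incrementally updates a linear model as new sentiment observations arrive. The simulated data stream, feature choice, and model are assumptions made for demonstration and do not reflect the KryptoOracle implementation.

```python
# Hypothetical sketch: online adaptation of a price-return model to streaming sentiment.
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.01)
rng = np.random.RandomState(0)

for _ in range(1000):                      # simulated stream of observations
    sentiment = rng.uniform(-1, 1)         # aggregated Twitter sentiment (made up)
    next_return = 0.02 * sentiment + rng.randn() * 0.005
    model.partial_fit([[sentiment]], [next_return])   # adapt weights to the newest data

# Expected next-step return given a strongly positive sentiment reading.
print(model.predict([[0.8]]))
```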
Aspect Term Extraction using Graph-based Semi-Supervised Learning | Aspect based Sentiment Analysis is a major subarea of sentiment analysis.
Many supervised and unsupervised approaches have been proposed in the past for
detecting and analyzing the sentiment of aspect terms. In this paper, a
graph-based semi-supervised learning approach for aspect term extraction is
proposed. In this approach, every identified token in the review document is
classified as an aspect or non-aspect term from a small set of labeled tokens
using the label spreading algorithm. k-Nearest Neighbor (kNN) graph
sparsification is employed in the proposed approach to make it more time- and
memory-efficient. The proposed work is further extended to determine the
polarity of the opinion words associated with the identified aspect terms in
review sentences to generate a visual aspect-based summary of review documents.
The experimental study is conducted on benchmark and crawled datasets of
restaurant and laptop domains with varying numbers of labeled instances. The
results show that the proposed approach achieves good results in terms of
Precision, Recall and Accuracy with limited availability of labeled data.
| 2,020 | Computation and Language |
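A minimal sketch of the core mechanism described above: label spreading over a kNN similarity graph, where most tokens are unlabeled. The token features, labels, and hyperparameters below are invented for illustration; in practice the rows would be feature vectors (e.g., embeddings) of tokens from review documents.

```python
# Graph-based semi-supervised tagging of tokens as aspect (1) / non-aspect (0).
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.RandomState(0)
X = rng.randn(200, 50)            # 200 tokens, 50-dimensional features (assumed)
y = np.full(200, -1)              # -1 marks unlabeled tokens
y[:10] = 1                        # a few tokens labeled as aspect terms
y[10:20] = 0                      # a few tokens labeled as non-aspect terms

# The kNN kernel keeps the similarity graph sparse, which saves time and memory.
model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
model.fit(X, y)

predicted = model.transduction_   # a label for every token, including unlabeled ones
print(predicted[:30])
```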
A Dataset Independent Set of Baselines for Relation Prediction in
Argument Mining | Argument Mining is the research area which aims at extracting argument
components and predicting argumentative relations (i.e., support and attack)
from text. In particular, numerous approaches have been proposed in the
literature to predict the relations holding between the arguments, and
application-specific annotated resources were built for this purpose. Despite
the fact that these resources have been created to experiment on the same task,
the definition of a single relation prediction method to be successfully
applied to a significant portion of these datasets is an open research problem
in Argument Mining. This means that none of the methods proposed in the
literature can be easily ported from one resource to another. In this paper, we
address this problem by proposing a set of dataset independent strong neural
baselines which obtain homogeneous results on all the datasets proposed in the
literature for the argumentative relation prediction task. Thus, our baselines
can be employed by the Argument Mining community to compare more effectively
how well a method performs on the argumentative relation prediction task.
| 2,020 | Computation and Language |
A Comparative Study of Sequence Classification Models for Privacy Policy
Coverage Analysis | Privacy policies are legal documents that describe how a website will
collect, use, and distribute a user's data. Unfortunately, such documents are
often overly complicated and filled with legal jargon, making it difficult for
users to fully grasp what exactly is being collected and why. Our solution to
this problem is to provide users with a coverage analysis of a given website's
privacy policy using a wide range of classical machine learning and deep
learning techniques. Given a website's privacy policy, the classifier
identifies the associated data practice for each logical segment. These data
practices/labels are taken directly from the OPP-115 corpus. For example, the
data practice "Data Retention" refers to how long a website stores a user's
information. The coverage analysis allows users to determine how many of the
ten possible data practices are covered, along with identifying the sections
that correspond to the data practices of particular interest.
| 2,020 | Computation and Language |
Localized Flood Detection With Minimal Labeled Social Media Data Using
Transfer Learning | Social media generates an enormous amount of data on a daily basis but it is
very challenging to effectively utilize the data without annotating or labeling
it according to the target application. We investigate the problem of localized
flood detection using the social sensing model (Twitter) in order to provide an
efficient, reliable and accurate flood text classification model with minimal
labeled data. This study is important since it can immensely help in providing
the flood-related updates and notifications to the city officials for emergency
decision making, rescue operations, and early warnings, etc. We propose to
perform the text classification using an inductive transfer learning method,
i.e., the pre-trained language model ULMFiT, which we fine-tune in order to effectively
classify the flood-related feeds in any new location. Finally, we show that
using very little new labeled data in the target domain we can successfully
build an efficient and high performing model for flood detection and analysis
with human-generated facts and observations from Twitter.
| 2,020 | Computation and Language |
Transformer++ | Recent advancements in attention mechanisms have replaced recurrent neural
networks and their variants for machine translation tasks. The Transformer, using
the attention mechanism alone, achieved state-of-the-art results in sequence
modeling. Neural machine translation based on the attention mechanism is
parallelizable and addresses the problem of handling long-range dependencies
among words in sentences more effectively than recurrent neural networks. One
of the key concepts in attention is to learn three matrices, query, key, and
value, where global dependencies among words are learned through linearly
projecting word embeddings through these matrices. Multiple query, key, and value
matrices can be learned simultaneously, each focusing on a different subspace of the
embedding dimension, which is called multi-head attention in the Transformer. We argue that
certain dependencies among words could be learned better through an
intermediate context than directly modeling word-word dependencies. This could
happen due to the nature of certain dependencies or a lack of patterns that makes
them difficult to model globally using multi-head self-attention. In this
work, we propose a new way of learning dependencies through a context in
multi-head using convolution. This new form of multi-head attention along with
the traditional form achieves better results than Transformer on the WMT 2014
English-to-German and English-to-French translation tasks. We also introduce a
framework to learn POS tagging and NER information during the training of
encoder which further improves results achieving a new state-of-the-art of 32.1
BLEU, better than existing best by 1.4 BLEU, on the WMT 2014 English-to-German
and 44.6 BLEU, better than existing best by 1.1 BLEU, on the WMT 2014
English-to-French translation tasks. We call this Transformer++.
| 2,020 | Computation and Language |
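The PyTorch sketch below illustrates the general idea of pairing standard multi-head self-attention with a convolution-derived context signal. The module structure, dimensions, and the way the two branches are mixed are assumptions made for illustration; they do not reproduce the Transformer++ architecture.

```python
# Illustrative sketch: self-attention branch + convolutional context branch.
import torch
import torch.nn as nn

class ConvContextAttention(nn.Module):
    def __init__(self, d_model=512, n_heads=8, kernel_size=3):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Depthwise 1D convolution builds a local "context" summary per position.
        self.context_conv = nn.Conv1d(d_model, d_model, kernel_size,
                                      padding=kernel_size // 2, groups=d_model)
        self.mix = nn.Linear(2 * d_model, d_model)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        attn_out, _ = self.self_attn(x, x, x)  # global word-word dependencies
        ctx = self.context_conv(x.transpose(1, 2)).transpose(1, 2)  # local context
        return self.mix(torch.cat([attn_out, ctx], dim=-1))

layer = ConvContextAttention()
out = layer(torch.randn(2, 10, 512))
print(out.shape)  # torch.Size([2, 10, 512])
```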
A Computational Investigation on Denominalization | Language is a dynamic system, and word meanings have always changed over
time. Every time a novel concept or sense is introduced, we need to assign it a
word to express it. Some changes also happen because the result of a change is
more desirable for humans, or cognitively easier for humans to use. Finding the
patterns of these changes is interesting and can reveal facts about human
cognitive evolution. Since we now have sufficient resources for studying this
problem, computational modeling makes the work easier and allows it to be
studied at a large scale. In this work, we study nouns that have come to be used
as verbs some years after their emergence as nouns, and we look for
commonalities among these nouns. In other words, we are interested in finding
what potential requirements are essential for this change.
| 2,020 | Computation and Language |
Mask & Focus: Conversation Modelling by Learning Concepts | Sequence to sequence models attempt to capture the correlation between all
the words in the input and output sequences. While this is quite useful for
machine translation where the correlation among the words is indeed quite
strong, it becomes problematic for conversation modelling, where the correlation
is often at a much more abstract level. In contrast, humans tend to focus on the
essential concepts discussed in the conversation context and generate responses
accordingly. In this paper, we attempt to mimic this response generating
mechanism by learning the essential concepts in the context and response in an
unsupervised manner. The proposed model, referred to as Mask \& Focus maps the
input context to a sequence of concepts which are then used to generate the
response concepts. Together, the context and the response concepts generate the
final response. In order to learn context concepts from the training data
automatically, we \emph{mask} words in the input and observe the effect of
masking on response generation. We train our model to learn those response
concepts that have high mutual information with respect to the context
concepts, thereby guiding the model to \emph{focus} on the context concepts.
Mask \& Focus achieves significant improvement over the existing baselines in
several established metrics for dialogues.
| 2,020 | Computation and Language |
Fake News Detection with Different Models | This paper explores various models for fake news detection. We use several
machine learning algorithms and employ TF-IDF, count vectorization (CV), and
Word2Vec (W2V) representations as features for processing the textual data.
| 2,020 | Computation and Language |
Improving Reliability of Latent Dirichlet Allocation by Assessing Its
Stability Using Clustering Techniques on Replicated Runs | For organizing large text corpora topic modeling provides useful tools. A
widely used method is Latent Dirichlet Allocation (LDA), a generative
probabilistic model which models single texts in a collection of texts as
mixtures of latent topics. The assignments of words to topics rely on initial
values such that generally the outcome of LDA is not fully reproducible. In
addition, the reassignment via Gibbs Sampling is based on conditional
distributions, leading to different results in replicated runs on the same text
data. This fact is often neglected in everyday practice. We aim to improve the
reliability of LDA results. Therefore, we study the stability of LDA by
comparing assignments from replicated runs. We propose to quantify the
similarity of two generated topics by a modified Jaccard coefficient. Using
such similarities, topics can be clustered. A new pruning algorithm for
hierarchical clustering results based on the idea that two LDA runs create
pairs of similar topics is proposed. This approach leads to the new measure
S-CLOP ({\bf S}imilarity of multiple sets by {\bf C}lustering with {\bf LO}cal
{\bf P}runing) for quantifying the stability of LDA models. We discuss some
characteristics of this measure and illustrate it with an application to real
data consisting of newspaper articles from \textit{USA Today}. Our results show
that the measure S-CLOP is useful for assessing the stability of LDA models or
any other topic modeling procedure that characterizes its topics by word
distributions. Based on the newly proposed measure for LDA stability, we
propose a method to increase the reliability and hence to improve the
reproducibility of empirical findings based on topic modeling. This increase in
reliability is obtained by running the LDA several times and taking as
prototype the most representative run, that is the LDA run with highest average
similarity to all other runs.
| 2,020 | Computation and Language |
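A minimal sketch of the first step of such a stability analysis: fit LDA twice with different seeds and compare topics across runs with a Jaccard-style similarity over their top words. The toy corpus, the number of top words, and the use of a plain (rather than modified) Jaccard coefficient are simplifications; the clustering, pruning, and S-CLOP computation are omitted.

```python
# Compare topics from two replicated LDA runs via Jaccard similarity of top words.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the cat sat on the mat", "dogs and cats are pets",
        "stock markets fell today", "investors sold shares and stocks"] * 25

vec = CountVectorizer()
X = vec.fit_transform(docs)
vocab = np.array(vec.get_feature_names_out())

def top_words(lda, k=5):
    # Top-k words of every topic, as sets, for Jaccard comparison.
    return [set(vocab[np.argsort(comp)[-k:]]) for comp in lda.components_]

run_a = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
run_b = LatentDirichletAllocation(n_components=2, random_state=1).fit(X)

def jaccard(a, b):
    return len(a & b) / len(a | b)

sim = np.array([[jaccard(ta, tb) for tb in top_words(run_b)] for ta in top_words(run_a)])
print(sim)   # pairwise topic similarities between the two replicated runs
```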
SAFE: Similarity-Aware Multi-Modal Fake News Detection | Effective detection of fake news has recently attracted significant
attention. Current studies have made significant contributions to predicting
fake news with less focus on exploiting the relationship (similarity) between
the textual and visual information in news articles. Attaching importance to
such similarity helps identify fake news stories that, for example, attempt to
use irrelevant images to attract readers' attention. In this work, we propose a
$\mathsf{S}$imilarity-$\mathsf{A}$ware $\mathsf{F}$ak$\mathsf{E}$ news
detection method ($\mathsf{SAFE}$) which investigates multi-modal (textual and
visual) information of news articles. First, neural networks are adopted to
separately extract textual and visual features for news representation. We
further investigate the relationship between the extracted features across
modalities. Such representations of news textual and visual information along
with their relationship are jointly learned and used to predict fake news. The
proposed method facilitates recognizing the falsity of news articles based on
their text, images, or their "mismatches." We conduct extensive experiments on
large-scale real-world data, which demonstrate the effectiveness of the
proposed method.
| 2,020 | Computation and Language |
Understanding the Downstream Instability of Word Embeddings | Many industrial machine learning (ML) systems require frequent retraining to
keep up-to-date with constantly changing data. This retraining exacerbates a
large challenge facing ML systems today: model training is unstable, i.e.,
small changes in training data can cause significant changes in the model's
predictions. In this paper, we work on developing a deeper understanding of
this instability, with a focus on how a core building block of modern natural
language processing (NLP) pipelines---pre-trained word embeddings---affects the
instability of downstream NLP models. We first empirically reveal a tradeoff
between stability and memory: increasing the embedding memory 2x can reduce the
disagreement in predictions due to small changes in training data by 5% to 37%
(relative). To theoretically explain this tradeoff, we introduce a new measure
of embedding instability---the eigenspace instability measure---which we prove
bounds the disagreement in downstream predictions introduced by the change in
word embeddings. Practically, we show that the eigenspace instability measure
can be a cost-effective way to choose embedding parameters to minimize
instability without training downstream models, outperforming other embedding
distance measures and performing competitively with a nearest neighbor-based
measure. Finally, we demonstrate that the observed stability-memory tradeoffs
extend to other types of embeddings as well, including knowledge graph and
contextual word embeddings.
| 2,020 | Computation and Language |
Adv-BERT: BERT is not robust on misspellings! Generating nature
adversarial samples on BERT | There is an increasing amount of literature that claims the brittleness of
deep neural networks in dealing with adversarial examples that are created
maliciously. It is unclear, however, how the models will perform in realistic
scenarios where \textit{natural rather than malicious} adversarial instances
often exist. This work systematically explores the robustness of BERT, the
state-of-the-art Transformer-style model in NLP, in dealing with noisy data,
particularly mistakes in typing the keyboard, that occur inadvertently.
Intensive experiments on sentiment analysis and question answering benchmarks
indicate that: (i) typos in different words of a sentence do not have an equal
influence, and typos in informative words cause more severe damage; (ii) mistyping
a character is the most damaging factor, compared with insertion, deletion, etc.;
(iii) humans and machines focus on different cues when recognizing adversarial attacks.
| 2,020 | Computation and Language |
Investigating an approach for low resource language dataset creation,
curation and classification: Setswana and Sepedi | The recent advances in Natural Language Processing have been a boon for
well-represented languages in terms of available curated data and research
resources. One of the challenges for low-resourced languages is clear
guidelines on the collection, curation and preparation of datasets for
different use-cases. In this work, we take on the task of creation of two
datasets that are focused on news headlines (i.e., short texts) for Setswana and
Sepedi and creation of a news topic classification task. We document our work
and also present baselines for classification. We investigate an approach on
data augmentation, better suited to low resource languages, to improve the
performance of the classifiers.
| 2,020 | Computation and Language |
A Financial Service Chatbot based on Deep Bidirectional Transformers | We develop a chatbot using Deep Bidirectional Transformer models (BERT) to
handle client questions in financial investment customer service. The bot can
recognize 381 intents, and decides when to say "I don't know" and escalates
irrelevant/uncertain questions to human operators. Our main novel contribution
is the discussion of uncertainty measures for BERT, where three different
approaches are systematically compared on real problems. We investigated two
uncertainty metrics, information entropy and variance of dropout sampling in
BERT, followed by mixed-integer programming to optimize decision thresholds.
Another novel contribution is the usage of BERT as a language model in
automatic spelling correction. Inputs with accidental spelling errors can
significantly decrease intent classification performance. The proposed approach
combines probabilities from masked language model and word edit distances to
find the best corrections for misspelled words. The chatbot and the entire
conversational AI system are developed using open-source tools, and deployed
within our company's intranet. The proposed approach can be useful for
industries seeking similar in-house solutions in their specific business
domains. We share all our code and a sample chatbot built on a public dataset
on Github.
| 2,020 | Computation and Language |
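The snippet below sketches the general idea of combining masked-language-model probabilities with string similarity to rank corrections for a misspelled word. The model choice (`bert-base-uncased`), the scoring formula, and the example sentence are illustrative assumptions, not the chatbot's actual implementation.

```python
# Rank [MASK] candidates by LM probability blended with similarity to the typo.
import difflib
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def correct(masked_sentence, misspelled, top_k=20):
    candidates = fill(masked_sentence, top_k=top_k)
    def score(cand):
        similarity = difflib.SequenceMatcher(None, cand["token_str"], misspelled).ratio()
        return cand["score"] * similarity     # blend LM probability and edit closeness
    return max(candidates, key=score)["token_str"]

print(correct("I want to [MASK] money to my savings account.", "tranfser"))
```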
ScopeIt: Scoping Task Relevant Sentences in Documents | Intelligent assistants like Cortana, Siri, Alexa, and Google Assistant are
trained to parse information when the conversation is synchronous and short;
however, for email-based conversational agents, the communication is
asynchronous, and often contains information irrelevant to the assistant. This
makes it harder for the system to accurately detect intents, extract entities
relevant to those intents and thereby perform the desired action. We present a
neural model for scoping relevant information for the agent from a large query.
We show that when used as a preprocessing step, the model improves performance
of both intent detection and entity extraction tasks. We demonstrate the
model's impact on Scheduler (Cortana is the persona of the agent, while
Scheduler is the name of the service. We use them interchangeably in the
context of this paper.) - a virtual conversational meeting scheduling assistant
that interacts asynchronously with users through email. The model helps the
entity extraction and intent detection tasks requisite by Scheduler achieve an
average gain of 35% in precision without any drop in recall. Additionally, we
demonstrate that the same approach can be used for component level analysis in
large documents, such as signature block identification.
| 2,020 | Computation and Language |
Unsupervised and Interpretable Domain Adaptation to Rapidly Filter
Tweets for Emergency Services | During the onset of a disaster event, filtering relevant information from the
social web data is challenging due to its sparse availability and practical
limitations in labeling datasets of an ongoing crisis. In this paper, we
hypothesize that unsupervised domain adaptation through multi-task learning can
be a useful framework to leverage data from past crisis events for training
efficient information filtering models during the sudden onset of a new crisis.
We present a novel method to classify relevant tweets during an ongoing crisis
without seeing any new examples, using the publicly available dataset of TREC
incident streams. Specifically, we construct a customized multi-task
architecture with a multi-domain discriminator for crisis analytics: multi-task
domain adversarial attention network. This model consists of dedicated
attention layers for each task to provide model interpretability; critical for
real-world applications. As deep networks struggle with sparse datasets, we show
that this can be improved by sharing a base layer for multi-task learning and
domain adversarial training. Evaluation of domain adaptation for crisis events
is performed by choosing a target event as the test set and training on the
rest. Our results show that the multi-task model outperformed its single task
counterpart. For the qualitative evaluation of interpretability, we show that
the attention layer can be used as a guide to explain the model predictions and
empower emergency services for exploring accountability of the model, by
showcasing the words in a tweet that are deemed important in the classification
process. Finally, we show a practical implication of our work by providing a
use-case for the COVID-19 pandemic.
| 2,020 | Computation and Language |
Multi-task Learning with Multi-head Attention for Multi-choice Reading
Comprehension | Multiple-choice Machine Reading Comprehension (MRC) is an important and
challenging Natural Language Understanding (NLU) task, in which a machine must
choose the answer to a question from a set of choices, with the question placed
in the context of text passages or dialog. In the last couple of years the NLU
field has been revolutionized with the advent of models based on the
Transformer architecture, which are pretrained on massive amounts of
unsupervised data and then fine-tuned for various supervised learning NLU
tasks. Transformer models have come to dominate a wide variety of leader-boards
in the NLU field; in the area of MRC, the current state-of-the-art model on the
DREAM dataset (see [Sun et al., 2019]) fine-tunes ALBERT, a large pretrained
Transformer-based model, and additionally combines it with an extra layer of
multi-head attention between context and question-answer [Zhu et al., 2020]. The
purpose of this note is to document a new state-of-the-art result in the DREAM
task, which is accomplished by, additionally, performing multi-task learning on
two MRC multi-choice reading comprehension tasks (RACE and DREAM).
| 2,020 | Computation and Language |
Learning to mirror speaking styles incrementally | Mirroring is the behavior in which one person subconsciously imitates the
gesture, speech pattern, or attitude of another. In conversations, mirroring
often signals the speakers' enjoyment and engagement in their communication. In
chatbots, methods have been proposed to add personas to the chatbots and to
train them to speak or to shift their dialogue style to that of the personas.
However, they often require a large dataset consisting of dialogues of the
target personalities to train. In this work, we explore a method that can learn
to mirror the speaking styles of a person incrementally. Our method extracts
n-grams that capture a person's speaking style and uses these n-grams to create
patterns for transforming sentences into that person's speaking style. Our
experiments show that our method is able to capture patterns of speaking style
that can be used to transform regular sentences into sentences with the target
style.
| 2,020 | Computation and Language |
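A minimal sketch of extracting n-grams that are characteristic of one speaker relative to a background corpus. The thresholds and example utterances are invented for illustration and the transformation step is omitted; this is not the paper's extraction procedure.

```python
# Keep bigrams a speaker uses noticeably more often than the background corpus.
from collections import Counter

def ngrams(text, n=2):
    toks = text.lower().split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

person = ["well you know I reckon that works", "I reckon we should go now"]
background = ["we should go now", "that works for me", "you know the plan"]

p_counts = Counter(g for s in person for g in ngrams(s))
b_counts = Counter(g for s in background for g in ngrams(s))

# Simple frequency-ratio filter: a stand-in for a proper statistical test.
style_ngrams = [g for g, c in p_counts.items() if c / (b_counts[g] + 1) >= 2]
print(style_ngrams)
```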
Masking Orchestration: Multi-task Pretraining for Multi-role Dialogue
Representation Learning | Multi-role dialogue understanding comprises a wide range of diverse tasks
such as question answering, act classification, dialogue summarization etc.
While dialogue corpora are abundantly available, labeled data, for specific
learning tasks, can be highly scarce and expensive. In this work, we
investigate dialogue context representation learning with various types of
unsupervised pretraining tasks where the training objectives are given
naturally according to the nature of the utterance and the structure of the
multi-role conversation. Meanwhile, in order to locate essential information
for dialogue summarization/extraction, the pretraining process enables external
knowledge integration. The proposed fine-tuned pretraining mechanism is
comprehensively evaluated via three different dialogue datasets along with a
number of downstream dialogue-mining tasks. Results show that the proposed
pretraining mechanism significantly contributes to all the downstream tasks
without discrimination to different encoders.
| 2,020 | Computation and Language |
GASP! Generating Abstracts of Scientific Papers from Abstracts of Cited
Papers | Creativity is one of the driving forces of humankind, as it allows us to break
with current understanding and envision new ideas, which may revolutionize entire
fields of knowledge. Scientific research offers a challenging environment where
to learn a model for the creative process. In fact, scientific research is a
creative act in the formal settings of the scientific method and this creative
act is described in articles.
In this paper, we dare to introduce the novel, scientifically and
philosophically challenging task of Generating Abstracts of Scientific Papers
from abstracts of cited papers (GASP) as a text-to-text task to investigate
scientific creativity. To foster research in this novel, challenging task, we
prepared a dataset by using services that solve the problem of copyright;
hence, the dataset is publicly available with its standard split. Finally,
we experimented with two vanilla summarization systems to start the analysis of
the complexity of the GASP task.
| 2,020 | Computation and Language |
Toward Interpretability of Dual-Encoder Models for Dialogue Response
Suggestions | This work shows how to improve and interpret the commonly used dual encoder
model for response suggestion in dialogue. We present an attentive dual encoder
model that includes an attention mechanism on top of the extracted word-level
features from two encoders, one for the context and one for the label, respectively. To
improve the interpretability in the dual encoder models, we design a novel
regularization loss to minimize the mutual information between unimportant
words and desired labels, in addition to the original attention method, so that
important words are emphasized while unimportant words are de-emphasized. This
can help not only with model interpretability, but can also further improve
model accuracy. We propose an approximation method that uses a neural network
to calculate the mutual information. Furthermore, by adding a residual layer
between raw word embeddings and the final encoded context feature, word-level
interpretability is preserved at the final prediction of the model. We compare
the proposed model with existing methods for the dialogue response task on two
public datasets (Persona and Ubuntu). The experiments demonstrate the
effectiveness of the proposed model in terms of better Recall@1 accuracy and
visualized interpretability.
| 2,020 | Computation and Language |
TyDi QA: A Benchmark for Information-Seeking Question Answering in
Typologically Diverse Languages | Confidently making progress on multilingual modeling requires challenging,
trustworthy evaluations. We present TyDi QA---a question answering dataset
covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology---the set of
linguistic features each language expresses---such that we expect models
performing well on this set to generalize across a large number of the world's
languages. We present a quantitative analysis of the data quality and
example-level qualitative linguistic analyses of observed language phenomena
that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by
people who want to know the answer, but don't know the answer yet, and the data
is collected directly in each language without the use of translation.
| 2,020 | Computation and Language |
A Benchmark for Systematic Generalization in Grounded Language
Understanding | Humans easily interpret expressions that describe unfamiliar situations
composed from familiar parts ("greet the pink brontosaurus by the ferris
wheel"). Modern neural networks, by contrast, struggle to interpret novel
compositions. In this paper, we introduce a new benchmark, gSCAN, for
evaluating compositional generalization in situated language understanding.
Going beyond a related benchmark that focused on syntactic aspects of
generalization, gSCAN defines a language grounded in the states of a grid
world, facilitating novel evaluations of acquiring linguistically motivated
rules. For example, agents must understand how adjectives such as 'small' are
interpreted relative to the current world state or how adverbs such as
'cautiously' combine with new verbs. We test a strong multi-modal baseline
model and a state-of-the-art compositional method finding that, in most cases,
they fail dramatically when generalization requires systematic compositional
rules.
| 2,020 | Computation and Language |
Vector symbolic architectures for context-free grammars | Background / introduction. Vector symbolic architectures (VSA) are a viable
approach for the hyperdimensional representation of symbolic data, such as
documents, syntactic structures, or semantic frames. Methods. We present a
rigorous mathematical framework for the representation of phrase structure
trees and parse trees of context-free grammars (CFG) in Fock space, i.e.
infinite-dimensional Hilbert space as being used in quantum field theory. We
define a novel normal form for CFG by means of term algebras. Using a recently
developed software toolbox, called FockBox, we construct Fock space
representations for the trees built up by a CFG left-corner (LC) parser.
Results. We prove a universal representation theorem for CFG term algebras in
Fock space and illustrate our findings through a low-dimensional principal
component projection of the LC parser states. Conclusions. Our approach could
leverage the development of VSA for explainable artificial intelligence (XAI)
by means of hyperdimensional deep neural computation. It could be of
significance for the improvement of cognitive user interfaces and other
applications of VSA in machine learning.
| 2,020 | Computation and Language |
Capturing document context inside sentence-level neural machine
translation models with self-training | Neural machine translation (NMT) has arguably achieved human level parity
when trained and evaluated at the sentence-level. Document-level neural machine
translation has received less attention and lags behind its sentence-level
counterpart. The majority of the proposed document-level approaches investigate
ways of conditioning the model on several source or target sentences to capture
document context. These approaches require training a specialized NMT model
from scratch on parallel document-level corpora. We propose an approach that
doesn't require training a specialized model on parallel document-level corpora
and is applied to a trained sentence-level NMT model at decoding time. We
process the document from left to right multiple times and self-train the
sentence-level model on pairs of source sentences and generated translations.
Our approach reinforces the choices made by the model, thus making it more
likely that the same choices will be made in other sentences in the document.
We evaluate our approach on three document-level datasets: NIST
Chinese-English, WMT'19 Chinese-English and OpenSubtitles English-Russian. We
demonstrate that our approach has higher BLEU score and higher human preference
than the baseline. Qualitative analysis of our approach shows that choices made
by the model are consistent across the document.
| 2,020 | Computation and Language |
Brazilian Lyrics-Based Music Genre Classification Using a BLSTM Network | Organizing songs, albums, and artists into groups with shared similarity can be
done with the help of genre labels. In this paper, we present a novel approach
for automatically classifying musical genre in Brazilian music using only the song
lyrics. This kind of classification remains a challenge in the field of Natural
Language Processing. We construct a dataset of 138,368 Brazilian song lyrics
distributed in 14 genres. We apply SVM, Random Forest and a Bidirectional Long
Short-Term Memory (BLSTM) network combined with different word embeddings
techniques to address this classification task. Our experiments show that the
BLSTM method outperforms the other models with an F1-score average of $0.48$.
Some genres like "gospel", "funk-carioca" and "sertanejo", which obtained 0.89,
0.70 and 0.69 of F1-score, respectively, can be defined as the most distinct
and easy to classify in the Brazilian musical genres context.
| 2,020 | Computation and Language |
A Precisely Xtreme-Multi Channel Hybrid Approach For Roman Urdu
Sentiment Analysis | In order to accelerate the performance of various Natural Language Processing
tasks for Roman Urdu, this paper for the very first time provides 3 neural word
embeddings prepared using most widely used approaches namely Word2vec,
FastText, and Glove. The integrity of generated neural word embeddings is
evaluated using intrinsic and extrinsic evaluation approaches. Considering the
lack of publicly available benchmark datasets, it provides a first-ever Roman
Urdu dataset which consists of 3241 sentiments annotated against positive,
negative and neutral classes. To provide benchmark baseline performance over
the presented dataset, we adapt diverse machine learning (Support Vector
Machine, Logistic Regression, Naive Bayes), deep learning (convolutional neural
network, recurrent neural network), and hybrid approaches. Effectiveness of
generated neural word embeddings is evaluated by comparing the performance of
machine and deep learning based methodologies using 7, and 5 distinct feature
representation approaches, respectively. Finally, it proposes a novel precisely
extreme multi-channel hybrid methodology which outperforms the state-of-the-art
adapted machine and deep learning approaches by 9% and 4%, respectively, in
terms of F1-score. Keywords: Roman Urdu Sentiment Analysis, pretrained word
embeddings for Roman Urdu, Word2Vec, Glove, Fast-Text.
| 2,020 | Computation and Language |
Investigating Entity Knowledge in BERT with Simple Neural End-To-End
Entity Linking | A typical architecture for end-to-end entity linking systems consists of
three steps: mention detection, candidate generation and entity disambiguation.
In this study we investigate the following questions: (a) Can all those steps
be learned jointly with a model for contextualized text-representations, i.e.
BERT (Devlin et al., 2019)? (b) How much entity knowledge is already contained
in pretrained BERT? (c) Does additional entity knowledge improve BERT's
performance in downstream tasks? To this end, we propose an extreme
simplification of the entity linking setup that works surprisingly well: simply
cast it as a per-token classification over the entire entity vocabulary (over
700K classes in our case). We show on an entity linking benchmark that (i) this
model improves the entity representations over plain BERT, (ii) that it
outperforms entity linking architectures that optimize the tasks separately and
(iii) that it only comes second to the current state-of-the-art that does
mention detection and entity disambiguation jointly. Additionally, we
investigate the usefulness of entity-aware token-representations in the
text-understanding benchmark GLUE, as well as the question answering benchmarks
SQUAD V2 and SWAG and also the EN-DE WMT14 machine translation benchmark. To
our surprise, we find that most of those benchmarks do not benefit from
additional entity knowledge, except for a task with very small training data,
the RTE task in GLUE, which improves by 2%.
| 2,019 | Computation and Language |
Semantic Holism and Word Representations in Artificial Neural Networks | Artificial neural networks are a state-of-the-art solution for many problems
in natural language processing. What can we learn about language and meaning
from the way artificial neural networks represent it? Word representations
obtained from the Skip-gram variant of the word2vec model exhibit interesting
semantic properties. This is usually explained by referring to the general
distributional hypothesis, which states that the meaning of the word is given
by the contexts where it occurs. We propose a more specific approach based on
Frege's holistic and functional approach to meaning. Taking Tugendhat's formal
reinterpretation of Frege's work as a starting point, we demonstrate that it is
analogical to the process of training the Skip-gram model and offers a possible
explanation of its semantic properties.
| 2,020 | Computation and Language |
Learning word-referent mappings and concepts from raw inputs | How do children learn correspondences between the language and the world from
noisy, ambiguous, naturalistic input? One hypothesis is via cross-situational
learning: tracking words and their possible referents across multiple
situations allows learners to disambiguate correct word-referent mappings (Yu &
Smith, 2007). However, previous models of cross-situational word learning
operate on highly simplified representations, side-stepping two important
aspects of the actual learning problem. First, how can word-referent mappings
be learned from raw inputs such as images? Second, how can these learned
mappings generalize to novel instances of a known word? In this paper, we
present a neural network model trained from scratch via self-supervision that
takes in raw images and words as inputs, and show that it can learn
word-referent mappings from fully ambiguous scenes and utterances through
cross-situational learning. In addition, the model generalizes to novel word
instances, locates referents of words in a scene, and shows a preference for
mutual exclusivity.
| 2,020 | Computation and Language |
Sentiment Analysis with Contextual Embeddings and Self-Attention | In natural language the intended meaning of a word or phrase is often
implicit and depends on the context. In this work, we propose a simple yet
effective method for sentiment analysis using contextual embeddings and a
self-attention mechanism. The experimental results for three languages,
including morphologically rich Polish and German, show that our model is
comparable to or even outperforms state-of-the-art models. In all cases the
superiority of models leveraging contextual embeddings is demonstrated.
Finally, this work is intended as a step towards introducing a universal,
multilingual sentiment classifier.
| 2,020 | Computation and Language |
It Means More if It Sounds Good: Yet Another Hypothesis Concerning the
Evolution of Polysemous Words | This position paper looks into the formation of language and shows ties
between structural properties of the words in the English language and their
polysemy. Using Ollivier-Ricci curvature over a large graph of synonyms to
estimate polysemy it shows empirically that the words that arguably are easier
to pronounce also tend to have multiple meanings.
| 2,021 | Computation and Language |
KGvec2go -- Knowledge Graph Embeddings as a Service | In this paper, we present KGvec2go, a Web API for accessing and consuming
graph embeddings in a light-weight fashion in downstream applications.
Currently, we serve pre-trained embeddings for four knowledge graphs. We
introduce the service and its usage, and we show further that the trained
models have semantic value by evaluating them on multiple semantic benchmarks.
The evaluation also reveals that the combination of multiple models can lead to
a better outcome than the best individual model.
| 2,020 | Computation and Language |
Local Contextual Attention with Hierarchical Structure for Dialogue Act
Recognition | Dialogue act recognition is a fundamental task for an intelligent dialogue
system. Previous work models the whole dialog to predict dialog acts, which may
bring the noise from unrelated sentences. In this work, we design a
hierarchical model based on self-attention to capture intra-sentence and
inter-sentence information. We revise the attention distribution to focus on
the local and contextual semantic information by incorporating the relative
position information between utterances. Based on the finding that the length of
the dialog affects performance, we introduce a new dialog segmentation
mechanism to analyze the effect of dialog length and context padding length
under online and offline settings. The experiment shows that our method
achieves promising performance on two datasets: Switchboard Dialogue Act and
DailyDialog with the accuracy of 80.34\% and 85.81\% respectively.
Visualization of the attention weights shows that our method can learn the
context dependency between utterances explicitly.
| 2,020 | Computation and Language |
MixPoet: Diverse Poetry Generation via Learning Controllable Mixed
Latent Space | As an essential step towards computer creativity, automatic poetry generation
has gained increasing attention these years. Though recent neural models make
prominent progress in some criteria of poetry quality, generated poems still
suffer from the problem of poor diversity. Studies in the related literature show
that different factors, such as life experience and historical background,
influence the composition styles of poets, which considerably contributes to
the high diversity of human-authored poetry. Inspired by this, we propose
MixPoet, a novel model that absorbs multiple factors to create various styles
and promote diversity. Based on a semi-supervised variational autoencoder, our
model disentangles the latent space into some subspaces, with each conditioned
on one influence factor by adversarial training. In this way, the model learns
a controllable latent variable to capture and mix generalized factor-related
properties. Different factor mixtures lead to diverse styles and hence further
differentiate generated poems from each other. Experiment results on Chinese
poetry demonstrate that MixPoet improves both diversity and quality against
three state-of-the-art models.
| 2,020 | Computation and Language |
WAC: A Corpus of Wikipedia Conversations for Online Abuse Detection | With the spread of online social networks, it is more and more difficult to
monitor all the user-generated content. Automating the moderation of
inappropriate content exchanged on the Internet has thus become a priority task.
Methods have been proposed for this purpose, but it can be challenging to find
a suitable dataset to train and develop them. This issue is especially true for
approaches based on information derived from the structure and the dynamic of
the conversation. In this work, we propose an original framework, based on the
Wikipedia Comment corpus, with comment-level abuse annotations of different
types. The major contribution concerns the reconstruction of conversations, by
comparison to existing corpora, which focus only on isolated messages (i.e.
taken out of their conversational context). This large corpus of more than 380k
annotated messages opens perspectives for online abuse detection and especially
for context-based approaches. We also propose, in addition to this corpus, a
complete benchmarking platform to stimulate and fairly compare scientific works
around the problem of content abuse detection, trying to avoid the recurring
problem of result replication. Finally, we apply two classification methods to
our dataset to demonstrate its potential.
| 2,020 | Computation and Language |
Review-guided Helpful Answer Identification in E-commerce | Product-specific community question answering platforms can greatly help
address the concerns of potential customers. However, the user-provided answers
on such platforms often vary a lot in their qualities. Helpfulness votes from
the community can indicate the overall quality of the answer, but they are
often missing. Accurately predicting the helpfulness of an answer to a given
question and thus identifying helpful answers is becoming a demanding need.
Since the helpfulness of an answer depends on multiple perspectives instead of
only topical relevance investigated in typical QA tasks, common answer
selection algorithms are insufficient for tackling this task. In this paper, we
propose the Review-guided Answer Helpfulness Prediction (RAHP) model that not
only considers the interactions between QA pairs but also investigates the
opinion coherence between the answer and crowds' opinions reflected in the
reviews, which is another important factor to identify helpful answers.
Moreover, we tackle the task of determining opinion coherence as a language
inference problem and explore the utilization of a pre-training strategy to
transfer the textual inference knowledge obtained from a specifically designed
trained network. Extensive experiments conducted on real-world data across
seven product categories show that our proposed model achieves superior
performance on the prediction task.
| 2,020 | Computation and Language |
Using word embeddings to improve the discriminability of co-occurrence
text networks | Word co-occurrence networks have been employed to analyze texts in both
practical and theoretical scenarios. Despite the relative success in several
applications, traditional co-occurrence networks fail in establishing links
between similar words whenever they appear distant in the text. Here we
investigate whether the use of word embeddings as a tool to create virtual
links in co-occurrence networks may improve the quality of classification
systems. Our results revealed that the discriminability in the stylometry task
is improved when using Glove, Word2Vec and FastText. In addition, we found that
optimized results are obtained when stopwords are not disregarded and a simple
global thresholding strategy is used to establish virtual links. Because the
proposed approach is able to improve the representation of texts as complex
networks, we believe that it could be extended to study other natural language
processing tasks. Likewise, theoretical language studies could benefit from
the adopted enriched representation of word co-occurrence networks.
| 2,021 | Computation and Language |
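The sketch below illustrates, on made-up data, how virtual edges can be added to a co-occurrence network when embedding similarity exceeds a single global threshold. The toy "text", the random stand-in embeddings, and the threshold value are assumptions; real use would plug in Glove, Word2Vec, or FastText vectors.

```python
# Co-occurrence graph enriched with "virtual" edges from embedding similarity.
import itertools
import numpy as np
import networkx as nx

tokens = ["the", "car", "drives", "fast", "down", "the", "road"]
vocab = sorted(set(tokens))
emb = {w: np.random.RandomState(i).randn(50) for i, w in enumerate(vocab)}  # stand-in embeddings

G = nx.Graph()
for a, b in zip(tokens, tokens[1:]):            # standard adjacency co-occurrence edges
    if a != b:
        G.add_edge(a, b, kind="cooccurrence")

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

THRESHOLD = 0.3                                 # a single global threshold, as in the abstract
for a, b in itertools.combinations(vocab, 2):   # virtual edges for similar but distant words
    if not G.has_edge(a, b) and cosine(emb[a], emb[b]) > THRESHOLD:
        G.add_edge(a, b, kind="virtual")

virtual = [(a, b) for a, b, d in G.edges(data=True) if d["kind"] == "virtual"]
print(f"{G.number_of_edges()} edges in total, {len(virtual)} of them virtual: {virtual}")
```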
Sentence Level Human Translation Quality Estimation with Attention-based
Neural Networks | This paper explores the use of Deep Learning methods for automatic estimation
of quality of human translations. Automatic estimation can provide useful
feedback for translation teaching, examination and quality control.
Conventional methods for solving this task rely on manually engineered features
and external knowledge. This paper presents an end-to-end neural model without
feature engineering, incorporating a cross attention mechanism to detect which
parts in sentence pairs are most relevant for assessing quality. Another
contribution concerns the prediction of fine-grained scores for measuring
different aspects of translation quality. Empirical results on a large human
annotated dataset show that the neural model outperforms feature-based methods
significantly. The dataset and the tools are available.
| 2,020 | Computation and Language |
Know thy corpus! Robust methods for digital curation of Web corpora | This paper proposes a novel framework for digital curation of Web corpora in
order to provide robust estimation of their parameters, such as their
composition and the lexicon. In recent years language models pre-trained on
large corpora emerged as clear winners in numerous NLP tasks, but no proper
analysis of the corpora which led to their success has been conducted. The
paper presents a procedure for robust frequency estimation, which helps in
establishing the core lexicon for a given corpus, as well as a procedure for
estimating the corpus composition via unsupervised topic models and via
supervised genre classification of Web pages. The results of the digital
curation study applied to several Web-derived corpora demonstrate their
considerable differences. First, this concerns different frequency bursts which
impact the core lexicon obtained from each corpus. Second, this concerns the
kinds of texts they contain. For example, OpenWebText contains considerably
more topical news and political argumentation in comparison to ukWac or
Wikipedia. The tools and the results of analysis have been released.
| 2,020 | Computation and Language |
LSCP: Enhanced Large Scale Colloquial Persian Language Understanding | Language recognition has been significantly advanced in recent years by means
of modern machine learning methods such as deep learning and benchmarks with
rich annotations. However, research is still limited in low-resource formal
languages. This leaves a significant gap in describing colloquial
language, especially for low-resourced ones such as Persian. In order to target
this gap for low resource languages, we propose a "Large Scale Colloquial
Persian Dataset" (LSCP). LSCP is hierarchically organized in a semantic
taxonomy that focuses on multi-task informal Persian language understanding as
a comprehensive problem. This encompasses the recognition of multiple semantic
aspects in human-level sentences, which are naturally captured from
real-world sentences. We believe that further investigations and processing, as
well as the application of novel algorithms and methods, can strengthen
enriching computerized understanding and processing of low resource languages.
The proposed corpus consists of 120M sentences resulting from 27M tweets
annotated with parsing tree, part-of-speech tags, sentiment polarity and
translation in five different languages.
| 2,020 | Computation and Language |
DAN: Dual-View Representation Learning for Adapting Stance Classifiers
to New Domains | We address the issue of having a limited number of annotations for stance
classification in a new domain, by adapting out-of-domain classifiers with
domain adaptation. Existing approaches often align different domains in a
single, global feature space (or view), which may fail to fully capture the
richness of the languages used for expressing stances, leading to reduced
adaptability on stance data. In this paper, we identify two major types of
stance expressions that are linguistically distinct, and we propose a tailored
dual-view adaptation network (DAN) to adapt these expressions across domains.
The proposed model first learns a separate view for domain transfer in each
expression channel and then selects the best adapted parts of both views for
optimal transfer. We find that the learned view features can be more easily
aligned and more stance-discriminative in either or both views, leading to more
transferable overall features after combining the views. Results from extensive
experiments show that our method can enhance the state-of-the-art single-view
methods in matching stance data across different domains, and that it
consistently improves those methods on various adaptation tasks.
| 2,020 | Computation and Language |
Text Similarity Using Word Embeddings to Classify Misinformation | Fake news has been a growing problem in recent years, especially during
elections. It is hard work to identify what is true and what is false among all
the user-generated content that circulates every day. Technology can help with
that work and optimize the fact-checking process. In this work, we address the
challenge of finding similar content in order to be able to suggest to a
fact-checker articles that could have been verified before and thus avoid that
the same information is verified more than once. This is especially important
in collaborative approaches to fact-checking where members of large teams will
not know what content others have already fact-checked.
| 2,020 | Computation and Language |
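A minimal sketch of retrieving previously fact-checked items by text similarity. TF-IDF is used here purely for simplicity, whereas the work above relies on word embeddings, so treat this as an illustrative baseline on invented example claims.

```python
# Retrieve the most similar previously fact-checked claim for a new claim.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

checked = ["politician X claimed the vaccine contains microchips",
           "viral post says voting machines switched votes in city Y"]
new_claim = ["a post claims that vaccines have hidden microchips inside"]

vec = TfidfVectorizer().fit(checked + new_claim)
sims = cosine_similarity(vec.transform(new_claim), vec.transform(checked))[0]

best = sims.argmax()
print(f"most similar fact-checked item: {checked[best]!r} (score={sims[best]:.2f})")
```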
Word Sense Disambiguation for 158 Languages using Word Embeddings Only | Disambiguation of word senses in context is easy for humans, but is a major
challenge for automatic approaches. Sophisticated supervised and
knowledge-based models were developed to solve this task. However, (i) the
inherent Zipfian distribution of supervised training instances for a given word
and/or (ii) the quality of linguistic knowledge representations motivate the
development of completely unsupervised and knowledge-free approaches to word
sense disambiguation (WSD). They are particularly useful for under-resourced
languages which lack the resources for building either supervised or knowledge-based
models. In this paper, we present a method that takes as input
a standard pre-trained word embedding model and induces a fully-fledged word
sense inventory, which can be used for disambiguation in context. We use this
method to induce a collection of sense inventories for 158 languages on the
basis of the original pre-trained fastText word embeddings by Grave et al.
(2018), enabling WSD in these languages. Models and the system are available
online.
| 2,020 | Computation and Language |
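The WSD entry above induces a sense inventory from a pre-trained embedding model and then disambiguates in context. Below is a heavily simplified sketch of that pipeline, assuming a plain {word: vector} dictionary (e.g., loaded from pre-trained fastText .vec files) and a basic k-means clustering of a word's nearest neighbours into senses; the clustering choice and parameters are illustrative, not the exact method of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans


def nearest_neighbours(word: str, vectors: dict, topn: int = 20) -> list:
    """Return the topn most similar words to `word` by cosine similarity."""
    q = vectors[word]
    sims = {w: float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
            for w, v in vectors.items() if w != word}
    return sorted(sims, key=sims.get, reverse=True)[:topn]


def induce_senses(word: str, vectors: dict, n_senses: int = 2, topn: int = 20) -> list:
    """Cluster the neighbours of `word` into sense clusters; each cluster
    is summarised by the centroid of its members' vectors."""
    neigh = nearest_neighbours(word, vectors, topn)
    X = np.stack([vectors[w] for w in neigh])
    labels = KMeans(n_clusters=n_senses, n_init=10, random_state=0).fit_predict(X)
    return [X[labels == k].mean(axis=0) for k in range(n_senses)]


def disambiguate(word: str, context_words: list, vectors: dict, sense_centroids: list) -> int:
    """Pick the sense whose centroid is closest to the averaged context vector.
    Assumes at least one context word is in the vocabulary."""
    ctx = np.mean([vectors[w] for w in context_words if w in vectors], axis=0)
    sims = [float(ctx @ c / (np.linalg.norm(ctx) * np.linalg.norm(c)))
            for c in sense_centroids]
    return int(np.argmax(sims))
```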
Revisit Systematic Generalization via Meaningful Learning | Humans can systematically generalize to novel compositions of existing
concepts. Recent studies argue that neural networks appear inherently incapable
of such cognitive capacity, leading to a pessimistic view and a
lack of attention to optimistic results. We revisit this controversial topic
from the perspective of meaningful learning, an exceptional capability of
humans to learn novel concepts by connecting them with known ones. We reassess
the compositional skills of sequence-to-sequence models conditioned on the
semantic links between new and old concepts. Our observations suggest that
models can successfully one-shot generalize to novel concepts and compositions
through semantic linking, either inductively or deductively. We demonstrate
that prior knowledge plays a key role as well. In addition to synthetic tests,
we further conduct proof-of-concept experiments in machine translation and
semantic parsing, showing the benefits of meaningful learning in applications.
We hope our positive findings will encourage further exploration of modern neural
networks' potential for systematic generalization through more advanced learning schemes.
| 2,022 | Computation and Language |
Leveraging Foreign Language Labeled Data for Aspect-Based Opinion Mining | Aspect-based opinion mining is the task of identifying sentiment at the
aspect level in opinionated text, which consists of two subtasks: aspect
category extraction and sentiment polarity classification. While aspect
category extraction aims to detect and categorize opinion targets such as
product features, sentiment polarity classification assigns a sentiment label,
i.e. positive, negative, or neutral, to each identified aspect. Supervised
learning methods have been shown to deliver better accuracy for this task but
they require labeled data, which is costly to obtain, especially for
resource-poor languages like Vietnamese. To address this problem, we present a
supervised aspect-based opinion mining method that utilizes labeled data from a
foreign language (English in this case), which is translated to Vietnamese by
an automated translation tool (Google Translate). Because aspects and opinions
in different languages may be expressed by different words, we propose using
word embeddings, in addition to other features, to reduce the vocabulary
difference between the original and translated texts, thus improving the
effectiveness of aspect category extraction and sentiment polarity
classification processes. We also introduce an annotated corpus of aspect
categories and sentiment polarities extracted from restaurant reviews in
Vietnamese, and conduct a series of experiments on the corpus. Experimental
results demonstrate the effectiveness of the proposed approach.
| 2,020 | Computation and Language |
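The entry above uses word embeddings, alongside other features, to bridge the vocabulary gap between original and machine-translated text. Below is a minimal bag-of-embeddings stand-in for aspect category extraction, assuming a shared {word: vector} embedding table and a linear classifier over averaged sentence vectors; it is a simplified illustration, not the authors' full feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def sentence_features(sentence: str, vectors: dict, dim: int) -> np.ndarray:
    """Average word embeddings so that near-synonyms introduced by machine
    translation still land close to the original vocabulary."""
    tokens = [t for t in sentence.lower().split() if t in vectors]
    return np.mean([vectors[t] for t in tokens], axis=0) if tokens else np.zeros(dim)


def train_aspect_classifier(sentences: list, labels: list, vectors: dict, dim: int = 300):
    """Train an aspect-category classifier on (possibly translated) labeled sentences."""
    X = np.stack([sentence_features(s, vectors, dim) for s in sentences])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)
    return clf
```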
TRANS-BLSTM: Transformer with Bidirectional LSTM for Language
Understanding | Bidirectional Encoder Representations from Transformers (BERT) has recently
achieved state-of-the-art performance on a broad range of NLP tasks including
sentence classification, machine translation, and question answering. The BERT
model architecture is derived primarily from the transformer. Prior to the
transformer era, bidirectional Long Short-Term Memory (BLSTM) has been the
dominant modeling architecture for neural machine translation and question
answering. In this paper, we investigate how these two modeling techniques can
be combined to create a more powerful model architecture. We propose a new
architecture denoted as Transformer with BLSTM (TRANS-BLSTM), which has a BLSTM
layer integrated into each transformer block, leading to a joint modeling
framework for transformer and BLSTM. We show that TRANS-BLSTM models
consistently lead to improvements in accuracy compared to BERT baselines in
GLUE and SQuAD 1.1 experiments. Our TRANS-BLSTM model obtains an F1 score of
94.01% on the SQuAD 1.1 development dataset, which is comparable to the
state-of-the-art result.
| 2,020 | Computation and Language |
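The TRANS-BLSTM entry above integrates a BLSTM layer into each transformer block. Below is a minimal PyTorch sketch of one such joint block, assuming the BLSTM runs alongside the feed-forward sub-layer and its output is added back into the residual stream; exactly how the two paths are combined is an assumption, not necessarily the paper's configuration.

```python
import torch
import torch.nn as nn


class TransBlstmBlock(nn.Module):
    """One transformer block with an additional bidirectional LSTM path."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, d_ff: int = 1024):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        # Bidirectional LSTM whose concatenated directions sum back to d_model.
        self.blstm = nn.LSTM(d_model, d_model // 2, batch_first=True, bidirectional=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention sub-layer with residual connection.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Feed-forward and BLSTM paths, both added back into the residual stream.
        blstm_out, _ = self.blstm(x)
        x = self.norm2(x + self.ffn(x) + blstm_out)
        return x


# Usage: a batch of 2 sequences of length 10 with hidden size 256.
block = TransBlstmBlock()
out = block(torch.randn(2, 10, 256))
print(out.shape)  # torch.Size([2, 10, 256])
```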
CompLex: A New Corpus for Lexical Complexity Prediction from Likert
Scale Data | Predicting which words are considered hard to understand for a given target
population is a vital step in many NLP applications such as text
simplification. This task is commonly referred to as Complex Word
Identification (CWI). With a few exceptions, previous studies have approached
the task as binary classification, in which systems predict a complexity
value (complex vs. non-complex) for a set of target words in a text. This
choice is motivated by the fact that all CWI datasets compiled so far have been
annotated using a binary annotation scheme. Our paper addresses this limitation
by presenting the first English dataset for continuous lexical complexity
prediction. We use a 5-point Likert scale scheme to annotate complex words in
texts from three sources/domains: the Bible, Europarl, and biomedical texts.
This resulted in a corpus of 9,476 sentences, each annotated by around 7
annotators.
| 2,020 | Computation and Language |
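The CompLex entry above derives continuous complexity values from 5-point Likert annotations. Below is a small sketch of the usual aggregation, assuming per-annotator ratings are averaged and rescaled from the 1-5 range to [0, 1]; the exact rescaling used for the corpus may differ.

```python
from statistics import mean
from typing import List


def complexity_score(ratings: List[int]) -> float:
    """Aggregate 5-point Likert ratings (1 = very easy, 5 = very difficult)
    into a single continuous complexity value in [0, 1]."""
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("Likert ratings must lie in 1..5")
    return (mean(ratings) - 1) / 4  # rescale 1..5 -> 0..1


# Example: seven annotators rating one target word.
print(round(complexity_score([2, 3, 2, 1, 2, 3, 2]), 3))  # 0.286
```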
Key Phrase Classification in Complex Assignments | Complex assignments typically consist of open-ended questions with large and
diverse content in the context of both classroom and online graduate programs.
With the sheer scale of these programs comes a variety of problems in peer and
expert feedback, including rogue reviews. With the aim of identifying the important
content needed for review, in this work we present a first study of key phrase
classification, with a detailed empirical comparison of traditional and recent language
modeling approaches. From this study, we find that key phrase classification is
ambiguous even at the human level, with a Cohen's kappa of 0.77 on a new dataset. Both
pretrained language models and simple TF-IDF SVM classifiers produce similar results,
with the former scoring on average 0.6 F1 points higher than the latter. Finally, we
derive practical advice from our extensive empirical and model-interpretability results
for those interested in key phrase classification from educational reports.
| 2,020 | Computation and Language |
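The key phrase entry above compares pretrained language models against a simple TF-IDF SVM baseline. Below is a minimal scikit-learn sketch of such a baseline, assuming short key phrase strings with binary importance labels; the phrases and labels are invented placeholders.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Tiny invented training set: phrase -> whether it is important for review.
phrases = ["reinforcement learning agent design",
           "see attached screenshot",
           "markov decision process formulation",
           "thanks in advance"]
labels = [1, 0, 1, 0]

# Word uni/bi-gram TF-IDF features feeding a linear SVM.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
baseline.fit(phrases, labels)

print(baseline.predict(["markov decision process", "thanks a lot"]))
```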