Titles (string, 6–220 chars) | Abstracts (string, 37–3.26k chars) | Years (int64, 1.99k–2.02k) | Categories (1 class)
---|---|---|---|
Efficient Neural Query Auto Completion | Query Auto Completion (QAC), as the starting point of information retrieval
tasks, is critical to user experience. Generally it has two steps: generating
completed query candidates according to query prefixes, and ranking them based
on extracted features. Three major challenges are observed for a query auto
completion system: (1) QAC has a strict online latency requirement. For each
keystroke, results must be returned within tens of milliseconds, which poses a
significant challenge in designing sophisticated language models for it. (2)
For unseen queries, generated candidates are of poor quality as contextual
information is not fully utilized. (3) Traditional QAC systems heavily rely on
handcrafted features such as the query candidate frequency in search logs,
lacking sufficient semantic understanding of the candidate.
In this paper, we propose an efficient neural QAC system with effective
context modeling to overcome these challenges. On the candidate generation
side, this system uses as much information as possible in unseen prefixes to
generate relevant candidates, increasing the recall by a large margin. On the
candidate ranking side, an unnormalized language model is proposed, which
effectively captures deep semantics of queries. This approach achieves better
ranking performance than state-of-the-art neural ranking methods and reduces
latency by $\sim$95\% compared to neural language modeling methods. The empirical
results on public datasets show that our model achieves a good balance between
accuracy and efficiency. This system is served in LinkedIn job search with
significant product impact observed.
| 2,020 | Computation and Language |
Evaluating computational models of infant phonetic learning across
languages | In the first year of life, infants' speech perception becomes attuned to the
sounds of their native language. Many accounts of this early phonetic learning
exist, but computational models predicting the attunement patterns observed in
infants from the speech input they hear have been lacking. A recent study
presented the first such model, drawing on algorithms proposed for unsupervised
learning from naturalistic speech, and tested it on a single phone contrast.
Here we study five such algorithms, selected for their potential cognitive
relevance. We simulate phonetic learning with each algorithm and perform tests
on three phone contrasts from different languages, comparing the results to
infants' discrimination patterns. The five models display varying degrees of
agreement with empirical observations, showing that our approach can help
decide between candidate mechanisms for early phonetic learning, and providing
insight into which aspects of the models are critical for capturing infants'
perceptual development.
| 2,020 | Computation and Language |
Which Kind Is Better in Open-domain Multi-turn Dialog, Hierarchical or
Non-hierarchical Models? An Empirical Study | Currently, open-domain generative dialog systems have attracted considerable
attention in academia and industry. Despite the success of single-turn dialog
generation, multi-turn dialog generation is still a big challenge. So far,
there are two kinds of models for open-domain multi-turn dialog generation:
hierarchical and non-hierarchical models. Recently, some works have shown that
the hierarchical models are better than non-hierarchical models under their
experimental settings; meanwhile, some works also demonstrate the opposite
conclusion. Due to the lack of adequate comparisons, it is not clear which kind
of model is better in open-domain multi-turn dialog generation. Thus, in this
paper, we systematically evaluate nearly all representative hierarchical
and non-hierarchical models under the same experimental settings to determine which
kind is better. Through extensive experiments, we draw the following three
important conclusions: (1) Nearly all hierarchical models are worse than
non-hierarchical models in open-domain multi-turn dialog generation, except for
the HRAN model. Through further analysis, the excellent performance of HRAN
mainly depends on its word-level attention mechanism; (2) The performance of
other hierarchical models also improves greatly when the word-level attention
mechanism is integrated into them. The modified hierarchical
models even significantly outperform the non-hierarchical models; (3) The
word-level attention mechanism is so powerful for hierarchical
models because it can leverage context information more effectively,
especially fine-grained information. Besides, we have implemented all of
the models and released the code.
| 2,020 | Computation and Language |
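The word-level attention credited above for HRAN's strong results attends over every individual context word rather than over per-utterance summaries. The abstract does not give the exact formulation, so the following is only a minimal dot-product sketch of that idea; the tensor names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def word_level_attention(context_word_states, decoder_state):
    """Attend over every context word instead of per-utterance summaries.

    context_word_states: (batch, total_context_words, hidden)
    decoder_state:       (batch, hidden)
    Returns a (batch, hidden) context vector for the next decoding step.
    """
    # Dot-product score between the decoder state and each context word.
    scores = torch.einsum("bh,bwh->bw", decoder_state, context_word_states)
    weights = F.softmax(scores, dim=-1)
    return torch.einsum("bw,bwh->bh", weights, context_word_states)

# Toy usage: 2 dialogues, 7 context words each, hidden size 16.
ctx, dec = torch.randn(2, 7, 16), torch.randn(2, 16)
print(word_level_attention(ctx, dec).shape)  # torch.Size([2, 16])
```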
Data Weighted Training Strategies for Grammatical Error Correction | Recent progress in the task of Grammatical Error Correction (GEC) has been
driven by addressing data sparsity, both through new methods for generating
large and noisy pretraining data and through the publication of small and
higher-quality finetuning data in the BEA-2019 shared task. Building upon
recent work in Neural Machine Translation (NMT), we make use of both kinds of
data by deriving example-level scores on our large pretraining data based on a
smaller, higher-quality dataset. In this work, we perform an empirical study to
discover how to best incorporate delta-log-perplexity, a type of example
scoring, into a training schedule for GEC. In doing so, we perform experiments
that shed light on the function and applicability of delta-log-perplexity.
Models trained on scored data achieve state-of-the-art results on common GEC
test sets.
| 2,020 | Computation and Language |
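The abstract above does not spell out how delta-log-perplexity is computed; a common reading of this kind of example scoring is the change in an example's length-normalized log-perplexity between a base checkpoint and a checkpoint fine-tuned on the small high-quality set. The sketch below follows that assumption, with `sequence_log_prob` left as a hypothetical helper.

```python
def sequence_log_prob(model, source, target):
    """Hypothetical helper: sum of target-token log-probabilities under `model`."""
    raise NotImplementedError

def delta_log_perplexity(base_model, finetuned_model, source, target):
    """Score a noisy pretraining example by how much fine-tuning on clean data
    changed its length-normalized log-perplexity; larger scores suggest the
    example looks more like the high-quality data."""
    n = max(len(target), 1)
    ppl_base = -sequence_log_prob(base_model, source, target) / n
    ppl_finetuned = -sequence_log_prob(finetuned_model, source, target) / n
    # Positive when fine-tuning made the example more likely.
    return ppl_base - ppl_finetuned
```

Such scores could then be used to weight, order, or filter the noisy pretraining examples in the training schedule.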
A Context-based Disambiguation Model for Sentiment Concepts Using a
Bag-of-concepts Approach | With the widespread dissemination of user-generated content on different
social networks, and online consumer systems such as Amazon, the quantity of
opinionated information available on the Internet has increased. One of
the main tasks of sentiment analysis is to detect polarity within a text.
Existing polarity detection methods mainly focus on keywords and their
naive frequency counts; however, they pay less attention to the meanings and implicit
dimensions of natural concepts. Although background knowledge plays a
critical role in determining the polarity of concepts, it has been disregarded
in polarity detection methods. This study presents a context-based model to
solve ambiguous polarity concepts using commonsense knowledge. First, a model
is presented to generate a source of ambiguous sentiment concepts based on
SenticNet by computing the probability distribution. Then the model uses a
bag-of-concepts approach to remove ambiguities, together with semantic augmentation
based on ConceptNet to compensate for lost knowledge. ConceptNet is a large-scale
semantic network with a large number of commonsense concepts. In this paper,
the pointwise mutual information (PMI) measure is used to select the contextual
concepts having strong relationships with ambiguous concepts. The polarity of
the ambiguous concepts is precisely detected using positive/negative contextual
concepts and the relationship of the concepts in the semantic knowledge base.
The text representation scheme is semantically enriched using Numberbatch,
which is a word embedding model based on the concepts from the ConceptNet
semantic network. The proposed model is evaluated on a corpus of
product reviews from SemEval. The experimental results show an accuracy
of 82.07%, demonstrating the effectiveness of the proposed model.
| 2,020 | Computation and Language |
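The PMI-based selection of contextual concepts mentioned above boils down to co-occurrence counting over bag-of-concepts documents; a minimal sketch follows, where the corpus format and any selection threshold are assumptions.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_scores(documents):
    """Compute pointwise mutual information between concept pairs from
    bag-of-concepts documents (each document is a set of concept strings)."""
    n_docs = len(documents)
    single = Counter()
    pair = Counter()
    for doc in documents:
        concepts = set(doc)
        single.update(concepts)
        pair.update(frozenset(p) for p in combinations(sorted(concepts), 2))
    pmi = {}
    for p, c_xy in pair.items():
        x, y = tuple(p)
        p_xy = c_xy / n_docs
        p_x, p_y = single[x] / n_docs, single[y] / n_docs
        pmi[(x, y)] = math.log(p_xy / (p_x * p_y))
    return pmi

# Contextual concepts with strongly positive PMI to an ambiguous concept
# would be kept as disambiguating context.
docs = [{"battery", "long", "small"}, {"battery", "short"}, {"screen", "small"}]
print(pmi_scores(docs))
```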
Perception Score, A Learned Metric for Open-ended Text Generation
Evaluation | Automatic evaluation for open-ended natural language generation tasks remains
a challenge. Existing metrics such as BLEU show a low correlation with human
judgment. We propose a novel and powerful learning-based evaluation metric:
Perception Score. The method measures the overall quality of the generation and
scores it holistically instead of focusing on only one evaluation criterion, such
as word overlap. Moreover, it also reports the amount of uncertainty about
its evaluation result. By incorporating this uncertainty, Perception Score gives a
more accurate evaluation for the generation system. Perception Score provides
state-of-the-art results on two conditional generation tasks and two
unconditional generation tasks.
| 2,020 | Computation and Language |
Privacy Guarantees for De-identifying Text Transformations | Machine Learning approaches to Natural Language Processing tasks benefit from
a comprehensive collection of real-life user data. At the same time, there is a
clear need for protecting the privacy of the users whose data is collected and
processed. For text collections such as transcripts of voice
interactions or patient records, replacing sensitive parts with benign
alternatives can provide de-identification. However, how much privacy is
actually guaranteed by such text transformations, and are the resulting texts
still useful for machine learning? In this paper, we derive formal privacy
guarantees for general text transformation-based de-identification methods on
the basis of Differential Privacy. We also measure the effect that different
ways of masking private information in dialog transcripts have on a subsequent
machine learning task. To this end, we formulate different masking strategies
and compare their privacy-utility trade-offs. In particular, we compare a
simple redact approach with more sophisticated word-by-word replacement using
deep learning models on multiple natural language understanding tasks like
named entity recognition, intent detection, and dialog act classification. We
find that only word-by-word replacement is robust against performance drops in
various tasks.
| 2,022 | Computation and Language |
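As a rough illustration of the two masking strategies compared above, the sketch below contrasts blanket redaction with word-by-word replacement; the span format and the substitution dictionary are assumptions, not the paper's actual pipeline.

```python
def redact(tokens, sensitive_spans, placeholder="[REDACTED]"):
    """Blanket redaction: every sensitive span collapses to one placeholder."""
    out, skip_until = [], -1
    for i, tok in enumerate(tokens):
        span = next(((s, e) for s, e in sensitive_spans if s == i), None)
        if span:
            out.append(placeholder)
            skip_until = span[1]
        elif i >= skip_until:
            out.append(tok)
    return out

def replace_word_by_word(tokens, sensitive_spans, substitutes):
    """Word-by-word replacement: each sensitive token is swapped for a benign
    alternative, preserving sentence structure for downstream NLU tasks."""
    sensitive = {i for s, e in sensitive_spans for i in range(s, e)}
    return [substitutes.get(tok, tok) if i in sensitive else tok
            for i, tok in enumerate(tokens)]

toks = "please send the report to alice smith".split()
spans = [(5, 7)]                       # token indices covering the name
subs = {"alice": "jane", "smith": "doe"}
print(redact(toks, spans))
print(replace_word_by_word(toks, spans, subs))
```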
IMS at SemEval-2020 Task 1: How low can you go? Dimensionality in
Lexical Semantic Change Detection | We present the results of our system for SemEval-2020 Task 1 that exploits a
commonly used lexical semantic change detection model based on Skip-Gram with
Negative Sampling. Our system focuses on Vector Initialization (VI) alignment,
compares VI to the currently top-ranking models for Subtask 2 and demonstrates
that these can be outperformed if we optimize VI dimensionality. We demonstrate
that differences in performance can largely be attributed to model-specific
sources of noise, and we reveal a strong relationship between dimensionality
and frequency-induced noise in VI alignment. Our results suggest that lexical
semantic change models integrating vector space alignment should pay more
attention to the role of the dimensionality parameter.
| 2,020 | Computation and Language |
Quran Intelligent Ontology Construction Approach Using Association Rules
Mining | Ontologies can be seen as formal representations of knowledge. They have been
investigated in many artificial intelligence studies including semantic web,
software engineering, and information retrieval. The aim of ontology is to
develop knowledge representations that can be shared and reused. This research
project is concerned with the use of association rules to extract the Quran
ontology. The manual acquisition of ontologies from Quran verses can be very
costly; therefore, we need an intelligent system for Quran ontology
construction using pattern-based schemes and association rules to discover
Quran concepts and semantics relations from Quran verses. Our system is based
on the combination of statistics and linguistics methods to extract concepts
and conceptual relations from Quran. In particular, a linguistic pattern-based
approach is exploited to extract specific concepts from the Quran, while the
conceptual relations are found based on association rules technique. The Quran
ontology will offer a new and powerful representation of Quran knowledge, and
the association rules will help to represent the relations between all classes
of connected concepts in the Quran ontology.
| 2,020 | Computation and Language |
SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual
Media | In this paper, we present the main findings and compare the results of
SemEval-2020 Task 10, Emphasis Selection for Written Text in Visual Media. The
goal of this shared task is to design automatic methods for emphasis selection,
i.e. choosing candidates for emphasis in textual content to enable automated
design assistance in authoring. The main focus is on short text instances for
social media, with a variety of examples, from social media posts to
inspirational quotes. Participants were asked to model emphasis using plain
text with no additional context from the user or other design considerations.
The SemEval-2020 Emphasis Selection shared task attracted 197 participants in the
early phase, and a total of 31 teams made submissions to this task. The
highest-ranked submission achieved a 0.823 Match_m score. The analysis of systems
submitted to the task indicates that BERT and RoBERTa were the most common
choice of pre-trained models, and the part-of-speech (POS) tag was the most
useful feature. Full results can be found on the task's website.
| 2,020 | Computation and Language |
Learning a natural-language to LTL executable semantic parser for
grounded robotics | Children acquire their native language with apparent ease by observing how
language is used in context and attempting to use it themselves. They do so
without laborious annotations, negative examples, or even direct corrections.
We take a step toward robots that can do the same by training a grounded
semantic parser, which discovers latent linguistic representations that can be
used for the execution of natural-language commands. In particular, we focus on
the difficult domain of commands with a temporal aspect, whose semantics we
capture with Linear Temporal Logic, LTL. Our parser is trained with pairs of
sentences and executions as well as an executor. At training time, the parser
hypothesizes a meaning representation for the input as a formula in LTL. Three
competing pressures allow the parser to discover meaning from language. First,
any hypothesized meaning for a sentence must be permissive enough to reflect
all the annotated execution trajectories. Second, the executor -- a pretrained
end-to-end LTL planner -- must find that the observed trajectories are likely
executions of the meaning. Finally, a generator, which reconstructs the
original input, encourages the model to find representations that conserve
knowledge about the command. Together these ensure that the meaning is neither
too general nor too specific. Our model generalizes well, being able to parse
and execute both machine-generated and human-generated commands, with
near-equal accuracy, despite the fact that the human-generated sentences are
much more varied and complex with an open lexicon. The approach presented here
is not specific to LTL: it can be applied to any domain where sentence meanings
can be hypothesized and an executor can verify these meanings, thus opening the
door to many applications for robotic agents.
| 2,021 | Computation and Language |
Retrofitting Vector Representations of Adverse Event Reporting Data to
Structured Knowledge to Improve Pharmacovigilance Signal Detection | Adverse drug events (ADE) are prevalent and costly. Clinical trials are
constrained in their ability to identify potential ADEs, motivating the
development of spontaneous reporting systems for post-market surveillance.
Statistical methods provide a convenient way to detect signals from these
reports but have limitations in leveraging relationships between drugs and ADEs
given their discrete count-based nature. A previously proposed method, aer2vec,
generates distributed vector representations of ADE report entities that
capture patterns of similarity but cannot utilize lexical knowledge. We address
this limitation by retrofitting aer2vec drug embeddings to knowledge from
RxNorm and developing a novel retrofitting variant using vector rescaling to
preserve magnitude. When evaluated in the context of a pharmacovigilance signal
detection task, aer2vec with retrofitting consistently outperforms
disproportionality metrics when trained on minimally preprocessed data.
Retrofitting with rescaling results in further improvements in the larger and
more challenging of two pharmacovigilance reference sets used for evaluation.
| 2,020 | Computation and Language |
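Retrofitting here follows the general recipe of nudging each embedding toward its knowledge-graph neighbors; the update below is the standard retrofitting iteration, and the magnitude-preserving rescaling step is only a sketch of the variant described above (the weights and iteration count are assumptions).

```python
import numpy as np

def retrofit(embeddings, neighbors, alpha=1.0, beta=1.0, iters=10, rescale=True):
    """Retrofit vectors toward related entities from a knowledge source.

    embeddings: dict name -> np.ndarray (e.g., aer2vec drug vectors)
    neighbors:  dict name -> list of related names (e.g., from RxNorm)
    """
    new = {k: v.copy() for k, v in embeddings.items()}
    for _ in range(iters):
        for word, nbrs in neighbors.items():
            nbrs = [n for n in nbrs if n in new]
            if not nbrs:
                continue
            # Weighted average of the original vector and current neighbor vectors.
            num = alpha * embeddings[word] + beta * sum(new[n] for n in nbrs)
            new[word] = num / (alpha + beta * len(nbrs))
    if rescale:
        # Variant: restore each retrofitted vector to its original magnitude.
        for word in new:
            norm = np.linalg.norm(new[word])
            if norm > 0:
                new[word] *= np.linalg.norm(embeddings[word]) / norm
    return new
```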
Diversifying Task-oriented Dialogue Response Generation with Prototype
Guided Paraphrasing | Existing methods for Dialogue Response Generation (DRG) in Task-oriented
Dialogue Systems (TDSs) can be grouped into two categories: template-based and
corpus-based. The former prepare a collection of response templates in advance
and fill the slots with system actions to produce system responses at runtime.
The latter generate system responses token by token by taking system actions
into account. While template-based DRG methods provide high precision and highly
predictable responses, they usually fall short in generating diverse and
natural responses when compared to (neural) corpus-based approaches.
Conversely, while corpus-based DRG methods are able to generate natural
responses, their precision and predictability cannot be guaranteed. Moreover, the
diversity of responses produced by today's corpus-based DRG methods is still
limited. We propose to combine the merits of template-based and corpus-based
DRGs by introducing a prototype-based, paraphrasing neural network, called
P2-Net, which aims to enhance the quality of responses in terms of both
precision and diversity. Instead of generating a response from scratch, P2-Net
generates system responses by paraphrasing template-based responses. To
guarantee the precision of responses, P2-Net learns to separate a response into
its semantics, context influence, and paraphrasing noise, and to keep the
semantics unchanged during paraphrasing. To introduce diversity, P2-Net
randomly samples previous conversational utterances as prototypes, from which
the model can then extract speaking style information. We conduct extensive
experiments on the MultiWOZ dataset with both automatic and human evaluations.
The results show that P2-Net achieves a significant improvement in diversity
while preserving the semantics of responses.
| 2,020 | Computation and Language |
Assessing Demographic Bias in Named Entity Recognition | Named Entity Recognition (NER) is often the first step towards automated
Knowledge Base (KB) generation from raw text. In this work, we assess the bias
in various Named Entity Recognition (NER) systems for English across different
demographic groups with synthetically generated corpora. Our analysis reveals
that models perform better at identifying names from specific demographic
groups across two datasets. We also identify that debiased embeddings do not
help in resolving this issue. Finally, we observe that character-based
contextualized word representation models such as ELMo result in the least
bias across demographics. Our work can shed light on potential biases in
automated KB generation due to systematic exclusion of named entities belonging
to certain demographics.
| 2,020 | Computation and Language |
Point or Generate Dialogue State Tracker | Dialogue state tracking is a key part of a task-oriented dialogue system,
which estimates the user's goal at each turn of the dialogue. In this paper, we
propose the Point-Or-Generate Dialogue State Tracker (POGD). POGD solves the
dialogue state tracking task in two perspectives: 1) point out explicitly
expressed slot values from the user's utterance, and 2) generate implicitly
expressed ones based on slot-specific contexts. It also shares parameters
across all slots, which achieves knowledge sharing and gains scalability to
large-scale cross-domain dialogues. Moreover, the training process of its
submodules is formulated as a multi-task learning procedure to further promote
its capability of generalization. Experiments show that POGD not only obtains
state-of-the-art results on both WoZ 2.0 and MultiWoZ 2.0 datasets but also has
good generalization on unseen values and new slots.
| 2,020 | Computation and Language |
Adversarial Training with Fast Gradient Projection Method against
Synonym Substitution based Text Attacks | Adversarial training is the most empirically successful approach in improving
the robustness of deep neural networks for image classification. For text
classification, however, existing synonym substitution based adversarial
attacks are effective but too inefficient to be incorporated into practical
adversarial training for text. Gradient-based attacks, which are very efficient for
images, are hard to implement for synonym substitution based text attacks
due to the lexical, grammatical and semantic constraints and the discrete text
input space. Therefore, we propose a fast text adversarial attack method called
Fast Gradient Projection Method (FGPM) based on synonym substitution, which is
about 20 times faster than existing text attack methods and could achieve
similar attack performance. We then incorporate FGPM with adversarial training
and propose a text defense method called Adversarial Training with FGPM
enhanced by Logit pairing (ATFL). Experiments show that ATFL could
significantly improve the model robustness and block the transferability of
adversarial examples.
| 2,020 | Computation and Language |
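FGPM's core idea, as described above, is to score synonym substitutions with a first-order gradient approximation instead of a forward pass per candidate; the sketch below shows only that projection step, and the embedding layout and synonym table are assumptions.

```python
import numpy as np

def best_synonym_substitution(embeddings, grad_wrt_embeddings, synonyms, tokens):
    """Pick the single substitution with the largest projected loss increase.

    embeddings:          dict word -> np.ndarray embedding
    grad_wrt_embeddings: np.ndarray (len(tokens), dim), dL/d(embedding) per position
    synonyms:            dict word -> list of candidate synonyms
    """
    best = (None, None, -np.inf)  # (position, synonym, projected gain)
    for i, word in enumerate(tokens):
        for cand in synonyms.get(word, []):
            # First-order estimate of the loss change when word -> cand.
            delta = embeddings[cand] - embeddings[word]
            gain = float(delta @ grad_wrt_embeddings[i])
            if gain > best[2]:
                best = (i, cand, gain)
    return best
```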
Fast and Accurate Neural CRF Constituency Parsing | Estimating probability distribution is one of the core issues in the NLP
field. However, in both deep learning (DL) and pre-DL eras, unlike the vast
applications of linear-chain CRF in sequence labeling tasks, very few works
have applied tree-structure CRF to constituency parsing, mainly due to the
complexity and inefficiency of the inside-outside algorithm. This work presents
a fast and accurate neural CRF constituency parser. The key idea is to batchify
the inside algorithm for loss computation by direct large tensor operations on
GPU, and meanwhile avoid the outside algorithm for gradient computation via
efficient back-propagation. We also propose a simple two-stage
bracketing-then-labeling parsing approach to improve efficiency further. To
improve the parsing performance, inspired by recent progress in dependency
parsing, we introduce a new scoring architecture based on boundary
representation and biaffine attention, and a beneficial dropout strategy.
Experiments on PTB, CTB5.1, and CTB7 show that our two-stage CRF parser
achieves new state-of-the-art performance on both settings of w/o and w/ BERT,
and can parse over 1,000 sentences per second. We release our code at
https://github.com/yzhangcs/crfpar.
| 2,020 | Computation and Language |
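The batchified inside algorithm referred to above keeps only one Python-level loop, over span widths, and handles all spans of a given width (and the whole batch) with tensor operations, leaving gradient computation to back-propagation instead of the outside algorithm. The sketch below illustrates the idea for a 0th-order span CRF; the tensor layout and scoring granularity are assumptions, not the paper's exact implementation.

```python
import torch

def batched_inside(span_scores):
    """Inside algorithm for a 0th-order span CRF, vectorized per span width.

    span_scores: (batch, n+1, n+1) tensor over fence positions 0..n;
                 span_scores[b, i, j] scores span (i, j), only i < j is used.
    Returns the log-partition value for each sentence, shape (batch,).
    """
    n = span_scores.size(1) - 1
    # inside[w][b, i] holds the inside score of the span (i, i+w).
    inside = {1: span_scores.diagonal(offset=1, dim1=1, dim2=2)}
    for w in range(2, n + 1):
        starts = n + 1 - w        # number of spans of width w
        # Combine a left child (i, i+k) with a right child (i+k, i+w)
        # for every split offset k, all start positions and batch items at once.
        candidates = torch.stack(
            [inside[k][:, :starts] + inside[w - k][:, k:k + starts]
             for k in range(1, w)], dim=-1)
        inside[w] = candidates.logsumexp(dim=-1) \
            + span_scores.diagonal(offset=w, dim1=1, dim2=2)
    return inside[n][:, 0]        # span (0, n) covers the whole sentence

# Toy usage: batch of 2 sentences of length 5 with random span scores.
scores = torch.randn(2, 6, 6, requires_grad=True)
log_z = batched_inside(scores)
log_z.sum().backward()            # gradients via back-propagation, no outside pass
print(log_z.shape, scores.grad.shape)
```

A tree's score minus this log-partition value gives its log-probability, so the CRF loss can be computed without ever running the outside pass explicitly.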
Distilling the Knowledge of BERT for Sequence-to-Sequence ASR | Attention-based sequence-to-sequence (seq2seq) models have achieved promising
results in automatic speech recognition (ASR). However, as these models decode
in a left-to-right way, they do not have access to context on the right. We
leverage both left and right context by applying BERT as an external language
model to seq2seq ASR through knowledge distillation. In our proposed method,
BERT generates soft labels to guide the training of seq2seq ASR. Furthermore,
we leverage context beyond the current utterance as input to BERT. Experimental
evaluations show that our method significantly improves the ASR performance
from the seq2seq baseline on the Corpus of Spontaneous Japanese (CSJ).
Knowledge distillation from BERT outperforms that from a transformer LM that
only looks at left context. We also show the effectiveness of leveraging
context beyond the current utterance. Our method outperforms other LM
application approaches such as n-best rescoring and shallow fusion, while it
does not require extra inference cost.
| 2,020 | Computation and Language |
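The distillation objective sketched below is the generic soft-label formulation: the seq2seq ASR model is trained to match BERT's per-token distributions in addition to the usual cross-entropy. The interpolation weight and temperature are assumptions, since the abstract does not state them.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      alpha=0.5, temperature=2.0):
    """Combine hard-label cross-entropy with KL to the teacher's soft labels.

    student_logits, teacher_logits: (batch, seq_len, vocab)
    targets: (batch, seq_len) gold token ids
    """
    ce = F.cross_entropy(student_logits.transpose(1, 2), targets)
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)
    return (1 - alpha) * ce + alpha * kd
```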
Question Identification in Arabic Language Using Emotional Based
Features | With the growth of content on social media networks, enterprises and service
providers have become interested in identifying the questions of their
customers. Tracking these questions becomes very challenging as the volume of
text grows in direct proportion to the number of Arabic users, making manual
tracking very difficult. By automatically identifying the questions seeking
answers on social media networks and determining their category, we can
automatically answer them by finding an existing answer or even routing them
to those responsible for answering such questions in customer service. This
saves time and effort, enhances customer feedback, and improves the business.
In this paper, we have implemented a binary classifier to classify Arabic text
as either a question seeking an answer or not. We have added emotion-based
features to the state-of-the-art features. Experimental evaluation shows that
these emotional features improve the accuracy of the classifier.
| 2,020 | Computation and Language |
Knowledge Distillation and Data Selection for Semi-Supervised Learning
in CTC Acoustic Models | Semi-supervised learning (SSL) is an active area of research which aims to
utilize unlabelled data in order to improve the accuracy of speech recognition
systems. The current study proposes a methodology for integration of two key
ideas: 1) SSL using connectionist temporal classification (CTC) objective and
teacher-student based learning, and 2) designing effective data-selection mechanisms
for leveraging unlabelled data to boost performance of student models. Our aim
is to establish the importance of good criteria in selecting samples from a
large pool of unlabelled data based on attributes like confidence measure,
speaker and content variability. The question we try to answer is: Is it
possible to design a data selection mechanism which reduces dependence on a
large set of randomly selected unlabelled samples without compromising on Word
Error Rate (WER)? We perform empirical investigations of different data
selection methods to answer this question and quantify the effect of different
sampling strategies. On a semi-supervised ASR setting with 40000 hours of
carefully selected unlabelled data, our CTC-SSL approach gives 17% relative WER
improvement over a baseline CTC system trained with labelled data. It also
achieves on-par performance with a CTC-SSL system trained on an order of magnitude
larger unlabelled data based on random sampling.
| 2,020 | Computation and Language |
On Commonsense Cues in BERT for Solving Commonsense Tasks | BERT has been used for solving commonsense tasks such as CommonsenseQA. While
prior research has found that BERT does contain commonsense information to some
extent, there has been work showing that pre-trained models can rely on
spurious associations (e.g., data bias) rather than key cues in solving
sentiment classification and other problems. We quantitatively investigate the
presence of structural commonsense cues in BERT when solving commonsense tasks,
and the importance of such cues for the model prediction. Using two different
measures, we find that BERT does use relevant knowledge for solving the task,
and the presence of commonsense knowledge is positively correlated to the model
accuracy.
| 2,021 | Computation and Language |
A Large-Scale Chinese Short-Text Conversation Dataset | The advancements of neural dialogue generation models show promising results
on modeling short-text conversations. However, training such models usually
needs a large-scale high-quality dialogue corpus, which is hard to access. In
this paper, we present a large-scale cleaned Chinese conversation dataset,
LCCC, which contains a base version (6.8 million dialogues) and a large version
(12.0 million dialogues). The quality of our dataset is ensured by a rigorous
data cleaning pipeline, which is built based on a set of rules and a classifier
that is trained on 110K manually annotated dialogue pairs. We also release
pre-training dialogue models which are trained on LCCC-base and LCCC-large
respectively. The cleaned dataset and the pre-training models will facilitate
the research of short-text conversation modeling. All the models and datasets
are available at https://github.com/thu-coai/CDial-GPT.
| 2,022 | Computation and Language |
DQI: A Guide to Benchmark Evaluation | A `state of the art' model A surpasses humans in a benchmark B, but fails on
similar benchmarks C, D, and E. What does B have that the other benchmarks do
not? Recent research provides the answer: spurious bias. However, developing A
to solve benchmarks B through E does not guarantee that it will solve future
benchmarks. To progress towards a model that `truly learns' an underlying task,
we need to quantify the differences between successive benchmarks, as opposed
to existing binary and black-box approaches. We propose a novel approach to
solve this underexplored task of quantifying benchmark quality by debuting a
data quality metric: DQI.
| 2,020 | Computation and Language |
KR-BERT: A Small-Scale Korean-Specific Language Model | Since the appearance of BERT, recent works including XLNet and RoBERTa
utilize sentence embedding models pre-trained by large corpora and a large
number of parameters. Because such models require large hardware resources and a huge amount
of data, they take a long time to pre-train. Therefore, it is important to
attempt to make smaller models that perform comparably. In this paper, we
trained a Korean-specific model KR-BERT, utilizing a smaller vocabulary and
dataset. Since Korean is a morphologically rich but low-resource language
written in a non-Latin alphabet, it is also important to capture
language-specific linguistic phenomena that the Multilingual BERT model missed.
We tested several tokenizers including our BidirectionalWordPiece Tokenizer and
adjusted the minimal span of tokens for tokenization ranging from sub-character
level to character-level to construct a better vocabulary for our model. With
those adjustments, our KR-BERT model performed comparably and even better than
other existing pre-trained models using a corpus about 1/10 of the size.
| 2,020 | Computation and Language |
FireBERT: Hardening BERT-based classifiers against adversarial attack | We present FireBERT, a set of three proof-of-concept NLP classifiers hardened
against TextFooler-style word-perturbation by producing diverse alternatives to
original samples. In one approach, we co-tune BERT against the training data
and synthetic adversarial samples. In a second approach, we generate the
synthetic samples at evaluation time through substitution of words and
perturbation of embedding vectors. The diversified evaluation results are then
combined by voting. A third approach replaces evaluation-time word substitution
with perturbation of embedding vectors. We evaluate FireBERT on the MNLI and IMDB
Movie Review datasets, both in their original form and on adversarial examples generated by
TextFooler. We also test whether TextFooler is less successful in creating new
adversarial samples when manipulating FireBERT, compared to working on
unhardened classifiers. We show that it is possible to improve the accuracy of
BERT-based models in the face of adversarial attacks without significantly
reducing the accuracy for regular benchmark samples. We present co-tuning with
a synthetic data generator as a highly effective method to protect against 95%
of pre-manufactured adversarial samples while maintaining 98% of original
benchmark performance. We also demonstrate evaluation-time perturbation as a
promising direction for further research, restoring accuracy up to 75% of
benchmark performance for pre-made adversarials, and up to 65% (from a baseline
of 75% orig. / 12% attack) under active attack by TextFooler.
| 2,020 | Computation and Language |
A Bootstrapped Model to Detect Abuse and Intent in White Supremacist
Corpora | Intelligence analysts face a difficult problem: distinguishing extremist
rhetoric from potential extremist violence. Many are content to express abuse
against some target group, but only a few indicate a willingness to engage in
violence. We address this problem by building a predictive model for intent,
bootstrapping from a seed set of intent words, and language templates
expressing intent. We design both an n-gram and attention-based deep learner
for intent and use them as colearners to improve both the basis for prediction
and the predictions themselves. They converge to stable predictions in a few
rounds. We merge predictions of intent with predictions of abusive language to
detect posts that indicate a desire for violent action. We validate the
predictions by comparing them to crowd-sourced labelling. The methodology can
be applied to other linguistic properties for which a plausible starting point
can be defined.
| 2,020 | Computation and Language |
SemEval-2020 Task 9: Overview of Sentiment Analysis of Code-Mixed Tweets | In this paper, we present the results of the SemEval-2020 Task 9 on Sentiment
Analysis of Code-Mixed Tweets (SentiMix 2020). We also release and describe our
Hinglish (Hindi-English) and Spanglish (Spanish-English) corpora annotated with
word-level language identification and sentence-level sentiment labels. These
corpora consist of 20K and 19K examples, respectively. The sentiment
labels are Positive, Negative, and Neutral. SentiMix attracted 89 submissions
in total including 61 teams that participated in the Hinglish contest and 28
submitted systems to the Spanglish competition. The best performance achieved
was 75.0% F1 score for Hinglish and 80.6% F1 for Spanglish. We observe that
BERT-like models and ensemble methods are the most common and successful
approaches among the participants.
| 2,020 | Computation and Language |
Can We Spot the "Fake News" Before It Was Even Written? | Given the recent proliferation of disinformation online, there has also been
growing research interest in automatically debunking rumors, false claims, and
"fake news." A number of fact-checking initiatives have been launched so far,
both manual and automatic, but the whole enterprise remains in a state of
crisis: by the time a claim is finally fact-checked, it could have reached
millions of users, and the harm caused could hardly be undone. An arguably more
promising direction is to focus on fact-checking entire news outlets, which can
be done in advance. Then, we could fact-check the news before it was even
written: by checking how trustworthy the outlet that published it is. We
describe how we do this in the Tanbih news aggregator, which makes readers
aware of what they are reading. In particular, we develop media profiles that
show the general factuality of reporting, the degree of propagandistic content,
hyper-partisanship, leading political ideology, general frame of reporting, and
stance with respect to various claims and topics.
| 2,020 | Computation and Language |
Topic Adaptation and Prototype Encoding for Few-Shot Visual Storytelling | Visual Storytelling (VIST) is the task of telling a narrative story about a
certain topic according to the given photo stream. The existing studies focus
on designing complex models, which rely on a huge amount of human-annotated
data. However, the annotation of VIST is extremely costly and many topics
cannot be covered in the training dataset due to the long-tail topic
distribution. In this paper, we focus on enhancing the generalization ability
of the VIST model by considering the few-shot setting. Inspired by the way
humans tell a story, we propose a topic adaptive storyteller to model the
ability of inter-topic generalization. In practice, we apply a gradient-based
meta-learning algorithm to multi-modal seq2seq models to endow the model with the
ability to adapt quickly from topic to topic. Besides, we further propose a
prototype encoding structure to model the ability of intra-topic derivation.
Specifically, we encode and restore the few training story texts to serve as a
reference to guide the generation at inference time. Experimental results show
that topic adaptation and the prototype encoding structure jointly benefit
the few-shot model on the BLEU and METEOR metrics. A further case study shows
that the stories generated after few-shot adaptation are more relevant and
expressive.
| 2,020 | Computation and Language |
A Parallel Evaluation Data Set of Software Documentation with Document
Structure Annotation | This paper accompanies the software documentation data set for machine
translation, a parallel evaluation data set of data originating from the SAP
Help Portal, that we released to the machine translation community for research
purposes. It offers the possibility to tune and evaluate machine translation
systems in the domain of corporate software documentation and contributes to
the availability of a wider range of evaluation scenarios. The data set
comprises the language pairs English to Hindi, Indonesian, Malay and Thai,
and thus also increases the test coverage for the many low-resource language
pairs. Unlike most evaluation data sets that consist of plain parallel text,
the segments in this data set come with additional metadata that describes
structural information of the document context. We provide insights into the
origin and creation, the particularities and characteristics of the data set as
well as machine translation results.
| 2,020 | Computation and Language |
A Comparison of Synthetic Oversampling Methods for Multi-class Text
Classification | The authors compared oversampling methods for the problem of multi-class
topic classification. The SMOTE algorithm underlies one of the most popular
oversampling methods. It consists of choosing two examples of a minority class
and generating a new example based on them. In the paper, the authors compared
the basic SMOTE method with its two modifications (Borderline SMOTE and ADASYN)
and the random oversampling technique on an example text classification
task. The paper discusses the k-nearest neighbor algorithm, the support vector
machine algorithm and three types of neural networks (feedforward network, long
short-term memory (LSTM) and bidirectional LSTM). The authors combine these
machine learning algorithms with different text representations and compare
synthetic oversampling methods. In most cases, the use of oversampling
techniques can significantly improve the quality of classification. The authors
conclude that for this task, the quality of the KNN and SVM algorithms is more
influenced by class imbalance than that of the neural networks.
| 2,020 | Computation and Language |
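The SMOTE step described above is just linear interpolation between a minority example and one of its nearest minority-class neighbors; the sketch below is a minimal version of that procedure (the neighbor count and sampling details are common defaults, not taken from the paper).

```python
import numpy as np

def smote(minority, n_new, k=5, rng=np.random.default_rng(0)):
    """Generate n_new synthetic minority examples by interpolating between
    a sampled example and one of its k nearest minority-class neighbors.

    minority: (n_samples, n_features) array of minority-class feature vectors.
    """
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        # k nearest neighbors of x within the minority class (excluding itself).
        dists = np.linalg.norm(minority - x, axis=1)
        neighbors = np.argsort(dists)[1:k + 1]
        neighbor = minority[rng.choice(neighbors)]
        gap = rng.random()                      # interpolation factor in [0, 1)
        synthetic.append(x + gap * (neighbor - x))
    return np.array(synthetic)

# Toy usage: oversample a 10-example minority class with 5 synthetic points.
X_min = np.random.default_rng(1).normal(size=(10, 3))
print(smote(X_min, n_new=5).shape)  # (5, 3)
```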
A Neural Generative Model for Joint Learning Topics and Topic-Specific
Word Embeddings | We propose a novel generative model to explore both local and global context
for joint learning topics and topic-specific word embeddings. In particular, we
assume that global latent topics are shared across documents, a word is
generated by a hidden semantic vector encoding its contextual semantic meaning,
and its context words are generated conditional on both the hidden semantic
vector and global latent topics. Topics are trained jointly with the word
embeddings. The trained model maps words to topic-dependent embeddings, which
naturally addresses the issue of word polysemy. Experimental results show that
the proposed model outperforms the word-level embedding methods in both word
similarity evaluation and word sense disambiguation. Furthermore, the model
also extracts more coherent topics compared with existing neural topic models
or other models for joint learning of topics and word embeddings. Finally, the
model can be easily integrated with existing deep contextualized word embedding
learning methods to further improve the performance of downstream tasks such as
sentiment classification.
| 2,020 | Computation and Language |
Hybrid Ranking Network for Text-to-SQL | In this paper, we study how to leverage pre-trained language models in
Text-to-SQL. We argue that previous approaches underutilize the base language
models by concatenating all columns together with the NL question and feeding
them into the base language model in the encoding stage. We propose a neat
approach called Hybrid Ranking Network (HydraNet) which breaks down the problem
into column-wise ranking and decoding and finally assembles the column-wise
outputs into a SQL query by straightforward rules. In this approach, the
encoder is given an NL question and one individual column, which perfectly
aligns with the original tasks BERT/RoBERTa is trained on, and hence we avoid
any ad-hoc pooling or additional encoding layers which are necessary in prior
approaches. Experiments on the WikiSQL dataset show that the proposed approach
is very effective, achieving the top place on the leaderboard.
| 2,020 | Computation and Language |
LTIatCMU at SemEval-2020 Task 11: Incorporating Multi-Level Features for
Multi-Granular Propaganda Span Identification | In this paper we describe our submission for the task of Propaganda Span
Identification in news articles. We introduce a BERT-BiLSTM based span-level
propaganda classification model that identifies which token spans within the
sentence are indicative of propaganda. The "multi-granular" model incorporates
linguistic knowledge at various levels of text granularity, including word,
sentence and document level syntactic, semantic and pragmatic affect features,
which significantly improve model performance, compared to its
language-agnostic variant. To facilitate better representation learning, we
also collect a corpus of 10k news articles, and use it for fine-tuning the
model. The final model is a majority-voting ensemble which learns different
propaganda class boundaries by leveraging different subsets of incorporated
knowledge and attains $4^{th}$ position on the test leaderboard. Our final
model and code are released at https://github.com/sopu/PropagandaSemEval2020.
| 2,020 | Computation and Language |
Revisiting Low Resource Status of Indian Languages in Machine
Translation | Indian language machine translation performance is hampered due to the lack
of large scale multi-lingual sentence aligned corpora and robust benchmarks.
Through this paper, we provide and analyse an automated framework to obtain
such a corpus for Indian language neural machine translation (NMT) systems. Our
pipeline consists of a baseline NMT system, a retrieval module, and an
alignment module that is used to work with publicly available websites such as
press releases by the government. The main contribution towards this effort is
to obtain an incremental method that uses the above pipeline to iteratively
improve the size of the corpus as well as improve each of the components of our
system. Through our work, we also evaluate the design choices such as the
choice of pivoting language and the effect of iterative incremental increase in
corpus size. In addition to providing an automated framework, our work also
produces a larger corpus than the existing
corpora available for Indian languages. This corpus helps us obtain
substantially improved results on the publicly available WAT evaluation
benchmark and other standard evaluation benchmarks.
| 2,021 | Computation and Language |
The Sockeye 2 Neural Machine Translation Toolkit at AMTA 2020 | We present Sockeye 2, a modernized and streamlined version of the Sockeye
neural machine translation (NMT) toolkit. New features include a simplified
code base through the use of MXNet's Gluon API, a focus on state of the art
model architectures, distributed mixed precision training, and efficient CPU
decoding with 8-bit quantization. These improvements result in faster training
and inference, higher automatic metric scores, and a shorter path from research
to production.
| 2,020 | Computation and Language |
Paraphrase Generation as Zero-Shot Multilingual Translation:
Disentangling Semantic Similarity from Lexical and Syntactic Diversity | Recent work has shown that a multilingual neural machine translation (NMT)
model can be used to judge how well a sentence paraphrases another sentence in
the same language (Thompson and Post, 2020); however, attempting to generate
paraphrases from such a model using standard beam search produces trivial
copies or near copies. We introduce a simple paraphrase generation algorithm
which discourages the production of n-grams that are present in the input. Our
approach enables paraphrase generation in many languages from a single
multilingual NMT model. Furthermore, the amount of lexical diversity between
the input and output can be controlled at generation time. We conduct a human
evaluation to compare our method to a paraphraser trained on the large English
synthetic paraphrase database ParaBank 2 (Hu et al., 2019c) and find that our
method produces paraphrases that better preserve meaning and are more
grammatical, for the same level of lexical diversity. Additional smaller human
assessments demonstrate our approach also works in two non-English languages.
| 2,020 | Computation and Language |
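The generation constraint described above, discouraging n-grams that already appear in the input, can be applied as a simple log-probability penalty at each decoding step; a minimal sketch follows, where the penalty value and n-gram order are assumptions.

```python
def penalize_copied_ngrams(next_token_logprobs, generated, input_tokens,
                           n=4, penalty=10.0):
    """Down-weight candidate tokens that would complete an n-gram already
    present in the source sentence, discouraging trivial copies.

    next_token_logprobs: dict token -> log-probability at the current step
    generated:           list of tokens produced so far
    input_tokens:        list of source-sentence tokens
    """
    source_ngrams = {tuple(input_tokens[i:i + n])
                     for i in range(len(input_tokens) - n + 1)}
    prefix = tuple(generated[-(n - 1):]) if n > 1 else ()
    adjusted = {}
    for tok, lp in next_token_logprobs.items():
        if len(prefix) == n - 1 and prefix + (tok,) in source_ngrams:
            lp = lp - penalty
        adjusted[tok] = lp
    return adjusted

# Toy usage inside a greedy or beam decoding loop:
src = "the cat sat on the mat".split()
step_logprobs = {"mat": -0.1, "rug": -1.2, "floor": -1.5}
print(penalize_copied_ngrams(step_logprobs, ["cat", "sat", "on", "the"], src))
```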
Distantly Supervised Relation Extraction in Federated Settings | This paper investigates distantly supervised relation extraction in federated
settings. Previous studies focus on distant supervision under the assumption of
centralized training, which requires collecting texts from different platforms
and storing them on one machine. However, centralized training is challenged by
two issues, namely, data barriers and privacy protection, which make it almost
impossible or cost-prohibitive to centralize data from multiple platforms.
Therefore, it is worthwhile to investigate distant supervision in the federated
learning paradigm, which decouples the model training from the need for direct
access to the raw data. Overcoming label noise of distant supervision, however,
becomes more difficult in federated settings, since the sentences containing
the same entity pair may scatter around different platforms. In this paper, we
propose a federated denoising framework to suppress label noise in federated
settings. The core of this framework is a multiple instance learning based
denoising method that is able to select reliable instances via cross-platform
collaboration. Extensive experimental results on the New York Times dataset and the miRNA
gene regulation relation dataset demonstrate the effectiveness of the proposed
method.
| 2,020 | Computation and Language |
The Annotation Guideline of LST20 Corpus | This report presents the annotation guideline for LST20, a large-scale corpus
with multiple layers of linguistic annotation for Thai language processing. Our
guideline consists of five layers of linguistic annotation: word segmentation,
POS tagging, named entities, clause boundaries, and sentence boundaries. The
dataset follows the CoNLL-2003-style format for ease of use. As noted, the LST20 Corpus
offers these five layers of linguistic annotation at a large scale: it consists
of 3,164,864 words, 288,020 named entities, 248,962 clauses,
and 74,180 sentences, and is annotated with 16 distinct POS tags. All
3,745 documents are also annotated with 15 news genres. Regarding its sheer
size, this dataset is considered large enough for developing joint neural
models for NLP. With the existence of this publicly available corpus, Thai has
become a linguistically rich language for the first time.
| 2,020 | Computation and Language |
The Language Interpretability Tool: Extensible, Interactive
Visualizations and Analysis for NLP Models | We present the Language Interpretability Tool (LIT), an open-source platform
for visualization and understanding of NLP models. We focus on core questions
about model behavior: Why did my model make this prediction? When does it
perform poorly? What happens under a controlled change in the input? LIT
integrates local explanations, aggregate analysis, and counterfactual
generation into a streamlined, browser-based interface to enable rapid
exploration and error analysis. We include case studies for a diverse set of
workflows, including exploring counterfactuals for sentiment analysis,
measuring gender bias in coreference systems, and exploring local behavior in
text generation. LIT supports a wide range of models--including classification,
seq2seq, and structured prediction--and is highly extensible through a
declarative, framework-agnostic API. LIT is under active development, with code
and full documentation available at https://github.com/pair-code/lit.
| 2,020 | Computation and Language |
Modeling Inter-Aspect Dependencies with a Non-temporal Mechanism for
Aspect-Based Sentiment Analysis | For the multiple-aspect scenario of aspect-based sentiment analysis (ABSA),
existing approaches typically ignore inter-aspect relations or rely on temporal
dependencies to process aspect-aware representations of all aspects in a
sentence. Although multiple aspects of a sentence appear in a non-adjacent
sequential order, they do not form a strict temporal relationship the way a natural
language sequence does; thus, the aspect-aware sentence representations should not be
processed as temporal dependencies. In this paper, we propose a novel
non-temporal mechanism to enhance the ABSA task through modeling inter-aspect
dependencies. Furthermore, we focus on the well-known class imbalance issue on
the ABSA task and address it by down-weighting the loss assigned to
well-classified instances. Experiments on two distinct domains of SemEval 2014
task 4 demonstrate the effectiveness of our proposed approach.
| 2,022 | Computation and Language |
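The abstract does not name the exact loss used to down-weight well-classified instances; one standard choice with that behavior is a focal-loss-style modulation, sketched below as an illustration rather than the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def down_weighted_loss(logits, targets, gamma=2.0):
    """Cross-entropy scaled by (1 - p_correct)^gamma so that confidently
    well-classified examples contribute little to the gradient.

    logits:  (batch, num_classes)
    targets: (batch,) class indices
    """
    log_probs = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_probs, targets, reduction="none")   # per-example CE
    p_correct = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    return ((1.0 - p_correct) ** gamma * ce).mean()

# Toy usage: a confidently correct example receives a much smaller weight.
logits = torch.tensor([[4.0, -2.0, -2.0], [0.2, 0.1, 0.0]])
targets = torch.tensor([0, 2])
print(down_weighted_loss(logits, targets))
```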
Evaluating the Impact of Knowledge Graph Context on Entity
Disambiguation Models | Pretrained Transformer models have emerged as state-of-the-art approaches
that learn contextual information from text to improve the performance of
several NLP tasks. These models, albeit powerful, still require specialized
knowledge in specific scenarios. In this paper, we argue that context derived
from a knowledge graph (in our case: Wikidata) provides enough signals to
inform pretrained transformer models and improve their performance for named
entity disambiguation (NED) on Wikidata KG. We further hypothesize that our
proposed KG context can be standardized for Wikipedia, and we evaluate the
impact of KG context on state-of-the-art NED model for the Wikipedia knowledge
base. Our empirical results validate that the proposed KG context can be
generalized (for Wikipedia), and providing KG context in transformer
architectures considerably outperforms the existing baselines, including the
vanilla transformer models.
| 2,020 | Computation and Language |
OCoR: An Overlapping-Aware Code Retriever | Code retrieval helps developers reuse code snippets in open-source
projects. Given a natural language description, code retrieval aims to search
for the most relevant code among a set of code. Existing state-of-the-art
approaches apply neural networks to code retrieval. However, these approaches
still fail to capture an important feature: overlaps. The overlaps between
different names used by different people indicate that two different names may
be potentially related (e.g., "message" and "msg"), and the overlaps between
identifiers in code and words in natural language descriptions indicate that
the code snippet and the description may potentially be related. To address
these problems, we propose a novel neural architecture named OCoR, where we
introduce two specifically-designed components to capture overlaps: the first
embeds identifiers by character to capture the overlaps between identifiers,
and the second introduces a novel overlap matrix to represent the degrees of
overlaps between each natural language word and each identifier.
The evaluation was conducted on two established datasets. The experimental
results show that OCoR significantly outperforms the existing state-of-the-art
approaches and achieves 13.1% to 22.3% improvements. Moreover, we also
conducted several in-depth experiments to help understand the performance of
different components in OCoR.
| 2,020 | Computation and Language |
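The overlap matrix described above records, for every pair of description word and code identifier, how much the two strings overlap at the character level. The exact overlap measure is not specified in the abstract, so the sketch below uses difflib's matching-block ratio as one plausible choice.

```python
from difflib import SequenceMatcher

def overlap_matrix(description_words, identifiers):
    """Degree of character overlap between each NL word and each identifier,
    e.g., 'message' vs 'msg'. Values are in [0, 1]."""
    def overlap(a, b):
        a, b = a.lower(), b.lower()
        # Total length of matching character blocks, normalized by the shorter string.
        match = sum(blk.size
                    for blk in SequenceMatcher(None, a, b).get_matching_blocks())
        return match / max(1, min(len(a), len(b)))
    return [[overlap(w, ident) for ident in identifiers] for w in description_words]

words = ["send", "message", "to", "user"]
idents = ["msg", "send_msg", "user_id"]
for row in overlap_matrix(words, idents):
    print([round(v, 2) for v in row])
```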
Compression of Deep Learning Models for Text: A Survey | In recent years, the fields of natural language processing (NLP) and
information retrieval (IR) have made tremendous progress thanks to deep learning
models like Recurrent Neural Networks (RNNs), Gated Recurrent Units (GRUs) and
Long Short-Term Memory (LSTM) networks, and Transformer [120] based models like
Bidirectional Encoder Representations from Transformers (BERT) [24],
Generative Pre-training Transformer (GPT-2) [94], Multi-task Deep Neural Network
(MT-DNN) [73], Extra-Long Network (XLNet) [134], Text-to-text transfer
transformer (T5) [95], T-NLG [98] and GShard [63]. But these models are
humongous in size. On the other hand, real world applications demand small model
size, low response times and low computational power wattage. In this survey,
we discuss six different types of methods (Pruning, Quantization, Knowledge
Distillation, Parameter Sharing, Tensor Decomposition, and Sub-quadratic
Transformer based methods) for compression of such models to enable their
deployment in real industry NLP projects. Given the critical need of building
applications with efficient and small models, and the large amount of recently
published work in this area, we believe that this survey organizes the plethora
of work done by the 'deep learning for NLP' community in the past few years and
presents it as a coherent story.
| 2,021 | Computation and Language |
Text Classification based on Multi-granularity Attention Hybrid Neural
Network | Neural network-based approaches have become the driven forces for Natural
Language Processing (NLP) tasks. Conventionally, there are two mainstream
neural architectures for NLP tasks: the recurrent neural network (RNN) and the
convolution neural network (ConvNet). RNNs are good at modeling long-term
dependencies over input texts, but preclude parallel computation. ConvNets do
not have memory capability and have to model sequential data as unordered
features. Therefore, ConvNets fail to learn sequential dependencies over the
input texts, but they are able to carry out highly efficient parallel computation.
As each neural architecture, such as RNNs and ConvNets, has its own pros and cons,
integration of different architectures is assumed to be able to enrich the
semantic representation of texts, thus enhance the performance of NLP tasks.
However, few investigations explore the reconciliation of these seemingly
incompatible architectures. To address this issue, we propose a hybrid
architecture based on a novel hierarchical multi-granularity attention
mechanism, named Multi-granularity Attention-based Hybrid Neural Network
(MahNN). The attention mechanism assigns different weights to different
parts of the input sequence to increase the computation efficiency and
performance of neural models. In MahNN, two types of attentions are introduced:
the syntactical attention and the semantical attention. The syntactical
attention computes the importance of the syntactic elements (such as words or
sentences) at the lower symbolic level, and the semantical attention is used to
compute the importance of the embedded space dimension corresponding to the
upper latent semantics. We adopt text classification as an example task
to illustrate the ability of MahNN to understand texts.
| 2,020 | Computation and Language |
Variance-reduced Language Pretraining via a Mask Proposal Network | Self-supervised learning, a.k.a., pretraining, is important in natural
language processing. Most of the pretraining methods first randomly mask some
positions in a sentence and then train a model to recover the tokens at the
masked positions. In such a way, the model can be trained without human
labeling, and massive data can be used to train models with billions of parameters. Therefore,
the optimization efficiency becomes critical. In this paper, we tackle the
problem from the view of gradient variance reduction. In particular, we first
propose a principled gradient variance decomposition theorem, which shows that
the variance of the stochastic gradient of the language pretraining can be
naturally decomposed into two terms: the variance that arises from the sample
of data in a batch, and the variance that arises from the sampling of the mask.
The second term is the key difference between self-supervised learning and
supervised learning, which makes the pretraining slower. In order to reduce the
variance of the second part, we leverage the importance sampling strategy,
which aims at sampling the masks according to a proposal distribution instead
of the uniform distribution. It can be shown that if the proposal distribution
is proportional to the gradient norm, the variance of the sampling is reduced.
To improve efficiency, we introduce a MAsk Proposal Network (MAPNet), which
approximates the optimal mask proposal distribution and is trained end-to-end
along with the model. According to the experimental results, our model converges
much faster and achieves higher performance than the baseline BERT model.
| 2,020 | Computation and Language |
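The importance-sampling idea in the abstract above, sampling mask positions from a proposal distribution instead of uniformly while keeping the objective unbiased, can be sketched as follows. The function and its inputs are hypothetical stand-ins for the MAPNet scores, not the paper's implementation.

```python
import torch

def sample_masks_with_importance(proposal_logits: torch.Tensor, num_masks: int):
    """Sample mask positions from a proposal distribution and return the
    importance weights that keep the masked-LM loss unbiased. The proposal
    logits would come from a MAPNet-like network; here they are arbitrary."""
    proposal = torch.softmax(proposal_logits, dim=-1)        # q(i): proposal over positions
    seq_len = proposal.shape[-1]
    uniform = torch.full_like(proposal, 1.0 / seq_len)       # p(i): the original uniform masking
    positions = torch.multinomial(proposal, num_masks, replacement=False)
    # Importance weight p(i)/q(i) so that the expected gradient matches uniform masking.
    weights = uniform.gather(-1, positions) / proposal.gather(-1, positions)
    return positions, weights

logits = torch.randn(32)  # per-token proposal scores for a 32-token sentence
pos, w = sample_masks_with_importance(logits, num_masks=4)
print(pos.tolist(), w.tolist())
```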
Approaching Neural Chinese Word Segmentation as a Low-Resource Machine
Translation Task | Chinese word segmentation has entered the deep learning era which greatly
reduces the hassle of feature engineering. Recently, some researchers attempted
to treat it as character-level translation, which further simplified model
designing, but there is a performance gap between the translation-based
approach and other methods. This motivates our work, in which we apply the best
practices from low-resource neural machine translation to supervised Chinese
segmentation. We examine a series of techniques including regularization, data
augmentation, objective weighting, transfer learning, and ensembling. Compared
to previous works, our low-resource translation-based method maintains the
effortless model design, yet achieves the same result as the state of the art
in the constrained evaluation without using additional data.
| 2,022 | Computation and Language |
Model Robustness with Text Classification: Semantic-preserving
adversarial attacks | We propose algorithms to create adversarial attacks to assess model
robustness in text classification problems. They can be used to create white
box attacks and black box attacks while at the same time preserving the
semantics and syntax of the original text. The attacks cause a significant
number of label flips in the white-box setting, and the same rule-based
approach can be used in the black-box setting. In the black-box setting, the
attacks created are able to reverse decisions of transformer-based
architectures.
| 2,020 | Computation and Language |
Ranking Enhanced Dialogue Generation | How to effectively utilize the dialogue history is a crucial problem in
multi-turn dialogue generation. Previous works usually employ various neural
network architectures (e.g., recurrent neural networks, attention mechanisms,
and hierarchical structures) to model the history. However, a recent empirical
study by Sankar et al. has shown that these architectures lack the ability of
understanding and modeling the dynamics of the dialogue history. For example,
the widely used architectures are insensitive to perturbations of the dialogue
history, such as word shuffling, missing utterances, and utterance reordering.
To tackle this problem, we propose a Ranking Enhanced Dialogue generation
framework in this paper. In addition to the traditional representation encoder
and response generation modules, a ranking module is
introduced to model the ranking relation between the former utterance and
consecutive utterances. Specifically, the former utterance and consecutive
utterances are treated as query and corresponding documents, and both local and
global ranking losses are designed in the learning process. In this way, the
dynamics in the dialogue history can be explicitly captured. To evaluate our
proposed models, we conduct extensive experiments on three public datasets,
i.e., bAbI, PersonaChat, and JDC. Experimental results show that our models
produce better responses in terms of both quantitative measures and human
judgments, as compared with the state-of-the-art dialogue generation models.
Furthermore, we give some detailed experimental analysis to show where and how
the improvements come from.
| 2,020 | Computation and Language |
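A minimal sketch of the local/global ranking losses described above, assuming the former utterance acts as a query whose true consecutive utterance should score above a more distant utterance from the same dialogue (local) and an utterance from another dialogue (global). The pairing scheme and the cosine scorer are illustrative assumptions, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def ranking_losses(query, next_utt, later_utt, other_dialog_utt, margin=0.2):
    """Margin-based sketch of local and global ranking losses over utterance
    embeddings: the true next utterance should outrank a distant utterance
    (local) and an utterance from a different dialogue (global)."""
    def score(a, b):
        return F.cosine_similarity(a, b, dim=-1)

    pos = score(query, next_utt)
    local_loss = F.relu(margin - pos + score(query, later_utt)).mean()
    global_loss = F.relu(margin - pos + score(query, other_dialog_utt)).mean()
    return local_loss + global_loss

q, nxt, later, other = (torch.randn(4, 128) for _ in range(4))
print(ranking_losses(q, nxt, later, other).item())
```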
Cognitive Representation Learning of Self-Media Online Article Quality | The automatic quality assessment of self-media online articles is an urgent
and new issue, which is of great value to online recommendation and search.
Different from traditional, well-formed articles, self-media online articles
are mainly created by users; they exhibit surface characteristics such as
varied text levels and multi-modal hybrid editing, along with latent
characteristics such as diverse content, different styles, large semantic
spans, and demands for a good interactive experience. To address these
challenges, we establish a joint model, CoQAN, that combines layout
organization, writing characteristics, and text semantics, designing different
representation learning subnetworks especially tailored to the feature learning
process and interactive reading habits on mobile terminals. This design is more
consistent with the cognitive style in which an expert evaluates articles. We have also
constructed a large scale real-world assessment dataset. Extensive experimental
results show that the proposed framework significantly outperforms
state-of-the-art methods, and effectively learns and integrates different
factors of the online article quality assessment.
| 2,020 | Computation and Language |
Dialogue State Induction Using Neural Latent Variable Models | Dialogue state modules are a useful component in a task-oriented dialogue
system. Traditional methods find dialogue states by manually labeling training
corpora, upon which neural models are trained. However, the labeling process
can be costly, slow, error-prone, and more importantly, cannot cover the vast
range of domains in real-world dialogues for customer service. We propose the
task of dialogue state induction, building two neural latent variable models
that mine dialogue states automatically from unlabeled customer service
dialogue records. Results show that the models can effectively find meaningful
slots. In addition, equipped with induced dialogue states, a state-of-the-art
dialogue system gives better performance compared with not using a dialogue
state module.
| 2,020 | Computation and Language |
Exploration of Gender Differences in COVID-19 Discourse on Reddit | Decades of research on differences in the language of men and women have
established postulates about preferences in lexical, topical, and emotional
expression between the two genders, along with their sociological
underpinnings. Using a novel dataset of male and female linguistic productions
collected from the Reddit discussion platform, we further confirm existing
assumptions about gender-linked affective distinctions, and demonstrate that
these distinctions are amplified in social media postings involving
emotionally-charged discourse related to COVID-19. Our analysis also confirms
considerable differences in topical preferences between male and female authors
in spontaneous pandemic-related discussions.
| 2,020 | Computation and Language |
MICE: Mining Idioms with Contextual Embeddings | Idiomatic expressions can be problematic for natural language processing
applications as their meaning cannot be inferred from their constituent words.
A lack of successful methodological approaches and sufficiently large datasets
prevents the development of machine learning approaches for detecting idioms,
especially for expressions that do not occur in the training set. We present an
approach, called MICE, that uses contextual embeddings for that purpose. We
present a new dataset of multi-word expressions with literal and idiomatic
meanings and use it to train a classifier based on two state-of-the-art
contextual word embeddings: ELMo and BERT. We show that deep neural networks
using both embeddings perform much better than existing approaches, and are
capable of detecting idiomatic word use, even for expressions that were not
present in the training set. We demonstrate cross-lingual transfer of developed
models and analyze the size of the required dataset.
| 2,021 | Computation and Language |
MASRI-HEADSET: A Maltese Corpus for Speech Recognition | Maltese, the national language of Malta, is spoken by approximately 500,000
people. Speech processing for Maltese is still in its early stages of
development. In this paper, we present the first spoken Maltese corpus designed
purposely for Automatic Speech Recognition (ASR). The MASRI-HEADSET corpus was
developed by the MASRI project at the University of Malta. It consists of 8
hours of speech paired with text, recorded by using short text snippets in a
laboratory environment. The speakers were recruited from different geographical
locations all over the Maltese islands, and were roughly evenly distributed by
gender. This paper also presents some initial results achieved in baseline
experiments for Maltese ASR using Sphinx and Kaldi. The MASRI-HEADSET Corpus is
publicly available for research/academic purposes.
| 2,020 | Computation and Language |
On the Importance of Local Information in Transformer Based Models | The self-attention module is a key component of Transformer-based models,
wherein each token pays attention to every other token. Recent studies have
shown that these heads exhibit syntactic, semantic, or local behaviour. Some
studies have also identified promise in restricting this attention to be local,
i.e., a token attending to other tokens only in a small neighbourhood around
it. However, no conclusive evidence exists that such local attention alone is
sufficient to achieve high accuracy on multiple NLP tasks. In this work, we
systematically analyse the role of locality information in learnt models and
contrast it with the role of syntactic information. More specifically, we first
do a sensitivity analysis and show that, at every layer, the representation of
a token is much more sensitive to tokens in a small neighborhood around it than
to tokens which are syntactically related to it. We then define an attention
bias metric to determine whether a head pays more attention to local tokens or
to syntactically related tokens. We show that a larger fraction of heads have a
locality bias as compared to a syntactic bias. Having established the
importance of local attention heads, we train and evaluate models where varying
fractions of the attention heads are constrained to be local. Such models would
be more efficient as they would have fewer computations in the attention layer.
We evaluate these models on 4 GLUE datasets (QQP, SST-2, MRPC, QNLI) and 2 MT
datasets (En-De, En-Ru) and clearly demonstrate that such constrained models
have comparable performance to the unconstrained models. Through this
systematic evaluation we establish that attention in Transformer-based models
can be constrained to be local without affecting performance.
| 2,020 | Computation and Language |
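Constraining a self-attention head to be local, as studied in the paper above, amounts to masking attention scores outside a fixed window before the softmax. A minimal sketch, with the window size as a free hyperparameter (the choice of window is an assumption, not the paper's setting):

```python
import torch

def local_attention_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask allowing each token to attend only to tokens within
    `window` positions of itself; filling the disallowed scores with -inf
    before the softmax yields a locally constrained attention head."""
    positions = torch.arange(seq_len)
    dist = (positions.unsqueeze(0) - positions.unsqueeze(1)).abs()
    return dist <= window  # (seq_len, seq_len), True where attention is allowed

scores = torch.randn(8, 8)                        # raw attention scores for 8 tokens
mask = local_attention_mask(seq_len=8, window=2)
scores = scores.masked_fill(~mask, float("-inf"))
attn = torch.softmax(scores, dim=-1)              # each row only covers a 5-token neighbourhood
print(attn[0])
```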
Commonsense Knowledge Graph Reasoning by Selection or Generation? Why? | Commonsense knowledge graph reasoning (CKGR) is the task of predicting a
missing entity given one existing entity and the relation in a commonsense
knowledge graph (CKG). Existing methods can be classified into two categories:
generation methods and selection methods. Each has its own advantages. We
theoretically and empirically compare the two methods, finding that the
selection method is more suitable than the generation method in CKGR. Given
this observation, we further combine the structure of a neural Text Encoder and
Knowledge Graph Embedding models to solve the selection method's two problems,
achieving competitive results. We provide a basic framework and baseline model
for subsequent CKGR work based on selection methods.
| 2,020 | Computation and Language |
Studying Dishonest Intentions in Brazilian Portuguese Texts | Previous work in the social sciences, psychology and linguistics has shown
that liars have some control over the content of their stories; however, their
underlying state of mind may "leak out" through the way that they tell them. To
the best of our knowledge, no previous systematic effort exists in order to
describe and model deception language for Brazilian Portuguese. To fill this
important gap, we carry out an initial empirical linguistic study on false
statements in Brazilian news. We methodically analyze linguistic features using
a deceptive news corpus, which includes both fake and true news. The results
show that fake and true news present substantial lexical, syntactic and
semantic variations, as well as distinctions in punctuation and emotion.
| 2,021 | Computation and Language |
Speech To Semantics: Improve ASR and NLU Jointly via All-Neural
Interfaces | We consider the problem of spoken language understanding (SLU): extracting
natural language intents and associated slot arguments or named entities from
speech that is primarily directed at voice assistants. Such a system subsumes
both automatic speech recognition (ASR) as well as natural language
understanding (NLU). An end-to-end joint SLU model can be built to a required
specification opening up the opportunity to deploy on hardware constrained
scenarios like devices enabling voice assistants to work offline, in a privacy
preserving manner, whilst also reducing server costs.
We first present models that extract utterance intent directly from speech
without intermediate text output. We then present a compositional model, which
generates the transcript using the Listen Attend Spell ASR system and then
extracts interpretation using a neural NLU model. Finally, we contrast these
methods to a jointly trained end-to-end joint SLU model, consisting of ASR and
NLU subsystems which are connected by a neural network based interface instead
of text, that produces transcripts as well as NLU interpretation. We show that
the jointly trained model shows improvements to ASR incorporating semantic
information from NLU and also improves NLU by exposing it to ASR confusion
encoded in the hidden layer.
| 2,020 | Computation and Language |
Language Models as Few-Shot Learner for Task-Oriented Dialogue Systems | Task-oriented dialogue systems use four connected modules, namely Natural
Language Understanding (NLU), Dialogue State Tracking (DST), Dialogue Policy
(DP), and Natural Language Generation (NLG). A research challenge is to learn
each module with the least amount of samples (i.e., few-shots) given the high
cost related to the data collection. The most common and effective technique to
solve this problem is transfer learning, where large language models, either
pre-trained on text or task-specific data, are fine-tuned on the few samples.
These methods require fine-tuning steps and a set of parameters for each task.
Differently, language models, such as GPT-2 (Radford et al., 2019) and GPT-3
(Brown et al., 2020), allow few-shot learning by priming the model with few
examples. In this paper, we evaluate the priming few-shot ability of language
models in the NLU, DST, DP and NLG tasks. Importantly, we highlight the current
limitations of this approach, and we discuss the possible implication for
future work.
| 2,020 | Computation and Language |
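Priming-based few-shot learning, as evaluated above, amounts to prepending a handful of labelled examples to the input and letting the language model complete the pattern. A rough sketch for the NLU (intent detection) case using the Hugging Face transformers API; the prompt format and intent labels are illustrative assumptions, not the paper's setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Prime a causal LM (GPT-2 here) with two labelled utterances and ask it to
# complete the intent of a third one. Labels and formatting are made up.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = (
    "utterance: book a table for two tonight => intent: restaurant_booking\n"
    "utterance: what's the weather in Rome => intent: weather_query\n"
    "utterance: play some jazz music => intent:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=5, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```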
Unsupervised vs. transfer learning for multimodal one-shot matching of
speech and images | We consider the task of multimodal one-shot speech-image matching. An agent
is shown a picture along with a spoken word describing the object in the
picture, e.g. cookie, broccoli and ice-cream. After observing one paired
speech-image example per class, it is shown a new set of unseen pictures, and
asked to pick the "ice-cream". Previous work attempted to tackle this problem
using transfer learning: supervised models are trained on labelled background
data not containing any of the one-shot classes. Here we compare transfer
learning to unsupervised models trained on unlabelled in-domain data. On a
dataset of paired isolated spoken and visual digits, we specifically compare
unsupervised autoencoder-like models to supervised classifier and Siamese
neural networks. In both unimodal and multimodal few-shot matching experiments,
we find that transfer learning outperforms unsupervised training. We also
present experiments towards combining the two methodologies, but find that
transfer learning still performs best (despite idealised experiments showing
the benefits of unsupervised learning).
| 2,020 | Computation and Language |
Graph-based Modeling of Online Communities for Fake News Detection | Over the past few years, there has been a substantial effort towards
automated detection of fake news on social media platforms. Existing research
has modeled the structure, style, content, and patterns in dissemination of
online posts, as well as the demographic traits of users who interact with
them. However, no attention has been directed towards modeling the properties
of online communities that interact with the posts. In this work, we propose a
novel social context-aware fake news detection framework, SAFER, based on graph
neural networks (GNNs). The proposed framework aggregates information with
respect to: 1) the nature of the content disseminated, 2) content-sharing
behavior of users, and 3) the social network of those users. We furthermore
perform a systematic comparison of several GNN models for this task and
introduce novel methods based on relational and hyperbolic GNNs, which have not
been previously used for user or community modeling within NLP. We empirically
demonstrate that our framework yields significant improvements over existing
text-based techniques and achieves state-of-the-art results on fake news
datasets from two different domains.
| 2,020 | Computation and Language |
ANDES at SemEval-2020 Task 12: A jointly-trained BERT multilingual model
for offensive language detection | This paper describes our participation in SemEval-2020 Task 12: Multilingual
Offensive Language Detection. We jointly-trained a single model by fine-tuning
Multilingual BERT to tackle the task across all the proposed languages:
English, Danish, Turkish, Greek and Arabic. Our single model had competitive
results, with a performance close to top-performing systems in spite of sharing
the same parameters across all languages. Zero-shot and few-shot experiments
were also conducted to analyze the transference performance among these
languages. We make our code public for further research.
| 2,020 | Computation and Language |
Predicting Event Time by Classifying Sub-Level Temporal Relations
Induced from a Unified Representation of Time Anchors | Extracting event time from news articles is a challenging but attractive
task. In contrast to most existing pair-wise temporal link annotation,
Reimers et al.(2016) proposed to annotate the time anchor (a.k.a. the exact
time) of each event. Their work represents time anchors with discrete
representations of Single-Day/Multi-Day and Certain/Uncertain. This increases
the complexity of modeling the temporal relations between two time anchors,
which cannot be categorized into the relations of Allen's interval algebra
(Allen, 1990).
In this paper, we propose an effective method to decompose such complex
temporal relations into sub-level relations by introducing a unified quadruple
representation for both Single-Day/Multi-Day and Certain/Uncertain time
anchors. The temporal relation classifiers are trained in a multi-label
classification manner. The system structure of our approach is much simpler
than the existing decision tree model (Reimers et al., 2018), which is composed
of a dozen node classifiers. Another contribution of this work is to
construct a larger event time corpus (256 news documents) with a reasonable
Inter-Annotator Agreement (IAA), for the purpose of overcoming the data
shortage of the existing event time corpus (36 news documents). The empirical
results show that our approach outperforms the state-of-the-art decision tree
model and that increasing the data size yields a significant improvement in
performance.
| 2,020 | Computation and Language |
Quantification of BERT Diagnosis Generalizability Across Medical
Specialties Using Semantic Dataset Distance | Deep learning models in healthcare may fail to generalize on data from unseen
corpora. Additionally, no quantitative metric exists to tell how existing
models will perform on new data. Previous studies demonstrated that NLP models
of medical notes generalize variably between institutions, but ignored other
levels of healthcare organization. We measured SciBERT diagnosis sentiment
classifier generalizability between medical specialties using EHR sentences
from MIMIC-III. Models trained on one specialty performed better on internal
test sets than mixed or external test sets (mean AUCs 0.92, 0.87, and 0.83,
respectively; p = 0.016). When models are trained on more specialties, they
have better test performances (p < 1e-4). Model performance on new corpora is
directly correlated to the similarity between train and test sentence content
(p < 1e-4). Future studies should assess additional axes of generalization to
ensure deep learning models fulfil their intended purpose across institutions,
specialties, and practices.
| 2,021 | Computation and Language |
Label-Wise Document Pre-Training for Multi-Label Text Classification | A major challenge of multi-label text classification (MLTC) is to
simultaneously exploit possible label differences and label correlations. In
this paper, we tackle this challenge by developing Label-Wise Pre-Training
(LW-PT) method to get a document representation with label-aware information.
The basic idea is that, a multi-label document can be represented as a
combination of multiple label-wise representations, and that, correlated labels
always co-occur in the same or similar documents. LW-PT implements this idea by
constructing label-wise document classification tasks and trains label-wise
document encoders. Finally, the pre-trained label-wise encoder is fine-tuned
with the downstream MLTC task. Extensive experimental results validate that the
proposed method has significant advantages over the previous state-of-the-art
models and is able to discover reasonable label relationships. The code is
released to facilitate other researchers.
| 2,020 | Computation and Language |
Deep Search Query Intent Understanding | Understanding a user's query intent behind a search is critical for modern
search engine success. Accurate query intent prediction allows the search
engine to better serve the user's need by rendering results from more relevant
categories. This paper aims to provide a comprehensive learning framework for
modeling query intent under different stages of a search. We focus on the
design for 1) predicting users' intents as they type in queries on-the-fly in
typeahead search using character-level models; and 2) accurate word-level
intent prediction models for complete queries. Various deep learning components
for query text understanding are experimented with. Offline evaluation and online
A/B test experiments show that the proposed methods are effective in
understanding query intent and efficient to scale for online search systems.
| 2,020 | Computation and Language |
Is Supervised Syntactic Parsing Beneficial for Language Understanding?
An Empirical Investigation | Traditional NLP has long held (supervised) syntactic parsing necessary for
successful higher-level semantic language understanding (LU). The recent advent
of end-to-end neural models, self-supervised via language modeling (LM), and
their success on a wide range of LU tasks, however, questions this belief. In
this work, we empirically investigate the usefulness of supervised parsing for
semantic LU in the context of LM-pretrained transformer networks. Relying on
the established fine-tuning paradigm, we first couple a pretrained transformer
with a biaffine parsing head, aiming to infuse explicit syntactic knowledge
from Universal Dependencies treebanks into the transformer. We then fine-tune
the model for LU tasks and measure the effect of the intermediate parsing
training (IPT) on downstream LU task performance. Results from both monolingual
English and zero-shot language transfer experiments (with intermediate
target-language parsing) show that explicit formalized syntax, injected into
transformers through IPT, has very limited and inconsistent effect on
downstream LU performance. Our results, coupled with our analysis of
transformers' representation spaces before and after intermediate parsing, make
a significant step towards providing answers to an essential question: how
(un)availing is supervised parsing for high-level semantic natural language
understanding in the era of large neural models?
| 2,021 | Computation and Language |
SGG: Spinbot, Grammarly and GloVe based Fake News Detection | Recently, news consumption using online news portals has increased
exponentially due to several reasons, such as low cost and easy accessibility.
However, such online platforms inadvertently also become the cause of spreading
false information across the web. They are being misused quite frequently as a
medium to disseminate misinformation and hoaxes. Such malpractices call for a
robust automatic fake news detection system that can keep such misinformation
and hoaxes at bay. We propose a robust yet simple fake news detection
system, leveraging the tools for paraphrasing, grammar-checking, and
word-embedding. In this paper, we explore the potential of these tools in
jointly unearthing the authenticity of a news article. Notably, we leverage
Spinbot (for paraphrasing), Grammarly (for grammar-checking), and GloVe (for
word-embedding) tools for this purpose. Using these tools, we were able to
extract novel features that could yield state-of-the-art results on the Fake
News AMT dataset and comparable results on Celebrity datasets when combined
with some of the essential features. More importantly, the proposed method is
found to be more robust empirically than the existing ones, as revealed in our
cross-domain analysis and multi-domain analysis.
| 2,020 | Computation and Language |
TextDecepter: Hard Label Black Box Attack on Text Classifiers | Machine learning has been proven to be susceptible to carefully crafted
samples, known as adversarial examples. The generation of these adversarial
examples helps to make the models more robust and gives us an insight into the
underlying decision-making of these models. Over the years, researchers have
successfully attacked image classifiers in both, white and black-box settings.
However, these methods are not directly applicable to texts as text data is
discrete. In recent years, research on crafting adversarial examples against
textual applications has been on the rise. In this paper, we present a novel
approach for hard-label black-box attacks against Natural Language Processing
(NLP) classifiers, where no model information is disclosed, and an attacker can
only query the model to get a final decision of the classifier, without
confidence scores of the classes involved. Such an attack scenario applies to
real-world black-box models being used for security-sensitive applications such
as sentiment analysis and toxic content detection.
| 2,020 | Computation and Language |
Discovering Lexical Similarity Through Articulatory Feature-based
Phonetic Edit Distance | Lexical Similarity (LS) between two languages uncovers many interesting
linguistic insights such as genetic relationship, mutual intelligibility, and
the usage of one language's vocabulary in another. There are various methods through
which LS is evaluated. In the same regard, this paper presents a method of
Phonetic Edit Distance (PED) that uses a soft comparison of letters using the
articulatory features associated with them. The system converts the words into
the corresponding International Phonetic Alphabet (IPA), followed by the
conversion of IPA into its set of articulatory features. Later, the lists of
the set of articulatory features are compared using the proposed method. As an
example, PED gives the edit distance of the German word vater and the Persian
word pidar as 0.82, and similarly the Hebrew word shalom and the Arabic word
salaam as 0.93, whereas, for comparison, their plain IPA-based edit distances
are 4 and 2, respectively. Experiments are performed with six languages (Arabic, Hindi,
Marathi, Persian, Sanskrit, and Urdu). In this regard, we extracted
part-of-speech-wise word lists from the Universal Dependencies corpora and
evaluated the LS for every pair of languages. Thus, with the proposed approach, we find the
genetic affinity, similarity, and borrowing/loan-words despite having script
differences and sound variation phenomena among these languages.
| 2,022 | Computation and Language |
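The soft letter comparison behind PED can be sketched as a standard edit-distance dynamic program whose substitution cost shrinks when two phones share articulatory features. The tiny feature inventory and the 1 − Jaccard substitution cost below are illustrative assumptions, not the paper's exact formulation.

```python
def phonetic_edit_distance(word1, word2, features):
    """Edit distance where substituting phones a, b costs 1 minus the Jaccard
    overlap of their articulatory feature sets, so similar sounds cost less
    than a full edit; insertions and deletions cost 1."""
    def sub_cost(a, b):
        fa, fb = features[a], features[b]
        return 1.0 - len(fa & fb) / len(fa | fb)

    n, m = len(word1), len(word2)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i
    for j in range(1, m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,                                   # deletion
                d[i][j - 1] + 1,                                   # insertion
                d[i - 1][j - 1] + sub_cost(word1[i - 1], word2[j - 1]),
            )
    return d[n][m]

# Toy articulatory features for a handful of phones (illustrative only).
FEATURES = {
    "f": {"labiodental", "fricative", "voiceless"},
    "p": {"bilabial", "plosive", "voiceless"},
    "a": {"open", "front", "vowel"},
    "t": {"alveolar", "plosive", "voiceless"},
    "d": {"alveolar", "plosive", "voiced"},
    "e": {"mid", "front", "vowel"},
    "r": {"alveolar", "approximant", "voiced"},
    "i": {"close", "front", "vowel"},
}

print(phonetic_edit_distance(list("fater"), list("pidar"), FEATURES))
```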
TopicBERT: A Transformer transfer learning based memory-graph approach
for multimodal streaming social media topic detection | The real-time nature of social networks, with bursty short messages at large
data scale spread across a vast variety of topics, is of interest to many
researchers. These properties of social networks, known as the 5 Vs of big
data, have led to many unique and enlightening algorithms and techniques
applied to large social networking datasets and data streams. Many of these
studies are based on the detection and tracking of hot topics and trending
social media events that help to answer many open questions. These algorithms,
and in some cases software products, mostly rely on the nature of the language
itself. Although other techniques, such as unsupervised data mining methods,
are language-independent, many requirements for a comprehensive solution remain
unmet. Research issues such as noisy sentences with poor grammar and newly
invented words of online users challenge the maintenance of a good social
network topic detection and tracking methodology; the semantic relationships
between words and, in most cases, synonyms are also ignored by many of these
studies. In this research, we use Transformers combined with an incremental
community detection algorithm. The Transformer, on the one hand, provides the
semantic relation between words in different contexts. On the other hand, the
proposed graph mining technique enhances the resulting topics with the aid of
simple structural rules. Named entity recognition from multimodal data (image
and text) labels the named entities with their entity type, and the extracted
topics are tuned using them. All operations of the proposed system have been
applied from a big social data perspective using NoSQL technologies. In order
to present a working and systematic solution, we combined MongoDB with Neo4j as
the two major database systems of our work. The proposed system shows higher
precision and recall compared to other methods on three different datasets.
| 2,021 | Computation and Language |
DCR-Net: A Deep Co-Interactive Relation Network for Joint Dialog Act
Recognition and Sentiment Classification | In dialog systems, dialog act recognition and sentiment classification are two
correlative tasks to capture speakers' intentions, where dialog act and
sentiment indicate the explicit and the implicit intentions, respectively.
Most of the existing systems either treat them as separate tasks or just
jointly model the two tasks by sharing parameters in an implicit way without
explicitly modeling mutual interaction and relation. To address this problem,
we propose a Deep Co-Interactive Relation Network (DCR-Net) to explicitly
consider the cross-impact and model the interaction between the two tasks by
introducing a co-interactive relation layer. In addition, the proposed relation
layer can be stacked to gradually capture mutual knowledge with multiple steps
of interaction. In particular, we thoroughly study different relation layers and
their effects. Experimental results on two public datasets (Mastodon and
Dailydialog) show that our model outperforms the state-of-the-art joint model
by 4.3% and 3.4% in terms of F1 score on dialog act recognition task, 5.7% and
12.4% on sentiment classification respectively. Comprehensive analysis
empirically verifies the effectiveness of explicitly modeling the relation
between the two tasks and the multi-steps interaction mechanism. Finally, we
employ the Bidirectional Encoder Representation from Transformer (BERT) in our
framework, which can further boost our performance in both tasks.
| 2,020 | Computation and Language |
OpenFraming: We brought the ML; you bring the data. Interact with your
data and discover its frames | When journalists cover a news story, they can cover the story from multiple
angles or perspectives. A news article written about COVID-19 for example,
might focus on personal preventative actions such as mask-wearing, while
another might focus on COVID-19's impact on the economy. These perspectives are
called "frames," which when used may influence public perception and opinion of
the issue. We introduce a Web-based system for analyzing and classifying frames
in text documents. Our goal is to make effective tools for automatic frame
discovery and labeling based on topic modeling and deep learning widely
accessible to researchers from a diverse array of disciplines. To this end, we
provide both state-of-the-art pre-trained frame classification models on
various issues as well as a user-friendly pipeline for training novel
classification models on user-provided corpora. Researchers can submit their
documents and obtain frames of the documents. The degree of user involvement is
flexible: they can run models that have been pre-trained on select issues;
submit labeled documents and train a new model for frame classification; or
submit unlabeled documents and obtain potential frames of the documents. The
code making up our system is also open-sourced and well-documented, making the
system transparent and expandable. The system is available on-line at
http://www.openframing.org and via our GitHub page
https://github.com/davidatbu/openFraming .
| 2,020 | Computation and Language |
Efficient Knowledge Graph Validation via Cross-Graph Representation
Learning | Recent advances in information extraction have motivated the automatic
construction of huge Knowledge Graphs (KGs) by mining from large-scale text
corpora. However, noisy facts caused by automatic extraction are unavoidably
introduced into KGs. To validate the correctness of facts (i.e.,
triplets) inside a KG, one possible approach is to map the triplets into vector
representations by capturing the semantic meanings of facts. Although many
representation learning approaches have been developed for knowledge graphs,
these methods are not effective for validation. They usually assume that facts
are correct, and thus may overfit noisy facts and fail to detect such facts.
Towards effective KG validation, we propose to leverage an external
human-curated KG as auxiliary information source to help detect the errors in a
target KG. The external KG is built upon human-curated knowledge repositories
and tends to have high precision. On the other hand, although the target KG
built by information extraction from texts has low precision, it can cover new
or domain-specific facts that are not in any human-curated repositories. To
tackle this challenging task, we propose a cross-graph representation learning
framework, i.e., CrossVal, which can leverage an external KG to validate the
facts in the target KG efficiently. This is achieved by embedding triplets
based on their semantic meanings, drawing cross-KG negative samples and
estimating a confidence score for each triplet based on its degree of
correctness. We evaluate the proposed framework on datasets across different
domains. Experimental results show that the proposed framework achieves the
best performance compared with the state-of-the-art methods on large-scale KGs.
| 2,020 | Computation and Language |
Adding Recurrence to Pretrained Transformers for Improved Efficiency and
Context Size | Fine-tuning a pretrained transformer for a downstream task has become a
standard method in NLP in the last few years. While the results from these
models are impressive, applying them can be extremely computationally
expensive, as is pretraining new models with the latest architectures. We
present a novel method for applying pretrained transformer language models
which lowers their memory requirement both at training and inference time. An
additional benefit is that our method removes the fixed context size constraint
that most transformer models have, allowing for more flexible use. When applied
to the GPT-2 language model, we find that our method attains better perplexity
than an unmodified GPT-2 model on the PG-19 and WikiText-103 corpora, for a
given amount of computation or memory.
| 2,020 | Computation and Language |
Logical Semantics, Dialogical Argumentation, and Textual Entailment | In this chapter, we introduce a new dialogical system for first order
classical logic which is close to natural language argumentation, and we prove
its completeness with respect to usual classical validity. We combine our
dialogical system with the Grail syntactic and semantic parser developed by the
second author in order to address automated textual entailment, that is, we use
it for deciding whether or not a sentence is a consequence of a short text.
This work-which connects natural language semantics and argumentation with
dialogical logic-can be viewed as a step towards an inferentialist view of
natural language semantics.
| 2,020 | Computation and Language |
Comparison of Syntactic Parsers on Biomedical Texts | Syntactic parsing is an important step in the automated text analysis which
aims at information extraction. Quality of the syntactic parsing determines to
a large extent the recall and precision of the text mining results. In this
paper we evaluate the performance of several popular syntactic parsers in
application to the biomedical text mining.
| 2,020 | Computation and Language |
BUT-FIT at SemEval-2020 Task 4: Multilingual commonsense | This paper describes the work of the BUT-FIT team at SemEval-2020 Task 4 -
Commonsense Validation and Explanation. We participated in all three subtasks.
In subtasks A and B, our submissions are based on pretrained language
representation models (namely ALBERT) and data augmentation. We experimented
with solving the task for another language, Czech, by means of multilingual
models and a machine-translated dataset, or translated model inputs. We show that
with a strong machine translation system, our system can be used in another
language with a small accuracy loss. In subtask C, our submission, which is
based on the pretrained sequence-to-sequence model BART, ranked 1st in the BLEU
score ranking; however, we show that the correlation between BLEU and human
evaluation, in which our submission ended up 4th, is low. We analyse the
metrics used in the evaluation and propose an additional score based on the
model from subtask B, which correlates well with our manual ranking, as well as
a reranking method based on the same principle. We performed an error and dataset
analysis for all subtasks and we present our findings.
| 2,020 | Computation and Language |
A Survey of Active Learning for Text Classification using Deep Neural
Networks | Natural language processing (NLP) and neural networks (NNs) have both
undergone significant changes in recent years. For active learning (AL)
purposes, NNs are, however, less commonly used -- despite their current
popularity. By using the superior text classification performance of NNs for
AL, we can either increase a model's performance using the same amount of data
or reduce the data and therefore the required annotation efforts while keeping
the same performance. We review AL for text classification using deep neural
networks (DNNs) and elaborate on two main causes which used to hinder the
adoption: (a) the inability of NNs to provide reliable uncertainty estimates,
on which the most commonly used query strategies rely, and (b) the challenge of
training DNNs on small data. To investigate the former, we construct a taxonomy
of query strategies, which distinguishes between data-based, model-based, and
prediction-based instance selection, and investigate the prevalence of these
classes in recent research. Moreover, we review recent NN-based advances in NLP
like word embeddings or language models in the context of (D)NNs, survey the
current state-of-the-art at the intersection of AL, text classification, and
DNNs and relate recent advances in NLP to AL. Finally, we analyze recent work
in AL for text classification, connect the respective query strategies to the
taxonomy, and outline commonalities and shortcomings. As a result, we highlight
gaps in current research and present open research questions.
| 2,020 | Computation and Language |
Evaluating for Diversity in Question Generation over Text | Generating diverse and relevant questions over text is a task with widespread
applications. We argue that commonly-used evaluation metrics such as BLEU and
METEOR are not suitable for this task due to the inherent diversity of
reference questions, and propose a scheme for extending conventional metrics to
reflect diversity. We furthermore propose a variational encoder-decoder model
for this task. We show through automatic and human evaluation that our
variational model improves diversity without loss of quality, and demonstrate
how our evaluation scheme reflects this improvement.
| 2,020 | Computation and Language |
HunFlair: An Easy-to-Use Tool for State-of-the-Art Biomedical Named
Entity Recognition | Summary: Named Entity Recognition (NER) is an important step in biomedical
information extraction pipelines. Tools for NER should be easy to use, cover
multiple entity types, be highly accurate, and be robust towards variations in
text genre and style. To this end, we propose HunFlair, an NER tagger covering
multiple entity types integrated into the widely used NLP framework Flair.
HunFlair outperforms other state-of-the-art standalone NER tools with an
average gain of 7.26 pp over the next best tool, can be installed with a single
command and is applied with only four lines of code. Availability: HunFlair is
freely available through the Flair framework under an MIT license:
https://github.com/flairNLP/flair and is compatible with all major operating
systems. Contact: {weberple,saengema,alan.akbik}@informatik.hu-berlin.de
| 2,020 | Computation and Language |
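A hedged usage example for HunFlair via the Flair framework; the tagger class and model name follow the HunFlair documentation as recalled, so treat them as assumptions and consult the linked repository for the current API.

```python
# Assumed Flair/HunFlair API; verify class names against the repository above.
from flair.data import Sentence
from flair.models import MultiTagger

tagger = MultiTagger.load("hunflair")  # loads taggers for several biomedical entity types
sentence = Sentence("Mutations in the BRCA1 gene increase the risk of breast cancer.")
tagger.predict(sentence)

for span in sentence.get_spans():
    print(span)  # recognized entities with their predicted types
```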
Narrative Interpolation for Generating and Understanding Stories | We propose a method for controlled narrative/story generation where we are
able to guide the model to produce coherent narratives with user-specified
target endings by interpolation: for example, we are told that Jim went hiking
and at the end Jim needed to be rescued, and we want the model to incrementally
generate steps along the way. The core of our method is an interpolation model
based on GPT-2 which conditions on a previous sentence and a next sentence in a
narrative and fills in the gap. Additionally, a reranker helps control for
coherence of the generated text. With human evaluation, we show that
ending-guided generation results in narratives which are coherent, faithful to
the given ending guide, and require less manual effort on the part of the human
guide writer than past approaches.
| 2,020 | Computation and Language |
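A rough sketch of ending-guided interpolation with an off-the-shelf GPT-2 via Hugging Face transformers; the prompt layout is an assumption for illustration, whereas the paper fine-tunes GPT-2 on (previous sentence, next sentence) → gap examples and adds a coherence reranker.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Condition on the previous sentence and the target ending, then fill the gap.
previous = "Jim went hiking in the mountains."
ending = "In the end, Jim needed to be rescued."
prompt = f"{previous} Ending: {ending} What happened in between:"

inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```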
Learning to Create Better Ads: Generation and Ranking Approaches for Ad
Creative Refinement | In the online advertising industry, the process of designing an ad creative
(i.e., ad text and image) requires manual labor. Typically, each advertiser
launches multiple creatives via online A/B tests to infer effective creatives
for the target audience, that are then refined further in an iterative fashion.
Due to the manual nature of this process, it is time-consuming to learn,
refine, and deploy the modified creatives. Since major ad platforms typically
run A/B tests for multiple advertisers in parallel, we explore the possibility
of collaboratively learning ad creative refinement via A/B tests of multiple
advertisers. In particular, given an input ad creative, we study approaches to
refine the given ad text and image by: (i) generating new ad text, (ii)
recommending keyphrases for new ad text, and (iii) recommending image tags
(objects in image) to select new ad image. Based on A/B tests conducted by
multiple advertisers, we form pairwise examples of inferior and superior ad
creatives, and use such pairs to train models for the above tasks. For
generating new ad text, we demonstrate the efficacy of an encoder-decoder
architecture with copy mechanism, which allows some words from the (inferior)
input text to be copied to the output while incorporating new words associated
with higher click-through-rate. For the keyphrase and image tag recommendation
task, we demonstrate the efficacy of a deep relevance matching model, as well
as the relative robustness of ranking approaches compared to ad text generation
in cold-start scenarios with unseen advertisers. We also share broadly
applicable insights from our experiments using data from the Yahoo Gemini ad
platform.
| 2,020 | Computation and Language |
Emotion Carrier Recognition from Personal Narratives | Personal Narratives (PN) - recollections of facts, events, and thoughts from
one's own experience - are often used in everyday conversations. So far, PNs
have mainly been explored for tasks such as valence prediction or emotion
classification (e.g. happy, sad). However, these tasks might overlook more
fine-grained information that could prove to be relevant for understanding PNs.
In this work, we propose a novel task for Narrative Understanding: Emotion
Carrier Recognition (ECR). Emotion carriers, the text fragments that carry the
emotions of the narrator (e.g. loss of a grandpa, high school reunion), provide
a fine-grained description of the emotion state. We explore the task of ECR in
a corpus of PNs manually annotated with emotion carriers and investigate
different machine learning models for the task. We propose evaluation
strategies for ECR including metrics that can be appropriate for different
tasks.
| 2,021 | Computation and Language |
Stock Index Prediction with Multi-task Learning and Word Polarity Over
Time | Sentiment-based stock prediction systems aim to explore sentiment or event
signals from online corpora and attempt to relate the signals to stock price
variations. Both the feature-based and neural-networks-based approaches have
delivered promising results. However, the frequently minor fluctuations of the
stock prices restrict learning the sentiment of text from price patterns, and
learning market sentiment from text can be biased if the text is irrelevant to
the underlying market. In addition, when using discrete word features, the
polarity of a certain term can change over time according to different events.
To address these issues, we propose a two-stage system that consists of a
sentiment extractor to extract the opinion on the market trend and a summarizer
that predicts the direction of the index movement of the following week given the
opinions of the news over the current week. We adopt BERT with multitask
learning which additionally predicts the worthiness of the news and propose a
metric called Polarity-Over-Time to extract the word polarity among different
event periods. A Weekly-Monday prediction framework and a new dataset, the
10-year Reuters financial news dataset, are also proposed.
| 2,020 | Computation and Language |
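One simple way to realize a Polarity-Over-Time style statistic is to estimate, per event period, how often a word co-occurs with upward versus downward index moves. The formula below is an illustrative assumption and may differ from the paper's definition.

```python
from collections import Counter, defaultdict

def polarity_over_time(period_news, weekly_direction):
    """For each period, estimate a word's polarity as the normalized difference
    between its counts in up-weeks and down-weeks of that period; an
    illustrative stand-in for the paper's Polarity-Over-Time metric."""
    polarity = defaultdict(dict)
    for period, docs in period_news.items():
        up, down = Counter(), Counter()
        for week, text in docs:
            bucket = up if weekly_direction[week] > 0 else down
            bucket.update(text.lower().split())
        for word in set(up) | set(down):
            polarity[period][word] = (up[word] - down[word]) / (up[word] + down[word])
    return polarity

news = {"2019Q1": [(1, "tariffs hit exporters"), (2, "rally lifts markets")],
        "2019Q2": [(3, "tariffs lifted markets rally")]}
direction = {1: -1, 2: +1, 3: +1}
print(polarity_over_time(news, direction)["2019Q1"]["rally"])  # 1.0 in this toy example
```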
Are Neural Open-Domain Dialog Systems Robust to Speech Recognition
Errors in the Dialog History? An Empirical Study | Large end-to-end neural open-domain chatbots are becoming increasingly
popular. However, research on building such chatbots has typically assumed that
the user input is written text, and it is not clear whether these chatbots
would seamlessly integrate with automatic speech recognition (ASR) models to
serve the speech modality. We aim to bring attention to this important question
by empirically studying the effects of various types of synthetic and actual
ASR hypotheses in the dialog history on TransferTransfo, a state-of-the-art
Generative Pre-trained Transformer (GPT) based neural open-domain dialog system
from the NeurIPS ConvAI2 challenge. We observe that TransferTransfo trained on
written data is very sensitive to such hypotheses introduced to the dialog
history during inference time. As a baseline mitigation strategy, we introduce
synthetic ASR hypotheses to the dialog history during training and observe
marginal improvements, demonstrating the need for further research into
techniques to make end-to-end open-domain chatbots fully speech-robust. To the
best of our knowledge, this is the first study to evaluate the effects of
synthetic and actual ASR hypotheses on a state-of-the-art neural open-domain
dialog system and we hope it promotes speech-robustness as an evaluation
criterion in open-domain dialog.
| 2,020 | Computation and Language |
NASE: Learning Knowledge Graph Embedding for Link Prediction via Neural
Architecture Search | Link prediction is the task of predicting missing connections between
entities in the knowledge graph (KG). While various forms of models are
proposed for the link prediction task, most of them are designed based on a few
known relation patterns in several well-known datasets. Due to the diverse and
complex nature of real-world KGs, it is inherently difficult to
design a model that fits all datasets well. To address this issue, previous
work has tried to use Automated Machine Learning (AutoML) to search for the
best model for a given dataset. However, their search space is limited only to
bilinear model families. In this paper, we propose a novel Neural Architecture
Search (NAS) framework for the link prediction task. First, the embeddings of
the input triplet are refined by the Representation Search Module. Then, the
prediction score is searched within the Score Function Search Module. This
framework entails a more general search space, which enables us to take
advantage of several mainstream model families, and thus it can potentially
achieve better performance. We relax the search space to be continuous so that
the architecture can be optimized efficiently using gradient-based search
strategies. Experimental results on several benchmark datasets demonstrate the
effectiveness of our method compared with several state-of-the-art approaches.
| 2,020 | Computation and Language |
Very Deep Transformers for Neural Machine Translation | We explore the application of very deep Transformer models for Neural Machine
Translation (NMT). Using a simple yet effective initialization technique that
stabilizes training, we show that it is feasible to build standard
Transformer-based models with up to 60 encoder layers and 12 decoder layers.
These deep models outperform their baseline 6-layer counterparts by as much as
2.5 BLEU, and achieve new state-of-the-art benchmark results on WMT14
English-French (43.8 BLEU and 46.4 BLEU with back-translation) and WMT14
English-German (30.1 BLEU). The code and trained models will be publicly
available at: https://github.com/namisan/exdeep-nmt.
| 2,020 | Computation and Language |
COVID-SEE: Scientific Evidence Explorer for COVID-19 Related Research | We present COVID-SEE, a system for medical literature discovery based on the
concept of information exploration, which builds on several distinct text
analysis and natural language processing methods to structure and organise
information in publications, and augments search by providing a visual overview
supporting exploration of a collection to identify key articles of interest. We
developed this system over COVID-19 literature to help medical professionals
and researchers explore the literature evidence, and improve findability of
relevant information. COVID-SEE is available at http://covid-see.com.
| 2,020 | Computation and Language |
Glancing Transformer for Non-Autoregressive Neural Machine Translation | Recent work on non-autoregressive neural machine translation (NAT) aims at
improving the efficiency by parallel decoding without sacrificing the quality.
However, existing NAT methods are either inferior to Transformer or require
multiple decoding passes, leading to reduced speedup. We propose the Glancing
Language Model (GLM), a method to learn word interdependency for single-pass
parallel generation models. With GLM, we develop Glancing Transformer (GLAT)
for machine translation. With only single-pass parallel decoding, GLAT is able
to generate high-quality translation with 8-15 times speedup. Experiments on
multiple WMT language directions show that GLAT outperforms all previous single
pass non-autoregressive methods, and is nearly comparable to Transformer,
reducing the gap to 0.25-0.9 BLEU points.
| 2,021 | Computation and Language |
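The glancing sampling step of GLAT can be sketched as: run a first parallel decoding pass, measure how far it is from the reference, and feed a proportional fraction of reference tokens back as decoder input for a second pass. The token-level sketch below only shows the sampling logic; the real model glances at token embeddings during training, and the sampling ratio schedule is an assumption.

```python
import random

def glancing_targets(prediction, reference, ratio=0.5):
    """Compare a first-pass parallel prediction with the reference, then reveal
    a fraction of reference tokens (proportional to the number of errors) as
    decoder input for the second pass; the rest remain to be predicted."""
    assert len(prediction) == len(reference)
    wrong = [i for i, (p, r) in enumerate(zip(prediction, reference)) if p != r]
    num_glance = int(len(wrong) * ratio)  # glance more when the first pass is worse
    glanced = set(random.sample(range(len(reference)), num_glance)) if num_glance else set()
    decoder_input = [reference[i] if i in glanced else "[MASK]" for i in range(len(reference))]
    remaining = [i for i in range(len(reference)) if i not in glanced]
    return decoder_input, remaining

pred = "the cat cat on the mat".split()
ref = "the cat sat on the mat".split()
print(glancing_targets(pred, ref, ratio=1.0))
```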
Victim or Perpetrator? Analysis of Violent Characters Portrayals from
Movie Scripts | Violent content in the media can influence viewers' perception of the
society. For example, frequent depictions of certain demographics as victims or
perpetrators of violence can shape stereotyped attitudes. We propose that
computational methods can aid in the large-scale analysis of violence in
movies. The method we develop characterizes aspects of violent content solely
from the language used in the scripts. Thus, our method is applicable to a
movie in the earlier stages of content creation even before it is produced.
This is complementary to previous works which rely on audio or video post
production. In this work, we identify stereotypes in character roles (i.e.,
victim, perpetrator and narrator) based on the demographics of the actor casted
for that role. Our results highlight two significant differences in the
frequency of portrayals as well as the demographics of the interaction between
victims and perpetrators: (1) female characters appear more often as victims,
and (2) perpetrators are more likely to be White if the victim is Black or
Latino. To date, we are the first to show that language used in movie scripts
is a strong indicator of violent content, and that there are systematic
portrayals of certain demographics as victims and perpetrators in a large
dataset. This offers novel computational tools to assist in creating awareness
of representations in storytelling.
| 2,020 | Computation and Language |
FinChat: Corpus and evaluation setup for Finnish chat conversations on
everyday topics | Creating open-domain chatbots requires large amounts of conversational data
and related benchmark tasks to evaluate them. Standardized evaluation tasks are
crucial for creating automatic evaluation metrics for model development;
otherwise, comparing the models would require resource-expensive human
evaluation. While chatbot challenges have recently managed to provide a
plethora of such resources for English, resources in other languages are not
yet available. In this work, we provide a starting point for Finnish
open-domain chatbot research. We describe our collection efforts to create the
Finnish chat conversation corpus FinChat, which is made available publicly.
FinChat includes unscripted conversations on seven topics from people of
different ages. Using this corpus, we also construct a retrieval-based
evaluation task for Finnish chatbot development. We observe that off-the-shelf
chatbot models trained on conversational corpora do not perform better than
chance at choosing the right answer based on automatic metrics, while humans
can do the same task almost perfectly. Similarly, in a human evaluation,
responses to questions from the evaluation set generated by the chatbots are
predominantly marked as incoherent. Thus, FinChat provides a challenging
evaluation set, meant to encourage chatbot development in Finnish.
| 2,020 | Computation and Language |
BabelEnconding at SemEval-2020 Task 3: Contextual Similarity as a
Combination of Multilingualism and Language Models | This paper describes the system submitted by our team (BabelEnconding) to
SemEval-2020 Task 3: Predicting the Graded Effect of Context in Word
Similarity. We propose an approach that relies on translation and multilingual
language models in order to compute the contextual similarity between pairs of
words. Our hypothesis is that evidence from additional languages can improve
the correlation with the human-generated scores. BabelEnconding was applied to
both subtasks and ranked among the top-3 in six out of eight task/language
combinations and was the highest scoring system three times.
| 2,020 | Computation and Language |
UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information | Pre-trained language model word representations, such as BERT, have been
extremely successful in several Natural Language Processing tasks, significantly
improving on the state of the art. This can largely be attributed to their
ability to better capture semantic information contained within a sentence.
Several tasks, however, can benefit from information available at a corpus
level, such as Term Frequency-Inverse Document Frequency (TF-IDF). In this
work, we test the effectiveness of integrating this corpus-level information
with BERT on the task of identifying abuse on social media, and show that
doing so does indeed significantly improve performance. We participate in
Sub-Task A (abuse detection), where we achieve a score within two points of
the top-performing team, and in Sub-Task B (target detection), where we rank
4th of the 44 participating teams.
| 2,020 | Computation and Language |
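A minimal sketch of one plausible way to combine corpus-level TF-IDF features with BERT, as discussed in the abstract above: concatenate a document's TF-IDF vector with the [CLS] representation before the classification head. The fusion strategy, encoder checkpoint, and label count are assumptions for illustration rather than the team's exact architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

# Sketch: fuse corpus-level TF-IDF features with BERT's [CLS] representation
# by concatenation before a linear classifier. Details are assumptions.
class BertWithTfidf(nn.Module):
    def __init__(self, tfidf_dim: int, num_labels: int = 2,
                 encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(encoder_name)
        hidden = self.bert.config.hidden_size
        self.classifier = nn.Linear(hidden + tfidf_dim, num_labels)

    def forward(self, input_ids, attention_mask, tfidf_features):
        # [CLS] token state as the sentence-level representation.
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.classifier(torch.cat([cls, tfidf_features], dim=-1))
```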
Transformer based Multilingual document Embedding model | One of the current state-of-the-art multilingual document embedding models,
LASER, is based on the bidirectional LSTM neural machine translation model. This
paper presents a transformer-based sentence/document embedding model, T-LASER,
which makes three significant improvements. Firstly, the BiLSTM layers are
replaced by attention-based transformer layers, which are more capable of
learning sequential patterns in longer texts. Secondly, due to the absence of
recurrence, T-LASER enables faster parallel computations in the encoder to
generate the text embedding. Thirdly, we augment the NMT translation loss
function with an additional, novel distance constraint loss. This distance
constraint loss further pulls the embeddings of parallel sentences close
together in the vector space; we call the T-LASER model trained with this
distance constraint cT-LASER. Our cT-LASER model significantly outperforms both
BiLSTM-based LASER and the simpler transformer-based T-LASER.
| 2,020 | Computation and Language |
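The distance constraint loss described above can be sketched compactly. The snippet below assumes pre-computed sentence embeddings for the source and target sides and a hypothetical weighting hyperparameter lambda_dist; neither detail is given in the abstract.

```python
import torch

def distance_constraint_loss(src_emb: torch.Tensor,
                             tgt_emb: torch.Tensor) -> torch.Tensor:
    # Penalise the squared Euclidean distance between embeddings of
    # parallel source/target sentences so they end up close in the space.
    return ((src_emb - tgt_emb) ** 2).sum(dim=-1).mean()

def combined_loss(nmt_loss: torch.Tensor,
                  src_emb: torch.Tensor,
                  tgt_emb: torch.Tensor,
                  lambda_dist: float = 1.0) -> torch.Tensor:
    # lambda_dist is a hypothetical weighting hyperparameter (assumption).
    return nmt_loss + lambda_dist * distance_constraint_loss(src_emb, tgt_emb)
```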
A Survey on Text Simplification | Text Simplification (TS) aims to reduce the linguistic complexity of content
to make it easier to understand. Research in TS has been of keen interest,
especially as approaches to TS have shifted from manual, hand-crafted rules to
automated simplification. This survey seeks to provide a comprehensive overview
of TS, including a brief description of earlier approaches used, discussion of
various aspects of simplification (lexical, semantic and syntactic), and latest
techniques being utilized in the field. We note that the research in the field
has clearly shifted towards utilizing deep learning techniques to perform TS,
with a specific focus on developing solutions to combat the lack of data
available for simplification. We also include a discussion of datasets and
evaluation metrics commonly used, along with a discussion of related fields
within Natural Language Processing (NLP), like semantic similarity.
| 2,022 | Computation and Language |
Assigning function to protein-protein interactions: a weakly supervised
BioBERT based approach using PubMed abstracts | Motivation: Protein-protein interactions (PPI) are critical to the function
of proteins in both normal and diseased cells, and many critical protein
functions are mediated by interactions. Knowledge of the nature of these
interactions is important for the construction of networks to analyse
biological data. However, only a small percentage of PPIs captured in protein
interaction databases have annotations of function available, e.g. only 4% of
PPIs are functionally annotated in the IntAct database. Here, we aim to label
the function type of PPIs by extracting relationships described in PubMed
abstracts.
Method: We create a weakly supervised dataset from the IntAct PPI database
containing interacting protein pairs with annotated function and associated
abstracts from the PubMed database. We apply a state-of-the-art deep learning
technique for biomedical natural language processing tasks, BioBERT, to build a
model - dubbed PPI-BioBERT - for identifying the function of PPIs. In order to
extract high quality PPI functions at large scale, we use an ensemble of
PPI-BioBERT models to improve uncertainty estimation and apply an interaction
type-specific threshold to counteract the effects of variations in the number
of training samples per interaction type.
Results: We scan 18 million PubMed abstracts to automatically identify 3253
new typed PPIs, including phosphorylation and acetylation interactions, with an
overall precision of 46% (87% for acetylation) based on a human-reviewed
sample. This work demonstrates that analysis of biomedical abstracts for PPI
function extraction is a feasible approach to substantially increasing the
number of interactions annotated with function captured in online databases.
| 2,022 | Computation and Language |
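To make the ensemble-plus-threshold idea from the abstract above concrete, here is a small sketch that averages class probabilities from several models and keeps a prediction only when the averaged confidence clears an interaction-type-specific threshold. The threshold values, the Dirichlet-sampled dummy outputs, and the use of the standard deviation as an uncertainty signal are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Sketch: average ensemble probabilities, then apply a per-type confidence gate.
def ensemble_predict(prob_matrices, type_names, thresholds):
    probs = np.stack(prob_matrices)           # (n_models, n_examples, n_types)
    mean, std = probs.mean(axis=0), probs.std(axis=0)
    predictions = []
    for m, s in zip(mean, std):
        k = int(m.argmax())
        label = type_names[k]
        if m[k] >= thresholds[label]:          # type-specific threshold
            predictions.append((label, float(m[k]), float(s[k])))
        else:
            predictions.append((None, float(m[k]), float(s[k])))
    return predictions

# Hypothetical usage with three ensemble members and two interaction types.
types = ["phosphorylation", "acetylation"]
thr = {"phosphorylation": 0.8, "acetylation": 0.9}
model_outputs = [np.random.dirichlet(np.ones(2), size=5) for _ in range(3)]
print(ensemble_predict(model_outputs, types, thr))
```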
Lite Training Strategies for Portuguese-English and English-Portuguese
Translation | Despite the widespread adoption of deep learning for machine translation, it
is still expensive to develop high-quality translation models. In this work, we
investigate the use of pre-trained models, such as T5 for Portuguese-English
and English-Portuguese translation tasks using low-cost hardware. We explore
the use of Portuguese and English pre-trained language models and propose an
adaptation of the English tokenizer to represent Portuguese characters, such as
those bearing diaeresis, acute and grave accents. We compare our models to the Google
Translate API and MarianMT on a subset of the ParaCrawl dataset, as well as to
the winning submission to the WMT19 Biomedical Translation Shared Task. We also
describe our submission to the WMT20 Biomedical Translation Shared Task. Our
results show that our models have a competitive performance to state-of-the-art
models while being trained on modest hardware (a single 8GB gaming GPU for nine
days). Our data, models and code are available at
https://github.com/unicamp-dl/Lite-T5-Translation.
| 2,020 | Computation and Language |
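For readers who want to try a pre-trained T5 translation checkpoint of this kind, a minimal inference sketch with the Hugging Face transformers library is shown below. The checkpoint name and the task prefix are placeholders and assumptions; substitute the Portuguese-English model released in the repository linked above.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Checkpoint name is a placeholder (assumption), not confirmed by the abstract;
# replace it with the PT-EN checkpoint from the linked repository.
checkpoint = "unicamp-dl/translation-pt-en-t5"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# The T5-style task prefix below is also an assumption for illustration.
text = "translate Portuguese to English: A tradução automática ainda é cara de desenvolver."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```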
An Experimental Study of Deep Neural Network Models for Vietnamese
Multiple-Choice Reading Comprehension | Machine reading comprehension (MRC) is a challenging task in natural language
processing that requires computers to understand natural language texts and
answer questions based on those texts. There are many techniques for solving
this problem, and word representation is a very important one that strongly
impacts the accuracy of machine reading comprehension in popular
languages like English and Chinese. However, few studies on MRC have been
conducted in low-resource languages such as Vietnamese. In this paper, we
conduct several experiments on neural network-based models to understand the
impact of word representations on Vietnamese multiple-choice machine reading
comprehension. Our experiments include using the Co-match model on six
different Vietnamese word embeddings and the BERT model for multiple-choice
reading comprehension. On the ViMMRC corpus, the accuracy of the BERT model is
61.28% on the test set.
| 2,021 | Computation and Language |
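As a sketch of how BERT is typically applied to multiple-choice reading comprehension of the kind described above, the snippet below pairs the passage-plus-question with each answer option and lets the model score all options jointly. The multilingual checkpoint is only an assumption for a Vietnamese setting, and its multiple-choice head is untrained until fine-tuned on a corpus such as ViMMRC.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

# Sketch: score each (passage+question, option) pair jointly with BERT.
# Checkpoint is an assumption; the head is random until fine-tuned.
name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

passage_question = "Đoạn văn ... Câu hỏi: ...?"    # passage plus question
options = ["phương án A", "phương án B", "phương án C", "phương án D"]

enc = tokenizer([passage_question] * len(options), options,
                return_tensors="pt", padding=True, truncation=True)
# BertForMultipleChoice expects tensors of shape (batch, n_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits                 # (1, n_choices)
print("Predicted option:", options[int(logits.argmax(dim=-1))])
```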
Checkworthiness in Automatic Claim Detection Models: Definitions and
Analysis of Datasets | Public, professional and academic interest in automated fact-checking has
drastically increased over the past decade, with many aiming to automate one of
the first steps in a fact-check procedure: the selection of so-called
checkworthy claims. However, there is little agreement on the definition and
characteristics of checkworthiness among fact-checkers, which is consequently
reflected in the datasets used for training and testing checkworthy claim
detection models. After elaborate analysis of checkworthy claim selection
procedures in fact-check organisations and analysis of state-of-the-art claim
detection datasets, checkworthiness is defined as the concept of a claim having
a spatiotemporal and context-dependent worth, and a need to have the
correctness of the objective information it conveys verified. This holds
irrespective of the claim's perceived veracity as judged by an individual based
on prior knowledge and beliefs. Concerning the characteristics of current datasets, it is argued that
the data is not only highly imbalanced and noisy, but also too limited in scope
and language. Furthermore, we believe that the subjective concept of
checkworthiness might not be a suitable filter for claim detection.
| 2,020 | Computation and Language |