Titles | Abstracts | Years | Categories |
---|---|---|---|
Automatic Story Generation: Challenges and Attempts
|
The scope of this survey paper is to explore the challenges in automatic
story generation. We hope to contribute in the following ways: 1. Explore how
previous research in story generation addressed those challenges. 2. Discuss
future research directions and new technologies that may aid further advancements.
3. Shed light on emerging and often overlooked challenges such as creativity
and discourse.
| 2,021 |
Computation and Language
|
MixSpeech: Data Augmentation for Low-resource Automatic Speech
Recognition
|
In this paper, we propose MixSpeech, a simple yet effective data augmentation
method based on mixup for automatic speech recognition (ASR). MixSpeech trains
an ASR model by taking a weighted combination of two different speech features
(e.g., mel-spectrograms or MFCC) as the input, and recognizing both text
sequences, where the two recognition losses use the same combination weight. We
apply MixSpeech on two popular end-to-end speech recognition models including
LAS (Listen, Attend and Spell) and Transformer, and conduct experiments on
several low-resource datasets including TIMIT, WSJ, and HKUST. Experimental
results show that MixSpeech achieves better accuracy than the baseline models
without data augmentation, and outperforms a strong data augmentation method
SpecAugment on these recognition tasks. Specifically, MixSpeech outperforms
SpecAugment with a relative PER improvement of 10.6$\%$ on the TIMIT dataset, and
achieves a strong WER of 4.7$\%$ on the WSJ dataset.
| 2,021 |
Computation and Language
|
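The MixSpeech abstract above describes a mixup-style combination of two input feature sequences, with the same mixing weight shared by the two recognition losses. Below is a minimal PyTorch sketch of that idea; the feature shapes, the `ctc_loss_fn` helper, and the model interface are illustrative assumptions, not the authors' code.

```python
import torch

def mixspeech_step(model, ctc_loss_fn, feats_a, feats_b, targets_a, targets_b, lam=None):
    """One training step of mixup-style data augmentation for ASR (sketch).

    feats_a, feats_b: feature tensors of shape (batch, time, n_mels), padded to
    the same length (e.g., mel-spectrograms); targets_a, targets_b: label sequences.
    """
    if lam is None:
        # Sample a mixing weight, e.g. from a Beta distribution as in standard mixup.
        lam = torch.distributions.Beta(0.5, 0.5).sample().item()

    # Weighted combination of the two input feature sequences.
    mixed = lam * feats_a + (1.0 - lam) * feats_b

    # A single forward pass on the mixed input.
    log_probs = model(mixed)

    # Both label sequences are recognized; the two losses share the mixing weight.
    return lam * ctc_loss_fn(log_probs, targets_a) + (1.0 - lam) * ctc_loss_fn(log_probs, targets_b)
```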
LET: Linguistic Knowledge Enhanced Graph Transformer for Chinese Short
Text Matching
|
Chinese short text matching is a fundamental task in natural language
processing. Existing approaches usually take Chinese characters or words as
input tokens. They have two limitations: 1) Some Chinese words are polysemous,
and semantic information is not fully utilized. 2) Some models suffer potential
issues caused by word segmentation. Here we introduce HowNet as an external
knowledge base and propose a Linguistic knowledge Enhanced graph Transformer
(LET) to deal with word ambiguity. Additionally, we adopt the word lattice
graph as input to maintain multi-granularity information. Our model is also
complementary to pre-trained language models. Experimental results on two
Chinese datasets show that our models outperform various typical text matching
approaches. An ablation study also indicates that both semantic information and
multi-granularity information are important for text matching modeling.
| 2,021 |
Computation and Language
|
Sentiment Analysis of Persian-English Code-mixed Texts
|
The rapid production of data on the internet and the need to understand how
users are feeling from a business and research perspective has prompted the
creation of numerous automatic monolingual sentiment detection systems. More
recently however, due to the unstructured nature of data on social media, we
are observing more instances of multilingual and code-mixed texts. This
development in content type has created a new demand for code-mixed sentiment
analysis systems. In this study, we collect and label a dataset of
Persian-English code-mixed tweets. We then introduce a model which
uses BERT pretrained embeddings as well as translation models to automatically
learn the polarity scores of these tweets. Our model outperforms the baseline
models that use Na\"ive Bayes and Random Forest methods.
| 2,021 |
Computation and Language
|
LazyFormer: Self Attention with Lazy Update
|
Improving the efficiency of Transformer-based language pre-training is an
important task in NLP, especially for the self-attention module, which is
computationally expensive. In this paper, we propose a simple but effective
solution, called \emph{LazyFormer}, which computes the self-attention
distribution infrequently. LazyFormer is composed of multiple lazy blocks, each of
which contains multiple Transformer layers. In each lazy block, the
self-attention distribution is computed only once, in the first layer, and then
reused in all upper layers. In this way, the computational cost can be
greatly reduced. We also provide several training tricks for LazyFormer.
Extensive experiments demonstrate the effectiveness of the proposed method.
| 2,021 |
Computation and Language
|
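The LazyFormer abstract above hinges on one idea: compute the self-attention distribution once per block and reuse it in the block's remaining layers. A minimal PyTorch sketch of a single lazy block follows; the single-head attention, layer sizes, and module names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LazyBlock(nn.Module):
    """A block of Transformer-style layers sharing one attention distribution (sketch)."""

    def __init__(self, d_model=256, n_layers=3):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        # One value projection and feed-forward per layer; attention probs are shared.
        self.values = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_layers))
        self.ffns = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))
            for _ in range(n_layers)
        )

    def forward(self, x):  # x: (batch, seq_len, d_model)
        # Compute the (single-head) attention distribution once, in the first layer.
        scores = self.query(x) @ self.key(x).transpose(-2, -1) / x.size(-1) ** 0.5
        attn = F.softmax(scores, dim=-1)

        for value, ffn in zip(self.values, self.ffns):
            # Reuse the cached attention probabilities in every layer of the block.
            x = x + attn @ value(x)
            x = x + ffn(x)
        return x

x = torch.randn(2, 16, 256)
print(LazyBlock()(x).shape)  # torch.Size([2, 16, 256])
```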
IIE-NLP-Eyas at SemEval-2021 Task 4: Enhancing PLM for ReCAM with
Special Tokens, Re-Ranking, Siamese Encoders and Back Translation
|
This paper introduces our systems for all three subtasks of SemEval-2021 Task
4: Reading Comprehension of Abstract Meaning. To help our model better
represent and understand abstract concepts in natural language, we carefully design
several simple and effective approaches adapted to the backbone model (RoBERTa).
Specifically, we formalize the subtasks into a multiple-choice question
answering format and add special tokens to abstract concepts; the final
question-answering prediction is then taken as the result for each subtask.
Additionally, we employ many fine-tuning tricks to improve the performance.
Experimental results show that our approaches achieve significantly better
performance than the baseline systems. Our approaches rank eighth on
subtask-1 and tenth on subtask-2.
| 2,021 |
Computation and Language
|
ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language
Model for Reading Comprehension of Abstract Meaning
|
This paper presents our systems for the three Subtasks of SemEval Task4:
Reading Comprehension of Abstract Meaning (ReCAM). We explain the algorithms
used to learn our models and the process of tuning the algorithms and selecting
the best model. Inspired by the similarity of the ReCAM task and the language
pre-training, we propose a simple yet effective technology, namely, negative
augmentation with language model. Evaluation results demonstrate the
effectiveness of our proposed approach. Our models achieve the 4th rank on both
official test sets of Subtask 1 and Subtask 2 with an accuracy of 87.9% and an
accuracy of 92.8%, respectively. We further conduct comprehensive model
analysis and observe interesting error cases, which may inform future
research.
| 2,023 |
Computation and Language
|
Spanish Biomedical and Clinical Language Embeddings
|
We computed both Word and Sub-word Embeddings using FastText. For the sub-word
embeddings we selected the Byte Pair Encoding (BPE) algorithm to represent the
sub-words. We evaluated the Biomedical Word Embeddings, obtaining better results
than previous versions, which indicates that with more data we obtain
better representations.
| 2,021 |
Computation and Language
|
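The abstract above reports FastText word and sub-word embeddings for Spanish biomedical text. As a rough illustration, the sketch below trains FastText vectors with gensim; note that gensim's FastText uses character n-gram sub-words rather than the BPE units mentioned in the abstract, and the toy corpus and hyperparameters are assumptions.

```python
from gensim.models import FastText

# Toy stand-in for a Spanish biomedical corpus: one tokenized sentence per list.
corpus = [
    ["el", "paciente", "presenta", "hipertensión", "arterial"],
    ["tratamiento", "con", "antibióticos", "de", "amplio", "espectro"],
]

# FastText learns sub-word (character n-gram) vectors alongside word vectors,
# so it can compose representations even for out-of-vocabulary medical terms.
model = FastText(sentences=corpus, vector_size=100, window=5, min_count=1,
                 min_n=3, max_n=6, epochs=10)

print(model.wv["hipertensión"].shape)    # (100,)
print(model.wv["hipertensiones"].shape)  # OOV word, composed from its sub-words
```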
QNLP in Practice: Running Compositional Models of Meaning on a Quantum
Computer
|
Quantum Natural Language Processing (QNLP) deals with the design and
implementation of NLP models intended to be run on quantum hardware. In this
paper, we present results on the first NLP experiments conducted on Noisy
Intermediate-Scale Quantum (NISQ) computers for datasets of size greater than
100 sentences. Exploiting the formal similarity of the compositional model of
meaning by Coecke, Sadrzadeh and Clark (2010) with quantum theory, we create
representations for sentences that have a natural mapping to quantum circuits.
We use these representations to implement and successfully train NLP models
that solve simple sentence classification tasks on quantum hardware. We conduct
quantum simulations that compare the syntax-sensitive model of Coecke et al.
with two baselines that use less or no syntax; specifically, we implement the
quantum analogues of a "bag-of-words" model, where syntax is not taken into
account at all, and of a word-sequence model, where only word order is
respected. We demonstrate that all models converge smoothly both in simulations
and when run on quantum hardware, and that the results are the expected ones
based on the nature of the tasks and the datasets used. Another important goal
of this paper is to describe in a way accessible to AI and NLP researchers the
main principles, process and challenges of experiments on quantum hardware. Our
aim in doing this is to take the first small steps in this unexplored research
territory and pave the way for practical Quantum Natural Language Processing.
| 2,023 |
Computation and Language
|
Emotion-Aware, Emotion-Agnostic, or Automatic: Corpus Creation
Strategies to Obtain Cognitive Event Appraisal Annotations
|
Appraisal theories explain how the cognitive evaluation of an event leads to
a particular emotion. In contrast to theories of basic emotions or affect
(valence/arousal), this theory has not received a lot of attention in natural
language processing. Yet, in psychology it has been proven powerful: Smith and
Ellsworth (1985) showed that the appraisal dimensions attention, certainty,
anticipated effort, pleasantness, responsibility/control and situational
control discriminate between (at least) 15 emotion classes. We study different
annotation strategies for these dimensions, based on the event-focused enISEAR
corpus (Troiano et al., 2019). We analyze two manual annotation settings: (1)
showing the text to annotate while masking the experienced emotion label; (2)
revealing the emotion associated with the text. Setting 2 enables the
annotators to develop a more realistic intuition of the described event, while
Setting 1 is a more standard annotation procedure, purely relying on text. We
evaluate these strategies in two ways: by measuring inter-annotator agreement
and by fine-tuning RoBERTa to predict appraisal variables. Our results show
that knowledge of the emotion increases annotators' reliability. Further, we
evaluate a purely automatic rule-based labeling strategy (inferring appraisal
from annotated emotion classes). Training on automatically assigned labels
leads to a competitive performance of our classifier, even when tested on
manual annotations. This is an indicator that it might be possible to
automatically create appraisal corpora for every domain for which emotion
corpora already exist.
| 2,021 |
Computation and Language
|
Are pre-trained text representations useful for multilingual and
multi-dimensional language proficiency modeling?
|
Development of language proficiency models for non-native learners has been
an active area of interest in NLP research for the past few years. Although
language proficiency is multidimensional in nature, existing research typically
considers a single "overall proficiency" while building models. Further,
existing approaches also consider only one language at a time. This paper
describes our experiments and observations about the role of pre-trained and
fine-tuned multilingual embeddings in performing multi-dimensional,
multilingual language proficiency classification. We report experiments with
three languages -- German, Italian, and Czech -- and model seven dimensions of
proficiency ranging from vocabulary control to sociolinguistic appropriateness.
Our results indicate that while fine-tuned embeddings are useful for
multilingual proficiency modeling, none of the features achieves consistently
the best performance across all dimensions of language proficiency. All code, data and
related supplementary material can be found at:
https://github.com/nishkalavallabhi/MultidimCEFRScoring.
| 2,021 |
Computation and Language
|
A Primer on Contrastive Pretraining in Language Processing: Methods,
Lessons Learned and Perspectives
|
Modern natural language processing (NLP) methods employ self-supervised
pretraining objectives such as masked language modeling to boost the
performance of various application tasks. These pretraining methods are
frequently extended with recurrence, adversarial or linguistic property
masking, and more recently with contrastive learning objectives. Contrastive
self-supervised training objectives enabled recent successes in image
representation pretraining by learning to contrast input-input pairs of
augmented images as either similar or dissimilar. However, in NLP, automated
creation of text input augmentations is still very challenging because a single
token can invert the meaning of a sentence. For this reason, some contrastive
NLP pretraining methods contrast over input-label pairs, rather than over
input-input pairs, using methods from Metric Learning and Energy Based Models.
In this survey, we summarize recent self-supervised and supervised contrastive
NLP pretraining methods and describe where they are used to improve language
modeling, few- or zero-shot learning, pretraining data-efficiency and specific
NLP end-tasks. We introduce key contrastive learning concepts with lessons
learned from prior research and structure works by applications and cross-field
relations. Finally, we point to open challenges and future directions for
contrastive NLP to encourage bringing contrastive NLP pretraining closer to
recent successes in image representation pretraining.
| 2,021 |
Computation and Language
|
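As background for the survey above, the snippet below implements the standard InfoNCE-style contrastive objective over input-input pairs (two augmented "views" per example). It is a generic illustration of contrastive pretraining, not a method taken from the survey itself.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(view_a, view_b, temperature=0.07):
    """InfoNCE contrastive loss over paired embeddings (sketch).

    view_a, view_b: (batch, dim) embeddings of two augmentations of the same inputs;
    row i of view_a is pulled toward row i of view_b and pushed away from all others.
    """
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature                    # pairwise cosine similarities
    labels = torch.arange(a.size(0), device=a.device)   # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```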
Investigating the Limitations of Transformers with Simple Arithmetic
Tasks
|
The ability to perform arithmetic tasks is a remarkable trait of human
intelligence and might form a critical component of more complex reasoning
tasks. In this work, we investigate if the surface form of a number has any
influence on how sequence-to-sequence language models learn simple arithmetic
tasks such as addition and subtraction across a wide range of values. We find
that how a number is represented in its surface form has a strong influence on
the model's accuracy. In particular, the model fails to learn addition of
five-digit numbers when using subwords (e.g., "32"), and it struggles to learn
with character-level representations (e.g., "3 2"). By introducing position
tokens (e.g., "3 10e1 2"), the model learns to accurately add and subtract
numbers up to 60 digits. We conclude that modern pretrained language models can
easily learn arithmetic from very few examples, as long as we use the proper
surface representation. This result bolsters evidence that subword tokenizers
and positional encodings are components in current transformer designs that
might need improvement. Moreover, we show that regardless of the number of
parameters and training examples, models cannot learn addition rules that are
independent of the length of the numbers seen during training. Code to
reproduce our experiments is available at
https://github.com/castorini/transformers-arithmetic
| 2,021 |
Computation and Language
|
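The abstract above credits "position tokens" (e.g., "3 10e1 2") with letting models add long numbers reliably. The helper below reproduces the representation shown in that example; the exact token scheme used in the paper may differ in details (for instance, whether the units digit also gets a marker), so treat this as an assumption-laden illustration.

```python
def to_position_tokens(number: str) -> str:
    """Insert base-10 position markers after each non-units digit, e.g. "32" -> "3 10e1 2"."""
    digits = list(number)
    tokens = []
    for i, d in enumerate(digits):
        power = len(digits) - 1 - i   # place value of this digit
        tokens.append(d)
        if power > 0:
            tokens.append(f"10e{power}")
    return " ".join(tokens)

print(to_position_tokens("32"))    # 3 10e1 2
print(to_position_tokens("905"))   # 9 10e2 0 10e1 5
```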
Retrieval Augmentation for Deep Neural Networks
|
Deep neural networks have achieved state-of-the-art results in various vision
and/or language tasks. Despite the use of large training datasets, most models
are trained by iterating over single input-output pairs, discarding the
remaining examples for the current prediction. In this work, we actively
exploit the training data, using the information from nearest training examples
to aid the prediction both during training and testing. Specifically, our
approach uses the target of the most similar training example to initialize the
memory state of an LSTM model, or to guide attention mechanisms. We apply this
approach to image captioning and sentiment analysis, respectively through image
and text retrieval. Results confirm the effectiveness of the proposed approach
for the two tasks, on the widely used Flickr8 and IMDB datasets. Our code is
publicly available at http://github.com/RitaRamo/retrieval-augmentation-nn.
| 2,021 |
Computation and Language
|
ANEA: Distant Supervision for Low-Resource Named Entity Recognition
|
Distant supervision allows obtaining labeled training corpora for
low-resource settings where only limited hand-annotated data exists. However,
to be used effectively, the distant supervision must be easy to gather. In this
work, we present ANEA, a tool to automatically annotate named entities in texts
based on entity lists. It spans the whole pipeline from obtaining the lists to
analyzing the errors of the distant supervision. A tuning step allows the user
to improve the automatic annotation with their linguistic insights without
labelling or checking all tokens manually. In six low-resource scenarios, we
show that the F1-score can be increased by on average 18 points through
distantly supervised data obtained by ANEA.
| 2,021 |
Computation and Language
|
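ANEA, described above, annotates named entities by matching texts against entity lists. The snippet below is a deliberately simple, hypothetical version of such distant supervision (case-insensitive longest-match over a gazetteer, emitting BIO tags); the real tool additionally covers list acquisition, tuning, and error analysis.

```python
def distant_ner_tags(tokens, gazetteer):
    """Assign BIO tags by longest case-insensitive match against an entity list (sketch)."""
    # Map lowercased entity token tuples to their type, longest entities first.
    entries = sorted(((tuple(e.lower().split()), t) for e, t in gazetteer.items()),
                     key=lambda kv: -len(kv[0]))
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        for entity, etype in entries:
            span = tuple(tok.lower() for tok in tokens[i:i + len(entity)])
            if span == entity:
                tags[i] = f"B-{etype}"
                for j in range(i + 1, i + len(entity)):
                    tags[j] = f"I-{etype}"
                i += len(entity) - 1
                break
        i += 1
    return tags

gazetteer = {"Addis Ababa": "LOC", "Ethiopia": "LOC"}
print(distant_ner_tags("Flights to Addis Ababa , Ethiopia".split(), gazetteer))
# ['O', 'O', 'B-LOC', 'I-LOC', 'O', 'B-LOC']
```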
Automated essay scoring using efficient transformer-based language
models
|
Automated Essay Scoring (AES) is a cross-disciplinary effort involving
Education, Linguistics, and Natural Language Processing (NLP). The efficacy of
an NLP model in AES tests its ability to evaluate long-term dependencies and
extrapolate meaning even when text is poorly written. Large pretrained
transformer-based language models have dominated the current state-of-the-art
in many NLP tasks; however, the computational requirements of these models make
them expensive to deploy in practice. The goal of this paper is to challenge
the paradigm in NLP that bigger is better when it comes to AES. To do this, we
evaluate the performance of several fine-tuned pretrained NLP models with a
modest number of parameters on an AES dataset. By ensembling our models, we
achieve excellent results with fewer parameters than most pretrained
transformer-based models.
| 2,021 |
Computation and Language
|
PharmKE: Knowledge Extraction Platform for Pharmaceutical Texts using
Transfer Learning
|
The challenge of recognizing named entities in a given text has been a very
dynamic field in recent years. This is due to the advances in neural network
architectures, increase of computing power and the availability of diverse
labeled datasets, which deliver pre-trained, highly accurate models. These
tasks are generally focused on tagging common entities, but domain-specific
use-cases require tagging custom entities which are not part of the pre-trained
models. This can be solved by either fine-tuning the pre-trained models, or by
training custom models. The main challenge lies in obtaining reliable labeled
training and test datasets, and manual labeling would be a highly tedious task.
In this paper we present PharmKE, a text analysis platform focused on the
pharmaceutical domain, which applies deep learning through several stages for
thorough semantic analysis of pharmaceutical articles. It performs text
classification using state-of-the-art transfer learning models, and thoroughly
integrates the results obtained through a proposed methodology. The methodology
is used to create accurately labeled training and test datasets, which are then
used to train models for custom entity labeling tasks, centered on the
pharmaceutical domain. The obtained results are compared to the fine-tuned BERT
and BioBERT models trained on the same dataset. Additionally, the PharmKE
platform integrates the results obtained from named entity recognition tasks to
resolve co-references of entities and analyze the semantic relations in every
sentence, thus setting up a baseline for additional text analysis tasks, such
as question answering and fact extraction. The recognized entities are also
used to expand the knowledge graph generated by DBpedia Spotlight for a given
pharmaceutical text.
| 2,023 |
Computation and Language
|
DOCENT: Learning Self-Supervised Entity Representations from Large
Document Collections
|
This paper explores learning rich self-supervised entity representations from
large amounts of the associated text. Once pre-trained, these models become
applicable to multiple entity-centric tasks such as ranked retrieval, knowledge
base completion, question answering, and more. Unlike other methods that
harvest self-supervision signals based merely on a local context within a
sentence, we radically expand the notion of context to include any available
text related to an entity. This enables a new class of powerful, high-capacity
representations that can ultimately distill much of the useful information
about an entity from multiple text sources, without any human supervision.
We present several training strategies that, unlike prior approaches, learn
to jointly predict words and entities -- strategies we compare experimentally
on downstream tasks in the TV-Movies domain, such as MovieLens tag prediction
from user reviews and natural language movie search. As evidenced by results,
our models match or outperform competitive baselines, sometimes with little or
no fine-tuning, and can scale to very large corpora.
Finally, we make our datasets and pre-trained models publicly available. This
includes Reviews2Movielens (see https://goo.gle/research-docent ), mapping the
up to 1B word corpus of Amazon movie reviews (He and McAuley, 2016) to
MovieLens tags (Harper and Konstan, 2016), as well as Reddit Movie Suggestions
(see https://urikz.github.io/docent ) with natural language queries and
corresponding community recommendations.
| 2,021 |
Computation and Language
|
Chess as a Testbed for Language Model State Tracking
|
Transformer language models have made tremendous strides in natural language
understanding tasks. However, the complexity of natural language makes it
challenging to ascertain how accurately these models are tracking the world
state underlying the text. Motivated by this issue, we consider the task of
language modeling for the game of chess. Unlike natural language, chess
notations describe a simple, constrained, and deterministic domain. Moreover,
we observe that the appropriate choice of chess notation allows for directly
probing the world state, without requiring any additional probing-related
machinery. We find that: (a) With enough training data, transformer language
models can learn to track pieces and predict legal moves with high accuracy
when trained solely on move sequences. (b) For small training sets, providing
access to board state information during training can yield significant
improvements. (c) The success of transformer language models is dependent on
access to the entire game history, i.e., "full attention". Approximating this
full attention results in a significant performance drop. We propose this
testbed as a benchmark for future work on the development and analysis of
transformer language models.
| 2,022 |
Computation and Language
|
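To make the testbed above concrete: games can be serialized as move sequences for language modeling, while a library such as python-chess reconstructs the board state for probing (e.g., which piece sits on a queried square). The snippet below is a small illustration with the python-chess package; the choice of UCI notation and the probing query are assumptions on our part.

```python
import chess  # pip install python-chess

# A short opening, serialized as a space-separated UCI move sequence --
# the kind of "sentence" a language model would be trained on.
moves = "e2e4 e7e5 g1f3 b8c6 f1b5".split()

board = chess.Board()
for mv in moves:
    board.push_uci(mv)

print(" ".join(moves))           # training string for the LM
print(board.piece_at(chess.B5))  # probe the world state: B (white bishop on b5)
print(board.is_legal(chess.Move.from_uci("a7a6")))  # True: a legal-move prediction target
```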
Predicting gender and age categories in English conversations using
lexical, non-lexical, and turn-taking features
|
This paper examines gender and age salience and (stereo)typicality in British
English talk with the aim to predict gender and age categories based on
lexical, phrasal and turn-taking features. We examine the SpokenBNC, a corpus
of around 11.4 million words of British English conversations and identify
behavioural differences between speakers that are labelled for gender and age
categories. We explore differences in language use and turn-taking dynamics and
identify a range of characteristics that set the categories apart. We find that
female speakers tend to produce more and slightly longer turns, while turns by
male speakers feature a higher type-token ratio and a distinct range of minimal
particles such as "eh", "uh" and "em". Across age groups, we observe, for
instance, that swear words and laughter characterize young speakers' talk,
while old speakers tend to produce more truncated words. We then use the
observed characteristics to predict gender and age labels of speakers per
conversation and per turn as a classification task, showing that non-lexical
utterances such as minimal particles that are usually left out of dialog data
can contribute to setting the categories apart.
| 2,021 |
Computation and Language
|
Multi-task transfer learning for finding actionable information from
crisis-related messages on social media
|
The Incident streams (IS) track is a research challenge aimed at finding
important information from social media during crises for emergency response
purposes. More specifically, given a stream of crisis-related tweets, the IS
challenge asks a participating system to 1) classify the types of users'
concerns or needs expressed in each tweet, known as the information type
(IT) classification task and 2) estimate how critical each tweet is with regard
to emergency response, known as the priority level prediction task. In this
paper, we describe our multi-task transfer learning approach for this
challenge. Our approach leverages state-of-the-art transformer models including
both encoder-based models such as BERT and a sequence-to-sequence based T5 for
joint transfer learning on the two tasks. Based on this approach, we submitted
several runs to the track. The returned evaluation results show that our runs
substantially outperform other participating runs in both IT classification and
priority level prediction.
| 2,021 |
Computation and Language
|
Methods for the Design and Evaluation of HCI+NLP Systems
|
HCI and NLP traditionally focus on different evaluation methods. While HCI
involves a small number of people directly and deeply, NLP traditionally relies
on standardized benchmark evaluations that involve a larger number of people
indirectly. We present five methodological proposals at the intersection of HCI
and NLP and situate them in the context of ML-based NLP models. Our goal is to
foster interdisciplinary collaboration and progress in both fields by
emphasizing what the fields can learn from each other.
| 2,021 |
Computation and Language
|
Gradient-guided Loss Masking for Neural Machine Translation
|
To mitigate the negative effect of low quality training data on the
performance of neural machine translation models, most existing strategies
focus on filtering out harmful data before training starts. In this paper, we
explore strategies that dynamically optimize data usage during the training
process using the model's gradients on a small set of clean data. At each
training step, our algorithm calculates the gradient alignment between the
training data and the clean data to mask out data with negative alignment. Our
method has a natural intuition: good training data should update the model
parameters in a similar direction as the clean data. Experiments on three WMT
language pairs show that our method brings significant improvement over strong
baselines, and the improvements are generalizable across test data from
different domains.
| 2,021 |
Computation and Language
|
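The abstract above masks out training examples whose gradient points away from the gradient computed on a small clean set. A naive PyTorch sketch of that alignment test is given below; it recomputes one gradient per example with `torch.autograd.grad`, which is simple but slow, and the model and loss interfaces are assumptions rather than the authors' algorithm.

```python
import torch

def gradient_alignment_mask(model, loss_fn, train_batch, clean_batch):
    """Return a 0/1 mask over training examples whose gradients align with the clean gradient (sketch)."""
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(x, y):
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads])

    clean_x, clean_y = clean_batch
    g_clean = flat_grad(clean_x, clean_y).detach()

    mask = []
    for x_i, y_i in zip(*train_batch):
        # One gradient per training example; negative alignment => masked out (weight 0).
        g_i = flat_grad(x_i.unsqueeze(0), y_i.unsqueeze(0)).detach()
        mask.append(1.0 if torch.dot(g_i, g_clean) > 0 else 0.0)
    return torch.tensor(mask)
```

In a full training loop, this mask would weight the per-example losses of the current batch before the parameter update.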
Natural Language Video Localization: A Revisit in Span-based Question
Answering Framework
|
Natural Language Video Localization (NLVL) aims to locate a target moment
from an untrimmed video that semantically corresponds to a text query. Existing
approaches mainly solve the NLVL problem from the perspective of computer
vision by formulating it as ranking, anchor, or regression tasks. These methods
suffer from large performance degradation when localizing on long videos. In
this work, we address the NLVL from a new perspective, i.e., span-based
question answering (QA), by treating the input video as a text passage. We
propose a video span localizing network (VSLNet), on top of the standard
span-based QA framework (named VSLBase), to address NLVL. VSLNet tackles the
differences between NLVL and span-based QA through a simple yet effective
query-guided highlighting (QGH) strategy. QGH guides VSLNet to search for the
matching video span within a highlighted region. To address the performance
degradation on long videos, we further extend VSLNet to VSLNet-L by applying a
multi-scale split-and-concatenation strategy. VSLNet-L first splits the
untrimmed video into short clip segments; then, it predicts which clip segment
contains the target moment and suppresses the importance of other segments.
Finally, the clip segments are concatenated, with different confidences, to
locate the target moment accurately. Extensive experiments on three benchmark
datasets show that the proposed VSLNet and VSLNet-L outperform the
state-of-the-art methods; VSLNet-L addresses the issue of performance
degradation on long videos. Our study suggests that the span-based QA framework
is an effective strategy to solve the NLVL problem.
| 2,021 |
Computation and Language
|
Evaluate On-the-job Learning Dialogue Systems and a Case Study for
Natural Language Understanding
|
On-the-job learning consists in continuously learning while being used in
production, in an open environment, meaning that the system has to deal on its
own with situations and elements never seen before. The kind of systems that
seem to be especially adapted to on-the-job learning are dialogue systems,
since they can take advantage of their interactions with users to collect
feedback to adapt and improve their components over time. Some dialogue systems
performing on-the-job learning have been built and evaluated but no general
methodology has yet been defined. Thus in this paper, we propose a first
general methodology for evaluating on-the-job learning dialogue systems. We
also describe a task-oriented dialogue system which improves its natural
language component on the job through its user interactions. We finally evaluate
our system with the described methodology.
| 2,021 |
Computation and Language
|
A Meta-embedding-based Ensemble Approach for ICD Coding Prediction
|
International Classification of Diseases (ICD) codes are the de facto standard used
globally for clinical coding. These codes enable healthcare providers to claim
reimbursement and facilitate efficient storage and retrieval of diagnostic
information. The problem of automatically assigning ICD codes has been approached
in the literature as a multilabel classification problem, using neural models on
unstructured data. Our proposed approach enhances the performance of neural
models by effectively training word vectors using routine medical data as well
as external knowledge from scientific articles. Furthermore, we exploit the
geometric properties of the two sets of word vectors and combine them into a
common dimensional space, using meta-embedding techniques. We demonstrate the
efficacy of this approach for a multimodal setting, using unstructured and
structured information. We empirically show that our approach improves the
current state-of-the-art deep learning architectures and benefits ensemble
models.
| 2,022 |
Computation and Language
|
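The meta-embedding step described above combines word vectors trained on routine medical data with vectors from scientific articles into a common space. A very simple version of that idea is shown below, averaging the two vector tables after aligning them with an orthogonal Procrustes rotation; the vocabularies and the combination rule are illustrative assumptions, as the paper's meta-embedding technique is more involved.

```python
import numpy as np

def meta_embed(emb_clinical, emb_scientific):
    """Average two word-vector tables after rotating one into the other's space (sketch)."""
    vocab = sorted(set(emb_clinical) & set(emb_scientific))
    A = np.stack([emb_clinical[w] for w in vocab])     # (n_words, dim)
    B = np.stack([emb_scientific[w] for w in vocab])   # (n_words, dim)

    # Orthogonal Procrustes: find the rotation that best maps B onto A.
    u, _, vt = np.linalg.svd(B.T @ A)
    rotation = u @ vt

    combined = (A + B @ rotation) / 2.0
    return {w: combined[i] for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
clin = {w: rng.normal(size=50) for w in ["sepsis", "fracture", "asthma"]}
sci = {w: rng.normal(size=50) for w in ["sepsis", "fracture", "asthma"]}
print(meta_embed(clin, sci)["sepsis"].shape)  # (50,)
```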
Improving Longer-range Dialogue State Tracking
|
Dialogue state tracking (DST) is a pivotal component in task-oriented
dialogue systems. While it is relatively easy for a DST model to capture belief
states in short conversations, the task of DST becomes more challenging as the
length of a dialogue increases due to the injection of more distracting
contexts. In this paper, we aim to improve the overall performance of DST with
a special focus on handling longer dialogues. We tackle this problem from three
perspectives: 1) A model designed to enable hierarchical slot status
prediction; 2) Balanced training procedure for generic and task-specific
language understanding; 3) Data perturbation which enhances the model's ability
in handling longer conversations. We conduct experiments on the MultiWOZ
benchmark, and demonstrate the effectiveness of each component via a set of
ablation tests, especially on longer conversations.
| 2,021 |
Computation and Language
|
Detecting Harmful Content On Online Platforms: What Platforms Need Vs.
Where Research Efforts Go
|
The proliferation of harmful content on online platforms is a major societal
problem, which comes in many different forms including hate speech, offensive
language, bullying and harassment, misinformation, spam, violence, graphic
content, sexual abuse, self-harm, and many others. Online platforms seek to
moderate such content to limit societal harm, to comply with legislation, and
to create a more inclusive environment for their users. Researchers have
developed different methods for automatically detecting harmful content, often
focusing on specific sub-problems or on narrow communities, as what is
considered harmful often depends on the platform and on the context. We argue
that there is currently a dichotomy between what types of harmful content
online platforms seek to curb, and what research efforts there are to
automatically detect such content. We thus survey existing methods as well as
content moderation policies by online platforms in this light and we suggest
directions for future work.
| 2,023 |
Computation and Language
|
COVID-19 Tweets Analysis through Transformer Language Models
|
Understanding the public sentiment and perception in a healthcare crisis is
essential for developing appropriate crisis management techniques. While some
studies have used Twitter data for predictive modelling during COVID-19,
fine-grained sentiment analysis of the opinion of people on social media during
this pandemic has not yet been done. In this study, we perform an in-depth,
fine-grained sentiment analysis of tweets about COVID-19. For this purpose, we
perform supervised training of four transformer language models on the
downstream task of multi-label classification of tweets into seven tone
classes: [confident, anger, fear, joy, sadness, analytical, tentative]. We
achieve a LRAP (Label Ranking Average Precision) score of 0.9267 through
RoBERTa. This trained transformer model is able to correctly predict, with high
accuracy, the tone of a tweet. We then leverage this model for predicting tones
for 200,000 tweets on COVID-19. We then perform a country-wise analysis of the
tone of tweets, and extract useful indicators of the psychological condition
of people during this pandemic.
| 2,021 |
Computation and Language
|
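The study above reports Label Ranking Average Precision (LRAP) for multi-label tone prediction. For readers unfamiliar with the metric, scikit-learn computes it directly, as in the toy example below with made-up scores over the seven tone classes.

```python
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

# Toy multi-label example: 2 tweets x 7 tone classes
# [confident, anger, fear, joy, sadness, analytical, tentative]
y_true = np.array([[0, 1, 1, 0, 0, 0, 0],
                   [0, 0, 0, 1, 0, 1, 0]])
y_scores = np.array([[0.1, 0.8, 0.6, 0.2, 0.1, 0.3, 0.1],   # model confidence per class
                     [0.2, 0.1, 0.1, 0.9, 0.1, 0.4, 0.3]])

# 1.0 here, since every true label is ranked above all false labels.
print(label_ranking_average_precision_score(y_true, y_scores))
```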
EDS-MEMBED: Multi-sense embeddings based on enhanced distributional
semantic structures via a graph walk over word senses
|
Several language applications often require word semantics as a core part of
their processing pipeline, either as precise meaning inference or semantic
similarity. Multi-sense embeddings (M-SE) can be exploited for this important
requirement. M-SE seeks to represent each word by its distinct senses in
order to resolve the conflation of meanings of words as used in different
contexts. Previous works usually approach this task by training a model on a
large corpus and often ignore the effect and usefulness of the semantic
relations offered by lexical resources. However, even with large training data,
coverage of all possible word senses is still an issue. In addition, a
considerable portion of contextual semantic knowledge is never learned
because a huge number of possible distributional semantic structures is never
explored. In this paper, we leverage the rich semantic structures in WordNet
using a graph-theoretic walk technique over word senses to enhance the quality
of multi-sense embeddings. This algorithm composes enriched texts from the
original texts. Furthermore, we derive new distributional semantic similarity
measures for M-SE from prior ones. We adapt these measures to word sense
disambiguation (WSD) aspect of our experiment. We report evaluation results on
11 benchmark datasets involving WSD and Word Similarity tasks and show that our
method for enhancing distributional semantic structures improves embedding
quality over the baselines. Despite the small training data, it achieves
state-of-the-art performance on some of the datasets.
| 2,021 |
Computation and Language
|
A Survey on Stance Detection for Mis- and Disinformation Identification
|
Understanding attitudes expressed in texts, also known as stance detection,
plays an important role in systems for detecting false information online, be
it misinformation (unintentionally false) or disinformation (intentionally
false information). Stance detection has been framed in different ways,
including (a) as a component of fact-checking, rumour detection, and detecting
previously fact-checked claims, or (b) as a task in its own right. While there
have been prior efforts to contrast stance detection with other related tasks
such as argumentation mining and sentiment analysis, there is no existing
survey examining the relationship between stance detection and mis- and
disinformation detection. Here, we aim to bridge this gap by reviewing and
analysing existing work in this area, with mis- and disinformation in focus,
and discussing lessons learnt and future challenges.
| 2,022 |
Computation and Language
|
N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking
|
Augmentation of task-oriented dialogues has followed standard methods used
for plain-text such as back-translation, word-level manipulation, and
paraphrasing despite its richly annotated structure. In this work, we introduce
an augmentation framework that utilizes belief state annotations to match turns
from various dialogues and form new synthetic dialogues in a bottom-up manner.
Unlike other augmentation strategies, it operates with as few as five examples.
Our augmentation strategy yields significant improvements when both adapting a
DST model to a new domain, and when adapting a language model to the DST task,
on evaluations with TRADE and TOD-BERT models. Further analysis shows that our
model performs better on seen values during training, and it is also more
robust to unseen values. We conclude that exploiting belief state annotations
enhances dialogue augmentation and results in improved models in n-shot
training scenarios.
| 2,022 |
Computation and Language
|
Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based
Bias in NLP
|
When trained on large, unfiltered crawls from the internet, language models
pick up and reproduce all kinds of undesirable biases that can be found in the
data: they often generate racist, sexist, violent or otherwise toxic language.
As large models require millions of training examples to achieve good
performance, it is difficult to completely prevent them from being exposed to
such content. In this paper, we first demonstrate a surprising finding:
pretrained language models recognize, to a considerable degree, their
undesirable biases and the toxicity of the content they produce. We refer to
this capability as self-diagnosis. Based on this finding, we then propose a
decoding algorithm that, given only a textual description of the undesired
behavior, reduces the probability of a language model producing problematic
text. We refer to this approach as self-debiasing. Self-debiasing does not rely
on manually curated word lists, nor does it require any training data or
changes to the model's parameters. While we by no means eliminate the issue of
language models generating biased text, we believe our approach to be an
important step in this direction.
| 2,021 |
Computation and Language
|
NLP-CUET@DravidianLangTech-EACL2021: Offensive Language Detection from
Multilingual Code-Mixed Text using Transformers
|
The increasing accessibility of the internet facilitated social media usage
and encouraged individuals to express their opinions liberally. Nevertheless,
it also creates a place for content polluters to disseminate offensive posts or
contents. Most of such offensive posts are written in a cross-lingual manner
and can easily evade the online surveillance systems. This paper presents an
automated system that can identify offensive text from multilingual code-mixed
data. In this task, datasets were provided in three languages -- Tamil,
Malayalam and Kannada code-mixed with English -- and participants were asked to
implement separate models for each language. To accomplish the tasks, we
employed two machine learning techniques (LR, SVM), deep learning techniques (LSTM,
LSTM+Attention), and three transformer-based methods (m-BERT, Indic-BERT,
XLM-R). Results show that XLM-R outperforms other techniques in Tamil
and Malayalam languages while m-BERT achieves the highest score in the Kannada
language. The proposed models achieved weighted $f_1$ scores of $0.76$ (for
Tamil), $0.93$ (for Malayalam), and $0.71$ (for Kannada) with a rank of
$3^{rd}$, $5^{th}$ and $4^{th}$ respectively.
| 2,021 |
Computation and Language
|
NLP-CUET@LT-EDI-EACL2021: Multilingual Code-Mixed Hope Speech Detection
using Cross-lingual Representation Learner
|
In recent years, several systems have been developed to regulate the spread
of negativity and eliminate aggressive, offensive or abusive contents from the
online platforms. Nevertheless, only a limited number of studies have been carried
out to identify positive, encouraging and supportive contents. In this work, our goal
is to identify whether a social media post/comment contains hope speech or not.
We propose three distinct models to identify hope speech in English, Tamil and
Malayalam language to serve this purpose. To attain this goal, we employed
various machine learning (support vector machine, logistic regression,
ensemble), deep learning (convolutional neural network + long short term
memory) and transformer (m-BERT, Indic-BERT, XLNet, XLM-Roberta) based methods.
Results indicate that XLM-Roberta outperforms all other techniques, achieving
weighted $f_1$-scores of $0.93$, $0.60$ and $0.85$ respectively for English,
Tamil and Malayalam. Our team achieved $1^{st}$, $2^{nd}$ and
$1^{st}$ rank in these three tasks respectively.
| 2,021 |
Computation and Language
|
Knowledge-Base Enriched Word Embeddings for Biomedical Domain
|
Word embeddings have been shown to be adept at capturing the semantic and syntactic
regularities of natural language text, as a result of which these
representations have found their utility in a wide variety of downstream
content analysis tasks. Commonly, these word embedding techniques derive the
distributed representation of words based on the local context information.
However, such approaches ignore the rich amount of explicit information present
in knowledge-bases. This is problematic, as it might lead to poor
representations for words with insufficient local context, such as domain-specific
words. Furthermore, the problem becomes pronounced in domains such as
biomedicine, where the prevalence of these domain-specific words is relatively
high. Towards this end, in this project, we propose a new word embedding based
model for biomedical domain that jointly leverages the information from
available corpora and domain knowledge in order to generate knowledge-base
powered embeddings. Unlike existing approaches, the proposed methodology is
simple but adept at capturing the precise knowledge available in domain
resources in an accurate way. Experimental results on biomedical concept
similarity and relatedness tasks validate the effectiveness of the proposed
approach.
| 2,021 |
Computation and Language
|
Generalized and Transferable Patient Language Representation for
Phenotyping with Limited Data
|
The paradigm of representation learning through transfer learning has the
potential to greatly enhance clinical natural language processing. In this
work, we propose a multi-task pre-training and fine-tuning approach for
learning generalized and transferable patient representations from medical
language. The model is first pre-trained with different but related
high-prevalence phenotypes and further fine-tuned on downstream target tasks.
Our main contribution focuses on the impact this technique can have on
low-prevalence phenotypes, a challenging task due to the dearth of data. We
validate the representation from pre-training, and fine-tune the multi-task
pre-trained models on low-prevalence phenotypes including 38 circulatory
diseases, 23 respiratory diseases, and 17 genitourinary diseases. We find
multi-task pre-training increases learning efficiency and achieves consistently
high performance across the majority of phenotypes. Most importantly, the
multi-task pre-training is almost always either the best-performing model or
performs tolerably close to the best-performing model, a property we refer to
as robust. All these results lead us to conclude that this multi-task transfer
learning architecture is a robust approach for developing generalized and
transferable patient language representations for numerous phenotypes.
| 2,021 |
Computation and Language
|
BERT-based Acronym Disambiguation with Multiple Training Strategies
|
Acronym disambiguation (AD) task aims to find the correct expansions of an
ambiguous acronym in a given sentence. Although it is convenient to use
acronyms, sometimes they could be difficult to understand. Identifying the
appropriate expansions of an acronym is a practical task in natural language
processing. Since little work has been done on AD in the scientific field, we
propose a binary classification model incorporating BERT and several training
strategies including dynamic negative sample selection, task adaptive
pretraining, adversarial training and pseudo labeling in this paper.
Experiments on SciAD show the effectiveness of our proposed model and our score
ranks 1st in SDU@AAAI-21 shared task 2: Acronym Disambiguation.
| 2,021 |
Computation and Language
|
RoBERTa-wwm-ext Fine-Tuning for Chinese Text Classification
|
Bidirectional Encoder Representations from Transformers (BERT) has been shown to
be a promising way to dramatically improve performance across various
Natural Language Processing tasks [Devlin et al., 2019]. Meanwhile, progress
made over the past few years by various neural network architectures has also
proved the effectiveness of neural networks in the field of Natural Language
Processing. In this project, the RoBERTa-wwm-ext [Cui et al., 2019] pre-trained
language model was adopted and fine-tuned for Chinese text classification. The
models were able to classify Chinese texts into two categories, containing
descriptions of legal behavior and descriptions of illegal behavior. Four
different models are also proposed in the paper. Those models use
RoBERTa-wwm-ext as their embedding layer and feed the embeddings into different
neural networks. The motivation behind proposing these models is
straightforward: by introducing a more complex output layer architecture, the
overall performance of the models could be improved. All the models were
trained on a data set derived from Chinese public court records, and the
performance of the different models was compared. The experiments show that the
proposed models failed to beat the original RoBERTa-wwm-ext model in terms of
accuracy and training efficiency.
| 2,021 |
Computation and Language
|
Citizen Participation and Machine Learning for a Better Democracy
|
The development of democratic systems is a crucial task as confirmed by its
selection as one of the Millennium Sustainable Development Goals by the United
Nations. In this article, we report on the progress of a project that aims to
address barriers, one of which is information overload, to achieving effective
direct citizen participation in democratic decision-making processes. The main
objectives are to explore if the application of Natural Language Processing
(NLP) and machine learning can improve citizens' experience of digital citizen
participation platforms. Taking as a case study the "Decide Madrid" Consul
platform, which enables citizens to post proposals for policies they would like
to see adopted by the city council, we used NLP and machine learning to provide
new ways to (a) suggest to citizens proposals they might wish to support; (b)
group citizens by interests so that they can more easily interact with each
other; (c) summarise comments posted in response to proposals; (d) assist
citizens in aggregating and developing proposals. Evaluation of the results
confirms that NLP and machine learning have a role to play in addressing some
of the barriers users of platforms such as Consul currently experience.
| 2,021 |
Computation and Language
|
Towards Conversational Humor Analysis and Design
|
Well-defined jokes can be divided neatly into a setup and a punchline. While
most works on humor today talk about a joke as a whole, the idea of generating
punchlines to a setup has applications in conversational humor, where funny
remarks usually occur with a non-funny context. Thus, this paper is based
around two core concepts: Classification and the Generation of a punchline from
a particular setup based on the Incongruity Theory. We first implement a
feature-based machine learning model to classify humor. For humor generation,
we use a neural model, and then merge the classical rule-based approaches with
the neural approach to create a hybrid model. The idea is to combine
insights gained from other tasks with the setup-punchline model and thus
apply it to existing text generation approaches. We then compare our
model against human-written jokes with the help of human evaluators in a
double-blind study.
| 2,021 |
Computation and Language
|
CREATe: Clinical Report Extraction and Annotation Technology
|
Clinical case reports are written descriptions of the unique aspects of a
particular clinical case, playing an essential role in sharing clinical
experiences about atypical disease phenotypes and new therapies. However, to
our knowledge, there has been no attempt to develop an end-to-end system to
annotate, index, or otherwise curate these reports. In this paper, we propose a
novel computational resource platform, CREATe, for extracting, indexing, and
querying the contents of clinical case reports. CREATe fosters an environment
of sustainable resource support and discovery, enabling researchers to overcome
the challenges of information science. An online video of the demonstration can
be viewed at https://youtu.be/Q8owBQYTjDc.
| 2,021 |
Computation and Language
|
RuSentEval: Linguistic Source, Encoder Force!
|
The success of pre-trained transformer language models has brought a great
deal of interest in how these models work and what they learn about language.
However, prior research in the field is mainly devoted to English, and little
is known regarding other languages. To this end, we introduce RuSentEval, an
enhanced set of 14 probing tasks for Russian, including ones that have not been
explored yet. We apply a combination of complementary probing methods to
explore the distribution of various linguistic properties in five multilingual
transformers for two typologically contrasting languages -- Russian and
English. Our results provide intriguing findings that contradict the common
understanding of how linguistic knowledge is represented, and demonstrate that
some properties are learned in a similar manner despite the language
differences.
| 2,021 |
Computation and Language
|
Token-Modification Adversarial Attacks for Natural Language Processing:
A Survey
|
Many adversarial attacks target natural language processing systems, most of
which succeed through modifying the individual tokens of a document. Despite
the apparent uniqueness of each of these attacks, fundamentally they are simply
a distinct configuration of four components: a goal function, allowable
transformations, a search method, and constraints. In this survey, we
systematically present the different components used throughout the literature,
using an attack-independent framework which allows for easy comparison and
categorisation of components. Our work aims to serve as a comprehensive guide
for newcomers to the field and to spark targeted research into refining the
individual attack components.
| 2,024 |
Computation and Language
|
BERT-based knowledge extraction method of unstructured domain text
|
With the development and business adoption of knowledge graphs, there is an
increasing demand for extracting entities and relations of knowledge graphs
from unstructured domain documents. This makes the automatic knowledge
extraction for domain text quite meaningful. This paper proposes a knowledge
extraction method based on BERT, which is used to extract knowledge points from
unstructured specific domain texts (such as insurance clauses in the insurance
industry) automatically to save manpower of knowledge graph construction.
Different from the commonly used methods which are based on rules, templates or
entity extraction models, this paper converts the domain knowledge points into
question and answer pairs and uses the text around the answer in documents as
the context. The method adopts a BERT-based model similar to BERT's SQuAD
reading comprehension task. The model is fine-tuned and then used to directly
extract knowledge points from additional insurance clauses. According to the test
results, the model performance is good.
| 2,021 |
Computation and Language
|
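The method described above casts domain knowledge points as question-answer pairs and extracts answers SQuAD-style with a BERT reading-comprehension model. The snippet below shows the same framing with the Hugging Face question-answering pipeline and a generic extractive QA model; the model name, the clause text, and the question are placeholders, not the paper's setup.

```python
from transformers import pipeline

# A generic extractive QA model stands in for the paper's fine-tuned Chinese BERT.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

clause = ("The waiting period of this policy is 90 days. Claims arising from "
          "illnesses diagnosed within the waiting period will not be covered.")

# A domain knowledge point phrased as a question over the clause (the context).
result = qa(question="How long is the waiting period?", context=clause)
print(result["answer"], result["score"])  # e.g. "90 days" with a confidence score
```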
Combat COVID-19 Infodemic Using Explainable Natural Language Processing
Models
|
Misinformation of COVID-19 is prevalent on social media as the pandemic
unfolds, and the associated risks are extremely high. Thus, it is critical to
detect and combat such misinformation. Recently, deep learning models using
natural language processing techniques, such as BERT (Bidirectional Encoder
Representations from Transformers), have achieved great successes in detecting
misinformation. In this paper, we proposed an explainable natural language
processing model based on DistilBERT and SHAP (Shapley Additive exPlanations)
to combat misinformation about COVID-19 due to their efficiency and
effectiveness. First, we collected a dataset of 984 claims about COVID-19 with
fact checking. By augmenting the data using back-translation, we doubled the
sample size of the dataset and the DistilBERT model was able to obtain good
performance (accuracy: 0.972; areas under the curve: 0.993) in detecting
misinformation about COVID-19. Our model was also tested on a larger dataset
for AAAI2021 - COVID-19 Fake News Detection Shared Task and obtained good
performance (accuracy: 0.938; areas under the curve: 0.985). The performance on
both datasets was better than traditional machine learning models. Second, in
order to boost public trust in model prediction, we employed SHAP to improve
model explainability, which was further evaluated using a between-subjects
experiment with three conditions, i.e., text (T), text+SHAP explanation (TSE),
and text+SHAP explanation+source and evidence (TSESE). The participants were
significantly more likely to trust and share information related to COVID-19 in
the TSE and TSESE conditions than in the T condition. Our results provided good
implications in detecting misinformation about COVID-19 and improving public
trust.
| 2,021 |
Computation and Language
|
Long Document Summarization in a Low Resource Setting using Pretrained
Language Models
|
Abstractive summarization is the task of compressing a long document into a
coherent short document while retaining salient information. Modern abstractive
summarization methods are based on deep neural networks which often require
large training datasets. Since collecting summarization datasets is an
expensive and time-consuming task, practical industrial settings are usually
low-resource. In this paper, we study a challenging low-resource setting of
summarizing long legal briefs with an average source document length of 4268
words and only 120 available (document, summary) pairs. To account for data
scarcity, we used a modern pretrained abstractive summarizer BART (Lewis et
al., 2020), which only achieves 17.9 ROUGE-L as it struggles with long
documents. We thus attempt to compress these long documents by identifying
salient sentences in the source which best ground the summary, using a novel
algorithm based on GPT-2 (Radford et al., 2019) language model perplexity
scores, that operates within the low resource regime. On feeding the compressed
documents to BART, we observe a 6.0 ROUGE-L improvement. Our method also beats
several competitive salience detection baselines. Furthermore, the identified
salient sentences tend to agree with an independent human labeling by domain
experts.
| 2,021 |
Computation and Language
|
RAGA: Relation-aware Graph Attention Networks for Global Entity
Alignment
|
Entity alignment (EA) is the task to discover entities referring to the same
real-world object from different knowledge graphs (KGs), which is the most
crucial step in integrating multi-source KGs. The majority of the existing
embeddings-based entity alignment methods embed entities and relations into a
vector space based on relation triples of KGs for local alignment. As these
methods insufficiently consider the multiple relations between entities, the
structure information of KGs has not been fully leveraged. In this paper, we
propose a novel framework based on Relation-aware Graph Attention Networks to
capture the interactions between entities and relations. Our framework adopts
the self-attention mechanism to spread entity information to the relations and
then aggregate relation information back to entities. Furthermore, we propose a
global alignment algorithm to make one-to-one entity alignments with a
fine-grained similarity matrix. Experiments on three real-world cross-lingual
datasets show that our framework outperforms the state-of-the-art methods.
| 2,021 |
Computation and Language
|
M6: A Chinese Multimodal Pretrainer
|
In this work, we construct the largest dataset for multimodal pretraining in
Chinese, which consists of over 1.9TB of images and 292GB of texts that cover a wide
range of domains. We propose a cross-modal pretraining method called M6,
referring to Multi-Modality to Multi-Modality Multitask Mega-transformer, for
unified pretraining on the data of single modality and multiple modalities. We
scale the model size up to 10 billion and 100 billion parameters, and build the
largest pretrained model in Chinese. We apply the model to a series of
downstream applications, and demonstrate its outstanding performance in
comparison with strong baselines. Furthermore, we specifically design a
downstream task of text-guided image generation, and show that the finetuned M6
can create high-quality images with high resolution and abundant details.
| 2,021 |
Computation and Language
|
Vyākarana: A Colorless Green Benchmark for Syntactic Evaluation in
Indic Languages
|
While there has been significant progress towards developing NLU resources
for Indic languages, syntactic evaluation has been relatively less explored.
Unlike English, Indic languages have rich morphosyntax, grammatical genders,
free linear word-order, and highly inflectional morphology. In this paper, we
introduce Vyākarana: a benchmark of Colorless Green sentences in Indic
languages for syntactic evaluation of multilingual language models. The
benchmark comprises four syntax-related tasks: PoS Tagging, Syntax Tree-depth
Prediction, Grammatical Case Marking, and Subject-Verb Agreement. We use the
datasets from the evaluation tasks to probe five multilingual language models
of varying architectures for syntax in Indic languages. Due to its prevalence,
we also include a code-switching setting in our experiments. Our results show
that the token-level and sentence-level representations from the Indic language
models (IndicBERT and MuRIL) do not capture the syntax in Indic languages as
efficiently as the other highly multilingual language models. Further, our
layer-wise probing experiments reveal that while mBERT, DistilmBERT, and XLM-R
localize the syntax in middle layers, the Indic language models do not show
such syntactic localization.
| 2,021 |
Computation and Language
|
Inductive biases, pretraining and fine-tuning jointly account for brain
responses to speech
|
Our ability to comprehend speech remains, to date, unrivaled by deep learning
models. This feat could result from the brain's ability to fine-tune generic
sound representations for speech-specific processes. To test this hypothesis,
we compare i) five types of deep neural networks to ii) human brain responses
elicited by spoken sentences and recorded in 102 Dutch subjects using
functional Magnetic Resonance Imaging (fMRI). Each network was either trained
on an acoustic scene classification task, a speech-to-text task (based on Bengali,
English, or Dutch), or not trained. The similarity between each model and the
brain is assessed by correlating their respective activations after an optimal
linear projection. The differences in brain-similarity across networks revealed
three main results. First, speech representations in the brain can be accounted
for by random deep networks. Second, learning to classify acoustic scenes leads
deep nets to increase their brain similarity. Third, learning to process
phonetically-related speech inputs (i.e., Dutch vs English) leads deep nets to
reach higher levels of brain-similarity than learning to process
phonetically-distant speech inputs (i.e. Dutch vs Bengali). Together, these
results suggest that the human brain fine-tunes its heavily-trained auditory
hierarchy to learn to process speech.
| 2,021 |
Computation and Language
|
Adapting MARBERT for Improved Arabic Dialect Identification: Submission
to the NADI 2021 Shared Task
|
In this paper, we tackle the Nuanced Arabic Dialect Identification (NADI)
shared task (Abdul-Mageed et al., 2021) and demonstrate state-of-the-art
results on all of its four subtasks. Tasks are to identify the geographic
origin of short Dialectal (DA) and Modern Standard Arabic (MSA) utterances at
the levels of both country and province. Our final model is an ensemble of
variants built on top of MARBERT that achieves an F1-score of 34.03% for DA on
the country-level development set -- an improvement of 7.63% over previous
work.
| 2,021 |
Computation and Language
|
Sentiment Analysis of Users' Reviews on COVID-19 Contact Tracing Apps
with a Benchmark Dataset
|
Contact tracing has been globally adopted in the fight to control the
infection rate of COVID-19. Thanks to digital technologies, such as smartphones
and wearable devices, contacts of COVID-19 patients can be easily traced and
informed about their potential exposure to the virus. To this aim, several
interesting mobile applications have been developed. However, there are
ever-growing concerns over the working mechanism and performance of these
applications. The literature already provides some interesting exploratory
studies on the community's response to the applications by analyzing
information from different sources, such as news and users' reviews of the
applications. However, to the best of our knowledge, there is no existing
solution that automatically analyzes users' reviews and extracts the evoked
sentiments. In this work, we propose a pipeline starting from manual annotation
via a crowd-sourcing study and concluding on the development and training of AI
models for automatic sentiment analysis of users' reviews. In total, we employ
eight different methods, achieving average F1-scores of up to 94.8%, indicating
the feasibility of automatic sentiment analysis of users' reviews on the
COVID-19 contact tracing applications. We also highlight the key advantages,
drawbacks, and users' concerns over the applications. Moreover, we also collect
and annotate a large-scale dataset of 34,534 reviews from the contact tracing
applications of 46 distinct countries. The
presented analysis and the dataset are expected to provide a baseline/benchmark
for future research in the domain.
| 2,021 |
Computation and Language
|
Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in
Language
|
Current NLP datasets targeting ambiguity can be solved by a native speaker
with relative ease. We present Cryptonite, a large-scale dataset based on
cryptic crosswords, which is both linguistically complex and naturally sourced.
Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a
misleading surface reading, whose solving requires disambiguating semantic,
syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues
pose a challenge even for experienced solvers, though top-tier experts can
solve them with almost 100% accuracy. Cryptonite is a challenging task for
current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6%
accuracy, on par with the accuracy of a rule-based clue solver (8.6%).
| 2,021 |
Computation and Language
|
On the Effectiveness of Dataset Embeddings in Mono-lingual,Multi-lingual
and Zero-shot Conditions
|
Recent complementary strands of research have shown that leveraging
information on the data source through encoding their properties into
embeddings can lead to performance increase when training a single model on
heterogeneous data sources. However, it remains unclear in which situations
these dataset embeddings are most effective, because they are used in a large
variety of settings, languages and tasks. Furthermore, it is usually assumed
that gold information on the data source is available, and that the test data
is from a distribution seen during training. In this work, we compare the
effect of dataset embeddings in mono-lingual settings, multi-lingual settings,
and with predicted data source label in a zero-shot setting. We evaluate on
three morphosyntactic tasks: morphological tagging, lemmatization, and
dependency parsing, and use 104 datasets, 66 languages, and two different
dataset grouping strategies. Performance increases are highest when the
datasets are of the same language, and we know from which distribution the
test-instance is drawn. In contrast, for setups where the data is from an
unseen distribution, performance increase vanishes.
| 2,021 |
Computation and Language
|
DEUS: A Data-driven Approach to Estimate User Satisfaction in Multi-turn
Dialogues
|
Digital assistants are experiencing rapid growth due to their ability to
assist users with day-to-day tasks, where most dialogues are
multi-turn. However, evaluating multi-turn dialogues remains challenging,
especially at scale. We suggest a context-sensitive method to estimate the
turn-level satisfaction for dialogue considering various types of user
preferences. The costs of interactions between users and dialogue systems are
formulated using a budget consumption concept. We assume users have an initial
interaction budget for a dialogue, based on the task complexity, and that
each turn has a cost. When the task is completed, or the budget has been
exhausted, users quit the dialogue. We demonstrate our method's effectiveness
by extensive experimentation with a simulated dialogue platform and real
multi-turn dialogues.
| 2,021 |
Computation and Language
|
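A minimal sketch of the budget-consumption idea from the DEUS abstract above. How the initial budget scales with task complexity, the per-turn costs, and the satisfaction formula are illustrative assumptions, not the paper's fitted values.

```python
from dataclasses import dataclass, field

@dataclass
class BudgetedDialogue:
    """Toy user-satisfaction model: a dialogue starts with an interaction
    budget derived from task complexity; each turn consumes part of it."""
    task_complexity: float                 # e.g. 1.0 = simple, 3.0 = hard (assumed scale)
    budget: float = field(init=False)

    def __post_init__(self):
        # Assumption: harder tasks grant a larger initial budget.
        self.initial_budget = 2.0 + 2.0 * self.task_complexity
        self.budget = self.initial_budget

    def take_turn(self, turn_cost: float) -> float:
        """Consume budget and return a turn-level satisfaction estimate in [0, 1]."""
        self.budget -= turn_cost
        # Satisfaction decays as the remaining budget shrinks.
        return max(0.0, min(1.0, self.budget / self.initial_budget))

    def user_quits(self, task_completed: bool) -> bool:
        """Users quit when the task is done or the budget is exhausted."""
        return task_completed or self.budget <= 0.0
```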
ToxCCIn: Toxic Content Classification with Interpretability
|
Despite the recent successes of transformer-based models in terms of
effectiveness on a variety of tasks, their decisions often remain opaque to
humans. Explanations are particularly important for tasks like offensive
language or toxicity detection on social media because a manual appeal process
is often in place to dispute automatically flagged content. In this work, we
propose a technique to improve the interpretability of these models, based on a
simple and powerful assumption: a post is at least as toxic as its most toxic
span. We incorporate this assumption into transformer models by scoring a post
based on the maximum toxicity of its spans and augmenting the training process
to identify correct spans. We find this approach effective, and it can produce
explanations that exceed the quality of those provided by Logistic Regression
analysis (often regarded as a highly-interpretable model), according to a human
study.
| 2,021 |
Computation and Language
|
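The "a post is at least as toxic as its most toxic span" assumption from the ToxCCIn abstract above can be illustrated with a small PyTorch module. Here token-level scores stand in for span scores, and the encoder is assumed to be any transformer that returns a `last_hidden_state`; this is a hedged sketch, not the authors' released model.

```python
import torch
import torch.nn as nn

class MaxSpanToxicity(nn.Module):
    """Score every token, take the maximum as the post-level toxicity score."""

    def __init__(self, encoder: nn.Module, hidden_size: int):
        super().__init__()
        self.encoder = encoder                      # e.g. a Hugging Face transformer encoder
        self.token_scorer = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        token_scores = self.token_scorer(hidden).squeeze(-1)          # (batch, seq)
        token_scores = token_scores.masked_fill(attention_mask == 0, float("-inf"))
        post_score, _ = token_scores.max(dim=-1)                      # most toxic token/span
        return torch.sigmoid(post_score)                              # post toxicity probability
```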
Deep Bag-of-Sub-Emotions for Depression Detection in Social Media
|
This paper presents the Deep Bag-of-Sub-Emotions (DeepBoSE), a novel deep
learning model for depression detection in social media. The model is
formulated such that it internally computes a differentiable Bag-of-Features
(BoF) representation that incorporates emotional information. This is achieved
by a reinterpretation of classical weighting schemes like term
frequency-inverse document frequency into probabilistic deep learning
operations. An important advantage of the proposed method is that it can be
trained under the transfer learning paradigm, which is useful to enhance
conventional BoF models that cannot be directly integrated into deep learning
architectures. Experiments were performed in the eRisk17 and eRisk18 datasets
for the depression detection task; results show that DeepBoSE outperforms
conventional BoF representations and it is competitive with the state of the
art, achieving an F1-score over the positive class of 0.64 in eRisk17 and 0.65
in eRisk18.
| 2,021 |
Computation and Language
|
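One way to read the differentiable Bag-of-Features in the DeepBoSE abstract above is as a soft assignment of token embeddings to a learned codebook of "sub-emotions", pooled into a document-level histogram. The sketch below follows that reading; the dimensions, temperature, and cosine-similarity assignment are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftBagOfFeatures(nn.Module):
    """Differentiable BoF: soft-assign each token embedding to K codewords
    ('sub-emotions') and pool the assignments into a document histogram."""

    def __init__(self, embed_dim: int = 300, num_codewords: int = 64, temperature: float = 0.1):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codewords, embed_dim))
        self.temperature = temperature

    def forward(self, token_embeddings: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq, dim); mask: (batch, seq), 1 for real tokens.
        sims = F.normalize(token_embeddings, dim=-1) @ F.normalize(self.codebook, dim=-1).T
        assign = F.softmax(sims / self.temperature, dim=-1)   # soft "term frequencies"
        assign = assign * mask.unsqueeze(-1)
        histogram = assign.sum(dim=1) / mask.sum(dim=1, keepdim=True).clamp(min=1)
        return histogram                                      # (batch, num_codewords)
```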
Emotion Dynamics in Movie Dialogues
|
Emotion dynamics is a framework for measuring how an individual's emotions
change over time. It is a powerful tool for understanding how we behave and
interact with the world. In this paper, we introduce a framework to track
emotion dynamics through one's utterances. Specifically we introduce a number
of utterance emotion dynamics (UED) metrics inspired by work in Psychology. We
use this approach to trace emotional arcs of movie characters. We analyze
thousands of such character arcs to test hypotheses that inform our broader
understanding of stories. Notably, we show that there is a tendency for
characters to use increasingly more negative words and become increasingly
emotionally discordant with each other until about 90 percent of the narrative
length. UED also has applications in behavior studies, social sciences, and
public health.
| 2,021 |
Computation and Language
|
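A toy version of tracing an emotional arc along a character's utterances, in the spirit of the UED metrics described above: average per-word valence within a rolling window over the narrative. The lexicon argument is a placeholder, and the paper's actual metrics are richer than this sketch.

```python
from typing import Dict, List

def emotion_arc(utterances: List[str],
                valence_lexicon: Dict[str, float],
                window: int = 10) -> List[float]:
    """Rolling-window average word valence over a character's utterances."""
    per_utt = []
    for utt in utterances:
        words = utt.lower().split()
        scores = [valence_lexicon[w] for w in words if w in valence_lexicon]
        per_utt.append(sum(scores) / len(scores) if scores else 0.0)

    arc = []
    for i in range(len(per_utt)):
        chunk = per_utt[max(0, i - window + 1): i + 1]
        arc.append(sum(chunk) / len(chunk))
    return arc

# Usage with a toy lexicon (real work would use a published valence lexicon):
# arc = emotion_arc(["I love this", "this is terrible"],
#                   {"love": 0.9, "terrible": -0.8}, window=2)
```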
Contrastive Explanations for Model Interpretability
|
Contrastive explanations clarify why an event occurred in contrast to
another. They are more inherently intuitive to humans to both produce and
comprehend. We propose a methodology to produce contrastive explanations for
classification models by modifying the representation to disregard
non-contrastive information, and modifying model behavior to only be based on
contrastive reasoning. Our method is based on projecting model representation
to a latent space that captures only the features that are useful (to the
model) to differentiate two potential decisions. We demonstrate the value of
contrastive explanations by analyzing two different scenarios, using both
high-level abstract concept attribution and low-level input token/span
attribution, on two widely used text classification tasks. Specifically, we
produce explanations for answering: for which label, and against which
alternative label, is some aspect of the input useful? And which aspects of the
input are useful for and against particular decisions? Overall, our findings
shed light on the ability of label-contrastive explanations to provide a more
accurate and finer-grained interpretability of a model's decision.
| 2,021 |
Computation and Language
|
Hindi-Urdu Adposition and Case Supersenses v1.0
|
These are the guidelines for the application of SNACS (Semantic Network of
Adposition and Case Supersenses; Schneider et al. 2018) to Modern Standard
Hindi of Delhi. SNACS is an inventory of 50 supersenses (semantic labels) for
labelling the use of adpositions and case markers with respect to both
lexical-semantic function and relation to the underlying context. The English
guidelines (Schneider et al., 2020) were used as a model for this document.
Besides the case system, Hindi has an extremely rich adpositional system
built on the oblique genitive, with productive incorporation of loanwords even
in present-day Hinglish.
This document is aligned with version 2.5 of the English guidelines.
| 2,021 |
Computation and Language
|
Unsupervised Word Segmentation with Bi-directional Neural Language Model
|
We present an unsupervised word segmentation model, in which the learning
objective is to maximize the generation probability of a sentence given all of its
possible segmentations. Such a generation probability can be factorized into the
likelihood of each possible segment given the context in a recursive way. In
order to better capture the long- and short-term dependencies, we propose to
use bi-directional neural language models to model the features of a
segment's context. Two decoding algorithms are also described to combine the
context features from both directions to generate the final segmentation, which
helps to reconcile word boundary ambiguities. Experimental results showed that
our context-sensitive unsupervised segmentation model achieved state-of-the-art
performance under different evaluation settings on various datasets for Chinese, and
comparable results for Thai.
| 2,021 |
Computation and Language
|
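The recursive factorization described in the word-segmentation abstract above corresponds to a standard lattice-style dynamic program. The sketch below marginalizes over all segmentations of a character string; `segment_logp` is a placeholder for the neural language model scores, and only the left-to-right direction is shown (the paper combines both directions).

```python
import math
from typing import Callable

def log_add(a: float, b: float) -> float:
    """Numerically stable log(exp(a) + exp(b))."""
    if a == float("-inf"):
        return b
    if b == float("-inf"):
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def marginal_log_prob(chars: str,
                      segment_logp: Callable[[str, str], float],
                      max_word_len: int = 4) -> float:
    """alpha[i] = log-probability of chars[:i], summed over all segmentations.
    `segment_logp(context, segment)` scores one candidate word given its left
    context; in the paper this would come from the neural language model."""
    n = len(chars)
    alpha = [float("-inf")] * (n + 1)
    alpha[0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(0, i - max_word_len), i):
            if alpha[j] == float("-inf"):
                continue
            alpha[i] = log_add(alpha[i], alpha[j] + segment_logp(chars[:j], chars[j:i]))
    return alpha[n]
```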
Towards Efficiently Diversifying Dialogue Generation via Embedding
Augmentation
|
Dialogue generation models face the challenge of producing generic and
repetitive responses. Unlike previous augmentation methods that mostly focus on
token manipulation and ignore the essential variety within a single sample
using hard labels, we propose to promote the generation diversity of the neural
dialogue models via soft embedding augmentation along with soft labels in this
paper. Particularly, we select some key input tokens and fuse their embeddings
together with embeddings from their semantic-neighbor tokens. The new
embeddings serve as the input of the model to replace the original one.
Besides, soft labels are used in loss calculation, resulting in multi-target
supervision for a given input. Our experimental results on two datasets
illustrate that our proposed method is capable of generating more diverse
responses than raw models while maintaining a similar n-gram accuracy, which
ensures the quality of the generated responses.
| 2,021 |
Computation and Language
|
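A hedged sketch of the soft embedding augmentation described above: for a few selected positions, the token embedding is fused with the embeddings of its nearest semantic neighbours, and a matching soft label is produced for multi-target supervision. The selection probability, number of neighbours, and mixing weight below are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def soft_embedding_augment(embedding_matrix, token_ids,
                           augment_prob=0.15, num_neighbors=5, self_weight=0.7):
    """Fuse selected token embeddings with their nearest-neighbour embeddings and
    build matching soft labels over the vocabulary (sketch only)."""
    emb = embedding_matrix.detach()                          # (vocab, dim)
    vocab_size = emb.size(0)
    embeds = emb[token_ids].clone()                          # (seq, dim)
    soft_labels = F.one_hot(token_ids, vocab_size).float()   # (seq, vocab)

    chosen = torch.nonzero(torch.rand(token_ids.size(0)) < augment_prob).flatten()
    for pos in chosen.tolist():
        sims = F.cosine_similarity(embeds[pos].unsqueeze(0), emb)   # (vocab,)
        sims[token_ids[pos]] = float("-inf")                 # exclude the token itself
        neighbors = sims.topk(num_neighbors).indices
        w = (1.0 - self_weight) / num_neighbors
        embeds[pos] = self_weight * embeds[pos] + w * emb[neighbors].sum(dim=0)
        soft_labels[pos] = w * F.one_hot(neighbors, vocab_size).float().sum(dim=0)
        soft_labels[pos, token_ids[pos]] += self_weight      # weights sum to 1
    return embeds, soft_labels
```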
An End-to-End Network for Emotion-Cause Pair Extraction
|
The task of Emotion-Cause Pair Extraction (ECPE) aims to extract all
potential clause-pairs of emotions and their corresponding causes in a
document. Unlike the more well-studied task of Emotion Cause Extraction (ECE),
ECPE does not require the emotion clauses to be provided as annotations.
Previous works on ECPE have either followed a multi-stage approach where
emotion extraction, cause extraction, and pairing are done independently or use
complex architectures to resolve its limitations. In this paper, we propose an
end-to-end model for the ECPE task. Due to the unavailability of an English
language ECPE corpus, we adapt the NTCIR-13 ECE corpus and establish a baseline
for the ECPE task on this dataset. On this dataset, the proposed method
produces significant performance improvements (~6.5 increase in F1 score) over
the multi-stage approach and achieves comparable performance to the
state-of-the-art methods.
| 2,021 |
Computation and Language
|
Probing Product Description Generation via Posterior Distillation
|
In product description generation (PDG), user-cared aspects are critical
for the recommendation system, as they can not only improve users' experiences
but also obtain more clicks. High-quality customer reviews can be considered as
an ideal source to mine user-cared aspects. However, in reality, a large number
of new products (known as long-tailed commodities) cannot gather a sufficient
amount of customer reviews, which poses a big challenge for the product
description generation task. Existing works tend to generate the product
description solely based on item information, i.e., product attributes or title
words, which leads to tedious contents and cannot attract customers
effectively. To tackle this problem, we propose an adaptive posterior network
based on Transformer architecture that can utilize user-cared information from
customer reviews. Specifically, we first extend the self-attentive Transformer
encoder to encode product titles and attributes. Then, we apply an adaptive
posterior distillation module to utilize useful review information, which
integrates user-cared aspects to the generation process. Finally, we apply a
Transformer-based decoding phase with copy mechanism to automatically generate
the product description. Besides, we also collect a large-scale Chinese product
description dataset to support our work and further research in this field.
Experimental results show that our model is superior to traditional generative
models in both automatic indicators and human evaluation.
| 2,021 |
Computation and Language
|
Interpretable Multi-Modal Hate Speech Detection
|
With growing role of social media in shaping public opinions and beliefs
across the world, there has been an increased attention to identify and counter
the problem of hate speech on social media. Hate speech on online spaces has
serious manifestations, including social polarization and hate crimes. While
prior works have proposed automated techniques to detect hate speech online,
these techniques primarily fail to look beyond the textual content. Moreover,
few attempts have been made to focus on the aspects of interpretability of such
models given the social and legal implications of incorrect predictions. In
this work, we propose a deep neural multi-modal model that can: (a) detect hate
speech by effectively capturing the semantics of the text along with
socio-cultural context in which a particular hate expression is made, and (b)
provide interpretable insights into decisions of our model. By performing a
thorough evaluation of different modeling techniques, we demonstrate that our
model is able to outperform the existing state-of-the-art hate speech
classification approaches. Finally, we show the importance of social and
cultural context features towards unearthing clusters associated with different
categories of hate.
| 2,019 |
Computation and Language
|
Disentangling Syntax and Semantics in the Brain with Deep Networks
|
The activations of language transformers like GPT-2 have been shown to
linearly map onto brain activity during speech comprehension. However, the
nature of these activations remains largely unknown and presumably conflate
distinct linguistic classes. Here, we propose a taxonomy to factorize the
high-dimensional activations of language models into four combinatorial
classes: lexical, compositional, syntactic, and semantic representations. We
then introduce a statistical method to decompose, through the lens of GPT-2's
activations, the brain activity of 345 subjects recorded with functional
magnetic resonance imaging (fMRI) during the listening of ~4.6 hours of
narrated text. The results highlight two findings. First, compositional
representations recruit a more widespread cortical network than lexical ones,
and encompass the bilateral temporal, parietal and prefrontal cortices. Second,
contrary to previous claims, syntax and semantics are not associated with
separated modules, but, instead, appear to share a common and distributed
neural substrate. Overall, this study introduces a versatile framework to
isolate, in the brain activity, the distributed representations of linguistic
constructs.
| 2,021 |
Computation and Language
|
Hate Towards the Political Opponent: A Twitter Corpus Study of the 2020
US Elections on the Basis of Offensive Speech and Stance Detection
|
The 2020 US Elections have been, more than ever before, characterized by
social media campaigns and mutual accusations. We investigate in this paper if
this manifests also in online communication of the supporters of the candidates
Biden and Trump, by uttering hateful and offensive communication. We formulate
an annotation task, in which we join the tasks of hateful/offensive speech
detection and stance detection, and annotate 3000 Tweets from the campaign
period for whether they express a particular stance towards a candidate. Next to the
established classes of favorable and against, we add mixed and neutral stances
and also annotate if a candidate is mentioned without an opinion expression.
Further, we annotate whether the tweet is written in an offensive style. This
enables us to analyze if supporters of Joe Biden and the Democratic Party
communicate differently than supporters of Donald Trump and the Republican
Party. A BERT baseline classifier shows that detecting whether somebody is a
supporter of a candidate can be performed with high quality (.89 F1 for Trump
and .91 F1 for Biden), while detecting that somebody expresses opposition to a
candidate is more challenging (.79 F1 and .64 F1, respectively). The
automatic detection of hate/offensive speech remains challenging (with .53 F1).
Our corpus is publicly available and constitutes a novel resource for
computational modelling of offensive language under consideration of stances.
| 2,021 |
Computation and Language
|
Emotion Ratings: How Intensity, Annotation Confidence and Agreements are
Entangled
|
When humans judge the affective content of texts, they also implicitly assess
the correctness of such judgment, that is, their confidence. We hypothesize
that people's (in)confidence that they performed well in an annotation task
leads to (dis)agreements among each other. If this is true, confidence may
serve as a diagnostic tool for systematic differences in annotations. To probe
our assumption, we conduct a study on a subset of the Corpus of Contemporary
American English, in which we ask raters to distinguish neutral sentences from
emotion-bearing ones, while scoring the confidence of their answers. Confidence
turns out to approximate inter-annotator disagreements. Further, we find that
confidence is correlated to emotion intensity: perceiving stronger affect in
text prompts annotators to more certain classification performances. This
insight is relevant for modelling studies of intensity, as it opens the
question of whether automatic regressors or classifiers actually predict
intensity, or rather humans' self-perceived confidence.
| 2,021 |
Computation and Language
|
AraBERT and Farasa Segmentation Based Approach For Sarcasm and Sentiment
Detection in Arabic Tweets
|
This paper presents our strategy to tackle the EACL WANLP-2021 Shared Task 2:
Sarcasm and Sentiment Detection. One of the subtasks aims at developing a
system that identifies whether a given Arabic tweet is sarcastic in nature or
not, while the other aims to identify the sentiment of the Arabic tweet. We
approach the task in two steps. The first step involves pre-processing the
provided ArSarcasm-v2 dataset by performing insertions, deletions and
segmentation operations on various parts of the text. The second step involves
experimenting with multiple variants of two transformer based models,
AraELECTRA and AraBERT. Our final approach was ranked seventh and fourth in the
Sarcasm and Sentiment Detection subtasks respectively.
| 2,021 |
Computation and Language
|
Conversational Norms for Human-Robot Dialogues
|
This paper describes a recently initiated research project aiming at
supporting development of computerised dialogue systems that handle breaches of
conversational norms such as the Gricean maxims, which describe how dialogue
participants ideally form their utterances in order to be informative,
relevant, brief, etc. Our approach is to model dialogue and norms with
co-operating distributed grammar systems (CDGSs), and to develop methods to
detect breaches and to handle them in dialogue systems for verbal human-robot
interaction.
| 2,021 |
Computation and Language
|
Distributional Formal Semantics
|
Natural language semantics has recently sought to combine the complementary
strengths of formal and distributional approaches to meaning. More
specifically, proposals have been put forward to augment formal semantic
machinery with distributional meaning representations, thereby introducing the
notion of semantic similarity into formal semantics, or to define
distributional systems that aim to incorporate formal notions such as
entailment and compositionality. However, given the fundamentally different
'representational currency' underlying formal and distributional approaches -
models of the world versus linguistic co-occurrence - their unification has
proven extremely difficult. Here, we define a Distributional Formal Semantics
that integrates distributionality into a formal semantic system on the level of
formal models. This approach offers probabilistic, distributed meaning
representations that are also inherently compositional, and that naturally
capture fundamental semantic notions such as quantification and entailment.
Furthermore, we show how the probabilistic nature of these representations
allows for probabilistic inference, and how the information-theoretic notion of
"information" (measured in terms of Entropy and Surprisal) naturally follows
from it. Finally, we illustrate how meaning representations can be derived
incrementally from linguistic input using a recurrent neural network model, and
how the resultant incremental semantic construction procedure intuitively
captures key semantic phenomena, including negation, presupposition, and
anaphoricity.
| 2,021 |
Computation and Language
|
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
|
There is an ongoing debate in the NLP community whether modern language
models contain linguistic knowledge, recovered through so-called probes. In
this paper, we study whether linguistic knowledge is a necessary condition for
the good performance of modern language models, which we call the
\textit{rediscovery hypothesis}. In the first place, we show that language
models that are significantly compressed but perform well on their pretraining
objectives retain good scores when probed for linguistic structures. This
result supports the rediscovery hypothesis and leads to the second contribution
of our paper: an information-theoretic framework that relates language modeling
objectives with linguistic information. This framework also provides a metric
to measure the impact of linguistic information on the word prediction task. We
reinforce our analytical results with various experiments, both on synthetic
and on real NLP tasks in English.
| 2,021 |
Computation and Language
|
A Data-Centric Framework for Composable NLP Workflows
|
Empirical natural language processing (NLP) systems in application domains
(e.g., healthcare, finance, education) involve interoperation among multiple
components, ranging from data ingestion, human annotation, to text retrieval,
analysis, generation, and visualization. We establish a unified open-source
framework to support fast development of such sophisticated NLP workflows in a
composable manner. The framework introduces a uniform data representation to
encode heterogeneous results by a wide range of NLP tasks. It offers a large
repository of processors for NLP tasks, visualization, and annotation, which
can be easily assembled with full interoperability under the unified
representation. The highly extensible framework allows plugging in custom
processors from external off-the-shelf NLP and deep learning libraries. The
whole framework is delivered through two modularized yet integratable
open-source projects, namely Forte (for workflow infrastructure and NLP
function processors) and Stave (for user interaction, visualization, and
annotation).
| 2,021 |
Computation and Language
|
Data Augmentation for Abstractive Query-Focused Multi-Document
Summarization
|
The progress in Query-focused Multi-Document Summarization (QMDS) has been
limited by the lack of sufficient largescale high-quality training datasets. We
present two QMDS training datasets, which we construct using two data
augmentation methods: (1) transferring the commonly used single-document
CNN/Daily Mail summarization dataset to create the QMDSCNN dataset, and (2)
mining search-query logs to create the QMDSIR dataset. These two datasets have
complementary properties, i.e., QMDSCNN has real summaries but queries are
simulated, while QMDSIR has real queries but simulated summaries. To cover both
these real summary and query aspects, we build abstractive end-to-end neural
network models on the combined datasets that yield new state-of-the-art
transfer results on DUC datasets. We also introduce new hierarchical encoders
that enable a more efficient encoding of the query together with multiple
documents. Empirical results demonstrate that our data augmentation and
encoding methods outperform baseline models on automatic metrics, as well as on
human evaluations along multiple attributes.
| 2,021 |
Computation and Language
|
Dual Reinforcement-Based Specification Generation for Image De-Rendering
|
Advances in deep learning have led to promising progress in inferring
graphics programs by de-rendering computer-generated images. However, current
methods do not explore which decoding methods lead to better inductive bias for
inferring graphics programs. In our work, we first explore the effectiveness of
LSTM-RNN versus Transformer networks as decoders for order-independent graphics
programs. Since these are sequence models, we must choose an ordering of the
objects in the graphics programs for likelihood training. We found that the
LSTM performance was highly sensitive to the sequence ordering (random order
vs. pattern-based order), while Transformer performance was roughly independent
of the sequence ordering. Further, we present a policy gradient based
reinforcement learning approach for better inductive bias in the decoder via
multiple diverse rewards based both on the graphics program specification and
the rendered image. We also explore the combination of these complementary
rewards. We achieve state-of-the-art results on two graphics program generation
datasets.
| 2,021 |
Computation and Language
|
MultiSubs: A Large-scale Multimodal and Multilingual Dataset
|
This paper introduces a large-scale multimodal and multilingual dataset that
aims to facilitate research on grounding words to images in their contextual
usage in language. The dataset consists of images selected to unambiguously
illustrate concepts expressed in sentences from movie subtitles. The dataset is
a valuable resource as (i) the images are aligned to text fragments rather than
whole sentences; (ii) multiple images are possible for a text fragment and a
sentence; (iii) the sentences are free-form and real-world like; (iv) the
parallel texts are multilingual. We set up a fill-in-the-blank game for humans
to evaluate the quality of the automatic image selection process of our
dataset. We show the utility of the dataset on two automatic tasks: (i)
fill-in-the-blank; (ii) lexical translation. Results of the human evaluation
and automatic models demonstrate that images can be a useful complement to the
textual context. The dataset will benefit research on visual grounding of words
especially in the context of free-form sentences, and can be obtained from
https://doi.org/10.5281/zenodo.5034604 under a Creative Commons licence.
| 2,022 |
Computation and Language
|
CogNet: Bridging Linguistic Knowledge, World Knowledge and Commonsense
Knowledge
|
In this paper, we present CogNet, a knowledge base (KB) dedicated to
integrating three types of knowledge: (1) linguistic knowledge from FrameNet,
which schematically describes situations, objects and events. (2) world
knowledge from YAGO, Freebase, DBpedia and Wikidata, which provides explicit
knowledge about specific instances. (3) commonsense knowledge from ConceptNet,
which describes implicit general facts. To model these different types of
knowledge consistently, we introduce a three-level unified frame-styled
representation architecture. To integrate free-form commonsense knowledge with
other structured knowledge, we propose a strategy that combines automated
labeling and crowdsourced annotation. At present, CogNet integrates 1,000+
semantic frames from linguistic KBs, 20,000,000+ frame instances from world
KBs, as well as 90,000+ commonsense assertions from commonsense KBs. All these
data can be easily queried and explored on our online platform, and are free to
download in RDF format for utilization under a CC-BY-SA 4.0 license. The demo
and data are available at http://cognet.top/.
| 2,021 |
Computation and Language
|
Random Feature Attention
|
Transformers are state-of-the-art models for a variety of sequence modeling
tasks. At their core is an attention function which models pairwise
interactions between the inputs at every timestep. While attention is powerful,
it does not scale efficiently to long sequences due to its quadratic time and
space complexity in the sequence length. We propose RFA, a linear time and
space attention that uses random feature methods to approximate the softmax
function, and explore its application in transformers. RFA can be used as a
drop-in replacement for conventional softmax attention and offers a
straightforward way of learning with recency bias through an optional gating
mechanism. Experiments on language modeling and machine translation demonstrate
that RFA achieves similar or better performance compared to strong transformer
baselines. In the machine translation experiment, RFA decodes twice as fast as
a vanilla transformer. Compared to existing efficient transformer variants, RFA
is competitive in terms of both accuracy and efficiency on three long text
classification datasets. Our analysis shows that RFA's efficiency gains are
especially notable on long sequences, suggesting that RFA will be particularly
useful in tasks that require working with large inputs, fast decoding speed, or
low memory footprints.
| 2,021 |
Computation and Language
|
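The random-feature approximation behind RFA (abstract above) can be demonstrated in NumPy using the standard trigonometric random features for the exponential (softmax) kernel. The exact feature map and the gating mechanism in the paper may differ, so treat this as a sketch of the core idea: attention becomes linear in sequence length because the keys and values are summarized once.

```python
import numpy as np

def random_features(x, W):
    """Trigonometric random features such that phi(q) @ phi(k) ~ exp(q . k)."""
    proj = x @ W.T                                           # (..., D)
    scale = np.exp(np.sum(x * x, axis=-1, keepdims=True) / 2)
    return scale * np.concatenate([np.sin(proj), np.cos(proj)], axis=-1) / np.sqrt(W.shape[0])

def rfa_attention(Q, K, V, num_features=256, seed=0):
    """Linear-time approximation of softmax attention with random features."""
    d = Q.shape[-1]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((num_features, d))
    # Reproduce the usual q . k / sqrt(d) scaling by rescaling q and k.
    phi_q = random_features(Q / d ** 0.25, W)                # (n, 2D)
    phi_k = random_features(K / d ** 0.25, W)                # (m, 2D)
    kv = phi_k.T @ V                                         # (2D, d_v): all keys/values summarized once
    z = phi_k.sum(axis=0)                                    # (2D,): normalizer summary
    return (phi_q @ kv) / (phi_q @ z)[:, None]               # (n, d_v)
```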
An Iterative Contextualization Algorithm with Second-Order Attention
|
Combining the representations of the words that make up a sentence into a
cohesive whole is difficult, since it needs to account for the order of words,
and to establish how the words present relate to each other. The solution we
propose consists in iteratively adjusting the context. Our algorithm starts
with a presumably erroneous value of the context, and adjusts this value with
respect to the tokens at hand. In order to achieve this, representations of
words are built combining their symbolic embedding with a positional encoding
into single vectors. The algorithm then iteratively weighs and aggregates these
vectors using our novel second-order attention mechanism. Our models report
strong results in several well-known text classification tasks.
| 2,021 |
Computation and Language
|
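The iterative context adjustment described above can be paraphrased as: start from a crude context estimate and repeatedly re-weight and re-aggregate the token vectors against it. The sketch below uses plain dot-product scoring in place of the paper's second-order attention, so it only illustrates the iteration, not the novel mechanism; the token vectors are assumed to already combine symbolic embeddings with positional encodings.

```python
import numpy as np

def iterative_context(token_vectors: np.ndarray, num_iters: int = 5) -> np.ndarray:
    """Iteratively refine a sentence-level context vector from token vectors."""
    context = token_vectors.mean(axis=0)          # presumably erroneous initial value
    for _ in range(num_iters):
        scores = token_vectors @ context          # relevance of each token to the current context
        weights = np.exp(scores - scores.max())   # softmax weighting
        weights /= weights.sum()
        context = weights @ token_vectors         # adjusted context for the next iteration
    return context
```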
Gradual Fine-Tuning for Low-Resource Domain Adaptation
|
Fine-tuning is known to improve NLP models by adapting an initial model
trained on more plentiful but less domain-salient examples to data in a target
domain. Such domain adaptation is typically done using one stage of
fine-tuning. We demonstrate that gradually fine-tuning in a multi-stage process
can yield substantial further gains and can be applied without modifying the
model or learning objective.
| 2,021 |
Computation and Language
|
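A minimal sketch of gradual, multi-stage fine-tuning as described above: each stage keeps the in-domain data but shrinks the out-of-domain portion until the final stage trains on in-domain data only. The mixing schedule and the `train_one_stage` callback are placeholders, not the paper's configuration.

```python
import random

def gradual_fine_tune(model, out_of_domain, in_domain, train_one_stage,
                      mix_ratios=(1.0, 0.5, 0.25, 0.0)):
    """Multi-stage fine-tuning with a gradually more domain-pure training mix.
    `train_one_stage(model, data)` stands for one ordinary fine-tuning run."""
    for ratio in mix_ratios:
        n_ood = int(len(out_of_domain) * ratio)
        stage_data = list(in_domain) + random.sample(list(out_of_domain), n_ood)
        random.shuffle(stage_data)
        model = train_one_stage(model, stage_data)   # no change to model or objective
    return model
```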
Zero-Shot Cross-Lingual Dependency Parsing through Contextual Embedding
Transformation
|
Linear embedding transformation has been shown to be effective for zero-shot
cross-lingual transfer tasks and achieve surprisingly promising results.
However, cross-lingual embedding space mapping is usually studied in static
word-level embeddings, where a space transformation is derived by aligning
representations of translation pairs that are referred from dictionaries. We
move further from this line and investigate a contextual embedding alignment
approach which is sense-level and dictionary-free. To enhance the quality of
the mapping, we also provide a deep view of properties of contextual
embeddings, i.e., anisotropy problem and its solution. Experiments on zero-shot
dependency parsing through the concept-shared space built by our embedding
transformation substantially outperform state-of-the-art methods using
multilingual embeddings.
| 2,021 |
Computation and Language
|
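The linear-mapping step underlying this line of work can be sketched with orthogonal Procrustes over paired contextual vectors. Note that the paper's contribution is a sense-level, dictionary-free way of obtaining such pairs (plus handling anisotropy); the snippet below only shows the mapping itself and assumes aligned occurrence pairs are already available.

```python
import numpy as np

def procrustes_map(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Orthogonal W minimizing ||source @ W - target||_F for row-aligned
    contextual embeddings from the two languages."""
    u, _, vt = np.linalg.svd(source.T @ target)
    return u @ vt

# Zero-shot use (hypothetical variable names): map source-language contextual
# embeddings into the target space, then run the target-language parser on them.
# W = procrustes_map(aligned_src_vectors, aligned_tgt_vectors)
# mapped = new_src_vectors @ W
```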
Data Augmentation with Hierarchical SQL-to-Question Generation for
Cross-domain Text-to-SQL Parsing
|
Data augmentation has attracted a lot of research attention in the deep
learning era for its ability in alleviating data sparseness. The lack of
labeled data for unseen evaluation databases is exactly the major challenge for
cross-domain text-to-SQL parsing. Previous works either require human
intervention to guarantee the quality of generated data, or fail to handle
complex SQL queries. This paper presents a simple yet effective data
augmentation framework. First, given a database, we automatically produce a
large number of SQL queries based on an abstract syntax tree grammar. For
better distribution matching, we require that at least 80% of SQL patterns in
the training data are covered by generated queries. Second, we propose a
hierarchical SQL-to-question generation model to obtain high-quality natural
language questions, which is the major contribution of this work. Finally, we
design a simple sampling strategy that can greatly improve training efficiency
given large amounts of generated data. Experiments on three cross-domain
datasets, i.e., WikiSQL and Spider in English, and DuSQL in Chinese, show that
our proposed data augmentation framework can consistently improve performance
over strong baselines, and the hierarchical generation component is the key for
the improvement.
| 2,021 |
Computation and Language
|
An Attention Based Neural Network for Code Switching Detection: English
& Roman Urdu
|
Code-switching is a common phenomenon among people with diverse lingual
background and is widely used on the internet for communication purposes. In
this paper, we present a Recurrent Neural Network combined with the Attention
Model for Language Identification in Code-Switched Data in English and low
resource Roman Urdu. The attention model enables the architecture to learn the
important features of the languages and hence classify the code-switched data.
We demonstrated our approach by comparing the results with state-of-the-art
models, i.e., Hidden Markov Models, Conditional Random Fields, and Bidirectional
LSTMs. The evaluation of the models, using confusion matrix metrics, showed that
the attention mechanism provides improved precision and accuracy compared to
the other models.
| 2,021 |
Computation and Language
|
Meta-Curriculum Learning for Domain Adaptation in Neural Machine
Translation
|
Meta-learning has been sufficiently validated to be beneficial for
low-resource neural machine translation (NMT). However, we find that
meta-trained NMT fails to improve the translation performance of the domain
unseen at the meta-training stage. In this paper, we aim to alleviate this
issue by proposing a novel meta-curriculum learning for domain adaptation in
NMT. During meta-training, the NMT first learns the similar curricula from each
domain to avoid falling into a bad local optimum early, and finally learns the
curricula of individualities to improve the model robustness for learning
domain-specific knowledge. Experimental results on 10 different low-resource
domains show that meta-curriculum learning can improve the translation
performance of both familiar and unfamiliar domains. All the codes and data are
freely available at https://github.com/NLP2CT/Meta-Curriculum.
| 2,021 |
Computation and Language
|
Lex2vec: making Explainable Word Embeddings via Lexical Resources
|
In this technical report, we propose an algorithm, called Lex2vec that
exploits lexical resources to inject information into word embeddings and name
the embedding dimensions by means of knowledge bases. We evaluate the optimal
parameters to extract a number of informative labels that are readable and have
good coverage of the embedding dimensions.
| 2,021 |
Computation and Language
|
An Empirical Study of Compound PCFGs
|
Compound probabilistic context-free grammars (C-PCFGs) have recently
established a new state of the art for unsupervised phrase-structure grammar
induction. However, due to the high space and time complexities of chart-based
representation and inference, it is difficult to investigate C-PCFGs
comprehensively. In this work, we rely on a fast implementation of C-PCFGs to
conduct an evaluation complementary to that of~\citet{kim-etal-2019-compound}.
We start by analyzing and ablating C-PCFGs on English treebanks. Our findings
suggest that (1) C-PCFGs are data-efficient and can generalize to unseen
sentence/constituent lengths; and (2) C-PCFGs make the best use of
sentence-level information in generating preterminal rule probabilities. We
further conduct a multilingual evaluation of C-PCFGs. The experimental results
show that the best configurations of C-PCFGs, which are tuned on English, do
not always generalize to morphology-rich languages.
| 2,023 |
Computation and Language
|
Few-shot Learning for Slot Tagging with Attentive Relational Network
|
Metric-based learning is a well-known family of methods for few-shot
learning, especially in computer vision. Recently, they have been used in many
natural language processing applications but not for slot tagging. In this
paper, we explore metric-based learning methods in the slot tagging task and
propose a novel metric-based learning architecture - Attentive Relational
Network. Our proposed method extends relation networks, making them more
suitable for natural language processing applications in general, by leveraging
pretrained contextual embeddings such as ELMo and BERT and by using an attention
mechanism. The results on SNIPS data show that our proposed method outperforms
other state-of-the-art metric-based learning methods.
| 2,021 |
Computation and Language
|
Combining Prediction and Interpretation in Decision Trees (PrInDT) -- a
Linguistic Example
|
In this paper, we show that conditional inference trees and ensembles are
suitable methods for modeling linguistic variation. In contrast to earlier
linguistic applications, however, we claim that their suitability is strongly
increased if we combine prediction and interpretation. To that end, we have
developed a statistical method, PrInDT (Prediction and Interpretation with
Decision Trees), which we introduce and discuss in the present paper.
| 2,021 |
Computation and Language
|
OAG-BERT: Towards A Unified Backbone Language Model For Academic
Knowledge Services
|
Academic knowledge services have substantially facilitated the development of
the science enterprise by providing a plenitude of efficient research tools.
However, many applications highly depend on ad-hoc models and expensive human
labeling to understand scientific contents, hindering deployments into real
products. To build a unified backbone language model for different
knowledge-intensive academic applications, we pre-train an academic language
model OAG-BERT that integrates both the heterogeneous entity knowledge and
scientific corpora in the Open Academic Graph (OAG) -- the largest public
academic graph to date. In OAG-BERT, we develop strategies for pre-training
text and entity data along with zero-shot inference techniques. Its zero-shot
capability reduces the need for expensive annotations. OAG-BERT has been
deployed in real-world applications, such as the reviewer recommendation
function for the National Natural Science Foundation of China (NSFC) -- one of
the largest funding agencies in
China -- and paper tagging in AMiner. All codes and pre-trained models are
available via the CogDL toolkit.
| 2,022 |
Computation and Language
|
NeurIPS 2020 NLC2CMD Competition: Translating Natural Language to Bash
Commands
|
The NLC2CMD Competition hosted at NeurIPS 2020 aimed to bring the power of
natural language processing to the command line. Participants were tasked with
building models that can transform descriptions of command line tasks in
English to their Bash syntax. This is a report on the competition with details
of the task, metrics, data, attempted solutions, and lessons learned.
| 2,021 |
Computation and Language
|
NaturalConv: A Chinese Dialogue Dataset Towards Multi-turn Topic-driven
Conversation
|
In this paper, we propose a Chinese multi-turn topic-driven conversation
dataset, NaturalConv, which allows the participants to chat anything they want
as long as any element from the topic is mentioned and the topic shift is
smooth. Our corpus contains 19.9K conversations from six domains, and 400K
utterances with an average turn number of 20.1. These conversations contain
in-depth discussions on related topics or natural transitions between
multiple topics. We believe either way is normal for human conversation. To
facilitate the research on this corpus, we provide results of several benchmark
models. Comparative results show that for this dataset, our current models are
not able to provide significant improvement by introducing background
knowledge/topic. Therefore, the proposed dataset should be a good benchmark for
further research to evaluate the validity and naturalness of multi-turn
conversation systems. Our dataset is available at
https://ai.tencent.com/ailab/nlp/dialogue/#datasets.
| 2,021 |
Computation and Language
|
Detecting Extraneous Content in Podcasts
|
Podcast episodes often contain material extraneous to the main content, such
as advertisements, interleaved within the audio and the written descriptions.
We present classifiers that leverage both textual and listening patterns in
order to detect such content in podcast descriptions and audio transcripts. We
demonstrate that our models are effective by evaluating them on the downstream
task of podcast summarization and show that we can substantively improve ROUGE
scores and reduce the extraneous content generated in the summaries.
| 2,021 |
Computation and Language
|
A Novel Context-Aware Multimodal Framework for Persian Sentiment
Analysis
|
Most recent works on sentiment analysis have exploited the text modality.
However, millions of hours of video recordings posted on social media platforms
everyday hold vital unstructured information that can be exploited to more
effectively gauge public perception. Multimodal sentiment analysis offers an
innovative solution to computationally understand and harvest sentiments from
videos by contextually exploiting audio, visual and textual cues. In this
paper, we firstly present a first-of-its-kind Persian multimodal dataset
comprising more than 800 utterances, as a benchmark resource for researchers to
evaluate multimodal sentiment analysis approaches in the Persian language.
Secondly, we present a novel context-aware multimodal sentiment analysis
framework, that simultaneously exploits acoustic, visual and textual cues to
more accurately determine the expressed sentiment. We employ both
decision-level (late) and feature-level (early) fusion methods to integrate
affective cross-modal information. Experimental results demonstrate that the
contextual integration of multimodal features such as textual, acoustic and
visual features deliver better performance (91.39%) compared to unimodal
features (89.24%).
| 2,021 |
Computation and Language
|
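The two fusion strategies mentioned above are easy to state concretely. The sketch below shows feature-level (early) fusion, which concatenates modality features before a single classifier, and decision-level (late) fusion, which averages per-modality predictions; the classifier callable and any fusion weights are placeholders, not the paper's trained components.

```python
import numpy as np

def early_fusion(acoustic, visual, textual, classifier):
    """Feature-level fusion: concatenate modality features, classify once."""
    fused = np.concatenate([acoustic, visual, textual], axis=-1)
    return classifier(fused)

def late_fusion(predictions, weights=None):
    """Decision-level fusion: (optionally weighted) average of per-modality
    sentiment probabilities produced by separate unimodal classifiers."""
    preds = np.stack(predictions, axis=0)            # (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(predictions)) / len(predictions)
    return np.average(preds, axis=0, weights=weights)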
Natural Language Understanding for Argumentative Dialogue Systems in the
Opinion Building Domain
|
This paper introduces a natural language understanding (NLU) framework for
argumentative dialogue systems in the information-seeking and opinion building
domain. The proposed framework consists of two sub-models, namely intent
classifier and argument similarity. The intent classifier model stacks a BiLSTM
with an attention mechanism on top of the pre-trained BERT model and fine-tunes the
model for recognizing the user intent, whereas the argument similarity model
employs BERT+BiLSTM for identifying system arguments the user refers to in his
or her natural language utterances. Our model is evaluated in an argumentative
dialogue system that engages the user to inform him-/herself about a
controversial topic by exploring pro and con arguments and to build his/her
opinion towards the topic. In order to evaluate the proposed approach, we
collect user utterances for the interaction with the respective system labeling
intent and referenced argument in an extensive online study. The data
collection includes multiple topics and two different user types (native
English speakers from the UK and non-native English speakers from China).
Additionally, we evaluate the proposed intent classifier and argument
similarity models separately on the publicly available Banking77 and STS
benchmark datasets. The evaluation indicates a clear advantage of the utilized
techniques over baseline approaches on several datasets, as well as the
robustness of the proposed approach against new topics and different language
proficiency as well as the cultural background of the user. Furthermore,
results show that our intent classifier model outperforms DIET, DistilBERT,
and BERT fine-tuned models in few-shot setups (i.e., with 10, 20, or 30 labeled
examples per intent) and full data setup.
| 2,022 |
Computation and Language
|
An Emotion-controlled Dialog Response Generation Model with Dynamic
Vocabulary
|
In the response generation task, appropriate sentimental expressions can clearly
improve how human-like the responses are. However, for real application in
online systems, high QPS (queries per second, an indicator of the flow capacity
of online systems) is required, and a dynamic vocabulary mechanism has been
shown to be effective in improving the speed of generative models. In this paper, we
proposed an emotion-controlled dialog response generation model based on the
dynamic vocabulary mechanism, and the experimental results show the benefit of
this model.
| 2,021 |
Computation and Language
|
A Survey on Spoken Language Understanding: Recent Advances and New
Frontiers
|
Spoken Language Understanding (SLU) aims to extract the semantic frame of
user queries, which is a core component in a task-oriented dialog system. With
the burst of deep neural networks and the evolution of pre-trained language
models, the research of SLU has obtained significant breakthroughs. However,
there remains a lack of a comprehensive survey summarizing existing approaches
and recent trends, which motivated the work presented in this article. In this
paper, we survey recent advances and new frontiers in SLU. Specifically, we
give a thorough review of this research field, covering different aspects
including (1) new taxonomy: we provide a new perspective for the SLU field,
including single model vs. joint model, implicit joint modeling vs. explicit
joint modeling in joint model, non pre-trained paradigm vs. pre-trained
paradigm; (2) new frontiers: some emerging areas in complex SLU as well as the
corresponding challenges; (3) abundant open-source resources: to help the
community, we have collected and organized the related papers, baseline
projects, and leaderboards on a public website where SLU researchers can
directly access the recent progress. We hope that this survey can shed light on
future research in the SLU field.
| 2,021 |
Computation and Language
|
An empirical analysis of phrase-based and neural machine translation
|
Two popular types of machine translation (MT) are phrase-based and neural
machine translation systems. Both of these types of systems are composed of
multiple complex models or layers. Each of these models and layers learns
different linguistic aspects of the source language. However, for some of these
models and layers, it is not clear which linguistic phenomena are learned or
how this information is learned. For phrase-based MT systems, it is often clear
what information is learned by each model, and the question is rather how this
information is learned, especially for its phrase reordering model. For neural
machine translation systems, the situation is even more complex, since for many
cases it is not exactly clear what information is learned and how it is
learned.
To shed light on what linguistic phenomena are captured by MT systems, we
analyze the behavior of important models in both phrase-based and neural MT
systems. We consider phrase reordering models from phrase-based MT systems to
investigate which words from inside of a phrase have the biggest impact on
defining the phrase reordering behavior. Additionally, to contribute to the
interpretability of neural MT systems we study the behavior of the attention
model, which is a key component in neural MT systems and the closest model in
functionality to phrase reordering models in phrase-based systems. The
attention model together with the encoder hidden state representations form the
main components to encode source side linguistic information in neural MT. To
this end, we also analyze the information captured in the encoder hidden state
representations of a neural MT system. We investigate the extent to which
syntactic and lexical-semantic information from the source side is captured by
hidden state representations of different neural MT architectures.
| 2,021 |
Computation and Language
|
Advances in Multi-turn Dialogue Comprehension: A Survey
|
Training machines to understand natural language and interact with humans is
an elusive and essential task of artificial intelligence. A diversity of
dialogue systems has been designed with the rapid development of deep learning
techniques, especially the recent pre-trained language models (PrLMs). Among
these studies, the fundamental yet challenging type of task is dialogue
comprehension whose role is to teach the machines to read and comprehend the
dialogue context before responding. In this paper, we review the previous
methods from the technical perspective of dialogue modeling for the dialogue
comprehension task. We summarize the characteristics and challenges of dialogue
comprehension in contrast to plain-text reading comprehension. Then, we discuss
three typical patterns of dialogue modeling. In addition, we categorize
dialogue-related pre-training techniques which are employed to enhance PrLMs in
dialogue scenarios. Finally, we highlight the technical advances in recent
years and point out the lessons from the empirical analysis and the prospects
towards a new frontier of research.
| 2,021 |
Computation and Language
|
An Empirical Study of End-to-end Simultaneous Speech Translation
Decoding Strategies
|
This paper proposes a decoding strategy for end-to-end simultaneous speech
translation. We leverage end-to-end models trained in offline mode and conduct
an empirical study for two language pairs (English-to-German and
English-to-Portuguese). We also investigate different output token
granularities including characters and Byte Pair Encoding (BPE) units. The
results show that the proposed decoding approach allows controlling the
BLEU/Average Lagging trade-off across different latency regimes. Our best
decoding settings
achieve comparable results with a strong cascade model evaluated on the
simultaneous translation track of IWSLT 2020 shared task.
| 2,021 |
Computation and Language
|