Titles | Abstracts | Years | Categories
---|---|---|---|
You Can Do Better! If You Elaborate the Reason When Making Prediction
|
Neural predictive models have achieved remarkable performance improvements in
various natural language processing tasks. However, most neural predictive
models suffer from the lack of explainability of predictions, limiting their
practical utility. This paper proposes a neural predictive approach to make a
prediction and generate its corresponding explanation simultaneously. It
leverages the knowledge entailed in explanations as an additional distillation
signal for more efficient learning. We conduct a preliminary study on Chinese
medical multiple-choice question answering, English natural language inference,
and commonsense question answering tasks. The experimental results show that
the proposed approach can generate reasonable explanations for its predictions
even with a small-scale training corpus. The proposed method also achieves
improved prediction accuracy on three datasets, which indicates that making
predictions can benefit from generating the explanation in the decision
process.
| 2,021 |
Computation and Language
|
Supersense and Sensibility: Proxy Tasks for Semantic Annotation of
Prepositions
|
Prepositional supersense annotation is time-consuming and requires expert
training. Here, we present two sensible methods for obtaining prepositional
supersense annotations by eliciting surface substitution and similarity
judgments. Four pilot studies suggest that both methods have potential for
producing prepositional supersense annotations that are comparable in quality
to expert annotations.
| 2,021 |
Computation and Language
|
HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments
for Offensive Language and Hate Speech Detection
|
Due to the severity of offensive and hateful comments on social media in
Brazil, and the lack of research in Portuguese, this paper provides the first
large-scale expert annotated corpus of Brazilian Instagram comments for hate
speech and offensive language detection. The HateBR corpus was collected from
the comment section of Brazilian politicians' accounts on Instagram and
manually annotated by specialists, reaching a high inter-annotator agreement.
The corpus consists of 7,000 documents annotated according to three different
layers: a binary classification (offensive versus non-offensive comments),
offensiveness-level classification (highly, moderately, and slightly
offensive), and nine hate speech groups (xenophobia, racism, homophobia,
sexism, religious intolerance, partyism, apology for the dictatorship,
antisemitism, and fatphobia). We also implemented baseline experiments for
offensive language and hate speech detection and compared them with a
literature baseline. Results show that the baseline experiments on our corpus
outperform the current state-of-the-art for the Portuguese language.
| 2,022 |
Computation and Language
|
Explaining the Road Not Taken
|
It is unclear if existing interpretations of deep neural network models
respond effectively to the needs of users. This paper summarizes the common
forms of explanations (such as feature attribution, decision rules, or probes)
used in over 200 recent papers about natural language processing (NLP), and
compares them against user questions collected in the XAI Question Bank. We
found that although users are interested in explanations for the road not taken
-- namely, why the model chose one result and not a well-defined, seemingly
similar legitimate counterpart -- most model interpretations cannot answer
these questions.
| 2,021 |
Computation and Language
|
'Just because you are right, doesn't mean I am wrong': Overcoming a
Bottleneck in the Development and Evaluation of Open-Ended Visual Question
Answering (VQA) Tasks
|
GQA~\citep{hudson2019gqa} is a dataset for real-world visual reasoning and
compositional question answering. We found that many answers predicted by the
best vision-language models on the GQA dataset do not match the ground-truth
answer but still are semantically meaningful and correct in the given context.
In fact, this is the case with most existing visual question answering (VQA)
datasets, which assume only one ground-truth answer for each question. We
propose Alternative Answer Sets (AAS) of ground-truth answers to address this
limitation; these sets are created automatically using off-the-shelf NLP tools. We
introduce a semantic metric based on AAS and modify top VQA solvers to support
multiple plausible answers for a question. We implement this approach on the
GQA dataset and show the resulting performance improvements. Code and data are
available at \url{https://github.com/luomancs/alternative_answer_set.git}.
| 2,022 |
Computation and Language
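As an illustration of how an alternative answer set and the accompanying metric might be assembled with off-the-shelf NLP tools, here is a minimal Python sketch using WordNet synonyms via NLTK. The helper names and the synonym-only expansion are our own assumptions, not necessarily the authors' exact AAS construction.

```python
# Sketch: judge a VQA prediction against an Alternative Answer Set (AAS)
# built from WordNet synonyms of the single ground-truth answer.
# Requires: pip install nltk, then nltk.download("wordnet").
from nltk.corpus import wordnet as wn


def alternative_answer_set(answer):
    """Expand one ground-truth answer into a set of acceptable surface forms."""
    aas = {answer.lower()}
    for synset in wn.synsets(answer):
        for lemma in synset.lemmas():
            aas.add(lemma.name().replace("_", " ").lower())
    return aas


def aas_accuracy(predictions, answers):
    """Fraction of predictions that fall inside the AAS of their answer."""
    hits = sum(pred.lower() in alternative_answer_set(ans)
               for pred, ans in zip(predictions, answers))
    return hits / len(answers)


# "sofa" is counted as correct when the annotated answer is "couch".
print(aas_accuracy(["sofa", "cat"], ["couch", "dog"]))  # 0.5
```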
|
On Hallucination and Predictive Uncertainty in Conditional Language
Generation
|
Despite improvements in performance on different natural language generation
tasks, deep neural models are prone to hallucinating facts that are incorrect
or nonexistent. Different hypotheses are proposed and examined separately for
different tasks, but no systematic explanations are available across these
tasks. In this study, we draw connections between hallucinations and predictive
uncertainty in conditional language generation. We investigate their
relationship in both image captioning and data-to-text generation and propose a
simple extension to beam search to reduce hallucination. Our analysis shows
that higher predictive uncertainty corresponds to a higher chance of
hallucination. Epistemic uncertainty is more indicative of hallucination than
aleatoric or total uncertainties. With the proposed beam search variant, this
allows trading a small amount of standard-metric performance for less
hallucination.
| 2,021 |
Computation and Language
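For readers unfamiliar with the uncertainty terms used above, the sketch below shows the standard decomposition of predictive uncertainty from Monte Carlo samples (e.g. MC dropout) into aleatoric and epistemic parts; this is the textbook mutual-information decomposition, offered only as an assumption about the kind of quantities involved, not the paper's exact estimator.

```python
# Decompose predictive uncertainty at one decoding step from K stochastic
# forward passes (e.g. MC dropout). `probs` has shape (K, V): K sampled
# distributions over a vocabulary of size V.
import numpy as np


def entropy(p, axis=-1, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=axis)


def uncertainty_decomposition(probs):
    mean_p = probs.mean(axis=0)                  # averaged predictive distribution
    total = entropy(mean_p)                      # total predictive uncertainty
    aleatoric = entropy(probs, axis=-1).mean()   # expected entropy of each sample
    epistemic = total - aleatoric                # mutual information (BALD)
    return total, aleatoric, epistemic


rng = np.random.default_rng(0)
samples = rng.dirichlet(np.ones(5), size=8)      # 8 samples, toy 5-word vocab
print(uncertainty_decomposition(samples))
```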
|
PnG BERT: Augmented BERT on Phonemes and Graphemes for Neural TTS
|
This paper introduces PnG BERT, a new encoder model for neural TTS. This
model extends the original BERT model by taking both phoneme and
grapheme representations of text as input, as well as the word-level alignment
between them. It can be pre-trained on a large text corpus in a self-supervised
manner, and fine-tuned on a TTS task. Experimental results show that a neural
TTS model using a pre-trained PnG BERT as its encoder yields more natural
prosody and more accurate pronunciation than a baseline model using only
phoneme input with no pre-training. Subjective side-by-side preference
evaluations show that raters have no statistically significant preference
between the speech synthesized using a PnG BERT and ground truth recordings
from professional speakers.
| 2,021 |
Computation and Language
|
InsertGNN: Can Graph Neural Networks Outperform Humans in TOEFL Sentence
Insertion Problem?
|
Sentence insertion is an interesting NLP problem that has received
insufficient attention. Existing approaches to sentence ordering, text
coherence, and question answering are neither directly applicable nor effective
enough to solve it. To bridge this gap, we propose InsertGNN, a simple yet
effective model that represents the problem as a graph and adopts a
hierarchical graph neural network (GNN) to learn the connection between
sentences. We evaluate our method on our newly collected TOEFL dataset and
further verify its effectiveness on
the larger arXiv dataset using cross-domain learning. Extensive experiments
demonstrate that InsertGNN outperforms all baselines by a large margin with an
accuracy of 70\%, rivaling the average human test scores.
| 2,023 |
Computation and Language
|
PENELOPIE: Enabling Open Information Extraction for the Greek Language
through Machine Translation
|
In this paper we present our submission to the EACL 2021 SRW: a methodology
that aims at bridging the gap between high and low-resource languages in the
context of Open Information Extraction, showcasing it on the Greek language.
The goals of this paper are twofold: First, we build Neural Machine Translation
(NMT) models for English-to-Greek and Greek-to-English based on the Transformer
architecture. Second, we leverage these NMT models to produce English
translations of Greek text as input for our NLP pipeline, to which we apply a
series of pre-processing and triple extraction tasks. Finally, we
back-translate the extracted triples to Greek. We conduct an evaluation of both
our NMT and OIE methods on benchmark datasets and demonstrate that our approach
outperforms the current state-of-the-art for the Greek language.
| 2,021 |
Computation and Language
|
A More Fine-Grained Aspect-Sentiment-Opinion Triplet Extraction Task
|
Aspect Sentiment Triplet Extraction (ASTE) aims to extract aspect term,
sentiment and opinion term triplets from sentences and tries to provide a
complete solution for aspect-based sentiment analysis (ABSA). However, some
triplets extracted by ASTE are confusing, since the sentiment in a triplet
extracted by ASTE is the sentiment that the sentence expresses toward the
aspect term rather than the sentiment of the aspect term and opinion term pair.
In this paper, we introduce a more fine-grained Aspect-Sentiment-Opinion
Triplet Extraction (ASOTE) Task. ASOTE also extracts aspect term, sentiment and
opinion term triplets. However, the sentiment in a triplet extracted by ASOTE
is the sentiment of the aspect term and opinion term pair. We build four
datasets for ASOTE based on several popular ABSA benchmarks. We propose a
Position-aware BERT-based Framework (PBF) to address this task. PBF first
extracts aspect terms from sentences. For each extracted aspect term, PBF
generates aspect term-specific sentence representations considering both the
meaning and the position of the aspect term, then extracts associated opinion
terms and predicts the sentiments of the aspect term and opinion term pairs
based on the sentence representations. Experimental results on the four
datasets show the effectiveness of PBF.
| 2,021 |
Computation and Language
|
Whitening Sentence Representations for Better Semantics and Faster
Retrieval
|
Pre-training models such as BERT have achieved great success in many natural
language processing tasks. However, how to obtain better sentence
representations from these pre-trained models is still worth exploring.
Previous work has shown that the anisotropy problem is a critical bottleneck
for BERT-based sentence representations, hindering the model from fully
utilizing the underlying semantic features. Therefore, some attempts at
boosting the isotropy of the sentence distribution, such as flow-based models,
have been applied to sentence representations and achieved some improvement. In this paper, we
find that the whitening operation in traditional machine learning can similarly
enhance the isotropy of sentence representations and achieve competitive
results. Furthermore, the whitening technique is also capable of reducing the
dimensionality of the sentence representation. Our experimental results show
that it can not only achieve promising performance but also significantly
reduce the storage cost and accelerate the model retrieval speed.
| 2,021 |
Computation and Language
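A minimal numpy sketch of the whitening transform referred to above: centre the sentence vectors, estimate their covariance, and map them through W = U diag(1/sqrt(S)) obtained from an SVD of the covariance; truncating W also reduces dimensionality. The reduced dimension chosen here is an arbitrary assumption for illustration.

```python
# Whiten a matrix X of sentence embeddings (n_sentences x dim) so that the
# transformed vectors have zero mean and roughly identity covariance.
import numpy as np


def whitening(X, k=None):
    mu = X.mean(axis=0, keepdims=True)
    cov = np.cov((X - mu).T)                 # (dim x dim) covariance estimate
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S))        # whitening matrix
    if k is not None:                        # optional dimensionality reduction
        W = W[:, :k]
    return (X - mu) @ W, mu, W


X = np.random.randn(1000, 768)               # stand-in for BERT sentence vectors
X_white, mu, W = whitening(X, k=256)
print(X_white.shape)                         # (1000, 256)
```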
|
Centrality Meets Centroid: A Graph-based Approach for Unsupervised
Document Summarization
|
Unsupervised document summarization has regained considerable attention in
recent years thanks to its simplicity and data independence. In this paper, we
propose a graph-based unsupervised approach for extractive document
summarization. Instead of ranking sentences by salience and extracting
sentences one by one, our approach works at a summary-level by utilizing graph
centrality and centroid. We first extract summary candidates as subgraphs based
on centrality from the sentence graph and then select from the summary
candidates by matching to the centroid. We perform extensive experiments on two
benchmark summarization datasets, and the results demonstrate the
effectiveness of our model compared to state-of-the-art baselines.
| 2,021 |
Computation and Language
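A rough numpy sketch of the summary-level idea described above: build a sentence similarity graph, keep high-centrality sentences as a candidate pool, and choose the candidate subset whose mean vector best matches the document centroid. The concrete scoring and candidate construction below are simplifications, not the paper's algorithm.

```python
# Summary-level selection: centrality proposes candidates, centroid picks one.
# `sent_vecs` is assumed to be one embedding per sentence from any encoder.
import itertools
import numpy as np


def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def summarize(sent_vecs, summary_size=3, pool_size=6):
    n = len(sent_vecs)
    sim = np.array([[cosine(sent_vecs[i], sent_vecs[j]) for j in range(n)]
                    for i in range(n)])
    centrality = sim.sum(axis=1)                  # degree centrality in the graph
    pool = np.argsort(-centrality)[:pool_size]    # high-centrality candidate pool
    centroid = sent_vecs.mean(axis=0)
    best, best_score = None, -1.0
    for cand in itertools.combinations(pool, summary_size):
        cand_vec = np.mean([sent_vecs[i] for i in cand], axis=0)
        score = cosine(cand_vec, centroid)        # match candidate to centroid
        if score > best_score:
            best, best_score = cand, score
    return sorted(best)


vecs = np.random.randn(12, 128)                   # stand-in sentence embeddings
print(summarize(vecs))                            # indices of selected sentences
```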
|
Extending Multi-Sense Word Embedding to Phrases and Sentences for
Unsupervised Semantic Applications
|
Most unsupervised NLP models represent each word with a single point or
single region in semantic space, while the existing multi-sense word embeddings
cannot represent longer word sequences like phrases or sentences. We propose a
novel embedding method for a text sequence (a phrase or a sentence) where each
sequence is represented by a distinct set of multi-mode codebook embeddings to
capture different semantic facets of its meaning. The codebook embeddings can
be viewed as the cluster centers which summarize the distribution of possibly
co-occurring words in a pre-trained word embedding space. We introduce an
end-to-end trainable neural model that directly predicts the set of cluster
centers from the input text sequence during test time. Our experiments show
that the per-sentence codebook embeddings significantly improve
performance on unsupervised sentence similarity and extractive summarization
benchmarks. In phrase similarity experiments, we discover that the multi-facet
embeddings provide an interpretable semantic representation but do not
outperform the single-facet baseline.
| 2,021 |
Computation and Language
|
Changing the Mind of Transformers for Topically-Controllable Language
Generation
|
Large Transformer-based language models can aid human authors by suggesting
plausible continuations of text written so far. However, current interactive
writing assistants do not allow authors to guide text generation in desired
topical directions. To address this limitation, we design a framework that
displays multiple candidate upcoming topics, of which a user can select a
subset to guide the generation. Our framework consists of two components: (1) a
method that produces a set of candidate topics by predicting the centers of
word clusters in the possible continuations, and (2) a text generation model
whose output adheres to the chosen topics. The training of both components is
self-supervised, using only unlabeled text. Our experiments demonstrate that
our topic options are better than those of standard clustering approaches, and
our framework often generates fluent sentences related to the chosen topics, as
judged by automated metrics and crowdsourced workers.
| 2,021 |
Computation and Language
|
Multi-facet Universal Schema
|
Universal schema (USchema) assumes that two sentence patterns that share the
same entity pairs are similar to each other. This assumption is widely adopted
for solving various types of relation extraction (RE) tasks. Nevertheless, each
sentence pattern could contain multiple facets, and not every facet is similar
to all the facets of another sentence pattern co-occurring with the same entity
pair. To address the violation of the USchema assumption, we propose
multi-facet universal schema that uses a neural model to represent each
sentence pattern as multiple facet embeddings and encourage one of these facet
embeddings to be close to that of another sentence pattern if they co-occur
with the same entity pair. In our experiments, we demonstrate that multi-facet
embeddings significantly outperform their single-facet embedding counterpart,
compositional universal schema (CUSchema) (Verga et al., 2016), in distantly
supervised relation extraction tasks. Moreover, we can also use multiple
embeddings to detect the entailment relation between two sentence patterns when
no manual label is available.
| 2,021 |
Computation and Language
|
NLP for Ghanaian Languages
|
NLP Ghana is an open-source non-profit organization that aims to advance the
development and adoption of state-of-the-art NLP techniques and digital
language tools for Ghanaian languages and problems. In this paper, we first
present the motivation and necessity for the organization's efforts by
introducing some popular Ghanaian languages and describing the state of NLP
in Ghana. We then present the NLP Ghana organization and outline its aims,
scope of work, some of the methods employed, and the contributions made thus
far to the NLP community in Ghana.
| 2,021 |
Computation and Language
|
Retrieving Event-related Human Brain Dynamics from Natural Sentence
Reading
|
Electroencephalography (EEG) signals recorded while people read natural
language are commonly used as a cognitive method to interpret human language
understanding in neuroscience and psycholinguistics. Previous studies have
demonstrated that human fixations and activations during word reading are
associated with certain brain regions, but it is not clear when and how to
measure the brain dynamics across the time and frequency domains. In this
study, we propose the first analysis of event-related brain potentials (ERPs)
and event-related spectral perturbations (ERSPs) on benchmark datasets
consisting of sentence-level simultaneous EEG and related eye-tracking
recordings from human natural reading experiments. Our results show peaks
evoked at around 162 ms after the stimulus (starting to read each sentence) in
the occipital area, indicating that the brain retrieves lexical and semantic
visual information within approximately 200 ms of the sentence onset.
Furthermore, the occipital ERP around 200 ms presents negative power for short
reaction times and positive power for long reaction times. In addition, the
occipital ERSP around 200 ms shows increased high-gamma and decreased low-beta
and low-gamma power relative to the baseline. Our results imply that most of
the semantic-perception responses occur around 200 ms in the alpha, beta, and
gamma bands of the EEG signals. Our findings may also help promote the
evaluation of cognitive natural language processing models against EEG
dynamics.
| 2,021 |
Computation and Language
|
Multiple-hypothesis CTC-based semi-supervised adaptation of end-to-end
speech recognition
|
This paper proposes an adaptation method for end-to-end speech recognition.
In this method, multiple automatic speech recognition (ASR) 1-best hypotheses
are integrated in the computation of the connectionist temporal classification
(CTC) loss function. Integrating multiple ASR hypotheses helps alleviate the
impact of errors in those hypotheses on the computation of the CTC loss. When
applied in semi-supervised adaptation scenarios where part of the adaptation
data has no labels, the CTC loss of the proposed method is computed from different ASR
1-best hypotheses obtained by decoding the unlabeled adaptation data.
Experiments are performed in clean and multi-condition training scenarios where
the CTC-based end-to-end ASR systems are trained on Wall Street Journal (WSJ)
clean training data and CHiME-4 multi-condition training data, respectively,
and tested on Aurora-4 test data. The proposed adaptation method yields 6.6%
and 5.8% relative word error rate (WER) reductions in clean and multi-condition
training scenarios, respectively, compared to a baseline system which is
adapted with part of the adaptation data having manual transcriptions using
back-propagation fine-tuning.
| 2,021 |
Computation and Language
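A minimal PyTorch sketch of the central idea, averaging the CTC loss over several ASR hypotheses used as pseudo-labels for one unlabeled utterance. Hypothesis weighting and selection in the actual method may differ; this only shows the loss computation.

```python
# Average the CTC loss over multiple ASR hypotheses (pseudo-labels) obtained
# by decoding the same unlabeled adaptation utterance.
import torch
import torch.nn as nn

ctc = nn.CTCLoss(blank=0, zero_infinity=True)


def multi_hypothesis_ctc(log_probs, input_len, hypotheses):
    """log_probs: (T, 1, C) log-softmax output for one utterance.
    hypotheses: list of 1-D LongTensors with label indices (no blanks)."""
    losses = []
    for hyp in hypotheses:
        target_len = torch.tensor([hyp.numel()])
        losses.append(ctc(log_probs, hyp.unsqueeze(0), input_len, target_len))
    return torch.stack(losses).mean()


T, C = 50, 30                                         # frames, output symbols
log_probs = torch.randn(T, 1, C, requires_grad=True).log_softmax(-1)
hyps = [torch.randint(1, C, (12,)), torch.randint(1, C, (11,))]
loss = multi_hypothesis_ctc(log_probs, torch.tensor([T]), hyps)
loss.backward()                                       # gradients reach the model
```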
|
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability
of the Embedding Layers in NLP Models
|
Recent studies have revealed a security threat to natural language processing
(NLP) models, called the Backdoor Attack. Victim models can maintain
competitive performance on clean samples while behaving abnormally on samples
with a specific trigger word inserted. Previous backdoor attacking methods
usually assume that attackers have a certain degree of data knowledge, either
the dataset which users would use or proxy datasets for a similar task, for
implementing the data poisoning procedure. However, in this paper, we find that
it is possible to hack the model in a data-free way by modifying one single
word embedding vector, with almost no accuracy sacrificed on clean samples.
Experimental results on sentiment analysis and sentence-pair classification
tasks show that our method is more efficient and stealthier. We hope this work
can raise awareness of this critical security risk hidden in the
embedding layers of NLP models. Our code is available at
https://github.com/lancopku/Embedding-Poisoning.
| 2,021 |
Computation and Language
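Because the attack edits only a single row of the input embedding matrix, one crude defender-side heuristic is to diff a downloaded checkpoint against the official base model and flag rows that moved unusually far. This check is our own illustration, not a defence proposed in the paper, and the suspect model path below is hypothetical.

```python
# Flag input-embedding rows of a downloaded model that differ sharply from a
# trusted reference checkpoint; a single poisoned row tends to stand out.
import torch
from transformers import AutoModel

reference = AutoModel.from_pretrained("bert-base-uncased")
suspect = AutoModel.from_pretrained("path/to/downloaded-model")  # hypothetical path

ref_emb = reference.get_input_embeddings().weight.detach()
sus_emb = suspect.get_input_embeddings().weight.detach()

row_shift = (ref_emb - sus_emb).norm(dim=1)          # L2 shift per vocabulary row
threshold = row_shift.mean() + 5 * row_shift.std()   # crude outlier rule
suspicious = torch.nonzero(row_shift > threshold).flatten()
print("embedding rows to inspect:", suspicious.tolist())
```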
|
English-Twi Parallel Corpus for Machine Translation
|
We present a parallel machine translation training corpus for English and
Akuapem Twi of 25,421 sentence pairs. We used a transformer-based translator to
generate initial translations in Akuapem Twi, which were later verified and
corrected where necessary by native speakers to eliminate any occurrence of
translationese. In addition, 697 higher quality crowd-sourced sentences are
provided for use as an evaluation set for downstream Natural Language
Processing (NLP) tasks. The typical use case for the larger human-verified
dataset is for further training of machine translation models in Akuapem Twi.
The higher-quality 697-sentence crowd-sourced dataset is recommended as a test
set for English-to-Twi and Twi-to-English machine translation models.
Furthermore, the Twi part of the crowd-sourced data may also be used for other
tasks, such as representation learning, classification, etc. We fine-tune the
transformer translation model on the training corpus and report benchmarks on
the crowd-sourced test set.
| 2,021 |
Computation and Language
|
CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic
Negotiation Systems
|
Automated systems that negotiate with humans have broad applications in
pedagogy and conversational AI. To advance the development of practical
negotiation systems, we present CaSiNo: a novel corpus of over a thousand
negotiation dialogues in English. Participants take the role of campsite
neighbors and negotiate for food, water, and firewood packages for their
upcoming trip. Our design results in diverse and linguistically rich
negotiations while maintaining a tractable, closed-domain environment. Inspired
by the literature in human-human negotiations, we annotate persuasion
strategies and perform correlation analysis to understand how the dialogue
behaviors are associated with the negotiation performance. We further propose
and evaluate a multi-task framework to recognize these strategies in a given
utterance. We find that multi-task learning substantially improves the
performance for all strategy labels, especially for the ones that are the most
skewed. We release the dataset, annotations, and the code to propel future work
in human-machine negotiations: https://github.com/kushalchawla/CaSiNo
| 2,021 |
Computation and Language
|
Shrinking Bigfoot: Reducing wav2vec 2.0 footprint
|
Wav2vec 2.0 is a state-of-the-art speech recognition model which maps speech
audio waveforms into latent representations. The largest version of wav2vec 2.0
contains 317 million parameters. Hence, the inference latency of wav2vec 2.0
will be a bottleneck in production, leading to high costs and a significant
environmental footprint. To improve wav2vec's applicability to a production
setting, we explore multiple model compression methods borrowed from the domain
of large language models. Using a teacher-student approach, we distilled the
knowledge from the original wav2vec 2.0 model into a student model, which is 2
times faster and 4.8 times smaller than the original model. This gain in
efficiency comes at the cost of only a 7% degradation in word error rate
(WER). Our quantized model is 3.6 times smaller than the original model, with
only a 0.1% degradation in WER. To the best of our knowledge, this is the first
work that compresses wav2vec 2.0.
| 2,021 |
Computation and Language
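As a generic illustration of the quantization step mentioned above, dynamic int8 quantization of the Linear layers of a wav2vec 2.0 checkpoint takes only a few lines in PyTorch. Whether this matches the paper's exact quantization scheme is an assumption; the snippet shows the technique, not their pipeline.

```python
# Post-training dynamic quantization: Linear layers are converted to int8,
# shrinking the serialized model and speeding up CPU inference.
import os
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Compare serialized sizes (quantized weights live in packed buffers, so
# counting parameters() would not reflect the reduction).
torch.save(model.state_dict(), "fp32.pt")
torch.save(quantized.state_dict(), "int8.pt")
print(os.path.getsize("fp32.pt") / 1e6, "MB vs",
      os.path.getsize("int8.pt") / 1e6, "MB")
```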
|
Text Normalization for Low-Resource Languages of Africa
|
Training data for machine learning models can come from many different
sources, which can be of dubious quality. For resource-rich languages like
English, there is a lot of data available, so we can afford to throw out the
dubious data. For low-resource languages where there is much less data
available, we cannot necessarily afford to throw out the dubious data, lest
we end up with a training set that is too small to train a model. In this
study, we examine the effects of text normalization and data set quality for a
set of low-resource languages of Africa -- Afrikaans, Amharic, Hausa, Igbo,
Malagasy, Somali, Swahili, and Zulu. We describe our text normalizer which we
built in the Pynini framework, a Python library for finite state transducers,
and our experiments in training language models for African languages using the
Natural Language Toolkit (NLTK), an open-source Python library for NLP.
| 2,021 |
Computation and Language
|
Industry Scale Semi-Supervised Learning for Natural Language
Understanding
|
This paper presents a production Semi-Supervised Learning (SSL) pipeline
based on the student-teacher framework, which leverages millions of unlabeled
examples to improve Natural Language Understanding (NLU) tasks. We investigate
two questions related to the use of unlabeled data in a production SSL context:
1) how to select samples from a huge unlabeled data pool that are beneficial
for SSL training, and 2) how the selected data affect the performance of
different state-of-the-art SSL techniques. We compare four widely used SSL
techniques, Pseudo-Label (PL), Knowledge Distillation (KD), Virtual Adversarial
Training (VAT) and Cross-View Training (CVT) in conjunction with two data
selection methods including committee-based selection and submodular
optimization based selection. We further examine the benefits and drawbacks of
these techniques when applied to intent classification (IC) and named entity
recognition (NER) tasks, and provide guidelines specifying when each of these
methods might be beneficial to improve large scale NLU systems.
| 2,021 |
Computation and Language
|
Unsupervised Machine Translation On Dravidian Languages
|
Unsupervised neural machine translation (UNMT) is especially beneficial for
low resource languages such as those from the Dravidian family. However, UNMT
systems tend to fail in realistic scenarios involving actual low resource
languages. Recent works propose to utilize auxiliary parallel data and have
achieved state-of-the-art results. In this work, we focus on unsupervised
translation between English and Kannada, a low resource Dravidian language. We
additionally utilize a limited amount of auxiliary data between English and
other related Dravidian languages. We show that unifying the writing systems is
essential in unsupervised translation between the Dravidian languages. We
explore several model architectures that use the auxiliary data in order to
maximize knowledge sharing and enable UNMT for distant language pairs. Our
experiments demonstrate that it is crucial to include auxiliary languages that
are similar to our focal language, Kannada. Furthermore, we propose a metric to
measure language similarity and show that it serves as a good indicator for
selecting the auxiliary languages.
| 2,021 |
Computation and Language
|
Data Augmentation in a Hybrid Approach for Aspect-Based Sentiment
Analysis
|
Data augmentation is a way to increase the diversity of available data by
applying constrained transformations on the original data. This strategy has
been widely used in image classification but has to the best of our knowledge
not yet been used in aspect-based sentiment analysis (ABSA). ABSA is a text
analysis technique that determines aspects and their associated sentiment in
opinionated text. In this paper, we investigate the effect of data augmentation
on a state-of-the-art hybrid approach for aspect-based sentiment analysis
(HAABSA). We apply modified versions of easy data augmentation (EDA),
backtranslation, and word mixup. We evaluate the proposed techniques on the
SemEval 2015 and SemEval 2016 datasets. The best result is obtained with the
adjusted version of EDA, which yields a 0.5 percentage point improvement on the
SemEval 2016 dataset and 1 percentage point increase on the SemEval 2015
dataset compared to the original HAABSA model.
| 2,021 |
Computation and Language
|
Explaining a Neural Attention Model for Aspect-Based Sentiment
Classification Using Diagnostic Classification
|
Many high-performance machine learning models for Aspect-Based Sentiment
Classification (ABSC) are black boxes and therefore barely explain
how they assign a certain sentiment value to an aspect. In this paper,
we propose explanation models that inspect the internal dynamics of a
state-of-the-art neural attention model, the LCR-Rot-hop, by using a technique
called Diagnostic Classification. Our diagnostic classifier is a simple neural
network, which evaluates whether the internal layers of the LCR-Rot-hop model
encode useful word information for classification, i.e., the part of speech,
the sentiment value, the presence of aspect relation, and the aspect-related
sentiment value of words. We conclude that the lower layers in the LCR-Rot-hop
model encode the part of speech and the sentiment value, whereas the higher
layers represent the presence of a relation with the aspect and the
aspect-related sentiment value of words.
| 2,021 |
Computation and Language
|
Transformer visualization via dictionary learning: contextualized
embedding as a linear superposition of transformer factors
|
Transformer networks have revolutionized NLP representation learning since
they were introduced. Though great effort has been made to explain the
representations in transformers, it is widely recognized that our understanding
is not sufficient. One important reason is the lack of adequate visualization
tools for detailed analysis. In this paper, we propose to use dictionary
learning to open up these "black boxes" as linear superpositions of transformer
factors. Through visualization, we demonstrate the hierarchical semantic
structures captured by the transformer factors, e.g., word-level polysemy
disambiguation, sentence-level pattern formation, and long-range dependency.
While some of these patterns confirm the conventional prior linguistic
knowledge, the rest are relatively unexpected, which may provide new insights.
We hope this visualization tool can bring further knowledge and a better
understanding of how transformer networks work. The code is available at
https://github.com/zeyuyun1/TransformerVis
| 2,023 |
Computation and Language
|
Contextual Text Embeddings for Twi
|
Transformer-based language models have been changing the modern Natural
Language Processing (NLP) landscape for high-resource languages such as
English, Chinese, Russian, etc. However, this technology does not yet exist for
any Ghanaian language. In this paper, we introduce the first of such models for
Twi or Akan, the most widely spoken Ghanaian language. The specific
contribution of this research work is the development of several pretrained
transformer language models for the Akuapem and Asante dialects of Twi, paving
the way for advances in application areas such as Named Entity Recognition
(NER), Neural Machine Translation (NMT), Sentiment Analysis (SA) and
Part-of-Speech (POS) tagging. Specifically, we introduce four different
flavours of ABENA -- A BERT model Now in Akan that is fine-tuned on a set of
Akan corpora, and BAKO - BERT with Akan Knowledge only, which is trained from
scratch. We open-source the model through the Hugging Face model hub and
demonstrate its use via a simple sentiment classification example.
| 2,021 |
Computation and Language
|
Grounding Open-Domain Instructions to Automate Web Support Tasks
|
Grounding natural language instructions on the web to perform previously
unseen tasks enables accessibility and automation. We introduce a task and
dataset to train AI agents from open-domain, step-by-step instructions
originally written for people. We build RUSS (Rapid Universal Support Service)
to tackle this problem. RUSS consists of two models: First, a BERT-LSTM with
pointers parses instructions to ThingTalk, a domain-specific language we design
for grounding natural language on the web. Then, a grounding model retrieves
the unique IDs of any webpage elements requested in ThingTalk. RUSS may
interact with the user through a dialogue (e.g. ask for an address) or execute
a web operation (e.g. click a button) inside the web runtime. To augment
training, we synthesize natural language instructions mapped to ThingTalk. Our
dataset consists of 80 different customer service problems from help websites,
with a total of 741 step-by-step instructions and their corresponding actions.
RUSS achieves 76.7% end-to-end accuracy predicting agent actions from single
instructions. It outperforms state-of-the-art models that directly map
instructions to actions without ThingTalk. Our user study shows that RUSS is
preferred by actual users over web navigation.
| 2,021 |
Computation and Language
|
XRJL-HKUST at SemEval-2021 Task 4: WordNet-Enhanced Dual Multi-head
Co-Attention for Reading Comprehension of Abstract Meaning
|
This paper presents our submitted system to SemEval 2021 Task 4: Reading
Comprehension of Abstract Meaning. Our system uses a large pre-trained language
model as the encoder and an additional dual multi-head co-attention layer to
strengthen the relationship between passages and question-answer pairs,
following the current state-of-the-art model DUMA. The main difference is that
we stack the passage-question and question-passage attention modules instead of
computing them in parallel, to simulate a reconsideration process. We also add
a layer normalization module to improve the performance of our model.
Furthermore, to incorporate knowledge about abstract concepts, we retrieve the
definitions of candidate answers from WordNet and feed them to the model as
extra inputs. Our system, called WordNet-enhanced DUal Multi-head Co-Attention
(WN-DUMA), achieves 86.67% and 89.99% accuracy on the official blind test set
of subtask 1 and subtask 2 respectively.
| 2,021 |
Computation and Language
|
Autocorrect in the Process of Translation -- Multi-task Learning
Improves Dialogue Machine Translation
|
Automatic translation of dialogue texts is in high demand in many real-life
scenarios. However, existing neural machine translation systems deliver
unsatisfactory results. In this paper, we conduct a deep analysis of a
dialogue corpus and summarize three major issues in dialogue translation:
pronoun dropping, punctuation dropping, and typos. In response to these
challenges, we propose a joint learning method to identify omissions and
typos, and to utilize context when translating dialogue
utterances. To properly evaluate the performance, we propose a manually
annotated dataset with 1,931 Chinese-English parallel utterances from 300
dialogues as a benchmark testbed for dialogue translation. Our experiments show
that the proposed method improves translation quality by 3.2 BLEU over the
baselines. It also elevates the recovery rate of omitted pronouns from 26.09%
to 47.16%. We will publish the code and dataset publicly at
https://github.com/rgwt123/DialogueMT.
| 2,021 |
Computation and Language
|
AfriKI: Machine-in-the-Loop Afrikaans Poetry Generation
|
This paper proposes a generative language model called AfriKI. Our approach
is based on an LSTM architecture trained on a small corpus of contemporary
fiction. With the aim of promoting human creativity, we use the model as an
authoring tool to explore machine-in-the-loop Afrikaans poetry generation. To
our knowledge, this is the first study to attempt creative text generation in
Afrikaans.
| 2,021 |
Computation and Language
|
Locally-Contextual Nonlinear CRFs for Sequence Labeling
|
Linear chain conditional random fields (CRFs) combined with contextual word
embeddings have achieved state-of-the-art performance on sequence labeling
tasks. In many of these tasks, the identity of the neighboring words is often
the most useful contextual information when predicting the label of a given
word. However, contextual embeddings are usually trained in a task-agnostic
manner. This means that although they may encode information about the
neighboring words, it is not guaranteed. It can therefore be beneficial to
design the sequence labeling architecture to directly extract this information
from the embeddings. We propose locally-contextual nonlinear CRFs for sequence
labeling. Our approach directly incorporates information from the neighboring
embeddings when predicting the label for a given word, and parametrizes the
potential functions using deep neural networks. Our model serves as a drop-in
replacement for the linear chain CRF, consistently outperforming it in our
ablation study. On a variety of tasks, our results are competitive with those
of the best published methods. In particular, we outperform the previous state
of the art on chunking on CoNLL 2000 and named entity recognition on OntoNotes
5.0 English.
| 2,021 |
Computation and Language
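A minimal PyTorch sketch of the kind of locally-contextual potential described above: the emission score for each word's label comes from an MLP over the concatenated previous, current, and next contextual embeddings, paired with a learned label-transition matrix. The exact parametrization in the paper may differ.

```python
import torch
import torch.nn as nn


class LocalContextPotentials(nn.Module):
    """Emission scores from [prev; cur; next] embeddings plus label transitions."""

    def __init__(self, emb_dim, num_labels, hidden=256):
        super().__init__()
        self.emission = nn.Sequential(
            nn.Linear(3 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_labels),
        )
        self.transitions = nn.Parameter(torch.zeros(num_labels, num_labels))

    def forward(self, embeddings):                    # (batch, seq_len, emb_dim)
        pad = torch.zeros_like(embeddings[:, :1])
        prev = torch.cat([pad, embeddings[:, :-1]], dim=1)
        nxt = torch.cat([embeddings[:, 1:], pad], dim=1)
        local = torch.cat([prev, embeddings, nxt], dim=-1)
        return self.emission(local), self.transitions


scores, trans = LocalContextPotentials(768, 9)(torch.randn(2, 20, 768))
print(scores.shape, trans.shape)                      # (2, 20, 9) and (9, 9)
```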
|
Grounding Dialogue Systems via Knowledge Graph Aware Decoding with
Pre-trained Transformers
|
Generating knowledge grounded responses in both goal and non-goal oriented
dialogue systems is an important research challenge. Knowledge Graphs (KG) can
be viewed as an abstraction of the real world, which can potentially facilitate
a dialogue system to produce knowledge grounded responses. However, integrating
KGs into the dialogue generation process in an end-to-end manner is a
non-trivial task. This paper proposes a novel architecture for integrating KGs
into the response generation process by training a BERT model that learns to
answer using the elements of the KG (entities and relations) in a multi-task,
end-to-end setting. The k-hop subgraph of the KG is incorporated into the model
during training and inference using the graph Laplacian. Empirical evaluation
suggests that the model achieves better knowledge groundedness (measured via
Entity F1 score) compared to other state-of-the-art models for both goal and
non-goal oriented dialogues.
| 2,021 |
Computation and Language
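For reference, the graph Laplacian mentioned above is a simple object: L = D - A for the (k-hop) subgraph, often used in its symmetrically normalized form. The toy computation below only illustrates the object itself; how it is combined with BERT is specific to the paper and not shown.

```python
# Graph Laplacian of a small k-hop subgraph: L = D - A, plus the normalized
# variant commonly used by graph-aware neural models.
import numpy as np

A = np.array([[0, 1, 1, 0],        # toy adjacency matrix of a 4-node subgraph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

deg = A.sum(axis=1)
L = np.diag(deg) - A                                        # combinatorial Laplacian
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
L_norm = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt       # normalized Laplacian

print(L)
print(np.round(L_norm, 3))
```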
|
The Unfolding Structure of Arguments in Online Debates: The case of a
No-Deal Brexit
|
In the last decade, political debates have progressively shifted to social
media. Rhetorical devices employed by online actors and factions that operate
in these debating arenas can be captured and analysed to conduct a statistical
reading of societal controversies and their argumentation dynamics. In this
paper, we propose a five-step methodology to extract, categorize and explore
the latent argumentation structures of online debates. Using Twitter data about
a "no-deal" Brexit, we focus on the expected effects in case of materialisation
of this event. First, we extract cause-effect claims contained in tweets using
RegEx that exploit verbs related to Creation, Destruction and Causation.
Second, we categorise extracted "no-deal" effects using a Structural Topic
Model estimated on unigrams and bigrams. Third, we select controversial effect
topics and explore within-topic argumentation differences between self-declared
partisan user factions. We hence type topics using estimated covariate effects
on topic propensities; then, using the topic correlation network, we study the
topological structure of the debate to identify coherent topical
constellations. Finally, we analyse the debate time dynamics and infer
lead/follow relations among factions. Results show that the proposed
methodology can be employed to perform a statistical rhetorics analysis of
debates, and map the architecture of controversies across time. In particular,
the "no-deal" Brexit debate is shown to have an assortative argumentation
structure heavily characterized by factional constellations of arguments, as
well as by polarized narrative frames invoked through verbs related to Creation
and Destruction. Our findings highlight the benefits of implementing a systemic
approach to the analysis of debates, which allows the unveiling of topical and
factional dependencies between arguments employed in online debates.
| 2,021 |
Computation and Language
|
Representing ELMo embeddings as two-dimensional text online
|
We describe a new addition to the WebVectors toolkit which is used to serve
word embedding models over the Web. The new ELMoViz module adds support for
contextualized embedding architectures, in particular for ELMo models. The
provided visualizations follow the metaphor of `two-dimensional text' by
showing lexical substitutes: words which are most semantically similar in
context to the words of the input sentence. The system allows the user to
change the ELMo layers from which token embeddings are inferred. It also
conveys corpus information about the query words and their lexical substitutes
(namely their frequency tiers and parts of speech). The module is well
integrated into the rest of the WebVectors toolkit, providing lexical
hyperlinks to word representations in static embedding models. Two web services
have already implemented the new functionality with pre-trained ELMo models for
Russian, Norwegian and English.
| 2,021 |
Computation and Language
|
Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn
Chatbot Responding with Intention
|
Most chatbot literature that focuses on improving the fluency and coherence
of a chatbot is dedicated to making chatbots more human-like. However, very
little work delves into what really separates humans from chatbots -- humans
intrinsically understand the effect their responses have on the interlocutor
and often respond with an intention such as proposing an optimistic view to
make the interlocutor feel better. This paper proposes an innovative framework
to train chatbots to possess human-like intentions. Our framework includes a
guiding chatbot and an interlocutor model that plays the role of humans. The
guiding chatbot is assigned an intention and learns to induce the interlocutor
to reply with responses matching the intention, for example, long responses,
joyful responses, responses with specific words, etc. We examined our framework
using three experimental setups and evaluated the guiding chatbot with four
different metrics to demonstrate flexibility and performance advantages.
Additionally, we performed trials with human interlocutors to substantiate the
guiding chatbot's effectiveness in influencing the responses of humans to a
certain extent. Code will be made available to the public.
| 2,021 |
Computation and Language
|
Evaluating the Morphosyntactic Well-formedness of Generated Texts
|
Text generation systems are ubiquitous in natural language processing
applications. However, evaluation of these systems remains a challenge,
especially in multilingual settings. In this paper, we propose L'AMBRE -- a
metric to evaluate the morphosyntactic well-formedness of text using its
dependency parse and morphosyntactic rules of the language. We present a way to
automatically extract various rules governing morphosyntax directly from
dependency treebanks. To tackle the noisy outputs from text generation systems,
we propose a simple methodology to train robust parsers. We show the
effectiveness of our metric on the task of machine translation through a
diachronic study of systems translating into morphologically-rich languages.
| 2,021 |
Computation and Language
|
A study of latent monotonic attention variants
|
End-to-end models reach state-of-the-art performance for speech recognition,
but global soft attention is not monotonic; it might lead to convergence
problems, instability, and bad generalisation, cannot be used for online
streaming, and is also inefficient to compute. Monotonicity can potentially
fix all of this. There are several ad-hoc solutions or heuristics to introduce
monotonicity, but a principled introduction is rarely found in the literature so
far. In this paper, we present a mathematically clean solution to introduce
monotonicity, by introducing a new latent variable which represents the audio
position or segment boundaries. We compare several monotonic latent models to
our global soft attention baseline such as a hard attention model, a local
windowed soft attention model, and a segmental soft attention model. We can
show that our monotonic models perform as well as the global soft attention
model. We perform our experiments on Switchboard 300h. We carefully outline the
details of our training and release our code and configs.
| 2,021 |
Computation and Language
|
Collaborative construction of lexicographic and parallel datasets for
African languages: first assessment
|
Faced with a considerable lack of resources in African languages to carry out
work in Natural Language Processing (NLP), Natural Language Understanding (NLU)
and artificial intelligence, the research teams of the NTeALan association have
set themselves the objective of building open-source platforms for the
collaborative construction of lexicographic data in African languages. In this
article, we present our first reports after two years of collaborative
construction of lexicographic resources useful for African NLP tools.
| 2,021 |
Computation and Language
|
BASE Layers: Simplifying Training of Large, Sparse Models
|
We introduce a new balanced assignment of experts (BASE) layer for large
language models that greatly simplifies existing high capacity sparse layers.
Sparse layers can dramatically improve the efficiency of training and inference
by routing each token to specialized expert modules that contain only a small
fraction of the model parameters. However, it can be difficult to learn
balanced routing functions that make full use of the available experts;
existing approaches typically use routing heuristics or auxiliary
expert-balancing loss functions. In contrast, we formulate token-to-expert
allocation as a linear assignment problem, allowing an optimal assignment in
which each expert receives an equal number of tokens. This optimal assignment
scheme improves efficiency by guaranteeing balanced compute loads, and also
simplifies training by not requiring any new hyperparameters or auxiliary
losses. Code is publicly released at https://github.com/pytorch/fairseq/
| 2,021 |
Computation and Language
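A small scipy sketch of the mathematical core: treating token-to-expert routing as a linear assignment problem so that each expert receives exactly tokens/experts tokens, with the negative routing score as the cost. The paper's distributed, batched implementation is more involved; the token and expert vectors here are random stand-ins.

```python
# Balanced token-to-expert assignment as a linear assignment problem.
# Each of E experts receives exactly T // E tokens (assume E divides T).
import numpy as np
from scipy.optimize import linear_sum_assignment

T, E, d = 16, 4, 32                        # tokens, experts, feature dimension
tokens = np.random.randn(T, d)
experts = np.random.randn(E, d)            # expert routing embeddings

scores = tokens @ experts.T                # (T, E) routing affinities
capacity = T // E
# Tile each expert column `capacity` times so the cost matrix is square (T x T);
# every tiled column is one "slot" of that expert.
cost = -np.repeat(scores, capacity, axis=1)
token_idx, slot_idx = linear_sum_assignment(cost)
expert_of_token = slot_idx // capacity     # map slots back to experts

for e in range(E):
    assigned = np.where(expert_of_token == e)[0].tolist()
    print(f"expert {e}: tokens {assigned}")   # always exactly `capacity` tokens
```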
|
CloneBot: Personalized Dialogue-Response Predictions
|
Our project task was to create a model that, given a speaker ID, chat
history, and an utterance query, can predict the response utterance in a
conversation. The model is personalized for each speaker. This task can be a
useful tool for building speech bots that talk in a human-like manner in a live
conversation. Further, we succeeded in using dense-vector encoding clustering
to retrieve relevant historical dialogue context, a useful strategy
for overcoming the input limitations of neural-based models when predictions
require longer-term references from the dialogue history. In this paper, we
have implemented a state-of-the-art model using pre-training and fine-tuning
techniques built on transformer architecture and multi-headed attention blocks
for the Switchboard corpus. We also show how efficient vector clustering
algorithms can be used for real-time utterance predictions that require no
training and therefore work on offline and encrypted message histories.
| 2,021 |
Computation and Language
|
An Exploration of Data Augmentation Techniques for Improving English to
Tigrinya Translation
|
It has been shown that the performance of neural machine translation (NMT)
drops starkly in low-resource conditions, often requiring large amounts of
auxiliary data to achieve competitive results. An effective method of
generating auxiliary data is back-translation of target language sentences. In
this work, we present a case study of Tigrinya where we investigate several
back-translation methods to generate synthetic source sentences. We find that
in low-resource conditions, back-translation by pivoting through a
higher-resource language related to the target language proves most effective,
resulting in substantial improvements over baselines.
| 2,021 |
Computation and Language
|
Joint Khmer Word Segmentation and Part-of-Speech Tagging Using Deep
Learning
|
Khmer text is written from left to right with optional spaces. A space does not
serve as a word boundary; instead, it is used for readability or other
functional purposes. Word segmentation is a prerequisite for downstream tasks
such as part-of-speech (POS) tagging, and thus the robustness of POS tagging
highly depends on word segmentation. Conventional Khmer POS tagging is a
two-stage process that begins with word segmentation followed by the actual
tagging of each word. In this work, a joint word segmentation and POS tagging
approach using a single deep learning model is proposed, so that word
segmentation and POS tagging can be performed jointly. The proposed model
was trained and tested using the publicly available Khmer POS dataset. The
validation suggested that the performance of the joint model is on par with the
conventional two-stage POS tagging.
| 2,021 |
Computation and Language
|
Self-Supervised Euphemism Detection and Identification for Content
Moderation
|
Fringe groups and organizations have a long history of using
euphemisms--ordinary-sounding words with a secret meaning--to conceal what they
are discussing. Nowadays, one common use of euphemisms is to evade content
moderation policies enforced by social media platforms. Existing tools for
enforcing policy automatically rely on keyword searches for words on a "ban
list", but these are notoriously imprecise: even when limited to swearwords,
they can still cause embarrassing false positives. When a commonly used
ordinary word acquires a euphemistic meaning, adding it to a keyword-based ban
list is hopeless: consider "pot" (storage container or marijuana?) or "heater"
(household appliance or firearm?). The current generation of social media
companies instead hire staff to check posts manually, but this is expensive,
inhumane, and not much more effective. It is usually apparent to a human
moderator that a word is being used euphemistically, but they may not know what
the secret meaning is, and therefore whether the message violates policy. Also,
when a euphemism is banned, the group that used it need only invent another
one, leaving moderators one step behind.
This paper will demonstrate unsupervised algorithms that, by analyzing words
in their sentence-level context, can both detect words being used
euphemistically, and identify the secret meaning of each word. Compared to the
existing state of the art, which uses context-free word embeddings, our
algorithm for detecting euphemisms achieves 30-400% higher detection accuracies
of unlabeled euphemisms in a text corpus. Our algorithm for revealing
euphemistic meanings of words is the first of its kind, as far as we are aware.
In the arms race between content moderators and policy evaders, our algorithms
may help shift the balance in the direction of the moderators.
| 2,021 |
Computation and Language
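A rough sketch of the sentence-level masking intuition: mask the candidate term in the sentences where it occurs and inspect what a masked language model expects in that slot; consistently "off-topic" fillers hint at a hidden meaning. The aggregation below is our own simplification and not the paper's algorithm, and the example sentences are invented.

```python
# Mask a candidate euphemism in its contexts and tally a masked LM's guesses;
# recurring guesses hint at what the word secretly stands for.
from collections import Counter
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
mask = fill.tokenizer.mask_token

sentences = [                                    # toy contexts for the candidate
    "he got caught carrying a heater in his car",
    "she keeps a heater under the counter just in case",
]

guesses = Counter()
for sent in sentences:
    masked = sent.replace("heater", mask)
    for pred in fill(masked, top_k=5):
        guesses[pred["token_str"].strip()] += 1

print(guesses.most_common(5))                    # fillers like 'gun' would be telling
```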
|
Limited Data Emotional Voice Conversion Leveraging Text-to-Speech:
Two-stage Sequence-to-Sequence Training
|
Emotional voice conversion (EVC) aims to change the emotional state of an
utterance while preserving the linguistic content and speaker identity. In this
paper, we propose a novel 2-stage training strategy for sequence-to-sequence
emotional voice conversion with a limited amount of emotional speech data. We
note that the proposed EVC framework leverages text-to-speech (TTS), as the two
share a common goal: generating high-quality, expressive voice. In stage
1, we perform style initialization with a multi-speaker TTS corpus, to
disentangle speaking style and linguistic content. In stage 2, we perform
emotion training with a limited amount of emotional speech data, to learn how
to disentangle emotional style and linguistic information from the speech. The
proposed framework can perform both spectrum and prosody conversion and
achieves significant improvement over the state-of-the-art baselines in both
objective and subjective evaluation.
| 2,021 |
Computation and Language
|
Few-shot learning through contextual data augmentation
|
Machine translation (MT) models used in industries with constantly changing
topics, such as translation or news agencies, need to adapt to new data to
maintain their performance over time. Our aim is to teach a pre-trained MT
model to translate previously unseen words accurately, based on very few
examples. We propose (i) an experimental setup allowing us to simulate novel
vocabulary appearing in human-submitted translations, and (ii) corresponding
evaluation metrics to compare our approaches. We extend a data augmentation
approach using a pre-trained language model to create training examples with
similar contexts for novel words. We compare different fine-tuning and data
augmentation approaches and show that adaptation on the scale of one to five
examples is possible. Combining data augmentation with randomly selected
training sentences leads to the highest BLEU score and accuracy improvements.
Impressively, with only 1 to 5 examples, our model reports better accuracy
scores than a reference system trained on an average of 313 parallel examples.
| 2,021 |
Computation and Language
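A minimal sketch of one way to create extra contexts for a novel word with a pretrained masked language model: keep the novel word in place, mask a neighbouring word in a seed sentence, and let the LM fill the slot. The prompts, the filtering, and the made-up word below are assumptions; the paper's protocol may differ.

```python
# Generate training-sentence variants around a novel word by masking one of
# its neighbouring words and letting a masked LM propose a replacement.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")
mask = fill.tokenizer.mask_token


def augment(seed, novel_word, n_variants=3):
    variants = []
    words = seed.split()
    for i, word in enumerate(words):
        if word == novel_word:
            continue                       # never mask the novel word itself
        masked = " ".join(words[:i] + [mask] + words[i + 1:])
        best = fill(masked, top_k=1)[0]    # highest-scoring filler
        variants.append(best["sequence"])
        if len(variants) >= n_variants:
            break
    return variants


seed = "The committee approved the krutomer budget yesterday"  # invented novel word
for sentence in augment(seed, "krutomer"):
    print(sentence)
```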
|
Deep Neural Approaches to Relation Triplets Extraction: A Comprehensive
Survey
|
Recently, with the advances made in continuous representation of words (word
embeddings) and deep neural architectures, many research works have been published
in the area of relation extraction, and it is very difficult to keep track of so
many papers. To help future research, we present a comprehensive review of the
recently published research works in relation extraction. We mostly focus on
relation extraction using deep neural networks which have achieved
state-of-the-art performance on publicly available datasets. In this survey, we
cover sentence-level relation extraction to document-level relation extraction,
pipeline-based approaches to joint extraction approaches, and annotated datasets
to distantly supervised datasets, along with a few very recent research
directions such as zero-shot or few-shot relation extraction and noise
mitigation in distantly supervised datasets. Regarding neural architectures, we cover
convolutional models, recurrent network models, attention network models, and
graph convolutional models in this survey.
| 2,021 |
Computation and Language
|
UA-GEC: Grammatical Error Correction and Fluency Corpus for the
Ukrainian Language
|
We present a corpus professionally annotated for grammatical error correction
(GEC) and fluency edits in the Ukrainian language. To the best of our
knowledge, this is the first GEC corpus for the Ukrainian language. We
collected texts with errors (20,715 sentences) from a diverse pool of
contributors, including both native and non-native speakers. The data cover a
wide variety of writing domains, from text chats and essays to formal writing.
Professional proofreaders corrected and annotated the corpus for errors
relating to fluency, grammar, punctuation, and spelling. This corpus can be
used for developing and evaluating GEC systems in Ukrainian. More generally, it
can be used for researching multilingual and low-resource NLP, morphologically
rich languages, document-level GEC, and fluency correction. The corpus is
publicly available at https://github.com/grammarly/ua-gec
| 2,022 |
Computation and Language
|
A Neighbourhood Framework for Resource-Lean Content Flagging
|
We propose a novel framework for cross-lingual content flagging with limited
target-language data, which significantly outperforms prior work in terms of
predictive performance. The framework is based on a nearest-neighbour
architecture. It is a modern instantiation of the vanilla k-nearest neighbour
model, as we use Transformer representations in all its components. Our
framework can adapt to new source-language instances, without the need to be
retrained from scratch. Unlike prior work on neighbourhood-based approaches, we
encode the neighbourhood information based on query--neighbour interactions. We
propose two encoding schemes and we show their effectiveness using both
qualitative and quantitative analysis. Our evaluation results on eight
languages from two different datasets for abusive language detection show
sizable improvements of up to 9.5 F1 points absolute (for Italian) over strong
baselines. On average, we achieve 3.6 absolute F1 points of improvement for the
three languages in the Jigsaw Multilingual dataset and 2.14 points for the WUL
dataset.
| 2,022 |
Computation and Language
|
Defx at SemEval-2020 Task 6: Joint Extraction of Concepts and Relations
for Definition Extraction
|
Definition Extraction systems are a valuable knowledge source for both humans
and algorithms. In this paper we describe our submissions to the DeftEval
shared task (SemEval-2020 Task 6), which is evaluated on an English textbook
corpus. We provide a detailed explanation of our system for the joint
extraction of definition concepts and the relations among them. Furthermore we
provide an ablation study of our model variations and describe the results of
an error analysis.
| 2,021 |
Computation and Language
|
No Keyword is an Island: In search of covert associations
|
This paper describes how corpus-assisted discourse analysis based on keyword
(KW) identification and interpretation can benefit from employing Market basket
analysis (MBA) after KW extraction. MBA is a data mining technique used
originally in marketing that can reveal consistent associations between items
in a shopping cart, but also between keywords in a corpus of many texts. By
identifying recurring associations between KWs we can compensate for the lack
of wider context which is a major issue impeding the interpretation of isolated
KWs (esp. when analyzing large data). To showcase the advantages of MBA in
"re-contextualizing" keywords within the discourse, a pilot study on the topic
of migration was conducted, contrasting anti-system and center-right Czech
internet media. The results show that MBA is useful in
identifying the dominant strategy of anti-system news portals: to weave in a
confounding ideological undercurrent and connect the concept of migrants to a
multitude of other topics (i.e., flooding the discourse).
| 2,021 |
Computation and Language
|
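The keyword-association step described in the abstract above can be roughly sketched with the mlxtend implementation of Apriori; the keyword baskets and the support and confidence thresholds below are invented placeholders, not the study's data or settings.

```python
# Rough sketch: Market Basket Analysis over keyword co-occurrence across texts.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical data: each "basket" is the set of keywords extracted from one text.
keyword_baskets = [
    {"migrants", "borders", "crime"},
    {"migrants", "crime", "funding"},
    {"migrants", "borders"},
    {"migrants", "funding", "eu"},
]

# One-hot encode the baskets into a boolean document-by-keyword matrix.
vocab = sorted(set().union(*keyword_baskets))
onehot = pd.DataFrame(
    [[kw in basket for kw in vocab] for basket in keyword_baskets], columns=vocab
)

# Mine frequent keyword itemsets and derive association rules between keywords.
itemsets = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```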
Divide and Rule: Effective Pre-Training for Context-Aware Multi-Encoder
Translation Models
|
Multi-encoder models are a broad family of context-aware neural machine
translation systems that aim to improve translation quality by encoding
document-level contextual information alongside the current sentence. The
context encoding is undertaken by contextual parameters, trained on
document-level data. In this work, we discuss the difficulty of training these
parameters effectively, due to the sparsity of the words in need of context
(i.e., the training signal), and their relevant context. We propose to
pre-train the contextual parameters over split sentence pairs, which makes
efficient use of the available data for two reasons. Firstly, it increases the
contextual training signal by breaking intra-sentential syntactic relations,
and thus pushing the model to search the context for disambiguating clues more
frequently. Secondly, it eases the retrieval of relevant context, since context
segments become shorter. We propose four different splitting methods, and
evaluate our approach with BLEU and contrastive test sets. Results show that it
consistently improves learning of contextual parameters, both in low and high
resource settings.
| 2,022 |
Computation and Language
|
Modeling Users and Online Communities for Abuse Detection: A Position on
Ethics and Explainability
|
Abuse on the Internet is an important societal problem of our time. Millions
of Internet users face harassment, racism, personal attacks, and other types of
abuse across various platforms. The psychological effects of abuse on
individuals can be profound and lasting. Consequently, over the past few years,
there has been a substantial research effort towards automated abusive language
detection in the field of NLP. In this position paper, we discuss the role that
modeling of users and online communities plays in abuse detection.
Specifically, we review and analyze the state of the art methods that leverage
user or community information to enhance the understanding and detection of
abusive language. We then explore the ethical challenges of incorporating user
and community information, laying out considerations to guide future research.
Finally, we address the topic of explainability in abusive language detection,
proposing properties that an explainable method should aim to exhibit. We
describe how user and community information can facilitate the realization of
these properties and discuss the effective operationalization of explainability
in view of the properties.
| 2,021 |
Computation and Language
|
Augmenting Poetry Composition with Verse by Verse
|
We describe Verse by Verse, our experiment in augmenting the creative process
of writing poetry with an AI. We have created a group of AI poets, styled after
various classic American poets, that can offer generated lines of verse as
suggestions while a user is composing a poem. In this paper, we describe the
underlying system that offers these suggestions. It includes a generative model,
which generates a large corpus of lines of verse offline that is then stored in
an index, and a dual-encoder model that is tasked with
recommending the next possible set of verses from our index given the previous
line of verse.
| 2,022 |
Computation and Language
|
Leveraging Neural Machine Translation for Word Alignment
|
The most common tools for word-alignment rely on a large number of parallel
sentences, which are then usually processed according to one of the IBM model
algorithms. The training data is, however, the same as for machine translation
(MT) systems, especially for neural MT (NMT), which itself is able to produce
word-alignments using the trained attention heads. This is convenient because
word-alignment is theoretically a viable byproduct of any attention-based NMT,
which is also able to provide decoder scores for a translated sentence pair.
We summarize different approaches on how word-alignment can be extracted from
alignment scores and then explore ways in which scores can be extracted from
NMT, focusing on inferring the word-alignment scores based on output sentence
and token probabilities. We compare this to the extraction of alignment scores
from attention. We conclude with aggregating all of the sources of alignment
scores into a simple feed-forward network which achieves the best results when
combined alignment extractors are used.
| 2,021 |
Computation and Language
|
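One common way to obtain the attention-based alignments mentioned in the abstract above is to link each target token to the source token it attends to most strongly. The sketch below illustrates only that baseline step with a toy attention matrix; it is not the paper's score-aggregation network.

```python
# Minimal sketch: hard word alignment from a (target x source) attention matrix.
import numpy as np

def alignment_from_attention(attn: np.ndarray, threshold: float = 0.0):
    """attn: shape (target_len, source_len), rows are softmax weights.
    Returns a list of (target_index, source_index) alignment links."""
    links = []
    for t_idx, row in enumerate(attn):
        s_idx = int(np.argmax(row))
        if row[s_idx] >= threshold:  # optionally drop low-confidence links
            links.append((t_idx, s_idx))
    return links

# Toy example: 3 target tokens attending over 4 source tokens.
attn = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.05, 0.80, 0.10, 0.05],
    [0.10, 0.10, 0.20, 0.60],
])
print(alignment_from_attention(attn, threshold=0.3))  # [(0, 0), (1, 1), (2, 3)]
```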
Domain-specific MT for Low-resource Languages: The case of
Bambara-French
|
Translating to and from low-resource languages is a challenge for machine
translation (MT) systems due to a lack of parallel data. In this paper we
address the issue of domain-specific MT for Bambara, an under-resourced Mande
language spoken in Mali. We present the first domain-specific parallel dataset
for MT of Bambara into and from French. We discuss challenges in working with
small quantities of domain-specific data for a low-resource language and we
present the results of machine learning experiments on this data.
| 2,021 |
Computation and Language
|
A Statistical Analysis of Summarization Evaluation Metrics using
Resampling Methods
|
The quality of a summarization evaluation metric is quantified by calculating
the correlation between its scores and human annotations across a large number
of summaries. Currently, it is unclear how precise these correlation estimates
are, or whether differences between two metrics' correlations reflect a true
difference or are due to mere chance. In this work, we address these two
problems by proposing methods for calculating confidence intervals and running
hypothesis tests for correlations using two resampling methods, bootstrapping
and permutation. After evaluating which of the proposed methods is most
appropriate for summarization through two simulation experiments, we analyze
the results of applying these methods to several different automatic evaluation
metrics across three sets of human annotations. We find that the confidence
intervals are rather wide, demonstrating high uncertainty in the reliability of
automatic metrics. Further, although many metrics fail to show statistical
improvements over ROUGE, two recent works, QAEval and BERTScore, do in some
evaluation settings.
| 2,021 |
Computation and Language
|
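The bootstrapping idea described above can be illustrated briefly: resample the summaries with replacement and recompute the metric-human correlation each time. The function below is a generic sketch of that procedure, not the authors' code, and the scores are made up.

```python
# Sketch: bootstrap confidence interval for a metric-human correlation.
import numpy as np
from scipy.stats import pearsonr

def bootstrap_correlation_ci(metric_scores, human_scores, n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    metric_scores = np.asarray(metric_scores, dtype=float)
    human_scores = np.asarray(human_scores, dtype=float)
    n = len(metric_scores)
    corrs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample summaries with replacement
        corrs.append(pearsonr(metric_scores[idx], human_scores[idx])[0])
    lower, upper = np.percentile(corrs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper

# Toy usage with made-up scores for eight summaries.
metric = [0.31, 0.52, 0.44, 0.60, 0.28, 0.75, 0.50, 0.38]
human = [2.0, 3.5, 3.0, 4.0, 1.5, 4.5, 3.0, 2.5]
print(bootstrap_correlation_ci(metric, human))
```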
Zero-Shot Language Transfer vs Iterative Back Translation for
Unsupervised Machine Translation
|
This work compares different solutions for machine translation on low-resource
language pairs, namely zero-shot transfer learning and unsupervised machine
translation. We discuss how data size affects the performance of both
unsupervised MT and transfer learning. Additionally, we look at how the domain
of the data affects the results of unsupervised MT. The code for all the
experiments performed in this project is accessible on GitHub.
| 2,021 |
Computation and Language
|
Misinformation detection in Luganda-English code-mixed social media text
|
The increasing occurrence, forms, and negative effects of misinformation on
social media platforms have necessitated more misinformation detection tools.
Currently, work is being done to address COVID-19 misinformation; however, there
are no misinformation detection tools for any of the 40 distinct indigenous
Ugandan languages. This paper addresses this gap by presenting basic language
resources and a misinformation detection data set based on code-mixed
Luganda-English messages sourced from the Facebook and Twitter social media
platforms. Several machine learning methods are applied on the misinformation
detection data set to develop classification models for detecting whether a
code-mixed Luganda-English message contains misinformation or not. A 10-fold
cross validation evaluation of the classification methods in an experimental
misinformation detection task shows that a Discriminative Multinomial Naive
Bayes (DMNB) method achieves the highest accuracy and F-measure of 78.19% and
77.90% respectively. Also, Support Vector Machine and Bagging ensemble
classification models achieve comparable results. These results are promising
since the machine learning models are based on n-gram features from only the
misinformation detection dataset.
| 2,021 |
Computation and Language
|
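A bare-bones version of the classification experiment described above might look like the sketch below, using scikit-learn with TF-IDF n-gram features. The paper's best classifier is a Discriminative Multinomial Naive Bayes (a Weka implementation); plain MultinomialNB is used here as a stand-in, and the messages and labels are invented placeholders rather than real code-mixed Luganda-English data.

```python
# Sketch: n-gram features + Naive Bayes with k-fold cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder messages; real inputs would be code-mixed Luganda-English posts.
texts = [
    "claim about a miracle cure for the virus",
    "official update on new confirmed cases",
    "rumour that the vaccine changes your genes",
    "ministry announces vaccination schedule",
]
labels = [1, 0, 1, 0]  # 1 = misinformation, 0 = not misinformation

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
# The paper uses 10-fold cross-validation; 2 folds here only because the toy set is tiny.
scores = cross_val_score(model, texts, labels, cv=2, scoring="f1")
print(scores.mean())
```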
Self-harm: detection and support on Twitter
|
Since the advent of online social media platforms such as Twitter and
Facebook, useful health-related studies have been conducted using the
information posted by online participants. Personal health-related issues such
as mental health, self-harm and depression have been studied because users
often share their stories on such platforms. Online users resort to sharing
because the empathy and support from online communities are crucial in helping
the affected individuals. A preliminary analysis shows how content related to
non-suicidal self-injury (NSSI) proliferates on Twitter. Thus, we use Twitter to
collect relevant data, analyse, and proffer ways of supporting users prone to
NSSI behaviour. Our approach utilises a custom crawler to retrieve relevant
tweets from self-reporting users and relevant organisations interested in
combating self-harm. Through textual analysis, we identify six major categories
of self-harming users consisting of inflicted, anti-self-harm, support seekers,
recovered, pro-self-harm and at risk. The inflicted category dominates the
collection. From an engagement perspective, we show how online users respond to
the information posted by self-harm support organisations on Twitter. By noting
the most engaged organisations, we apply a useful technique to uncover the
organisations' strategy. The online participants show a strong inclination
towards online posts associated with mental health related attributes. Our
study is based on the premise that social media can be used as a tool to
support proactive measures to ease the negative impact of self-harm.
Consequently, we proffer ways to prevent potential users from engaging in
self-harm and support affected users through a set of recommendations. To
support further research, the dataset will be made available for interested
researchers.
| 2,021 |
Computation and Language
|
Integrating Subgraph-aware Relation and Direction Reasoning for Question
Answering
|
Question Answering (QA) models over Knowledge Bases (KBs) are capable of
providing more precise answers by utilizing relation information among
entities. Although effective, most of these models solely rely on fixed
relation representations to obtain answers for different question-related KB
subgraphs. Hence, the rich structured information of these subgraphs may be
overlooked by the relation representation vectors. Meanwhile, the direction
information of reasoning, which has been proven effective for the answer
prediction on graphs, has not been fully explored in existing work. To address
these challenges, we propose a novel neural model, Relation-updated
Direction-guided Answer Selector (RDAS), which converts relations in each
subgraph to additional nodes to learn structure information. Additionally, we
utilize direction information to enhance the reasoning ability. Experimental
results show that our model yields substantial improvements on two widely used
datasets.
| 2,021 |
Computation and Language
|
Multilingual and code-switching ASR challenges for low resource Indian
languages
|
Recently, there has been increasing interest in multilingual automatic speech
recognition (ASR) where a speech recognition system caters to multiple low
resource languages by taking advantage of low amounts of labeled corpora in
multiple languages. With multilingualism becoming common in today's world,
there has been increasing interest in code-switching ASR as well. In
code-switching, multiple languages are freely interchanged within a single
sentence or between sentences. The success of low-resource multilingual and
code-switching ASR often depends on the variety of languages in terms of their
acoustics, linguistic characteristics as well as the amount of data available
and how these are carefully considered in building the ASR system. In this
challenge, we would like to focus on building multilingual and code-switching
ASR systems through two different subtasks related to a total of seven Indian
languages, namely Hindi, Marathi, Odia, Tamil, Telugu, Gujarati and Bengali.
For this purpose, we provide a total of ~600 hours of transcribed speech data,
comprising train and test sets, in these languages including two code-switched
language pairs, Hindi-English and Bengali-English. We also provide a baseline
recipe for both the tasks with a WER of 30.73% and 32.45% on the test sets of
multilingual and code-switching subtasks, respectively.
| 2,021 |
Computation and Language
|
Detecting over/under-translation errors for determining adequacy in
human translations
|
We present a novel approach to detecting over and under translations (OT/UT)
as part of adequacy error checks in translation evaluation. We do not restrict
ourselves to machine translation (MT) outputs and specifically target
applications with a human-generated translation pipeline. The goal of our system
is to identify OT/UT errors from human translated video subtitles with high
error recall. We achieve this without reference translations by learning a
model on synthesized training data. We compare various classification networks
that we trained on embeddings from a pre-trained language model, with our best
hybrid network of GRU + CNN achieving 89.3% accuracy on high-quality
human-annotated evaluation data in 8 languages.
| 2,021 |
Computation and Language
|
Evaluating Neural Word Embeddings for Sanskrit
|
Recently, the supervised learning paradigm's surprisingly remarkable
performance has garnered considerable attention from Sanskrit Computational
Linguists. As a result, the Sanskrit community has put in laudable efforts to
build task-specific labeled data for various downstream Natural Language
Processing (NLP) tasks. The primary component of these approaches comes from
representations of word embeddings. Word embedding helps to transfer knowledge
learned from readily available unlabelled data for improving task-specific
performance in low-resource settings. Over the last decade, there has been much
excitement in the field of digitization of Sanskrit. To effectively use such
readily available resources, it is very much essential to perform a systematic
study on word embedding approaches for the Sanskrit language. In this work, we
investigate the effectiveness of word embeddings. We classify word embeddings
in broad categories to facilitate systematic experimentation and evaluate them
on four intrinsic tasks. We investigate the efficacy of embeddings approaches
(originally proposed for languages other than Sanskrit) for Sanskrit along with
various challenges posed by the language.
| 2,021 |
Computation and Language
|
Many-to-English Machine Translation Tools, Data, and Pretrained Models
|
While there are more than 7000 languages in the world, most translation
research efforts have targeted a few high-resource languages. Commercial
translation systems support only one hundred languages or fewer, and do not
make these models available for transfer to low resource languages. In this
work, we present useful tools for machine translation research: MTData,
NLCodec, and RTG. We demonstrate their usefulness by creating a multilingual
neural machine translation model capable of translating from 500 source
languages to English. We make this multilingual model readily downloadable and
usable as a service, or as a parent model for transfer-learning to even
lower-resource languages.
| 2,021 |
Computation and Language
|
Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples
for Relation Extraction
|
Recent neural-based relation extraction approaches, though achieving
promising improvement on benchmark datasets, have reported their vulnerability
towards adversarial attacks. Thus far, efforts mostly focused on generating
adversarial samples or defending adversarial attacks, but little is known about
the difference between normal and adversarial samples. In this work, we take
the first step to leverage the salience-based method to analyze those
adversarial samples. We observe that salient tokens have a direct correlation
with adversarial perturbations. We further find that the adversarial
perturbations are either tokens that do not exist in the training set or
superficial cues associated with relation labels. To some extent, our approach
unveils the characteristics of adversarial samples. We release an open-source
testbed, "DiagnoseAdv", at https://github.com/zjunlp/DiagnoseAdv.
| 2,023 |
Computation and Language
|
Mitigating Media Bias through Neutral Article Generation
|
Media bias can lead to increased political polarization, and thus, the need
for automatic mitigation methods is growing. Existing mitigation work displays
articles from multiple news outlets to provide diverse news coverage, but
without neutralizing the bias inherent in each of the displayed articles.
Therefore, we propose a new task, a single neutralized article generation out
of multiple biased articles, to facilitate more efficient access to balanced
and unbiased information. In this paper, we compile a new dataset NeuWS, define
an automatic evaluation metric, and provide baselines and multiple analyses to
serve as a solid starting point for the proposed task. Lastly, we obtain a
human evaluation to demonstrate the alignment between our metric and human
judgment.
| 2,021 |
Computation and Language
|
Low-Resource Neural Machine Translation for Southern African Languages
|
Low-resource African languages have not fully benefited from the progress in
neural machine translation because of a lack of data. Motivated by this
challenge we compare zero-shot learning, transfer learning and multilingual
learning on three Bantu languages (Shona, isiXhosa and isiZulu) and English.
Our main target is English-to-isiZulu translation for which we have just 30,000
sentence pairs, 28% of the average size of our other corpora. We show the
importance of language similarity on the performance of English-to-isiZulu
transfer learning based on English-to-isiXhosa and English-to-Shona parent
models whose BLEU scores differ by 5.2. We then demonstrate that multilingual
learning surpasses both transfer learning and zero-shot learning on our
dataset, with BLEU score improvements relative to the baseline
English-to-isiZulu model of 9.9, 6.1 and 2.0 respectively. Our best model also
improves the previous SOTA BLEU score by more than 10.
| 2,021 |
Computation and Language
|
FeTaQA: Free-form Table Question Answering
|
Existing table question answering datasets contain abundant factual questions
that primarily evaluate the query and schema comprehension capability of a
system, but they fail to include questions that require complex reasoning and
integration of information due to the constraint of the associated short-form
answers. To address these issues and to demonstrate the full challenge of table
question answering, we introduce FeTaQA, a new dataset with 10K Wikipedia-based
{table, question, free-form answer, supporting table cells} pairs. FeTaQA
yields a more challenging table question answering setting because it requires
generating free-form text answers after retrieval, inference, and integration
of multiple discontinuous facts from a structured knowledge source. Unlike
datasets of generative QA over text, in which answers are mostly copies of short
text spans from the source, answers in our dataset are human-generated
explanations involving entities and their high-level relations. We provide two
benchmark methods for the proposed task: a pipeline method based on
semantic-parsing-based QA systems and an end-to-end method based on large
pretrained text generation models, and show that FeTaQA poses a challenge for
both methods.
| 2,021 |
Computation and Language
|
High-dimensional distributed semantic spaces for utterances
|
High-dimensional distributed semantic spaces have proven useful and effective
for aggregating and processing visual, auditory, and lexical information for
many tasks related to human-generated data. Human language makes use of a large
and varying number of features, lexical and constructional items as well as
contextual and discourse-specific data of various types, which all interact to
represent various aspects of communicative information. Some of these features
are mostly local and useful for the organisation of e.g. argument structure of
a predication; others are persistent over the course of a discourse and
necessary for achieving a reasonable level of understanding of the content.
This paper describes a model for high-dimensional representation for utterance
and text level data including features such as constructions or contextual
data, based on a mathematically principled and behaviourally plausible approach
to representing linguistic information. The implementation of the
representation is a straightforward extension of Random Indexing models
previously used for lexical linguistic items. The paper shows how the
implemented model is able to represent a broad range of linguistic features in
a common integral framework of fixed dimensionality, which is computationally
habitable, and which is suitable as a bridge between symbolic representations
such as dependency analysis and continuous representations used e.g. in
classifiers or further machine-learning approaches. This is achieved with
operations on vectors that constitute a powerful computational algebra,
accompanied with an associative memory for the vectors. The paper provides a
technical overview of the framework and a worked through implemented example of
how it can be applied to various types of linguistic features.
| 2,019 |
Computation and Language
|
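Since the framework above extends Random Indexing from lexical items to utterance-level features, a compact reminder of the basic Random Indexing procedure may help: each item receives a sparse ternary index vector, and context vectors accumulate the index vectors of co-occurring items. The sketch below is a generic textbook version with arbitrary dimensionality and window settings, not the paper's implementation.

```python
# Sketch: basic Random Indexing over a toy corpus.
import numpy as np
from collections import defaultdict

DIM, NONZERO, WINDOW = 2000, 10, 2
rng = np.random.default_rng(42)

def index_vector():
    v = np.zeros(DIM)
    positions = rng.choice(DIM, size=NONZERO, replace=False)
    v[positions] = rng.choice([-1.0, 1.0], size=NONZERO)  # sparse ternary vector
    return v

index_vectors = defaultdict(index_vector)             # fixed random vector per word
context_vectors = defaultdict(lambda: np.zeros(DIM))  # accumulated context per word

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
for sentence in corpus:
    for i, word in enumerate(sentence):
        for j in range(max(0, i - WINDOW), min(len(sentence), i + WINDOW + 1)):
            if j != i:
                context_vectors[word] += index_vectors[sentence[j]]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Words sharing contexts ("cat"/"dog") end up with similar context vectors.
print(cosine(context_vectors["cat"], context_vectors["dog"]))
```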
WakaVT: A Sequential Variational Transformer for Waka Generation
|
Poetry generation has long been a challenge for artificial intelligence. In
the scope of Japanese poetry generation, many researchers have paid attention
to Haiku generation, but few have focused on Waka generation. To further
explore the creative potential of natural language generation systems in
Japanese poetry creation, we propose a novel Waka generation model, WakaVT,
which automatically produces Waka poems given user-specified keywords. Firstly,
an additive mask-based approach is presented to satisfy the form constraint.
Secondly, the structures of Transformer and variational autoencoder are
integrated to enhance the quality of generated content. Specifically, to obtain
novelty and diversity, WakaVT employs a sequence of latent variables, which
effectively captures word-level variability in Waka data. To improve linguistic
quality in terms of fluency, coherence, and meaningfulness, we further propose
the fused multilevel self-attention mechanism, which properly models the
hierarchical linguistic structure of Waka. To the best of our knowledge, we are
the first to investigate Waka generation with models based on Transformer
and/or variational autoencoder. Both objective and subjective evaluation
results demonstrate that our model outperforms baselines significantly.
| 2,021 |
Computation and Language
|
Mining Wikidata for Name Resources for African Languages
|
This work supports further development of language technology for the
languages of Africa by providing a Wikidata-derived resource of name lists
corresponding to common entity types (person, location, and organization).
While we are not the first to mine Wikidata for name lists, our approach
emphasizes scalability and replicability and addresses data quality issues for
languages that do not use Latin scripts. We produce lists containing
approximately 1.9 million names across 28 African languages. We describe the
data, the process used to produce it, and its limitations, and provide the
software and data for public use. Finally, we discuss the ethical
considerations of producing this resource and others of its kind.
| 2,021 |
Computation and Language
|
HLE-UPC at SemEval-2021 Task 5: Multi-Depth DistilBERT for Toxic Spans
Detection
|
This paper presents our submission to SemEval-2021 Task 5: Toxic Spans
Detection. The purpose of this task is to detect the spans that make a text
toxic, which is a complex task for several reasons: firstly, because of the
intrinsic subjectivity of toxicity, and secondly, because toxicity does not
always come from single words such as insults or offensive terms, but sometimes
from whole expressions formed by words that may not be toxic individually. Following this
idea of focusing on both single words and multi-word expressions, we study the
impact of using a multi-depth DistilBERT model, which uses embeddings from
different layers to estimate the final per-token toxicity. Our quantitative
results show that using information from multiple depths boosts the performance
of the model. Finally, we also analyze our best model qualitatively.
| 2,021 |
Computation and Language
|
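The multi-depth idea above (combining hidden states from several layers before the per-token classifier) can be sketched with the Hugging Face transformers library as below. The layer choice, the untrained linear head, and the model name are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch: per-token toxicity logits from a concatenation of several DistilBERT layers.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased", output_hidden_states=True)

LAYERS = (-1, -2, -3, -4)  # depths to combine (an assumed choice)
HIDDEN = 768               # DistilBERT hidden dimension
classifier = torch.nn.Linear(HIDDEN * len(LAYERS), 1)  # untrained per-token head

def token_toxicity_logits(text: str) -> torch.Tensor:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**enc)
    # out.hidden_states: tuple of (n_layers + 1) tensors of shape (1, seq_len, HIDDEN)
    stacked = torch.cat([out.hidden_states[l] for l in LAYERS], dim=-1)
    return classifier(stacked).squeeze(-1)  # shape (1, seq_len)

print(token_toxicity_logits("you are completely useless").shape)
```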
AmbiFC: Fact-Checking Ambiguous Claims with Evidence
|
Automated fact-checking systems verify claims against evidence to predict
their veracity. In real-world scenarios, the retrieved evidence may not
unambiguously support or refute the claim and yield conflicting but valid
interpretations. Existing fact-checking datasets assume that the models
developed with them predict a single veracity label for each claim, thus
discouraging the handling of such ambiguity. To address this issue we present
AmbiFC, a fact-checking dataset with 10k claims derived from real-world
information needs. It contains fine-grained evidence annotations of 50k
passages from 5k Wikipedia pages. We analyze the disagreements arising from
ambiguity when comparing claims against evidence in AmbiFC, observing a strong
correlation of annotator disagreement with linguistic phenomena such as
underspecification and probabilistic reasoning. We develop models for
predicting veracity handling this ambiguity via soft labels and find that a
pipeline that learns the label distribution for sentence-level evidence
selection and veracity prediction yields the best performance. We compare
models trained on different subsets of AmbiFC and show that models trained on
the ambiguous instances perform better when faced with the identified
linguistic phenomena.
| 2,023 |
Computation and Language
|
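One concrete way to handle ambiguity via soft labels, as described above, is to train against the annotator label distribution instead of a single gold class. The loss below is a generic soft-label cross-entropy sketch in PyTorch with made-up numbers; it is not the AmbiFC pipeline itself.

```python
# Sketch: cross-entropy against a soft (distributional) veracity label.
import torch
import torch.nn.functional as F

def soft_label_loss(logits: torch.Tensor, label_dist: torch.Tensor) -> torch.Tensor:
    """logits: (batch, n_classes); label_dist: (batch, n_classes), rows sum to 1."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(label_dist * log_probs).sum(dim=-1).mean()

# Toy example: 3 classes (supported / refuted / neutral) for two claims, with
# annotator disagreement encoded as a probability distribution over labels.
logits = torch.tensor([[2.0, 0.1, 0.3], [0.2, 1.5, 1.4]])
targets = torch.tensor([[0.8, 0.1, 0.1], [0.1, 0.5, 0.4]])
print(soft_label_loss(logits, targets))
```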
Recognizing and Splitting Conditional Sentences for Automation of
Business Processes Management
|
Business Process Management (BPM) is the discipline responsible for
discovering, analyzing, redesigning, monitoring, and controlling
business processes. One of the most crucial tasks of BPM is discovering and
modelling business processes from text documents. In this paper, we present our
system that resolves an end-to-end problem consisting of 1) recognizing
conditional sentences from technical documents, 2) finding boundaries to
extract conditional and resultant clauses from each conditional sentence, and
3) categorizing resultant clause as Action or Consequence which later helps to
generate new steps in our business process model automatically. We created a
new dataset and three models to solve this problem. Our best model achieved very
promising results of 83.82, 87.84, and 85.75 for Precision, Recall, and F1,
respectively, for extracting Condition, Action, and Consequence clauses using
the Exact Match metric.
| 2,021 |
Computation and Language
|
Sampling and Filtering of Neural Machine Translation Distillation Data
|
In most neural machine translation distillation or stealing scenarios, the
goal is to preserve the performance of the target model (teacher). The
highest-scoring hypothesis of the teacher model is commonly used to train a new
model (student). If reference translations are also available, then better
hypotheses (with respect to the references) can be upsampled and poor
hypotheses either removed or undersampled.
This paper explores the importance sampling method landscape (pruning,
hypothesis upsampling and undersampling, deduplication and their combination)
with English to Czech and English to German MT models using standard MT
evaluation metrics. We show that careful upsampling and combination with the
original data leads to better performance when compared to training only on the
original or synthesized data or their direct combination.
| 2,021 |
Computation and Language
|
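The filtering and upsampling step discussed above can be sketched generically: score each teacher hypothesis against its reference with any sentence-level metric, drop the worst hypotheses, and duplicate the best ones. The function and its parameters below are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: reference-based filtering and upsampling of distillation data.
import random

def filter_and_upsample(sources, hypotheses, references, score_fn,
                        keep_quantile=0.5, upsample_factor=2, seed=0):
    """Keep hypotheses above the given score quantile and duplicate the best half."""
    rng = random.Random(seed)
    scored = sorted(
        ((score_fn(h, r), s, h) for s, h, r in zip(sources, hypotheses, references)),
        reverse=True,
    )
    kept = scored[:max(int(len(scored) * keep_quantile), 1)]
    data = []
    for rank, (_, src, hyp) in enumerate(kept):
        copies = upsample_factor if rank < len(kept) // 2 else 1
        data.extend([(src, hyp)] * copies)
    rng.shuffle(data)
    return data

# Toy usage with a dummy unigram-overlap "metric" standing in for a real MT metric.
def overlap(hyp, ref):
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / max(len(r), 1)

print(filter_and_upsample(["s1", "s2"], ["a b c", "x y"], ["a b d", "p q"], overlap))
```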
SYSML: StYlometry with Structure and Multitask Learning: Implications
for Darknet Forum Migrant Analysis
|
Darknet market forums are frequently used to exchange illegal goods and
services between parties who use encryption to conceal their identities. The
Tor network is used to host these markets, which guarantees additional
anonymization from IP and location tracking, making it challenging to link
across malicious users using multiple accounts (sybils). Additionally, users
migrate to new forums when one is closed, making it difficult to link users
across multiple forums. We develop a novel stylometry-based multitask learning
approach for natural language and interaction modeling using graph embeddings
to construct low-dimensional representations of short episodes of user activity
for authorship attribution. We provide a comprehensive evaluation of our
methods across four different darknet forums demonstrating its efficacy over
the state-of-the-art, with a lift of up to 2.5X on Mean Retrieval Rank and 2X
on Recall@10.
| 2,021 |
Computation and Language
|
Configurable Privacy-Preserving Automatic Speech Recognition
|
Voice assistive technologies have given rise to far-reaching privacy and
security concerns. In this paper we investigate whether modular automatic
speech recognition (ASR) can improve privacy in voice assistive systems by
combining independently trained separation, recognition, and discretization
modules to design configurable privacy-preserving ASR systems. We evaluate
privacy concerns and the effects of applying various state-of-the-art
techniques at each stage of the system, and report results using task-specific
metrics (i.e. WER, ABX, and accuracy). We show that overlapping speech inputs
to ASR systems present further privacy concerns, and how these may be mitigated
using speech separation and optimization techniques. Our discretization module
is shown to minimize paralinguistics privacy leakage from ASR acoustic models
to levels commensurate with random guessing. We show that voice privacy can be
configurable, and argue this presents new opportunities for privacy-preserving
applications incorporating ASR.
| 2,021 |
Computation and Language
|
Canonical and Surface Morphological Segmentation for Nguni Languages
|
Morphological Segmentation involves decomposing words into morphemes, the
smallest meaning-bearing units of language. This is an important NLP task for
morphologically-rich agglutinative languages such as the Southern African Nguni
language group. In this paper, we investigate supervised and unsupervised
models for two variants of morphological segmentation: canonical and surface
segmentation. We train sequence-to-sequence models for canonical segmentation,
where the underlying morphemes may not be equal to the surface form of the
word, and Conditional Random Fields (CRF) for surface segmentation.
Transformers outperform LSTMs with attention on canonical segmentation,
obtaining an average F1 score of 72.5% across 4 languages. Feature-based CRFs
outperform bidirectional LSTM-CRFs to obtain an average of 97.1% F1 on surface
segmentation. In the unsupervised setting, an entropy-based approach using a
character-level LSTM language model fails to outperform a Morfessor baseline,
while on some of the languages neither approach performs much better than a
random baseline. We hope that the high performance of the supervised
segmentation models will help to facilitate the development of better NLP tools
for Nguni languages.
| 2,021 |
Computation and Language
|
Low-Resource Language Modelling of South African Languages
|
Language models are the foundation of current neural network-based models for
natural language understanding and generation. However, research on the
intrinsic performance of language models on African languages has been
extremely limited, which is made more challenging by the lack of large or
standardised training and evaluation sets that exist for English and other
high-resource languages. In this paper, we evaluate the performance of
open-vocabulary language models on low-resource South African languages, using
byte-pair encoding to handle the rich morphology of these languages. We
evaluate different variants of n-gram models, feedforward neural networks,
recurrent neural networks (RNNs), and Transformers on small-scale datasets.
Overall, well-regularized RNNs give the best performance across two isiZulu and
one Sepedi datasets. Multilingual training further improves performance on
these datasets. We hope that this research will open new avenues for research
into multilingual and low-resource language modelling for African languages.
| 2,021 |
Computation and Language
|
MultiWOZ 2.4: A Multi-Domain Task-Oriented Dialogue Dataset with
Essential Annotation Corrections to Improve State Tracking Evaluation
|
The MultiWOZ 2.0 dataset has greatly stimulated the research of task-oriented
dialogue systems. However, its state annotations contain substantial noise,
which hinders a proper evaluation of model performance. To address this issue,
massive efforts were devoted to correcting the annotations. Three improved
versions (i.e., MultiWOZ 2.1-2.3) have then been released. Nonetheless, there
are still plenty of incorrect and inconsistent annotations. This work
introduces MultiWOZ 2.4, which refines the annotations in the validation set
and test set of MultiWOZ 2.1. The annotations in the training set remain
unchanged (same as MultiWOZ 2.1) to elicit robust and noise-resilient model
training. We benchmark eight state-of-the-art dialogue state tracking models on
MultiWOZ 2.4. All of them demonstrate much higher performance than on MultiWOZ
2.1.
| 2,022 |
Computation and Language
|
"TL;DR:" Out-of-Context Adversarial Text Summarization and Hashtag
Recommendation
|
This paper presents Out-of-Context Summarizer, a tool that takes arbitrary
public news articles out of context by summarizing them to coherently fit
either a liberal- or conservative-leaning agenda. The Out-of-Context Summarizer
also suggests hashtag keywords to bolster the polarization of the summary, in
case one is inclined to take it to Twitter, Parler or other platforms for
trolling. Out-of-Context Summarizer achieved 79% precision and 99% recall when
summarizing COVID-19 articles, 93% precision and 93% recall when summarizing
politically-centered articles, and 87% precision and 88% recall when taking
liberally-biased articles out of context. Summarizing valid sources instead of
synthesizing fake text, the Out-of-Context Summarizer could fairly pass the
"adversarial disclosure" test, but we didn't take this easy route in our paper.
Instead, we used the Out-of-Context Summarizer to push the debate of potential
misuse of automated text generation beyond the boilerplate text of responsible
disclosure of adversarial language models.
| 2,021 |
Computation and Language
|
Action-Based Conversations Dataset: A Corpus for Building More In-Depth
Task-Oriented Dialogue Systems
|
Existing goal-oriented dialogue datasets focus mainly on identifying slots
and values. However, customer support interactions in reality often involve
agents following multi-step procedures derived from explicitly-defined company
policies as well. To study customer service dialogue systems in more realistic
settings, we introduce the Action-Based Conversations Dataset (ABCD), a
fully-labeled dataset with over 10K human-to-human dialogues containing 55
distinct user intents requiring unique sequences of actions constrained by
policies to achieve task success. We propose two additional dialog tasks,
Action State Tracking and Cascading Dialogue Success, and establish a series of
baselines involving large-scale, pre-trained language models on this dataset.
Empirical results demonstrate that while more sophisticated networks outperform
simpler models, a considerable gap (50.8% absolute accuracy) still exists to
reach human-level performance on ABCD.
| 2,021 |
Computation and Language
|
Do RNN States Encode Abstract Phonological Processes?
|
Sequence-to-sequence models have delivered impressive results in word
formation tasks such as morphological inflection, often learning to model
subtle morphophonological details with limited training data. Despite the
performance, the opacity of neural models makes it difficult to determine
whether complex generalizations are learned, or whether a kind of separate rote
memorization of each morphophonological process takes place. To investigate
whether complex alternations are simply memorized or whether there is some
level of generalization across related sound changes in a sequence-to-sequence
model, we perform several experiments on Finnish consonant gradation -- a
complex set of sound changes triggered in some words by certain suffixes. We
find that our models often -- though not always -- encode 17 different
consonant gradation processes in a handful of dimensions in the RNN. We also
show that by scaling the activations in these dimensions we can control whether
consonant gradation occurs and the direction of the gradation.
| 2,021 |
Computation and Language
|
CURIE: An Iterative Querying Approach for Reasoning About Situations
|
Recently, models have been shown to predict the effects of unexpected
situations, e.g., would cloudy skies help or hinder plant growth? Given a
context, the goal of such situational reasoning is to elicit the consequences
of a new situation (st) that arises in that context. We propose a method to
iteratively build a graph of relevant consequences explicitly in a structured
situational graph (st-graph) using natural language queries over a finetuned
language model (M). Across multiple domains, CURIE generates st-graphs that
humans find relevant and meaningful in eliciting the consequences of a new
situation. We show that st-graphs generated by CURIE improve a situational
reasoning end task (WIQA-QA) by 3 points on accuracy by simply augmenting their
input with our generated situational graphs, especially for a hard subset that
requires background knowledge and multi-hop reasoning.
| 2,021 |
Computation and Language
|
Tusom2021: A Phonetically Transcribed Speech Dataset from an Endangered
Language for Universal Phone Recognition Experiments
|
There is growing interest in ASR systems that can recognize phones in a
language-independent fashion. There is additionally interest in building
language technologies for low-resource and endangered languages. However, there
is a paucity of realistic data that can be used to test such systems and
technologies. This paper presents a publicly available, phonetically
transcribed corpus of 2255 utterances (words and short phrases) in the
endangered Tangkhulic language East Tusom (no ISO 639-3 code), a Tibeto-Burman
language variety spoken mostly in India. Because the dataset is transcribed in
terms of phones, rather than phonemes, it is a better match for universal phone
recognition systems than many larger (phonemically transcribed) datasets. This
paper describes the dataset and the methodology used to produce it. It further
presents basic benchmarks of state-of-the-art universal phone recognition
systems on the dataset as baselines for future experiments.
| 2,021 |
Computation and Language
|
Sketch and Customize: A Counterfactual Story Generator
|
Recent text generation models can easily generate text that is relevant and
fluent with respect to a given input, but they lack causal reasoning ability
when some parts of that input are changed. Counterfactual story rewriting is a recently proposed
task to test the causal reasoning ability for text generation models, which
requires a model to predict the corresponding story ending when the condition
is modified to a counterfactual one. Previous works have shown that the
traditional sequence-to-sequence model cannot well handle this problem, as it
often captures some spurious correlations between the original and
counterfactual endings, instead of the causal relations between conditions and
endings. To address this issue, we propose a sketch-and-customize generation
model guided by the causality implicated in the conditions and endings. In the
sketch stage, a skeleton is extracted from the original ending by removing words
that conflict with the counterfactual condition. In the customize stage,
a generation model is used to fill proper words in the skeleton under the
guidance of the counterfactual condition. In this way, the obtained
counterfactual ending is both relevant to the original ending and consistent
with the counterfactual condition. Experimental results show that the proposed
model generates much better endings, as compared with the traditional
sequence-to-sequence model.
| 2,021 |
Computation and Language
|
Humor@IITK at SemEval-2021 Task 7: Large Language Models for Quantifying
Humor and Offensiveness
|
Humor and Offense are highly subjective due to multiple word senses, cultural
knowledge, and pragmatic competence. Hence, accurately detecting humorous and
offensive texts has several compelling use cases in Recommendation Systems and
Personalized Content Moderation. However, due to the lack of an extensive
labeled dataset, most prior works in this domain haven't explored large neural
models for subjective humor understanding. This paper explores whether large
neural models and their ensembles can capture the intricacies associated with
humor/offense detection and rating. Our experiments on the SemEval-2021 Task 7:
HaHackathon show that we can develop reasonable humor and offense detection
systems with such models. Our models are ranked third in subtask 1b and
consistently ranked around the top 33% of the leaderboard for the remaining
subtasks.
| 2,021 |
Computation and Language
|
Multitask Recalibrated Aggregation Network for Medical Code Prediction
|
Medical coding translates professionally written medical reports into
standardized codes, which is an essential part of medical information systems
and health insurance reimbursement. Manual coding by trained human coders is
time-consuming and error-prone. Thus, automated coding algorithms have been
developed, building especially on the recent advances in machine learning and
deep neural networks. To solve the challenges of encoding lengthy and noisy
clinical documents and capturing code associations, we propose a multitask
recalibrated aggregation network. In particular, multitask learning shares
information across different coding schemes and captures the dependencies
between different medical codes. Feature recalibration and aggregation in
shared modules enhance representation learning for lengthy notes. Experiments
with a real-world MIMIC-III dataset show significantly improved predictive
performance.
| 2,021 |
Computation and Language
|
Use of 'off-the-shelf' information extraction algorithms in clinical
informatics: a feasibility study of MetaMap annotation of Italian medical
notes
|
Information extraction from narrative clinical notes is useful for patient
care, as well as for secondary use of medical data, for research or clinical
purposes. Many studies focused on information extraction from English clinical
texts, but less dealt with clinical notes in languages other than English. This
study tested the feasibility of using 'off the shelf' information extraction
algorithms to identify medical concepts from Italian clinical notes. We used
MetaMap to map medical concepts to the Unified Medical Language System (UMLS).
The study addressed two questions: (Q1) to understand if it would be possible
to properly map medical terms found in clinical notes and related to the
semantic group of 'Disorders' to the Italian UMLS resources; (Q2) to
investigate if it would be feasible to use MetaMap as it is to extract these
medical concepts from Italian clinical notes. Results in EXP1 showed that the
Italian UMLS Metathesaurus sources covered 91% of the medical terms of the
'Disorders' semantic group, as found in the studied dataset. Even though MetaMap
was built to analyze texts written in English, it also worked properly with
texts written in Italian. MetaMap correctly identified about half of the
concepts in the Italian clinical notes. Using MetaMap's annotation on Italian
clinical notes instead of a simple text search improved our results by about 15
percentage points. MetaMap showed recall, precision and F-measure of 0.53, 0.98
and 0.69, respectively. Most of the failures were due to the impossibility for
MetaMap to generate Italian meaningful variants. MetaMap's performance in
annotating automatically translated English clinical notes was in line with
findings in the literature, with similar recall (0.75), F-measure (0.83) and
even higher precision (0.95).
| 2,016 |
Computation and Language
|
Effect of depth order on iterative nested named entity recognition
models
|
This paper studies the effect of the order of depth of mention on nested
named entity recognition (NER) models. NER is an essential task in the
extraction of biomedical information, and nested entities are common since
medical concepts can assemble to form larger entities. Conventional NER systems
only predict disjointed entities. Thus, iterative models for nested NER use
multiple predictions to enumerate all entities, imposing a predefined order
from largest to smallest or smallest to largest. We design an order-agnostic
iterative model and a procedure to choose a custom order during training and
prediction. To accommodate this task, we propose a modification of the
Transformer architecture to take into account the entities predicted in the
previous steps. We provide a set of experiments to study the model's
capabilities and the effects of the order on performance. Finally, we show that
the smallest to largest order gives the best results.
| 2,021 |
Computation and Language
|
IITK@LCP at SemEval 2021 Task 1: Classification for Lexical Complexity
Regression Task
|
This paper describes our contribution to SemEval 2021 Task 1: Lexical
Complexity Prediction. In our approach, we leverage the ELECTRA model and
attempt to mirror the data annotation scheme. Although the task is a regression
task, we show that we can treat it as an aggregation of several classification
and regression models. This somewhat counter-intuitive approach achieved an MAE
score of 0.0654 for Sub-Task 1 and MAE of 0.0811 on Sub-Task 2. Additionally,
we used the concept of weak supervision signals from Gloss-BERT in our work,
and it significantly improved the MAE score in Sub-Task 1.
| 2,021 |
Computation and Language
|
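The "regression as an aggregation of classifiers" idea above can be illustrated by binning the complexity target, predicting a distribution over the bins, and reading off the expected bin value. The bin values and probabilities below are assumptions for illustration, not the authors' setup.

```python
# Sketch: turning per-bin class probabilities into a continuous complexity score.
import numpy as np

BIN_VALUES = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # assumed complexity bins

def expected_complexity(class_probs: np.ndarray) -> np.ndarray:
    """class_probs: (n_samples, n_bins) probabilities; returns continuous scores."""
    return class_probs @ BIN_VALUES

# Toy usage: two targets whose classifiers lean toward the "easy" and "medium" bins.
probs = np.array([[0.70, 0.20, 0.05, 0.03, 0.02],
                  [0.10, 0.30, 0.40, 0.15, 0.05]])
print(expected_complexity(probs))  # approximately [0.12, 0.44]
```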
What Taggers Fail to Learn, Parsers Need the Most
|
We present an error analysis of neural UPOS taggers to evaluate why using
gold standard tags has such a large positive contribution to parsing
performance while using predicted UPOS tags either harms performance or offers
a negligible improvement. We evaluate what neural dependency parsers implicitly
learn about word types and how this relates to the errors taggers make to
explain the minimal impact using predicted tags has on parsers. We also present
a short analysis on what contexts result in reductions in tagging performance.
We then mask UPOS tags based on errors made by taggers to tease apart the
contribution of UPOS tags that taggers classify correctly from those they fail
to classify correctly, and to assess the impact of tagging errors.
| 2,021 |
Computation and Language
|
TAPAS at SemEval-2021 Task 9: Reasoning over tables with intermediate
pre-training
|
We present the TAPAS contribution to the Shared Task on Statement
Verification and Evidence Finding with Tables (SemEval 2021 Task 9, Wang et al.
(2021)). SEM TAB FACT Task A is a classification task of recognizing if a
statement is entailed, neutral or refuted by the content of a given table. We
adopt the binary TAPAS model of Eisenschlos et al. (2020) to this task. We
learn two binary classification models: A first model to predict if a statement
is neutral or non-neutral and a second one to predict if it is entailed or
refuted. As the shared task training set contains only entailed or refuted
examples, we generate artificial neutral examples to train the first model.
Both models are pre-trained using a MASKLM objective, intermediate
counter-factual and synthetic data (Eisenschlos et al., 2020) and TABFACT (Chen
et al., 2020), a large table entailment dataset. We find that the artificial
neutral examples are somewhat effective at training the first model, achieving
68.03 test F1 versus the 60.47 of a majority baseline. For the second stage, we
find that the pre-training on the intermediate data and TABFACT improves the
results over MASKLM pre-training (68.03 vs 57.01).
| 2,021 |
Computation and Language
|
Mining Trends of COVID-19 Vaccine Beliefs on Twitter with Lexical
Embeddings
|
Social media plays a pivotal role in disseminating news globally and acts as
a platform for people to express their opinions on various topics. A wide
variety of views accompanies COVID-19 vaccination drives across the globe,
often colored by emotions, which change along with rising cases, approval of
vaccines, and multiple factors discussed online. This study aims at analyzing
the temporal evolution of different Emotion categories: Hesitation, Rage,
Sorrow, Anticipation, Faith, and Contentment with Influencing Factors: Vaccine
Rollout, Misinformation, Health Effects, and Inequities as lexical categories
created from Tweets belonging to five countries with vital vaccine roll-out
programs, namely, India, United States of America, Brazil, United Kingdom, and
Australia. We extracted a corpus of nearly 1.8 million Twitter posts related to
COVID-19 vaccination. Using cosine distance from selected seed words, we
expanded the vocabulary of each category and tracked the longitudinal change in
their strength from June 2020 to April 2021. We used community detection
algorithms to find modules in positive correlation networks. Our findings
suggest that tweets expressing hesitancy towards vaccines contain the highest
mentions of health-related effects in all countries. Our results indicated that
the patterns of hesitancy were variable across geographies and can help us
learn targeted interventions. We also observed a significant change in the
linear trends of categories like hesitation and contentment before and after
approval of vaccines. Negative emotions like rage and sorrow gained the highest
importance in the alluvial diagram. They formed a significant module with all
the influencing factors in April 2021, when India observed the second wave of
COVID-19 cases. The relationship between Emotions and Influencing Factors was
found to be variable across the countries.
| 2,021 |
Computation and Language
|
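The vocabulary-expansion step above (growing each lexical category from seed words by cosine distance) can be sketched as below; the embeddings, seed words, and distance threshold are hypothetical placeholders rather than the study's actual lexicon or models.

```python
# Sketch: expanding a lexical category from seed words via cosine distance.
import numpy as np

def expand_category(seed_words, embeddings, max_cosine_distance=0.4):
    """embeddings: dict mapping word -> 1-D numpy vector."""
    def cosine_distance(a, b):
        return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    expanded = set(seed_words)
    for word, vec in embeddings.items():
        for seed in seed_words:
            if seed in embeddings and cosine_distance(vec, embeddings[seed]) <= max_cosine_distance:
                expanded.add(word)
                break
    return expanded

# Toy embeddings: random vectors stand in for trained lexical embeddings, so in
# practice only the seed survives here; real embeddings would pull in near-synonyms.
rng = np.random.default_rng(0)
vocab = ["hesitant", "unsure", "worried", "grateful", "relieved"]
emb = {w: rng.normal(size=50) for w in vocab}
print(expand_category({"hesitant"}, emb))
```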
Type Prediction Systems
|
Inferring semantic types for entity mentions within text documents is an
important asset for many downstream NLP tasks, such as Semantic Role Labelling,
Entity Disambiguation, Knowledge Base Question Answering, etc. Prior works have
mostly focused on supervised solutions that generally operate on relatively
small-to-medium-sized type systems. In this work, we describe two systems aimed
at predicting type information for two tasks: a TypeSuggest module, an
unsupervised system designed to predict types for a set of user-entered query
terms, and an Answer Type prediction module, which provides a solution for the
task of determining the correct type of the answer expected for a given query.
Our systems generalize to arbitrary type systems of any size, making them a
highly appealing solution for extracting type information at any granularity.
| 2,021 |
Computation and Language
|
Attention Forcing for Machine Translation
|
Auto-regressive sequence-to-sequence models with attention mechanisms have
achieved state-of-the-art performance in various tasks including Text-To-Speech
(TTS) and Neural Machine Translation (NMT). The standard training approach,
teacher forcing, guides a model with the reference output history. At inference
stage, the generated output history must be used. This mismatch can impact
performance. However, it is highly challenging to train the model using the
generated output. Several approaches have been proposed to address this
problem, normally by selectively using the generated output history. To make
training stable, these approaches often require a heuristic schedule or an
auxiliary classifier. This paper introduces attention forcing for NMT. This
approach guides the model with the generated output history and reference
attention, and can reduce the training-inference mismatch without a schedule or
a classifier. Attention forcing has been successful in TTS, but its application
to NMT is more challenging, due to the discrete and multi-modal nature of the
output space. To tackle this problem, this paper adds a selection scheme to
vanilla attention forcing, which automatically selects a suitable training
approach for each pair of training data. Experiments show that attention
forcing can improve the overall translation quality and the diversity of the
translations.
| 2,021 |
Computation and Language
|
Intent Recognition and Unsupervised Slot Identification for Low
Resourced Spoken Dialog Systems
|
Intent Recognition and Slot Identification are crucial components in spoken
language understanding (SLU) systems. In this paper, we present a novel
approach towards both these tasks in the context of low resourced and unwritten
languages. We present an acoustic based SLU system that converts speech to its
phonetic transcription using a universal phone recognition system. We build a
word-free natural language understanding module that does intent recognition
and slot identification from these phonetic transcriptions. Our proposed SLU
system performs competitively for resource rich scenarios and significantly
outperforms existing approaches as the amount of available data reduces. We
observe more than 10% improvement for intent classification in Tamil and more
than 5% improvement for intent classification in Sinhala. We also present a
novel approach towards unsupervised slot identification using normalized
attention scores. This approach can be used for unsupervised slot labelling,
data augmentation and to generate data for a new slot in a one-shot way with
only one speech recording.
| 2,021 |
Computation and Language
|