Titles stringlengths 6-220 | Abstracts stringlengths 37-3.26k | Years int64 1.99k-2.02k | Categories stringclasses 1 value |
---|---|---|---|
Constrained Text Generation with Global Guidance -- Case Study on
CommonGen
|
This paper studies constrained text generation, which is to generate
sentences under certain pre-conditions. We focus on CommonGen, the task of
generating text based on a set of concepts, as a representative task of
constrained text generation. Traditional methods mainly rely on supervised
training to maximize the likelihood of target sentences. However, global
constraints such as common sense and coverage cannot be incorporated into the
likelihood objective of the autoregressive decoding process. In this paper, we
consider using reinforcement learning to address the limitation, measuring
global constraints including fluency, common sense and concept coverage with a
comprehensive score, which serves as the reward for reinforcement learning.
In addition, we design a guided decoding method at the word, fragment, and sentence
levels. Experiments demonstrate that our method significantly increases the
concept coverage and outperforms existing models in various automatic
evaluations.
| 2021 |
Computation and Language
|
Are NLP Models really able to Solve Simple Math Word Problems?
|
The problem of designing NLP solvers for math word problems (MWP) has seen
sustained research activity and steady gains in test accuracy. Since
existing solvers achieve high performance on the benchmark datasets for
elementary level MWPs containing one-unknown arithmetic word problems, such
problems are often considered "solved" with the bulk of research attention
moving to more complex MWPs. In this paper, we restrict our attention to
English MWPs taught in grades four and lower. We provide strong evidence that
the existing MWP solvers rely on shallow heuristics to achieve high performance
on the benchmark datasets. To this end, we show that MWP solvers that do not
have access to the question asked in the MWP can still solve a large fraction
of MWPs. Similarly, models that treat MWPs as bag-of-words can also achieve
surprisingly high accuracy. Further, we introduce a challenge dataset, SVAMP,
created by applying carefully chosen variations over examples sampled from
existing datasets. The best accuracy achieved by state-of-the-art models is
substantially lower on SVAMP, thus showing that much remains to be done even
for the simplest of the MWPs.
| 2021 |
Computation and Language
|
Automatic Romanization of Arabic Bibliographic Records
|
International library standards require cataloguers to tediously input
Romanization of their catalogue records for the benefit of library users
without specific language expertise. In this paper, we present the first
reported results on the task of automatic Romanization of undiacritized Arabic
bibliographic entries. This complex task requires the modeling of Arabic
phonology, morphology, and even semantics. We collected a 2.5M word corpus of
parallel Arabic and Romanized bibliographic entries, and benchmarked a number
of models that vary in terms of complexity and resource dependence. Our best
system reaches 89.3% exact word Romanization on a blind test set. We make our
data and code publicly available.
| 2021 |
Computation and Language
|
Explaining and Improving BERT Performance on Lexical Semantic Change
Detection
|
Type- and token-based embedding architectures are still competing in lexical
semantic change detection. The recent success of type-based models in
SemEval-2020 Task 1 has raised the question why the success of token-based
models on a variety of other NLP tasks does not translate to our field. We
investigate the influence of a range of variables on clusterings of BERT
vectors and show that its low performance is largely due to orthographic
information on the target word, which is encoded even in the higher layers of
BERT representations. By reducing the influence of orthography we considerably
improve BERT's performance.
| 2021 |
Computation and Language
|
A Simple Post-Processing Technique for Improving Readability Assessment
of Texts using Word Mover's Distance
|
Assessing the proper difficulty levels of reading materials or texts in
general is the first step towards effective comprehension and learning. In this
study, we improve the conventional methodology of automatic readability
assessment by incorporating the Word Mover's Distance (WMD) of ranked texts as
an additional post-processing technique to further ground the difficulty level
given by a model. Results of our experiments on three multilingual datasets in
Filipino, German, and English show that the post-processing technique
outperforms previous vanilla and ranking-based models using SVM.
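The post-processing idea above can be sketched roughly as follows, assuming gensim word vectors and a few labelled exemplar texts per difficulty level; the vector file path, the toy exemplars, and the neighbour-only snapping rule are illustrative assumptions rather than the authors' exact procedure.
```python
from gensim.models import KeyedVectors

# Assumed resources: pre-trained word vectors and tokenised exemplar texts per level.
kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)  # hypothetical path
exemplars = {
    1: [["the", "cat", "sat", "on", "the", "mat"]],
    2: [["the", "weather", "changes", "with", "the", "seasons"]],
    3: [["photosynthesis", "converts", "light", "into", "chemical", "energy"]],
}

def ground_level(tokens, svm_level):
    """Snap an SVM-predicted readability level to the nearby level whose
    exemplars are closest under Word Mover's Distance (WMD)."""
    candidates = [lvl for lvl in exemplars if abs(lvl - svm_level) <= 1]
    mean_wmd = lambda lvl: sum(kv.wmdistance(tokens, ex) for ex in exemplars[lvl]) / len(exemplars[lvl])
    return min(candidates, key=mean_wmd)

print(ground_level(["the", "dog", "sat", "on", "the", "rug"], svm_level=2))
```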
| 2021 |
Computation and Language
|
Visual Cues and Error Correction for Translation Robustness
|
Neural Machine Translation models are sensitive to noise in the input texts,
such as misspelled words and ungrammatical constructions. Existing robustness
techniques generally fail when faced with unseen types of noise and their
performance degrades on clean texts. In this paper, we focus on three types of
realistic noise that are commonly generated by humans and introduce the idea of
visual context to improve translation robustness for noisy texts. In addition,
we describe a novel error correction training regime that can be used as an
auxiliary task to further improve translation robustness. Experiments on
English-French and English-German translation show that both multimodal and
error correction components improve model robustness to noisy texts, while
still retaining translation quality on clean texts.
| 2022 |
Computation and Language
|
Cooperative Self-training of Machine Reading Comprehension
|
Pretrained language models have significantly improved the performance of
downstream language understanding tasks, including extractive question
answering, by providing high-quality contextualized word embeddings. However,
training question answering models still requires large amounts of annotated
data for specific domains. In this work, we propose a cooperative self-training
framework, RGX, for automatically generating more non-trivial question-answer
pairs to improve model performance. RGX is built upon a masked answer
extraction task with an interactive learning environment containing an answer
entity Recognizer, a question Generator, and an answer eXtractor. Given a
passage with a masked entity, the generator generates a question around the
entity, and the extractor is trained to extract the masked entity with the
generated question and raw texts. The framework allows the training of question
generation and answering models on any text corpora without annotation.
Experimental results show that RGX outperforms state-of-the-art (SOTA)
pretrained language models and transfer learning approaches on standard
question-answering benchmarks, and yields new SOTA performance under the given
model size and transfer learning settings.
| 2022 |
Computation and Language
|
Abolitionist Networks: Modeling Language Change in Nineteenth-Century
Activist Newspapers
|
The abolitionist movement of the nineteenth-century United States remains
among the most significant social and political movements in US history.
Abolitionist newspapers played a crucial role in spreading information and
shaping public opinion around a range of issues relating to the abolition of
slavery. These newspapers also serve as a primary source of information about
the movement for scholars today, resulting in powerful new accounts of the
movement and its leaders. This paper supplements recent qualitative work on the
role of women in abolition's vanguard, as well as the role of the Black press,
with a quantitative text modeling approach. Using diachronic word embeddings,
we identify which newspapers tended to lead lexical semantic innovations -- the
introduction of new usages of specific words -- and which newspapers tended to
follow. We then aggregate the evidence across hundreds of changes into a
weighted network with the newspapers as nodes; directed edge weights represent
the frequency with which each newspaper led the other in the adoption of a
lexical semantic change. Analysis of this network reveals pathways of lexical
semantic influence, distinguishing leaders from followers, as well as others
who stood apart from the semantic changes that swept through this period. More
specifically, we find that two newspapers edited by women -- THE PROVINCIAL
FREEMAN and THE LILY -- led a large number of semantic changes in our corpus,
lending additional credence to the argument that a multiracial coalition of
women led the abolitionist movement in terms of both thought and action. It
also contributes additional complexity to the scholarship that has sought to
tease apart the relation of the abolitionist movement to the women's suffrage
movement, and the vexed racial politics that characterized their relation.
| 2021 |
Computation and Language
|
Few-Shot Text Classification with Triplet Networks, Data Augmentation,
and Curriculum Learning
|
Few-shot text classification is a fundamental NLP task in which a model aims
to classify text into a large number of categories, given only a few training
examples per category. This paper explores data augmentation -- a technique
particularly suitable for training with limited data -- for this few-shot,
highly-multiclass text classification setting. On four diverse text
classification tasks, we find that common data augmentation techniques can
improve the performance of triplet networks by up to 3.0% on average.
To further boost performance, we present a simple training strategy called
curriculum data augmentation, which leverages curriculum learning by first
training on only original examples and then introducing augmented data as
training progresses. We explore a two-stage and a gradual schedule, and find
that, compared with standard single-stage training, curriculum data
augmentation trains faster, improves performance, and remains robust to high
amounts of noising from augmentation.
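A minimal sketch of the two-stage curriculum described above; the epoch split, the word-dropout augmentation, and the toy corpus are assumptions for illustration, not the paper's exact schedules or operations.
```python
import random

def curriculum_batch(original, augment, epoch, switch_epoch=3):
    """Return training examples for one epoch under a two-stage curriculum:
    only original examples early on, original + augmented examples afterwards."""
    if epoch < switch_epoch:                      # stage 1: clean data only
        return list(original)
    augmented = [augment(x) for x in original]    # stage 2: add noised copies
    return list(original) + augmented

def word_dropout(text, p=0.1):
    """Toy augmentation: randomly drop words (a stand-in for the paper's methods)."""
    kept = [w for w in text.split() if random.random() > p]
    return " ".join(kept) if kept else text

corpus = ["great movie", "terrible plot", "wonderful acting"]
for epoch in range(5):
    batch = curriculum_batch(corpus, word_dropout, epoch)
    print(epoch, len(batch))
```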
| 2021 |
Computation and Language
|
A Review on Semi-Supervised Relation Extraction
|
Relation extraction (RE) plays an important role in extracting knowledge from
unstructured text but requires a large amount of labeled corpus. To reduce the
expensive annotation efforts, semi-supervised learning aims to leverage both
labeled and unlabeled data. In this paper, we review and compare three typical
methods in semi-supervised RE with deep learning or meta-learning:
self-ensembling, which enforces consistency under perturbations but may face
insufficient supervision; self-training, which iteratively generates pseudo
labels and retrains itself with the enlarged labeled set; and dual learning, which
leverages a primal task and a dual task to give mutual feedback. Mean-teacher
(Tarvainen and Valpola, 2017), LST (Li et al., 2019), and DualRE (Lin et al.,
2019) are elaborated as the representatives to alleviate the weakness of these
three methods, respectively.
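As a rough illustration of the self-ensembling family (Mean-teacher), the sketch below shows a consistency term between a student and an EMA teacher on unlabeled inputs; the linear "classifier", Gaussian noise, and loss weighting are placeholders rather than the cited papers' exact setups.
```python
import copy
import torch
import torch.nn.functional as F

student = torch.nn.Linear(10, 3)          # stand-in relation classifier
teacher = copy.deepcopy(student)          # teacher weights = EMA of student weights
for p in teacher.parameters():
    p.requires_grad_(False)

def ema_update(student, teacher, decay=0.99):
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(decay).add_(ps, alpha=1 - decay)

def consistency_loss(x_unlabeled):
    """Force the student to agree with the teacher under input perturbations."""
    noisy = x_unlabeled + 0.1 * torch.randn_like(x_unlabeled)
    log_p_student = F.log_softmax(student(noisy), dim=-1)
    p_teacher = F.softmax(teacher(x_unlabeled), dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

x = torch.randn(4, 10)
loss = consistency_loss(x)
loss.backward()
ema_update(student, teacher)
print(float(loss))
```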
| 2021 |
Computation and Language
|
Approximating How Single Head Attention Learns
|
Why do models often attend to salient words, and how does this evolve
throughout training? We approximate model training as a two stage process:
early on in training when the attention weights are uniform, the model learns
to translate individual input word `i` to `o` if they co-occur frequently.
Later, the model learns to attend to `i` while the correct output is `o`
because it knows `i` translates to `o`. To formalize, we define a model
property, Knowledge to Translate Individual Words (KTIW) (e.g. knowing that `i`
translates to `o`), and claim that it drives the learning of the attention.
This claim is supported by the fact that before the attention mechanism is
learned, KTIW can be learned from word co-occurrence statistics, but not the
other way around. Particularly, we can construct a training distribution that
makes KTIW hard to learn, the learning of the attention fails, and the model
cannot even learn the simple task of copying the input words to the output. Our
approximation explains why models sometimes attend to salient words, and
inspires a toy example where a multi-head attention model can overcome the
above hard training distribution by improving learning dynamics rather than
expressiveness. We end by discussing the limitation of our approximation
framework and suggest future directions.
| 2021 |
Computation and Language
|
Improving Diversity of Neural Text Generation via Inverse Probability
Weighting
|
Neural text generation suffers from text degeneration issues such as
repetition. Traditional stochastic sampling methods only focus on truncating
the unreliable "tail" of the distribution, and do not address the "head" part,
which we show might contain tedious or even repetitive candidates with high
probability that lead to repetition loops. They also do not consider the issue
that human text does not always favor high-probability words. Inspired by
these observations, in this work we propose a heuristic sampling method: we use the
interquartile range of the predicted distribution to determine the "head" part,
then permute and rescale the "head" with inverse probability. This aims at
decreasing the probability for the tedious and possibly repetitive candidates
with higher probability, and increasing the probability for the rational but
more surprising candidates with lower probability. The proposed algorithm
provides a reasonable permutation on the predicted distribution which enhances
diversity without compromising rationality of the distribution. We use
pre-trained language model to compare our algorithm with traditional methods.
Results show that our algorithm can effectively increase the diversity of
generated samples while achieving close resemblance to human text.
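A minimal sketch of the sampling rule described in the abstract above, assuming the "head" is taken to be candidates above the upper IQR fence (Q3 + 1.5*IQR) of the predicted distribution and that the head's total mass is redistributed in inverse proportion to the original probabilities; the authors' exact head criterion and rescaling may differ.
```python
import numpy as np

def inverse_probability_sample(probs, seed=0):
    """Sample a token index after reweighting the high-probability 'head'."""
    rng = np.random.default_rng(seed)
    probs = np.asarray(probs, dtype=np.float64)
    q1, q3 = np.percentile(probs, [25, 75])
    head = probs > q3 + 1.5 * (q3 - q1)        # assumed head criterion (upper IQR fence)
    if head.sum() > 1:
        mass = probs[head].sum()
        inv = 1.0 / probs[head]                # inverse-probability weights
        probs = probs.copy()
        probs[head] = mass * inv / inv.sum()   # permute ranks within the head, keep its mass
    return int(rng.choice(len(probs), p=probs / probs.sum()))

# toy next-token distribution: a few dominant candidates over a long flat tail
probs = np.full(1000, 1e-4)
probs[:3] = [0.5, 0.3, 0.1]
print(inverse_probability_sample(probs / probs.sum()))
```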
| 2021 |
Computation and Language
|
Targeted aspect-based multimodal sentiment analysis: an attention capsule
extraction and multi-head fusion network
|
Multimodal sentiment analysis has currently identified its significance in a
variety of domains. For the purpose of sentiment analysis, different aspects of
distinguishing modalities, which correspond to one target, are processed and
analyzed. In this work, we propose the targeted aspect-based multimodal
sentiment analysis (TABMSA) for the first time. Furthermore, an attention
capsule extraction and multi-head fusion network (EF-Net) on the task of TABMSA
is devised. The multi-head attention (MHA) based network and the ResNet-152 are
employed to deal with texts and images, respectively. The integration of MHA
and capsule network aims to capture the interaction among the multimodal
inputs. In addition to the targeted aspect, the information from the context
and the image is also incorporated for sentiment delivery. We evaluate the
proposed model on two manually annotated datasets. The experimental results
demonstrate the effectiveness of our proposed model for this new task.
| 2021 |
Computation and Language
|
Bidirectional Machine Reading Comprehension for Aspect Sentiment Triplet
Extraction
|
Aspect sentiment triplet extraction (ASTE), which aims to identify aspects
from review sentences along with their corresponding opinion expressions and
sentiments, is an emerging task in fine-grained opinion mining. Since ASTE
consists of multiple subtasks, including opinion entity extraction, relation
detection, and sentiment classification, it is critical and challenging to
appropriately capture and utilize the associations among them. In this paper,
we transform the ASTE task into a multi-turn machine reading comprehension (MTMRC)
task and propose a bidirectional MRC (BMRC) framework to address this
challenge. Specifically, we devise three types of queries, including
non-restrictive extraction queries, restrictive extraction queries and
sentiment classification queries, to build the associations among different
subtasks. Furthermore, considering that an aspect sentiment triplet can derive
from either an aspect or an opinion expression, we design a bidirectional MRC
structure. One direction sequentially recognizes aspects, opinion expressions,
and sentiments to obtain triplets, while the other direction identifies opinion
expressions first, then aspects, and at last sentiments. By making the two
directions complement each other, our framework can identify triplets more
comprehensively. To verify the effectiveness of our approach, we conduct
extensive experiments on four benchmark datasets. The experimental results
demonstrate that BMRC achieves state-of-the-art performances.
| 2021 |
Computation and Language
|
OCID-Ref: A 3D Robotic Dataset with Embodied Language for Clutter Scene
Grounding
|
To effectively apply robots in working environments and assist humans, it is
essential to develop and evaluate how visual grounding (VG) can affect machine
performance on occluded objects. However, current VG works are limited in
working environments, such as offices and warehouses, where objects are usually
occluded due to space utilization issues. In our work, we propose a novel
OCID-Ref dataset featuring a referring expression segmentation task with
referring expressions of occluded objects. OCID-Ref consists of 305,694
referring expressions from 2,300 scenes, providing both RGB image and point
cloud inputs. We argue that it is crucial to take advantage of both 2D and 3D
signals to resolve the challenging occlusion issues. Our experimental results
demonstrate the effectiveness of aggregating 2D and 3D signals, but referring
to occluded objects still remains challenging for modern visual grounding
systems. OCID-Ref is publicly
available at https://github.com/lluma/OCID-Ref
| 2021 |
Computation and Language
|
OkwuGb\'e: End-to-End Speech Recognition for Fon and Igbo
|
Language is inherent and compulsory for human communication. Whether
expressed in a written or spoken way, it ensures understanding between people
of the same and different regions. With the growing awareness and effort to
include more low-resourced languages in NLP research, African languages have
recently been a major subject of research in machine translation, and other
text-based areas of NLP. However, there is still very little comparable
research in speech recognition for African languages. Interestingly, some of
the unique properties of African languages affecting NLP, like their
diacritical and tonal complexities, have a major root in their speech,
suggesting that careful speech interpretation could provide more intuition on
how to deal with the linguistic complexities of African languages for
text-based NLP. OkwuGb\'e is a step towards building speech recognition systems
for African low-resourced languages. Using Fon and Igbo as our case study, we
conduct a comprehensive linguistic analysis of each language and describe the
creation of end-to-end, deep neural network-based speech recognition models for
both languages. We present a state-of-the-art ASR model for Fon, as well as
benchmark ASR model results for Igbo. Our linguistic analyses (for Fon and
Igbo) provide valuable insights and guidance into the creation of speech
recognition models for other African low-resourced languages, as well as guide
future NLP research for Fon and Igbo. The source code for the Fon and Igbo
models has been made publicly available.
| 2021 |
Computation and Language
|
Context Transformer with Stacked Pointer Networks for Conversational
Question Answering over Knowledge Graphs
|
Neural semantic parsing approaches have been widely used for Question
Answering (QA) systems over knowledge graphs. Such methods provide the
flexibility to handle QA datasets with complex queries and a large number of
entities. In this work, we propose a novel framework named CARTON, which
performs multi-task semantic parsing for handling the problem of conversational
question answering over a large-scale knowledge graph. Our framework consists
of a stack of pointer networks as an extension of a context transformer model
for parsing the input question and the dialog history. The framework generates
a sequence of actions that can be executed on the knowledge graph. We evaluate
CARTON on a standard dataset for complex sequential question answering on which
CARTON outperforms all baselines. Specifically, we observe performance
improvements in F1-score on eight out of ten question types compared to the
previous state of the art. For logical reasoning questions, an improvement of
11 absolute points is reached.
| 2021 |
Computation and Language
|
ParaQA: A Question Answering Dataset with Paraphrase Responses for
Single-Turn Conversation
|
This paper presents ParaQA, a question answering (QA) dataset with multiple
paraphrased responses for single-turn conversation over knowledge graphs (KG).
The dataset was created using a semi-automated framework for generating diverse
paraphrases of the answers using techniques such as back-translation. The
existing datasets for conversational question answering over KGs
(single-turn/multi-turn) focus on question paraphrasing and provide only up to
one answer verbalization. However, ParaQA contains 5000 question-answer pairs
with a minimum of two and a maximum of eight unique paraphrased responses for
each question. We complement the dataset with baseline models and illustrate
the advantage of having multiple paraphrased answers through commonly used
metrics such as BLEU and METEOR. The ParaQA dataset is publicly available on a
persistent URI for broader usage and adaptation in the research community.
| 2021 |
Computation and Language
|
Deep Discourse Analysis for Generating Personalized Feedback in
Intelligent Tutor Systems
|
We explore creating automated, personalized feedback in an intelligent
tutoring system (ITS). Our goal is to pinpoint correct and incorrect concepts
in student answers in order to achieve better student learning gains. Although
automatic methods for providing personalized feedback exist, they do not
explicitly inform students about which concepts in their answers are correct or
incorrect. Our approach involves decomposing students' answers using neural
discourse segmentation and classification techniques. This decomposition yields
a relational graph over all discourse units covered by the reference solutions
and student answers. We use this inferred relational graph structure and a
neural classifier to match student answers with reference solutions and
generate personalized feedback. Although the process is completely automated
and data-driven, the personalized feedback generated is highly contextual,
domain-aware and effectively targets each student's misconceptions and
knowledge gaps. We test our method in a dialogue-based ITS and demonstrate that
our approach results in high-quality feedback and significantly improved
student learning gains.
| 2021 |
Computation and Language
|
Multilingual Code-Switching for Zero-Shot Cross-Lingual Intent
Prediction and Slot Filling
|
Predicting user intent and detecting the corresponding slots from text are
two key problems in Natural Language Understanding (NLU). In the context of
zero-shot learning, this task is typically approached by either using
representations from pre-trained multilingual transformers such as mBERT, or by
machine translating the source data into the known target language and then
fine-tuning. Our work focuses on a particular scenario where the target
language is unknown during training. To this end, we propose a novel method to
augment the monolingual source data using multilingual code-switching via
random translations to enhance a transformer's language neutrality when
fine-tuning it for a downstream task. This method also helps discover novel
insights on how code-switching with different language families around the
world impacts the performance on the target language. Experiments on the
benchmark dataset of MultiATIS++ yielded an average improvement of +4.2% in
accuracy for the intent task and +1.8% in F1 for the slot task using our method over
the state-of-the-art across 8 different languages. Furthermore, we present an
application of our method for crisis informatics using a new human-annotated
tweet dataset of slot filling in English and Haitian Creole, collected during
Haiti earthquake disaster.
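A toy sketch of the code-switching augmentation described above, using hard-coded bilingual lexicons as a stand-in for random translations; the lexicon contents, replacement rate, and per-word language choice are illustrative assumptions.
```python
import random

# Hypothetical word-level translation tables standing in for random translations.
LEXICONS = {
    "es": {"flight": "vuelo", "book": "reservar", "morning": "mañana"},
    "de": {"flight": "Flug", "book": "buchen", "morning": "Morgen"},
}

def code_switch(sentence, rate=0.3, seed=None):
    """Randomly replace source words with translations from randomly chosen
    languages, producing a code-switched training example."""
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        lang = rng.choice(list(LEXICONS))
        if rng.random() < rate and word in LEXICONS[lang]:
            out.append(LEXICONS[lang][word])
        else:
            out.append(word)
    return " ".join(out)

print(code_switch("book a flight for tomorrow morning", seed=7))
```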
| 2021 |
Computation and Language
|
SemVLP: Vision-Language Pre-training by Aligning Semantics at Multiple
Levels
|
Vision-language pre-training (VLP) on large-scale image-text pairs has
recently witnessed rapid progress for learning cross-modal representations.
Existing pre-training methods either directly concatenate image representation
and text representation at a feature level as input to a single-stream
Transformer, or use a two-stream cross-modal Transformer to align the
image-text representation at a high-level semantic space. In real-world
image-text data, we observe that it is easy for some of the image-text pairs to
align simple semantics on both modalities, while others may be related after
higher-level abstraction. Therefore, in this paper, we propose a new
pre-training method SemVLP, which jointly aligns both the low-level and
high-level semantics between image and text representations. The model is
pre-trained iteratively in two prevalent fashions: single-stream pre-training
to align at a fine-grained feature level and two-stream pre-training to align
high-level semantics, by employing a shared Transformer network with a
pluggable cross-modal attention module. An extensive set of experiments has
been conducted on four well-established vision-language understanding tasks to
demonstrate the effectiveness of the proposed SemVLP in aligning cross-modal
representations towards different semantic granularities.
| 2021 |
Computation and Language
|
A `Sourceful' Twist: Emoji Prediction Based on Sentiment, Hashtags and
Application Source
|
We widely use emojis in social networking to heighten, mitigate or negate the
sentiment of the text. Emoji suggestions already exist in many cross-platform
applications, but an emoji is predicted solely based on a few prominent words
instead of understanding the subject and substance of the text. Through this
paper, we showcase the importance of using Twitter features to help the model
understand the sentiment involved and hence to predict the most suitable emoji
for the text. Hashtags and Application Sources like Android, etc. are two
features which we found to be important yet underused in emoji prediction and
Twitter sentiment analysis on the whole. To approach this shortcoming and to
further understand emoji behavioral patterns, we propose a more balanced
dataset by crawling additional Twitter data, including timestamp, hashtags, and
application source acting as additional attributes to the tweet. Our data
analysis and neural network model performance evaluations show that using
hashtags and application sources as features allows the model to encode different
information and is effective in emoji prediction.
| 2021 |
Computation and Language
|
Learning a Word-Level Language Model with Sentence-Level Noise
Contrastive Estimation for Contextual Sentence Probability Estimation
|
Inferring the probability distribution of sentences or word sequences is a
key process in natural language processing. While word-level language models
(LMs) have been widely adopted for computing the joint probabilities of word
sequences, they have difficulty in capturing a context long enough for sentence
probability estimation (SPE). To overcome this, recent studies introduced
training methods using sentence-level noise-contrastive estimation (NCE) with
recurrent neural networks (RNNs). In this work, we attempt to extend it for
contextual SPE, which aims to estimate a conditional sentence probability given
a previous text. The proposed NCE samples negative sentences independently of a
previous text so that the trained model gives higher probabilities to the
sentences that are more consistent with the context. We apply
our method to a simple word-level RNN LM to focus on the effect of the
sentence-level NCE training rather than on the network architecture. The
quality of estimation was evaluated against multiple-choice cloze-style
questions including both human and automatically generated questions. The
experimental results show that the proposed method improved the SPE quality for
the word-level RNN LM.
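A rough sketch of the sentence-level NCE objective described above: the true next sentence is scored against negatives sampled independently of the preceding context; the bilinear scorer and fixed-size sentence encodings are simplifying assumptions in place of the RNN LM.
```python
import torch
import torch.nn.functional as F

def contextual_nce_loss(score_fn, context, positive, negatives):
    """Binary NCE: the true sentence (given its context) should score high,
    negatives drawn independently of the context should score low."""
    pos_logit = score_fn(context, positive)
    neg_logits = torch.stack([score_fn(context, n) for n in negatives])
    pos_loss = F.binary_cross_entropy_with_logits(pos_logit, torch.ones_like(pos_logit))
    neg_loss = F.binary_cross_entropy_with_logits(neg_logits, torch.zeros_like(neg_logits))
    return pos_loss + neg_loss

# toy bilinear scorer over fixed-size sentence encodings (stand-in for the RNN LM)
W = torch.nn.Parameter(torch.randn(8, 8) * 0.1)
score = lambda ctx, sent: ctx @ W @ sent
ctx, pos = torch.randn(8), torch.randn(8)
negs = [torch.randn(8) for _ in range(5)]
loss = contextual_nce_loss(score, ctx, pos, negs)
loss.backward()
print(float(loss))
```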
| 2021 |
Computation and Language
|
A Systematic Review of Reproducibility Research in Natural Language
Processing
|
Against the background of what has been termed a reproducibility crisis in
science, the NLP field is becoming increasingly interested in, and
conscientious about, the reproducibility of its results. The past few years
have seen an impressive range of new initiatives, events and active research in
the area. However, the field is far from reaching a consensus about how
reproducibility should be defined, measured and addressed, with diversity of
views currently increasing rather than converging. With this focused
contribution, we aim to provide a wide-angle, and as near as possible complete,
snapshot of current work on reproducibility in NLP, delineating differences and
similarities, and providing pointers to common denominators.
| 2021 |
Computation and Language
|
Crowdsourced Phrase-Based Tokenization for Low-Resourced Neural Machine
Translation: The Case of Fon Language
|
Building effective neural machine translation (NMT) models for very
low-resourced and morphologically rich African indigenous languages is an open
challenge. Besides the issue of finding available resources for them, a lot of
work is put into preprocessing and tokenization. Recent studies have shown that
standard tokenization methods do not always adequately deal with the
grammatical, diacritical, and tonal properties of some African languages. That,
coupled with the extremely low availability of training samples, hinders the
production of reliable NMT models. In this paper, using Fon language as a case
study, we revisit standard tokenization methods and introduce
Word-Expressions-Based (WEB) tokenization, a human-involved super-words
tokenization strategy to create a better representative vocabulary for
training. Furthermore, we compare our tokenization strategy to others on the
Fon-French and French-Fon translation tasks.
| 2021 |
Computation and Language
|
Generating CCG Categories
|
Previous CCG supertaggers usually predict categories using multi-class
classification. Despite their simplicity, internal structures of categories are
usually ignored. The rich semantics inside these structures may help us to
better handle relations among categories and bring more robustness into
existing supertaggers. In this work, we propose to generate categories rather
than classify them: each category is decomposed into a sequence of smaller
atomic tags, and the tagger aims to generate the correct sequence. We show that
with this finer view on categories, annotations of different categories could
be shared and interactions with sentence contexts could be enhanced. The
proposed category generator is able to achieve state-of-the-art tagging (95.5%
accuracy) and parsing (89.8% labeled F1) performances on the standard CCGBank.
Furthermore, its performance on infrequent (even unseen) categories,
out-of-domain texts, and a low-resource language gives promising results for
introducing generation models to general CCG analyses.
| 2021 |
Computation and Language
|
Double Articulation Analyzer with Prosody for Unsupervised Word and
Phoneme Discovery
|
Infants acquire words and phonemes from unsegmented speech signals using
segmentation cues, such as distributional, prosodic, and co-occurrence cues.
Many pre-existing computational models that represent the process tend to focus
on distributional or prosodic cues. This paper proposes a nonparametric
Bayesian probabilistic generative model called the prosodic hierarchical
Dirichlet process-hidden language model (Prosodic HDP-HLM). Prosodic HDP-HLM,
an extension of HDP-HLM, considers both prosodic and distributional cues within
a single integrative generative model. We conducted three experiments on
different types of datasets and demonstrated the validity of the proposed
method. The results show that the Prosodic DAA successfully uses prosodic cues
and outperforms a method that solely uses distributional cues. The main
contributions of this study are as follows: 1) We develop a probabilistic
generative model for time series data including prosody that potentially has a
double articulation structure; 2) We propose the Prosodic DAA by deriving the
inference procedure for Prosodic HDP-HLM and show that Prosodic DAA can
discover words directly from continuous human speech signals using statistical
information and prosodic information in an unsupervised manner; 3) We show that
prosodic cues contribute more to word segmentation in the naturally
distributed case, i.e., when word frequencies follow Zipf's law.
| 2022 |
Computation and Language
|
Mention-centered Graph Neural Network for Document-level Relation
Extraction
|
Document-level relation extraction aims to discover relations between
entities across a whole document. How to build the dependency of entities from
different sentences in a document remains a great challenge. Current
approaches either leverage syntactic trees to construct document-level graphs
or aggregate inference information from different sentences. In this paper, we
build cross-sentence dependencies by inferring compositional relations between
inter-sentence mentions. Adopting aggressive linking strategy, intermediate
relations are reasoned on the document-level graphs by mention convolution. We
further notice the generalization problem of NA instances, which is caused by
incomplete annotation and worsened by fully-connected mention pairs. An
improved ranking loss is proposed to address this problem. Experiments show the
connections between different mentions are crucial to document-level relation
extraction, which enables the model to extract more meaningful higher-level
compositional relations.
| 2021 |
Computation and Language
|
Towards the evaluation of automatic simultaneous speech translation from
a communicative perspective
|
In recent years, automatic speech-to-speech and speech-to-text translation
has gained momentum thanks to advances in artificial intelligence, especially
in the domains of speech recognition and machine translation. The quality of
such applications is commonly tested with automatic metrics, such as BLEU,
primarily with the goal of assessing improvements of releases or in the context
of evaluation campaigns. However, little is known about how the output of such
systems is perceived by end users or how they compare to human performances in
similar communicative tasks.
In this paper, we present the results of an experiment aimed at evaluating
the quality of a real-time speech translation engine by comparing it to the
performance of professional simultaneous interpreters. To do so, we adopt a
framework developed for the assessment of human interpreters and use it to
perform a manual evaluation on both human and machine performances. In our
sample, we found better performance for the human interpreters in terms of
intelligibility, while the machine performs slightly better in terms of
informativeness. The limitations of the study and the possible enhancements of
the chosen framework are discussed. Despite its intrinsic limitations, the use
of this framework represents a first step towards a user-centric and
communication-oriented methodology for evaluating real-time automatic speech
translation.
| 2021 |
Computation and Language
|
Sent2Matrix: Folding Character Sequences in Serpentine Manifolds for
Two-Dimensional Sentence
|
We study text representation methods using deep models. Current methods, such
as word-level embedding and character-level embedding schemes, treat texts as
either a sequence of atomic words or a sequence of characters. These methods
either ignore word morphologies or word boundaries. To overcome these
limitations, we propose to convert texts into 2-D representations and develop
the Sent2Matrix method. Our method allows for the explicit incorporation of
both word morphologies and boundaries. When coupled with a novel serpentine
padding method, our Sent2Matrix method leads to an interesting visualization in
which 1-D character sequences are folded into 2-D serpentine manifolds.
Notably, our method is the first attempt to represent texts in 2-D formats.
Experimental results on text classification tasks show that our method
consistently outperforms prior embedding methods.
| 2021 |
Computation and Language
|
NADI 2021: The Second Nuanced Arabic Dialect Identification Shared Task
|
We present the findings and results of the Second Nuanced Arabic Dialect
Identification Shared Task (NADI 2021). This Shared Task includes four
subtasks: country-level Modern Standard Arabic (MSA) identification (Subtask
1.1), country-level dialect identification (Subtask 1.2), province-level MSA
identification (Subtask 2.1), and province-level sub-dialect identification
(Subtask 2.2). The shared task dataset covers a total of 100 provinces from 21
Arab countries, collected from the Twitter domain. A total of 53 teams from 23
countries registered to participate in the tasks, thus reflecting the interest
of the community in this area. We received 16 submissions for Subtask 1.1 from
five teams, 27 submissions for Subtask 1.2 from eight teams, 12 submissions for
Subtask 2.1 from four teams, and 13 submissions for Subtask 2.2 from four
teams.
| 2021 |
Computation and Language
|
Multi-view Subword Regularization
|
Multilingual pretrained representations generally rely on subword
segmentation algorithms to create a shared multilingual vocabulary. However,
standard heuristic algorithms often lead to sub-optimal segmentation,
especially for languages with limited amounts of data. In this paper, we take
two major steps towards alleviating this problem. First, we demonstrate
empirically that applying existing subword regularization methods (Kudo, 2018;
Provilkov et al., 2020) during fine-tuning of pre-trained multilingual
representations improves the effectiveness of cross-lingual transfer. Second,
to take full advantage of different possible input segmentations, we propose
Multi-view Subword Regularization (MVR), a method that enforces the consistency
between predictions using inputs tokenized by the standard and probabilistic
segmentations. Results on the XTREME multilingual benchmark (Hu et al., 2020)
show that MVR brings consistent improvements of up to 2.5 points over using
standard segmentation algorithms.
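A rough sketch of the consistency term in MVR: the same input is seen under the deterministic and a sampled (probabilistic) segmentation, and a symmetric KL term ties the two predictions together; the toy encoder, the pre-tokenized id tensors, and the loss weighting are placeholder assumptions.
```python
import torch
import torch.nn.functional as F

def mvr_loss(model, ids_standard, ids_sampled, labels, alpha=1.0):
    """Task loss on the standard segmentation plus a symmetric KL consistency
    term between predictions from the two segmentations of the same input."""
    logits_std = model(ids_standard)
    logits_smp = model(ids_sampled)
    task = F.cross_entropy(logits_std, labels)
    log_p_std = F.log_softmax(logits_std, dim=-1)
    log_p_smp = F.log_softmax(logits_smp, dim=-1)
    consistency = 0.5 * (
        F.kl_div(log_p_std, log_p_smp, log_target=True, reduction="batchmean")
        + F.kl_div(log_p_smp, log_p_std, log_target=True, reduction="batchmean")
    )
    return task + alpha * consistency

# toy "model": mean-pool embeddings over the (differently tokenised) ids
emb = torch.nn.Embedding(100, 16)
head = torch.nn.Linear(16, 3)
model = lambda ids: head(emb(ids).mean(dim=1))
std = torch.randint(0, 100, (2, 7))    # standard segmentation token ids
smp = torch.randint(0, 100, (2, 9))    # sampled segmentation token ids
print(float(mvr_loss(model, std, smp, torch.tensor([0, 2]))))
```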
| 2021 |
Computation and Language
|
Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence
|
Typical fact verification models use retrieved written evidence to verify
claims. Evidence sources, however, often change over time as more information
is gathered and revised. In order to adapt, models must be sensitive to subtle
differences in supporting evidence. We present VitaminC, a benchmark infused
with challenging cases that require fact verification models to discern and
adjust to slight factual changes. We collect over 100,000 Wikipedia revisions
that modify an underlying fact, and leverage these revisions, together with
additional synthetically constructed ones, to create a total of over 400,000
claim-evidence pairs. Unlike previous resources, the examples in VitaminC are
contrastive, i.e., they contain evidence pairs that are nearly identical in
language and content, with the exception that one supports a given claim while
the other does not. We show that training using this design increases
robustness -- improving accuracy by 10% on adversarial fact verification and 6%
on adversarial natural language inference (NLI). Moreover, the structure of
VitaminC leads us to define additional tasks for fact-checking resources:
tagging relevant words in the evidence for verifying the claim, identifying
factual revisions, and providing automatic edits via factually consistent text
generation.
| 2021 |
Computation and Language
|
A Study of Automatic Metrics for the Evaluation of Natural Language
Explanations
|
As transparency becomes key for robotics and AI, it will be necessary to
evaluate the methods through which transparency is provided, including
automatically generated natural language (NL) explanations. Here, we explore
parallels between the generation of such explanations and the much-studied
field of evaluation of Natural Language Generation (NLG). Specifically, we
investigate which of the NLG evaluation measures map well to explanations. We
present the ExBAN corpus: a crowd-sourced corpus of NL explanations for
Bayesian Networks. We run correlations comparing human subjective ratings with
NLG automatic measures. We find that embedding-based automatic NLG evaluation
methods, such as BERTScore and BLEURT, have a higher correlation with human
ratings, compared to word-overlap metrics, such as BLEU and ROUGE. This work
has implications for Explainable AI and transparent robotic and autonomous
systems.
| 2021 |
Computation and Language
|
The Effect of Domain and Diacritics in Yor\`ub\'a-English Neural Machine
Translation
|
Massively multilingual machine translation (MT) has shown impressive
capabilities, including zero and few-shot translation between low-resource
language pairs. However, these models are often evaluated on high-resource
languages with the assumption that they generalize to low-resource ones. The
difficulty of evaluating MT models on low-resource pairs is often due to lack
of standardized evaluation datasets. In this paper, we present MENYO-20k, the
first multi-domain parallel corpus with a special focus on clean orthography
for Yor\`ub\'a--English with standardized train-test splits for benchmarking.
We provide several neural MT benchmarks and compare them to the performance of
popular pre-trained (massively multilingual) MT models both for the
heterogeneous test set and its subdomains. Since these pre-trained models use
huge amounts of data with uncertain quality, we also analyze the effect of
diacritics, a major characteristic of Yor\`ub\'a, in the training data. We
investigate how and when this training condition affects the final quality and
intelligibility of a translation. Our models outperform massively multilingual
models such as Google ($+8.7$ BLEU) and Facebook M2M ($+9.1$ BLEU) when
translating to Yor\`ub\'a, setting a high quality benchmark for future
research.
| 2021 |
Computation and Language
|
Discriminative Learning for Probabilistic Context-Free Grammars based on
Generalized H-Criterion
|
We present a formal framework for the development of a family of
discriminative learning algorithms for Probabilistic Context-Free Grammars
(PCFGs) based on a generalization of the H-criterion. First, we propose the
H-criterion as the objective function and the Growth Transformations as the
optimization method, which allows us to develop the final expressions for the
estimation of the parameters of the PCFGs. And second, we generalize the
H-criterion to take into account the set of reference interpretations and the
set of competing interpretations, and we propose a new family of objective
functions that allow us to develop the expressions of the estimation
transformations for PCFGs.
| 2021 |
Computation and Language
|
A Transition-based Parser for Unscoped Episodic Logical Forms
|
"Episodic Logic:Unscoped Logical Form" (EL-ULF) is a semantic representation
capturing predicate-argument structure as well as more challenging aspects of
language within the Episodic Logic formalism. We present the first learned
approach for parsing sentences into ULFs, using a growing set of annotated
examples. The results provide a strong baseline for future improvement. Our
method learns a sequence-to-sequence model for predicting the transition action
sequence within a modified cache transition system. We evaluate the efficacy of
type grammar-based constraints, a word-to-symbol lexicon, and transition system
state features in this task. Our system is available at
https://github.com/genelkim/ulf-transition-parser. We also present the first
official annotated ULF dataset at
https://www.cs.rochester.edu/u/gkim21/ulf/resources/.
| 2021 |
Computation and Language
|
dictNN: A Dictionary-Enhanced CNN Approach for Classifying Hate Speech
on Twitter
|
Hate speech on social media is a growing concern, and automated methods have
so far been sub-par at reliably detecting it. A major challenge lies in the
potentially evasive nature of hate speech due to the ambiguity and fast
evolution of natural language. To tackle this, we introduce a vectorisation
based on a crowd-sourced and continuously updated dictionary of hate words and
propose fusing this approach with standard word embedding in order to improve
the classification performance of a CNN model. To train and test our model we
use a merge of two established datasets (110,748 tweets in total). By adding
the dictionary-enhanced input, we are able to increase the CNN model's
predictive power and increase the F1 macro score by seven percentage points.
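One plausible reading of the dictionary-enhanced input is sketched below: each token gets a flag marking membership in the hate-word dictionary, and this channel is concatenated with the ordinary word embedding before the CNN; the dictionary contents, vocabulary, and fusion-by-concatenation are assumptions, not the paper's exact vectorisation.
```python
import torch

HATE_DICT = {"exampleslur1", "exampleslur2"}   # stand-in for the crowd-sourced dictionary
VOCAB = {"<unk>": 0, "you": 1, "are": 2, "exampleslur1": 3}

emb = torch.nn.Embedding(len(VOCAB), 50)

def vectorise(tokens):
    """Concatenate word embeddings with a dictionary-membership feature."""
    ids = torch.tensor([VOCAB.get(t, 0) for t in tokens])
    flags = torch.tensor([[1.0 if t in HATE_DICT else 0.0] for t in tokens])
    return torch.cat([emb(ids), flags], dim=-1)      # shape: (num_tokens, 51)

x = vectorise("you are exampleslur1".split())
print(x.shape)   # the CNN would consume this (num_tokens x 51) matrix
```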
| 2021 |
Computation and Language
|
LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time
Image-Text Retrieval
|
Multimodal pre-training has propelled great advancement in
vision-and-language research. These large-scale pre-trained models, although
successful, fatefully suffer from slow inference speed due to enormous
computation cost mainly from cross-modal attention in Transformer architecture.
When applied to real-life applications, such latency and computation demand
severely deter the practical use of pre-trained models. In this paper, we study
image-text retrieval (ITR), the most mature scenario of V+L applications, which
has been widely studied even prior to the emergence of recent pre-trained
models. We propose a simple yet highly effective approach, LightningDOT that
accelerates the inference time of ITR by thousands of times, without
sacrificing accuracy. LightningDOT removes the time-consuming cross-modal
attention by pre-training on three novel learning objectives, extracting
feature indexes offline, and employing instant dot-product matching with
further re-ranking, which significantly speeds up the retrieval process. In fact,
LightningDOT achieves new state of the art across multiple ITR benchmarks such
as Flickr30k, COCO and Multi30K, outperforming existing pre-trained models that
consume 1000x magnitude of computational hours. Code and pre-training
checkpoints are available at https://github.com/intersun/LightningDOT.
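The offline-index plus dot-product matching step can be sketched as below; the random "embeddings" stand in for encoder outputs, and the heavier cross-modal re-ranker applied to the top candidates is omitted.
```python
import numpy as np

# Offline: pre-compute and cache image embeddings once (the "feature index").
rng = np.random.default_rng(0)
image_index = rng.standard_normal((10_000, 256)).astype(np.float32)   # stand-in encoder output
image_index /= np.linalg.norm(image_index, axis=1, keepdims=True)

def retrieve(text_embedding, k=20):
    """Online: a single dot product against the cached index, then top-k;
    a heavier cross-modal re-ranker would rescore these k candidates."""
    q = text_embedding / np.linalg.norm(text_embedding)
    scores = image_index @ q                      # no cross-modal attention needed
    topk = np.argpartition(-scores, k)[:k]
    return topk[np.argsort(-scores[topk])]

print(retrieve(rng.standard_normal(256).astype(np.float32))[:5])
```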
| 2021 |
Computation and Language
|
Robustly Optimized and Distilled Training for Natural Language
Understanding
|
In this paper, we explore multi-task learning (MTL) as a second pretraining
step to learn enhanced universal language representation for transformer
language models. We use the MTL enhanced representation across several natural
language understanding tasks to improve performance and generalization.
Moreover, we incorporate knowledge distillation (KD) in MTL to further boost
performance and devise a KD variant that learns effectively from multiple
teachers. By combining MTL and KD, we propose Robustly Optimized and Distilled
(ROaD) modeling framework. We use ROaD together with the ELECTRA model to
obtain state-of-the-art results for machine reading comprehension and natural
language inference.
| 2021 |
Computation and Language
|
Gumbel-Attention for Multi-modal Machine Translation
|
Multi-modal machine translation (MMT) improves translation quality by
introducing visual information. However, existing MMT models ignore the
problem that the image may bring information irrelevant to the text, adding
much noise to the model and affecting the translation quality. This paper
proposes a novel Gumbel-Attention for multi-modal machine translation, which
selects the text-related parts of the image features. Specifically, different
from the previous attention-based method, we first use a differentiable method
to select the image information and automatically remove the useless parts of
the image features. Experiments prove that our method retains the image
features related to the text, and the remaining parts help the MMT model
generate better translations.
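A minimal sketch of the differentiable selection idea, using torch's Gumbel-softmax to make a hard keep/drop decision per image region while remaining trainable; the gating network and the two-way keep/drop formulation are illustrative assumptions rather than the paper's exact attention design.
```python
import torch
import torch.nn.functional as F

class GumbelGate(torch.nn.Module):
    """Scores each image region against the text and keeps or drops it via a
    straight-through Gumbel-softmax sample over {keep, drop}."""
    def __init__(self, dim):
        super().__init__()
        self.score = torch.nn.Linear(2 * dim, 2)   # logits for (keep, drop)

    def forward(self, image_feats, text_summary, tau=1.0):
        # image_feats: (regions, dim); text_summary: (dim,)
        text = text_summary.expand(image_feats.size(0), -1)
        logits = self.score(torch.cat([image_feats, text], dim=-1))
        gate = F.gumbel_softmax(logits, tau=tau, hard=True)[:, :1]  # (regions, 1)
        return image_feats * gate                  # text-irrelevant regions zeroed out

gate = GumbelGate(dim=32)
filtered = gate(torch.randn(49, 32), torch.randn(32))
print(filtered.shape)
```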
| 2022 |
Computation and Language
|
Covid-19 Discourse on Twitter: How the Topics, Sentiments, Subjectivity,
and Figurative Frames Changed Over Time
|
The words we use to talk about the current epidemiological crisis on social
media can inform us on how we are conceptualizing the pandemic and how we are
reacting to its development. This paper provides an extensive explorative
analysis of how the discourse about Covid-19 reported on Twitter changes
through time, focusing on the first wave of this pandemic. Based on an
extensive corpus of tweets (produced between 20th March and 1st July 2020),
we first show how the topics associated with the development of the pandemic
changed through time, using topic modeling. Second, we show how the sentiment
polarity of the language used in the tweets changed from a relatively positive
valence during the first lockdown, toward a more negative valence in
correspondence with the reopening. Third, we show how the average subjectivity
of the tweets increased linearly, and fourth, how the popular and frequently
used figurative frame of WAR changed when real riots and fights entered the
discourse.
| 2021 |
Computation and Language
|
Coordinate Constructions in English Enhanced Universal Dependencies:
Analysis and Computational Modeling
|
In this paper, we address the representation of coordinate constructions in
Enhanced Universal Dependencies (UD), where relevant dependency links are
propagated from conjunction heads to other conjuncts. English treebanks for
enhanced UD have been created from gold basic dependencies using a heuristic
rule-based converter, which propagates only core arguments. With the aim of
determining which set of links should be propagated from a semantic
perspective, we create a large-scale dataset of manually edited syntax graphs.
We identify several systematic errors in the original data, and propose to also
propagate adjuncts. We observe high inter-annotator agreement for this semantic
annotation task. Using our new manually verified dataset, we perform the first
principled comparison of rule-based and (partially novel) machine-learning
based methods for conjunction propagation for English. We show that learning
propagation rules is more effective than hand-designing heuristic rules. When
using automatic parses, our neural graph-parser based edge predictor
outperforms the currently predominant pipelines using a basic-layer tree parser
plus converters.
| 2021 |
Computation and Language
|
Structural Adapters in Pretrained Language Models for AMR-to-text
Generation
|
Pretrained language models (PLM) have recently advanced graph-to-text
generation, where the input graph is linearized into a sequence and fed into
the PLM to obtain its representation. However, efficiently encoding the graph
structure in PLMs is challenging because such models were pretrained on natural
language, and modeling structured data may lead to catastrophic forgetting of
distributional knowledge. In this paper, we propose StructAdapt, an adapter
method to encode graph structure into PLMs. Contrary to prior work, StructAdapt
effectively models interactions among the nodes based on the graph
connectivity, only training graph structure-aware adapter parameters. In this
way, we incorporate task-specific knowledge while maintaining the topological
structure of the graph. We empirically show the benefits of explicitly encoding
graph structure into PLMs using StructAdapt, outperforming the state of the art
on two AMR-to-text datasets, training only 5.1% of the PLM parameters.
| 2021 |
Computation and Language
|
A Multilingual African Embedding for FAQ Chatbots
|
Searching for available, reliable, official, and understandable
information is not a trivial task, due to information scattered across the
internet and the lack of governmental communication channels
communicating in African dialects and languages. In this paper, we introduce
an Artificial Intelligence Powered chatbot for crisis communication that would
be omnichannel, multilingual, and multi-dialectal. We present our work on
modified StarSpace embedding tailored for African dialects for the
question-answering task along with the architecture of the proposed chatbot
system and a description of the different layers. English, French, Arabic,
Tunisian, Igbo, Yor\`ub\'a, and Hausa are used as languages and dialects.
Quantitative and qualitative evaluation results are obtained for our real
deployed Covid-19 chatbot. Results show that users are satisfied and the
conversation with the chatbot is meeting customer needs.
| 2021 |
Computation and Language
|
No Intruder, no Validity: Evaluation Criteria for Privacy-Preserving
Text Anonymization
|
For sensitive text data to be shared among NLP researchers and practitioners,
shared documents need to comply with data protection and privacy laws. There is
hence a growing interest in automated approaches for text anonymization.
However, measuring such methods' performance is challenging: missing a single
identifying attribute can reveal an individual's identity. In this paper, we
draw attention to this problem and argue that researchers and practitioners
developing automated text anonymization systems should carefully assess whether
their evaluation methods truly reflect the system's ability to protect
individuals from being re-identified. We then propose TILD, a set of evaluation
criteria that comprises an anonymization method's technical performance, the
information loss resulting from its anonymization, and the human ability to
de-anonymize redacted documents. These criteria may facilitate progress towards
a standardized way for measuring anonymization performance.
| 2021 |
Computation and Language
|
Graph Convolutional Network for Swahili News Classification
|
This work empirically demonstrates the ability of Text Graph Convolutional
Network (Text GCN) to outperform traditional natural language processing
benchmarks for the task of semi-supervised Swahili news classification. In
particular, we focus our experimentation on the sparsely-labelled
semi-supervised context which is representative of the practical constraints
facing low-resourced African languages. We follow up on this result by
introducing a variant of the Text GCN model which utilises a bag of words
embedding rather than a naive one-hot encoding to reduce the memory footprint
of Text GCN whilst demonstrating similar predictive performance.
| 2021 |
Computation and Language
|
Cross-Task Instance Representation Interactions and Label Dependencies
for Joint Information Extraction with Graph Convolutional Networks
|
Existing works on information extraction (IE) have mainly solved the four
main tasks separately (entity mention recognition, relation extraction, event
trigger detection, and argument extraction), thus failing to benefit from
inter-dependencies between tasks. This paper presents a novel deep learning
model to simultaneously solve the four tasks of IE in a single model (called
FourIE). Compared to the few prior works on jointly performing four IE tasks, FourIE
features two novel contributions to capture inter-dependencies between tasks.
First, at the representation level, we introduce an interaction graph between
instances of the four tasks that is used to enrich the prediction
representation for one instance with those from related instances of other
tasks. Second, at the label level, we propose a dependency graph for the
information types in the four IE tasks that captures the connections between
the types expressed in an input sentence. A new regularization mechanism is
introduced to enforce the consistency between the golden and predicted type
dependency graphs to improve representation learning. We show that the proposed
model achieves the state-of-the-art performance for joint IE on both
monolingual and multilingual learning settings with three different languages.
| 2021 |
Computation and Language
|
Investigating Monolingual and Multilingual BERT Models for Vietnamese
Aspect Category Detection
|
Aspect category detection (ACD) is one of the challenging tasks in the
Aspect-based sentiment Analysis problem. The purpose of this task is to
identify the aspect categories mentioned in user-generated reviews from a set
of pre-defined categories. In this paper, we investigate the performance of
various monolingual pre-trained language models compared with multilingual
models on the Vietnamese aspect category detection problem. We conduct the
experiments on two benchmark datasets for the restaurant and hotel domain. The
experimental results demonstrate that the monolingual PhoBERT model is more
effective than the others on both datasets. We also evaluate the performance of the
multilingual model based on the combination of whole SemEval-2016 datasets in
other languages with the Vietnamese dataset. To the best of our knowledge, our
study is the first attempt to apply various available pre-trained language
models to the aspect category detection task and to utilize datasets from
other languages through multilingual models.
| 2,021 |
Computation and Language
|
Dialogue History Matters! Personalized Response Selection in Multi-turn
Retrieval-based Chatbots
|
Existing multi-turn context-response matching methods mainly concentrate on
obtaining multi-level and multi-dimension representations and better
interactions between context utterances and responses. However, in real-world
conversation scenarios, whether a response candidate is suitable depends not only
on the given dialogue context but also on other background information, e.g., wording
habits and user-specific dialogue history. To fill the gap between these
up-to-date methods and real-world applications, we incorporate
user-specific dialogue history into the response selection and propose a
personalized hybrid matching network (PHMN). Our contributions are two-fold: 1)
our model extracts personalized wording behaviors from user-specific dialogue
history as extra matching information; 2) we perform hybrid representation
learning on context-response utterances and explicitly incorporate a customized
attention mechanism to extract vital information from context-response
interactions so as to improve the accuracy of matching. We evaluate our model
on two large datasets with user identification, i.e., personalized Ubuntu
dialogue Corpus (P-Ubuntu) and personalized Weibo dataset (P-Weibo).
Experimental results confirm that our method significantly outperforms several
strong models by combining personalized attention, wording behaviors, and
hybrid representation learning.
| 2,021 |
Computation and Language
|
Towards Few-Shot Fact-Checking via Perplexity
|
Few-shot learning has drawn researchers' attention to overcome the problem of
data scarcity. Recently, large pre-trained language models have shown great
performance in few-shot learning for various downstream tasks, such as question
answering and machine translation. Nevertheless, little exploration has been
made to achieve few-shot learning for the fact-checking task. However,
fact-checking is an important problem, especially when the amount of
information online is growing exponentially every day. In this paper, we
propose a new way of utilizing the powerful transfer learning ability of a
language model via a perplexity score. The most notable strength of our
methodology lies in its capability in few-shot learning. With only two training
samples, our methodology can already outperform the Major Class baseline by
more than absolute 10% on the F1-Macro metric across multiple datasets. Through
experiments, we empirically verify the plausibility of the rather surprising
usage of the perplexity score in the context of fact-checking and highlight the
strength of our few-shot methodology by comparing it to strong
fine-tuning-based baseline models. Moreover, we construct and publicly release
two new fact-checking datasets related to COVID-19.
| 2,021 |
Computation and Language
|
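A minimal, hypothetical sketch of the perplexity-based scoring idea described in the fact-checking abstract above: compute the perplexity of the concatenated evidence and claim under an off-the-shelf language model and compare it against a threshold tuned on the handful of available training samples. The choice of GPT-2 and the threshold value are illustrative assumptions, not the authors' exact setup.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the language model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean token-level
        # cross-entropy; its exponential is the perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def verdict(evidence: str, claim: str, threshold: float = 100.0) -> str:
    # Lower perplexity of evidence+claim is taken as a signal of support;
    # the threshold would be tuned on the few labelled examples (assumption).
    return "SUPPORTED" if perplexity(f"{evidence} {claim}") < threshold else "UNSUPPORTED"

print(verdict("Vitamin C is an essential nutrient found in citrus fruits.",
              "Vitamin C cures COVID-19."))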
ENCONTER: Entity Constrained Progressive Sequence Generation via
Insertion-based Transformer
|
Pretrained on large amounts of data, autoregressive language models are
able to generate high-quality sequences. However, these models do not perform
well under hard lexical constraints as they lack fine control of the content
generation process. Progressive insertion-based transformers can overcome this
limitation and efficiently generate a sequence in parallel given some
input tokens as constraints. These transformers, however, may fail to support hard
lexical constraints as their generation process is more likely to terminate
prematurely. This paper analyses such early termination problems and proposes
the Entity-constrained insertion transformer (ENCONTER), a new insertion
transformer that addresses the above pitfall without compromising much
generation efficiency. We introduce a new training strategy that considers
predefined hard lexical constraints (e.g., entities to be included in the
generated sequence). Our experiments show that ENCONTER outperforms other
baseline models in several performance metrics rendering it more suitable in
practical applications. Our code is available at
https://github.com/LARC-CMU-SMU/Enconter
| 2,021 |
Computation and Language
|
Endangered Languages are not Low-Resourced!
|
The term low-resourced has been tossed around in the field of natural
language processing to a degree that almost any language that is not English
can be called "low-resourced"; sometimes even just for the sake of making a
mundane or mediocre paper appear more interesting and insightful. In a field
where English is a synonym for language and low-resourced is a synonym for
anything not English, calling endangered languages low-resourced is a bit of an
overstatement. In this paper, I inspect the relation of the endangered with the
low-resourced from my own experiences.
| 2,021 |
Computation and Language
|
Automatic Generation of Contrast Sets from Scene Graphs: Probing the
Compositional Consistency of GQA
|
Recent works have shown that supervised models often exploit data artifacts
to achieve good test scores while their performance severely degrades on
samples outside their training distribution. Contrast sets (Gardner et al.,
2020) quantify this phenomenon by perturbing test samples in a minimal way such
that the output label is modified. While most contrast sets were created
manually, requiring intensive annotation effort, we present a novel method
which leverages rich semantic input representation to automatically generate
contrast sets for the visual question answering task. Our method computes the
answer of perturbed questions, thus vastly reducing annotation cost and
enabling thorough evaluation of models' performance on various semantic aspects
(e.g., spatial or relational reasoning). We demonstrate the effectiveness of
our approach on the GQA dataset and its semantic scene graph image
representation. We find that, despite GQA's compositionality and carefully
balanced label distribution, two high-performing models drop 13-17% in accuracy
compared to the original test set. Finally, we show that our automatic
perturbation can be applied to the training set to mitigate the degradation in
performance, opening the door to more robust models.
| 2,021 |
Computation and Language
|
Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots
|
Multilingual models have demonstrated impressive cross-lingual transfer
performance. However, test sets like XNLI are monolingual at the example level.
In multilingual communities, it is common for polyglots to code-mix when
conversing with each other. Inspired by this phenomenon, we present two strong
black-box adversarial attacks (one word-level, one phrase-level) for
multilingual models that push their ability to handle code-mixed sentences to
the limit. The former uses bilingual dictionaries to propose perturbations and
translations of the clean example for sense disambiguation. The latter directly
aligns the clean example with its translations before extracting phrases as
perturbations. Our phrase-level attack has a success rate of 89.75% against
XLM-R-large, bringing its average accuracy of 79.85 down to 8.18 on XNLI.
Finally, we propose an efficient adversarial training scheme that trains in the
same number of steps as the original model and show that it improves model
accuracy.
| 2,021 |
Computation and Language
|
Code Word Detection in Fraud Investigations using a Deep-Learning
Approach
|
In modern litigation, fraud investigators often face an overwhelming number
of documents that must be reviewed throughout a matter. In the majority of
legal cases, fraud investigators do not know beforehand, exactly what they are
looking for, nor where to find it. In addition, fraudsters may use deception to
hide their behaviour and intentions by using code words. Effectively, this
means fraud investigators are looking for a needle in the haystack without
knowing what the needle looks like.
As part of a larger research program, we use a framework to expedite the
investigation process applying text-mining and machine learning techniques. We
structure this framework using three well-known methods in fraud
investigations: (i) the fraud triangle, (ii) the golden ("W") investigation
questions, and (iii) the analysis of competing hypotheses. With this framework,
it is possible to automatically organize investigative data, so it is easier
for investigators to find answers to typical investigative questions.
In this research, we focus on one of the components of this framework: the
identification of the usage of code words by fraudsters. To this end, a novel
(annotated) synthetic dataset is created containing such code words, hidden in
normal email communication. Subsequently, a range of machine learning
techniques are employed to detect such code words. We show that the
state-of-the-art BERT model significantly outperforms other methods on this
task. With this result, we demonstrate that deep neural language models can
reliably (F1 score of 0.9) be applied in fraud investigations for the detection
of code words.
| 2,021 |
Computation and Language
|
SILT: Efficient transformer training for inter-lingual inference
|
The ability of transformers to perform precision tasks such as question
answering, Natural Language Inference (NLI) or summarisation has enabled them
to be ranked among the best paradigms for addressing Natural Language Processing
(NLP) tasks. NLI is one of the best scenarios for testing these architectures, due
to the knowledge required to understand complex sentences and the
relationships established between a hypothesis and a premise. Nevertheless, these models
suffer from an inability to generalise to other domains and from difficulties in
handling multilingual and inter-lingual scenarios. The leading pathway in the literature
to address these issues involves designing and training extremely large
architectures, which leads to unpredictable behaviours and establishes
barriers that impede broad access and fine-tuning. In this paper, we propose a
new architecture called Siamese Inter-Lingual Transformer (SILT), to
efficiently align multilingual embeddings for Natural Language Inference,
allowing for unmatched language pairs to be processed. SILT leverages siamese
pre-trained multi-lingual transformers with frozen weights where the two input
sentences attend to each other before being combined through a matrix alignment
method. The experiments carried out in this paper show that SILT
drastically reduces the number of trainable parameters while allowing
for inter-lingual NLI and achieving state-of-the-art performance on common
benchmarks.
We make our code and dataset available at
https://github.com/jahuerta92/siamese-inter-lingual-transformer.
| 2,021 |
Computation and Language
|
UniParma at SemEval-2021 Task 5: Toxic Spans Detection Using
CharacterBERT and Bag-of-Words Model
|
With the ever-increasing availability of digital information, toxic content
is also on the rise. Therefore, the detection of this type of language is of
paramount importance. We tackle this problem utilizing a combination of a
state-of-the-art pre-trained language model (CharacterBERT) and a traditional
bag-of-words technique. Since the content is full of toxic words that have not
been written according to their dictionary spelling, attention to individual
characters is crucial. Therefore, we use CharacterBERT to extract features
based on the word characters. It consists of a CharacterCNN module that learns
character embeddings from the context. These are, then, fed into the well-known
BERT architecture. The bag-of-words method, on the other hand, further improves
upon that by making sure that some frequently used toxic words get labeled
accordingly. With a 4 percent difference from the first team, our system ranked
36th in the competition. The code is available for further research and
reproduction of the results.
| 2,021 |
Computation and Language
|
Multimodal End-to-End Sparse Model for Emotion Recognition
|
Existing works on multimodal affective computing tasks, such as emotion
recognition, generally adopt a two-phase pipeline, first extracting feature
representations for each single modality with hand-crafted algorithms and then
performing end-to-end learning with the extracted features. However, the
extracted features are fixed and cannot be further fine-tuned on different
target tasks, and manually finding feature extraction algorithms does not
generalize or scale well to different tasks, which can lead to sub-optimal
performance. In this paper, we develop a fully end-to-end model that connects
the two phases and optimizes them jointly. In addition, we restructure the
current datasets to enable fully end-to-end training. Furthermore, to
reduce the computational overhead brought by the end-to-end model, we introduce
a sparse cross-modal attention mechanism for the feature extraction.
Experimental results show that our fully end-to-end model significantly
surpasses the current state-of-the-art models based on the two-phase pipeline.
Moreover, by adding the sparse cross-modal attention, our model can maintain
performance with around half the computation in the feature extraction part.
| 2,021 |
Computation and Language
|
Moroccan Dialect -Darija- Open Dataset
|
Darija Open Dataset (DODa) is an open-source project for the Moroccan
dialect. With more than 10,000 entries, DODa is arguably the largest open-source
collaborative project for Darija-English translation built for Natural Language
Processing purposes. In fact, besides semantic categorization, DODa also adopts
a syntactic one, presents words under different spellings, offers verb-to-noun
and masculine-to-feminine correspondences, contains the conjugation of hundreds
of verbs in different tenses, and many other subsets to help researchers better
understand and study Moroccan dialect. This data paper presents a description
of DODa, its features, how it was collected, as well as a first application in
Image Classification using ImageNet labels translated to Darija. This
collaborative project is hosted on the GitHub platform under the MIT open-source
license and aims to be a standard resource for researchers, students, and
anyone who is interested in the Moroccan dialect.
| 2,021 |
Computation and Language
|
The Human Evaluation Datasheet 1.0: A Template for Recording Details of
Human Evaluation Experiments in NLP
|
This paper introduces the Human Evaluation Datasheet, a template for
recording the details of individual human evaluation experiments in Natural
Language Processing (NLP). Originally taking inspiration from seminal papers by
Bender and Friedman (2018), Mitchell et al. (2019), and Gebru et al. (2020),
the Human Evaluation Datasheet is intended to facilitate the recording of
properties of human evaluations in sufficient detail, and with sufficient
standardisation, to support comparability, meta-evaluation, and reproducibility
tests.
| 2,021 |
Computation and Language
|
From Plenipotentiary to Puddingless: Users and Uses of New Words in
Early English Letters
|
We study neologism use in two samples of early English correspondence, from
1640--1660 and 1760--1780. Of especial interest are the early adopters of new
vocabulary, the social groups they represent, and the types and functions of
their neologisms. We describe our computer-assisted approach and note the
difficulties associated with massive variation in the corpus. Our findings
include that while male letter-writers tend to use neologisms more frequently
than women, the eighteenth century seems to have provided more opportunities
for women and the lower ranks to participate in neologism use as well. In both
samples, neologisms most frequently occur in letters written between close
friends, which could be due to this less stable relationship triggering more
creative language use. In the seventeenth-century sample, we observe the
influence of the English Civil War, while the eighteenth-century sample appears
to reflect the changing functions of letter-writing, as correspondence is
increasingly being used as a tool for building and maintaining social
relationships in addition to exchanging information.
| 2,021 |
Computation and Language
|
Advancing RNN Transducer Technology for Speech Recognition
|
We investigate a set of techniques for RNN Transducers (RNN-Ts) that were
instrumental in lowering the word error rate on three different tasks
(Switchboard 300 hours, conversational Spanish 780 hours and conversational
Italian 900 hours). The techniques pertain to architectural changes, speaker
adaptation, language model fusion, model combination and general training
recipe. First, we introduce a novel multiplicative integration of the encoder
and prediction network vectors in the joint network (as opposed to additive).
Second, we discuss the applicability of i-vector speaker adaptation to RNN-Ts
in conjunction with data perturbation. Third, we explore the effectiveness of
the recently proposed density ratio language model fusion for these tasks. Last
but not least, we describe the other components of our training recipe and
their effect on recognition performance. We report a 5.9% and 12.5% word error
rate on the Switchboard and CallHome test sets of the NIST Hub5 2000 evaluation
and a 12.7% WER on the Mozilla CommonVoice Italian test set.
| 2,021 |
Computation and Language
|
Model Extraction and Adversarial Transferability, Your BERT is
Vulnerable!
|
Natural language processing (NLP) tasks, ranging from text classification to
text generation, have been revolutionised by the pre-trained language models,
such as BERT. This allows corporations to easily build powerful APIs by
encapsulating fine-tuned BERT models for downstream tasks. However, when a
fine-tuned BERT model is deployed as a service, it may suffer from different
attacks launched by malicious users. In this work, we first present how an
adversary can steal a BERT-based API service (the victim/target model) on
multiple benchmark datasets with limited prior knowledge and queries. We
further show that the extracted model can lead to highly transferable
adversarial attacks against the victim model. Our studies indicate that the
potential vulnerabilities of BERT-based API services still hold, even when
there is an architectural mismatch between the victim model and the attack
model. Finally, we investigate two defence strategies to protect the victim
model and find that unless the performance of the victim model is sacrificed,
both model extraction and adversarial transferability can effectively
compromise the target models.
| 2,021 |
Computation and Language
|
Constructive and Toxic Speech Detection for Open-domain Social Media
Comments in Vietnamese
|
The rise of social media has led to an increasing number of comments on online
forums. However, there still exist invalid comments that are not informative
for users. Moreover, those comments are also quite toxic and harmful to people.
In this paper, we create a dataset for constructive and toxic speech detection,
named UIT-ViCTSD (Vietnamese Constructive and Toxic Speech Detection dataset)
with 10,000 human-annotated comments. For these tasks, we propose a system for
constructive and toxic speech detection based on PhoBERT, the state-of-the-art
transfer learning model in Vietnamese NLP. With this system, we obtain
F1-scores of 78.59% and 59.40% for classifying constructive and toxic comments,
respectively. Besides, we implement various baseline models, including traditional
machine learning and deep neural network-based models, to evaluate the dataset.
With these results, we can address several tasks in online discussions and
develop a framework for automatically identifying the constructiveness and
toxicity of Vietnamese social media comments.
| 2,021 |
Computation and Language
|
Quinductor: a multilingual data-driven method for generating
reading-comprehension questions using Universal Dependencies
|
We propose a multilingual data-driven method for generating reading
comprehension questions using dependency trees. Our method provides a strong,
mostly deterministic, and inexpensive-to-train baseline for less-resourced
languages. While a language-specific corpus is still required, its size is
nowhere near those required by modern neural question generation (QG)
architectures. Our method surpasses QG baselines previously reported in the
literature and shows a good performance in terms of human evaluation.
| 2,023 |
Computation and Language
|
Evaluating Document Coherence Modelling
|
While pretrained language models ("LM") have driven impressive gains over
morpho-syntactic and semantic tasks, their ability to model discourse and
pragmatic phenomena is less clear. As a step towards a better understanding of
their discourse modelling capabilities, we propose a sentence intrusion
detection task. We examine the performance of a broad range of pretrained LMs
on this detection task for English. Lacking a dataset for the task, we
introduce INSteD, a novel intruder sentence detection dataset, containing
170,000+ documents constructed from English Wikipedia and CNN news articles.
Our experiments show that pretrained LMs perform impressively in in-domain
evaluation, but experience a substantial drop in the cross-domain setting,
indicating limited generalisation capacity. Further results over a novel
linguistic probe dataset show that there is substantial room for improvement,
especially in the cross-domain setting.
| 2,021 |
Computation and Language
|
Let-Mi: An Arabic Levantine Twitter Dataset for Misogynistic Language
|
Online misogyny has become an increasing worry for Arab women who experience
gender-based online abuse on a daily basis. Automatic misogyny detection
systems can assist in curbing anti-women Arabic toxic content.
Developing such systems is hindered by the lack of Arabic misogyny
benchmark datasets. In this paper, we introduce an Arabic Levantine Twitter
dataset for Misogynistic language (LeT-Mi) to be the first benchmark dataset
for Arabic misogyny. We further provide a detailed review of the dataset
creation and annotation phases. The consistency of the annotations for the
proposed dataset was emphasized through inter-rater agreement evaluation
measures. Moreover, Let-Mi was used as an evaluation dataset through
binary/multi-/target classification tasks conducted by several state-of-the-art
machine learning systems along with Multi-Task Learning (MTL) configuration.
The obtained results indicate that the performance achieved by these
systems is consistent with state-of-the-art results for languages other than
Arabic, while employing MTL improves the performance of the misogyny/target
classification tasks.
| 2,021 |
Computation and Language
|
Smoothing and Shrinking the Sparse Seq2Seq Search Space
|
Current sequence-to-sequence models are trained to minimize cross-entropy and
use softmax to compute the locally normalized probabilities over target
sequences. While this setup has led to strong results in a variety of tasks,
one unsatisfying aspect is its length bias: models give high scores to short,
inadequate hypotheses and often make the empty string the argmax -- the
so-called cat got your tongue problem. Recently proposed entmax-based sparse
sequence-to-sequence models present a possible solution, since they can shrink
the search space by assigning zero probability to bad hypotheses, but their
ability to handle word-level tasks with transformers has never been tested. In
this work, we show that entmax-based models effectively solve the cat got your
tongue problem, removing a major source of model error for neural machine
translation. In addition, we generalize label smoothing, a critical
regularization technique, to the broader family of Fenchel-Young losses, which
includes both cross-entropy and the entmax losses. Our resulting label-smoothed
entmax loss models set a new state of the art on multilingual
grapheme-to-phoneme conversion and deliver improvements and better calibration
properties on cross-lingual morphological inflection and machine translation
for 6 language pairs.
| 2,021 |
Computation and Language
|
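For reference, the Fenchel-Young loss family mentioned in the abstract above can be written as follows. This is a standard background formulation (notation after Blondel et al.), recalled here as a hedged reminder rather than an excerpt from the paper:

\[
  L_{\Omega}(\boldsymbol{\theta};\, \mathbf{y})
    \;=\; \Omega^{*}(\boldsymbol{\theta}) \;+\; \Omega(\mathbf{y})
          \;-\; \langle \boldsymbol{\theta}, \mathbf{y} \rangle ,
\]

where \(\boldsymbol{\theta}\) are the model logits, \(\mathbf{y}\) is the target distribution, and \(\Omega^{*}\) is the convex conjugate of the regularizer \(\Omega\). Taking \(\Omega\) to be the negative Shannon entropy on the simplex recovers the softmax/cross-entropy loss, while Tsallis \(\alpha\)-entropies give the sparse entmax losses; one common way to express label smoothing in this family is to replace the one-hot \(\mathbf{y}\) with \((1-\epsilon)\,\mathbf{y} + \epsilon\,\mathbf{u}\) for the uniform distribution \(\mathbf{u}\) (an illustrative choice, not necessarily the paper's exact construction).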
Contextual Biasing of Language Models for Speech Recognition in
Goal-Oriented Conversational Agents
|
Goal-oriented conversational interfaces are designed to accomplish specific
tasks and typically have interactions that tend to span multiple turns adhering
to a pre-defined structure and a goal. However, conventional neural language
models (NLM) in Automatic Speech Recognition (ASR) systems are mostly trained
sentence-wise with limited context. In this paper, we explore different ways to
incorporate context into a LSTM based NLM in order to model long range
dependencies and improve speech recognition. Specifically, we use context
carry-over across multiple turns and lexical contextual cues such as the system
dialog act from Natural Language Understanding (NLU) models and the
user-provided structure of the chatbot. We also propose a new architecture that
utilizes context embeddings derived from BERT on sample utterances provided
during inference time. Our experiments show a word error rate (WER) relative
reduction of 7% over non-contextual utterance-level NLM rescorers on
goal-oriented audio datasets.
| 2,021 |
Computation and Language
|
GLM: General Language Model Pretraining with Autoregressive Blank
Infilling
|
There have been various types of pretraining architectures including
autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and
encoder-decoder models (e.g., T5). However, none of the pretraining frameworks
performs the best for all tasks of three main categories including natural
language understanding (NLU), unconditional generation, and conditional
generation. We propose a General Language Model (GLM) based on autoregressive
blank infilling to address this challenge. GLM improves blank filling
pretraining by adding 2D positional encodings and allowing an arbitrary order
to predict spans, which results in performance gains over BERT and T5 on NLU
tasks. Meanwhile, GLM can be pretrained for different types of tasks by varying
the number and lengths of blanks. On a wide range of tasks across NLU,
conditional and unconditional generation, GLM outperforms BERT, T5, and GPT
given the same model sizes and data, and achieves the best performance from a
single pretrained model with 1.25x the parameters of BERT-Large, demonstrating its
generalizability to different downstream tasks.
| 2,022 |
Computation and Language
|
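As a rough illustration of the blank-infilling setup described in the GLM abstract above, the sketch below corrupts a token sequence by replacing a sampled span with a [MASK] placeholder and uses that span as the autoregressive generation target. The span sampling, the special tokens, and the single-span simplification are assumptions; the actual model additionally uses 2D positional encodings and predicts multiple spans in an arbitrary order.

import random
from typing import List, Tuple

def blank_infilling_example(tokens: List[str], span_len: int = 2,
                            seed: int = 0) -> Tuple[List[str], List[str]]:
    """Replace one random span with [MASK]; the span becomes the generation target."""
    rng = random.Random(seed)
    start = rng.randrange(len(tokens) - span_len + 1)
    source = tokens[:start] + ["[MASK]"] + tokens[start + span_len:]
    target = ["[START]"] + tokens[start:start + span_len] + ["[END]"]
    return source, target

src, tgt = blank_infilling_example("the quick brown fox jumps over the lazy dog".split())
print(src)  # e.g. ['the', 'quick', '[MASK]', 'jumps', 'over', 'the', 'lazy', 'dog']
print(tgt)  # e.g. ['[START]', 'brown', 'fox', '[END]']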
GPT Understands, Too
|
Prompting a pretrained language model with natural language patterns has been
proved effective for natural language understanding (NLU). However, our
preliminary study reveals that manual discrete prompts often lead to unstable
performance -- e.g., changing a single word in the prompt might result in
a substantial performance drop. We propose a novel method, P-Tuning, which employs
trainable continuous prompt embeddings in concatenation with discrete prompts.
Empirically, P-Tuning not only stabilizes training by minimizing the gap
between various discrete prompts, but also improves performance by a sizeable
margin on a wide range of NLU tasks including LAMA and SuperGLUE. P-Tuning is
generally effective for both frozen and tuned language models, under both the
fully-supervised and few-shot settings.
| 2,023 |
Computation and Language
|
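A minimal sketch of the continuous-prompt idea described in the P-Tuning abstract above: a small set of trainable prompt embeddings is prepended to a frozen language model's input embeddings, so only the prompt vectors are updated during training. The GPT-2 backbone, prompt length, and initialization are illustrative assumptions; the actual method also interleaves continuous and discrete prompt tokens and uses a prompt encoder.

import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

class ContinuousPromptLM(nn.Module):
    def __init__(self, model_name: str = "gpt2", n_prompt_tokens: int = 8):
        super().__init__()
        self.lm = AutoModelForCausalLM.from_pretrained(model_name)
        for p in self.lm.parameters():          # keep the backbone frozen
            p.requires_grad = False
        hidden = self.lm.get_input_embeddings().embedding_dim
        # The only trainable parameters: the continuous prompt embeddings.
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        tok_emb = self.lm.get_input_embeddings()(input_ids)
        prompt = self.prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(input_ids.size(0), self.prompt.size(0),
                                 dtype=attention_mask.dtype, device=attention_mask.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.lm(inputs_embeds=inputs_embeds, attention_mask=mask)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = ContinuousPromptLM()
batch = tokenizer(["The capital of France is"], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"]).logits
print(logits.shape)  # (1, n_prompt_tokens + sequence_length, vocab_size)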
Decomposing and Recomposing Event Structure
|
We present an event structure classification empirically derived from
inferential properties annotated on sentence- and document-level Universal
Decompositional Semantics (UDS) graphs. We induce this classification jointly
with semantic role, entity, and event-event relation classifications using a
document-level generative model structured by these graphs. To support this
induction, we augment existing annotations found in the UDS1.0 dataset, which
covers the entirety of the English Web Treebank, with an array of inferential
properties capturing fine-grained aspects of the temporal and aspectual
structure of events. The resulting dataset (available at decomp.io) is the
largest annotation of event structure and (partial) event coreference to date.
| 2,021 |
Computation and Language
|
Refining Language Models with Compositional Explanations
|
Pre-trained language models have been successful on text classification
tasks, but are prone to learning spurious correlations from biased datasets,
and are thus vulnerable when making inferences in a new domain. Prior work
reveals such spurious patterns via post-hoc explanation algorithms which
compute the importance of input features. Further, the model is regularized to
align the importance scores with human knowledge, so that the unintended model
behaviors are eliminated. However, such a regularization technique lacks
flexibility and coverage, since only importance scores towards a pre-defined
list of features are adjusted, while more complex human knowledge such as
feature interaction and pattern generalization can hardly be incorporated. In
this work, we propose to refine a learned language model for a target domain by
collecting human-provided compositional explanations regarding observed biases.
By parsing these explanations into executable logic rules, the human-specified
refinement advice from a small set of explanations can be generalized to more
training examples. We additionally introduce a regularization term allowing
adjustments for both importance and interaction of features to better rectify
model behavior. We demonstrate the effectiveness of the proposed approach on
two text classification tasks by showing improved performance in the target domain
as well as improved model fairness after refinement.
| 2,022 |
Computation and Language
|
Pretraining the Noisy Channel Model for Task-Oriented Dialogue
|
Direct decoding for task-oriented dialogue is known to suffer from the
explaining-away effect, manifested in models that prefer short and generic
responses. Here we argue for the use of Bayes' theorem to factorize the
dialogue task into two models, the distribution of the context given the
response, and the prior for the response itself. This approach, an
instantiation of the noisy channel model, both mitigates the explaining-away
effect and allows the principled incorporation of large pretrained models for
the response prior. We present extensive experiments showing that a noisy
channel model decodes better responses compared to direct decoding and that a
two-stage pretraining strategy, employing both open-domain and task-oriented
dialogue data, improves over randomly initialized models.
| 2,021 |
Computation and Language
|
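The Bayes factorization underlying the noisy channel formulation described above can be summarized as follows; the interpolated rescoring at the end is a common practical choice stated here as an assumption, not necessarily the paper's exact scoring function:

\[
  \hat{r} \;=\; \arg\max_{r}\; p(r \mid c)
          \;=\; \arg\max_{r}\; p(c \mid r)\, p(r),
\]

where \(c\) is the dialogue context and \(r\) a candidate response. In practice, candidates proposed by a direct model \(p(r \mid c)\) are often rescored with a weighted combination such as \(\log p(c \mid r) + \lambda \log p(r)\), which penalizes short, generic responses that explain the context poorly.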
Improving the Lexical Ability of Pretrained Language Models for
Unsupervised Neural Machine Translation
|
Successful methods for unsupervised neural machine translation (UNMT) employ
crosslingual pretraining via self-supervision, often in the form of a masked
language modeling or a sequence generation task, which requires the model to
align the lexical- and high-level representations of the two languages. While
cross-lingual pretraining works for similar languages with abundant corpora, it
performs poorly in low-resource and distant languages. Previous research has
shown that this is because the representations are not sufficiently aligned. In
this paper, we enhance the bilingual masked language model pretraining with
lexical-level information by using type-level cross-lingual subword embeddings.
Empirical results demonstrate improved performance both on UNMT (up to 4.5
BLEU) and bilingual lexicon induction using our method compared to a UNMT
baseline.
| 2,021 |
Computation and Language
|
Gender and Racial Fairness in Depression Research using Social Media
|
Multiple studies have demonstrated that behavior on internet-based social
media platforms can be indicative of an individual's mental health status. The
widespread availability of such data has spurred interest in mental health
research from a computational lens. While previous research has raised concerns
about possible biases in models produced from this data, no study has
quantified how these biases actually manifest themselves with respect to
different demographic groups, such as gender and racial/ethnic groups. Here, we
analyze the fairness of depression classifiers trained on Twitter data with
respect to gender and racial demographic groups. We find that model performance
systematically differs for underrepresented groups and that these discrepancies
cannot be fully explained by trivial data representation issues. Our study
concludes with recommendations on how to avoid these biases in future research.
| 2,021 |
Computation and Language
|
Extractive Summarization of Call Transcripts
|
Text summarization is the process of extracting the most important
information from the text and presenting it concisely in fewer sentences. A call
transcript is a textual record of a phone conversation
between a customer (caller) and agent(s) (customer representatives). This paper
presents an indigenously developed method that combines topic modeling and
sentence selection with punctuation restoration in condensing ill-punctuated or
un-punctuated call transcripts to produce summaries that are more readable.
Extensive testing, evaluation and comparisons have demonstrated the efficacy of
this summarizer for call transcript summarization.
| 2,022 |
Computation and Language
|
Cost-effective Deployment of BERT Models in Serverless Environment
|
In this study we demonstrate the viability of deploying BERT-style models to
serverless environments in a production setting. Since the freely available
pre-trained models are too large to be deployed in this way, we utilize
knowledge distillation and fine-tune the models on proprietary datasets for two
real-world tasks: sentiment analysis and semantic textual similarity. As a
result, we obtain models that are tuned for a specific domain and deployable in
serverless environments. The subsequent performance analysis shows that this
solution results in latency levels acceptable for production use and that it is
also a cost-effective approach for small-to-medium size deployments of BERT
models, all without any infrastructure overhead.
| 2,021 |
Computation and Language
|
Controllable Generation from Pre-trained Language Models via Inverse
Prompting
|
Large-scale pre-trained language models have demonstrated strong capabilities
of generating realistic text. However, it remains challenging to control the
generation results. Previous approaches such as prompting are far from
sufficient, which limits the usage of language models. To tackle this
challenge, we propose an innovative method, inverse prompting, to better
control text generation. The core idea of inverse prompting is to use generated
text to inversely predict the prompt during beam search, which enhances the
relevance between the prompt and the generated text and provides better
controllability. Empirically, we pre-train a large-scale Chinese language model
to perform a systematic study using human evaluation on the tasks of
open-domain poem generation and open-domain long-form question answering. Our
results show that our proposed method substantially outperforms the baselines
and that our generation quality is close to human performance on some of the
tasks.
Narrators can try our poem generation demo at
https://pretrain.aminer.cn/apps/poetry.html, while our QA demo can be found at
https://pretrain.aminer.cn/app/qa. For researchers, the code is provided in
https://github.com/THUDM/InversePrompting.
| 2,021 |
Computation and Language
|
MuRIL: Multilingual Representations for Indian Languages
|
India is a multilingual society with 1369 rationalized languages and dialects
being spoken across the country (INDIA, 2011). Of these, the 22 scheduled
languages have a staggering total of 1.17 billion speakers and 121 languages
have more than 10,000 speakers (INDIA, 2011). India also has the second largest
(and an ever growing) digital footprint (Statista, 2020). Despite this, today's
state-of-the-art multilingual systems perform suboptimally on Indian (IN)
languages. This can be explained by the fact that multilingual language models
(LMs) are often trained on 100+ languages together, leading to a small
representation of IN languages in their vocabulary and training data.
Multilingual LMs are substantially less effective in resource-lean scenarios
(Wu and Dredze, 2020; Lauscher et al., 2020), as limited data doesn't help
capture the various nuances of a language. One also commonly observes IN
language text transliterated to Latin or code-mixed with English, especially in
informal settings (for example, on social media platforms) (Rijhwani et al.,
2017). This phenomenon is not adequately handled by current state-of-the-art
multilingual LMs. To address the aforementioned gaps, we propose MuRIL, a
multilingual LM specifically built for IN languages. MuRIL is trained on
significantly large amounts of IN text corpora only. We explicitly augment
monolingual text corpora with both translated and transliterated document
pairs, which serve as supervised cross-lingual signals in training. MuRIL
significantly outperforms multilingual BERT (mBERT) on all tasks in the
challenging cross-lingual XTREME benchmark (Hu et al., 2020). We also present
results on transliterated (native to Latin script) test sets of the chosen
datasets and demonstrate the efficacy of MuRIL in handling transliterated data.
| 2,021 |
Computation and Language
|
Acoustic word embeddings for zero-resource languages using
self-supervised contrastive learning and multilingual adaptation
|
Acoustic word embeddings (AWEs) are fixed-dimensional representations of
variable-length speech segments. For zero-resource languages where labelled
data is not available, one AWE approach is to use unsupervised
autoencoder-based recurrent models. Another recent approach is to use
multilingual transfer: a supervised AWE model is trained on several
well-resourced languages and then applied to an unseen zero-resource language.
We consider how a recent contrastive learning loss can be used in both the
purely unsupervised and multilingual transfer settings. Firstly, we show that
terms from an unsupervised term discovery system can be used for contrastive
self-supervision, resulting in improvements over previous unsupervised
monolingual AWE models. Secondly, we consider how multilingual AWE models can
be adapted to a specific zero-resource language using discovered terms. We find
that self-supervised contrastive adaptation outperforms adapted multilingual
correspondence autoencoder and Siamese AWE models, giving the best overall
results in a word discrimination task on six zero-resource languages.
| 2,021 |
Computation and Language
|
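A hedged sketch of the kind of contrastive objective referred to in the acoustic word embedding abstract above: embeddings of two speech segments of the same (discovered) word type form a positive pair, and all other in-batch segments act as negatives. The cosine similarity, temperature value, and in-batch negative sampling are illustrative assumptions rather than the authors' exact loss.

import torch
import torch.nn.functional as F

def contrastive_loss(anchor: torch.Tensor, positive: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """anchor, positive: (batch, dim) embeddings; row i of each comes from the same word type."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature          # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0))         # the matching pair sits on the diagonal
    return F.cross_entropy(logits, targets)

anchor, positive = torch.randn(16, 128), torch.randn(16, 128)
print(contrastive_loss(anchor, positive).item())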
Congolese Swahili Machine Translation for Humanitarian Response
|
In this paper we describe our efforts to make a bidirectional Congolese
Swahili (SWC) to French (FRA) neural machine translation system with the
motivation of improving humanitarian translation workflows. For training, we
created a 25,302-sentence general domain parallel corpus and combined it with
publicly available data. Experimenting with low-resource methodologies like
cross-dialect transfer and semi-supervised learning, we recorded improvements
of up to 2.4 and 3.5 BLEU points in the SWC-FRA and FRA-SWC directions,
respectively. We performed human evaluations to assess the usability of our
models in a COVID-domain chatbot that operates in the Democratic Republic of
Congo (DRC). Direct assessment in the SWC-FRA direction demonstrated an average
quality ranking of 6.3 out of 10 with 75% of the target strings conveying the
main message of the source text. For the FRA-SWC direction, our preliminary
tests on post-editing assessment showed its potential usefulness for
machine-assisted translation. We make our models, datasets containing up to 1
million sentences, our development pipeline, and a translator web-app available
for public use.
| 2,021 |
Computation and Language
|
Attention-based model for predicting question relatedness on Stack
Overflow
|
Stack Overflow is one of the most popular Programming Community-based
Question Answering (PCQA) websites that has attracted more and more users in
recent years. When users ask or look up questions on Stack Overflow,
providing related questions can help them solve problems. Although there are
many approaches based on deep learning that can automatically predict the
relatedness between questions, those approaches are limited since interaction
information between two questions may be lost. In this paper, we adopt deep
learning techniques and propose an Attention-based Sentence pair Interaction Model
(ASIM) to predict the relatedness between questions on Stack Overflow
automatically. We adopt the attention mechanism to capture the semantic
interaction information between the questions. Besides, we have pre-trained and
released word embeddings specific to the software engineering domain for this
task, which may also help other related tasks. The experimental results
demonstrate that ASIM achieves significant improvements over the baseline
approaches in Precision, Recall, and Micro-F1 evaluation metrics, achieving
state-of-the-art performance in this task. Our model also performs well in the
duplicate question detection task of AskUbuntu, which is a similar but
different task, proving its generalization and robustness.
| 2,021 |
Computation and Language
|
Play the Shannon Game With Language Models: A Human-Free Approach to
Summary Evaluation
|
The goal of a summary is to concisely state the most important information in
a document. With this principle in mind, we introduce new reference-free
summary evaluation metrics that use a pretrained language model to estimate the
information content shared between a document and its summary. These metrics
are a modern take on the Shannon Game, a method for summary quality scoring
proposed decades ago, where we replace human annotators with language models.
We also view these metrics as an extension of BLANC, a recently proposed
approach to summary quality measurement based on the performance of a language
model with and without the help of a summary. Using transformer based language
models, we empirically verify that our metrics achieve state-of-the-art
correlation with human judgement of the summary quality dimensions of both
coherence and relevance, as well as competitive correlation with human
judgement of consistency and fluency.
| 2,021 |
Computation and Language
|
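A hypothetical sketch of a Shannon-Game-style, reference-free score in the spirit of the abstract above: measure how much a language model's likelihood of the document improves when the summary is supplied as a conditioning prefix. The GPT-2 model and the exact normalization are assumptions, not the metric's official implementation.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def doc_logprob(document: str, prefix: str = "") -> float:
    """Approximate total log-probability of the document tokens, optionally conditioned on a prefix."""
    doc_ids = tokenizer(document, return_tensors="pt")["input_ids"]
    if prefix:
        prefix_ids = tokenizer(prefix, return_tensors="pt")["input_ids"]
        input_ids = torch.cat([prefix_ids, doc_ids], dim=1)
        labels = input_ids.clone()
        labels[:, : prefix_ids.size(1)] = -100   # do not score the prefix tokens
    else:
        input_ids = doc_ids
        labels = doc_ids.clone()
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss   # mean NLL over scored tokens
    return -loss.item() * doc_ids.size(1)

def shannon_score(document: str, summary: str) -> float:
    # Positive values indicate the summary makes the document more predictable.
    return doc_logprob(document, prefix=summary) - doc_logprob(document)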
Let Your Heart Speak in its Mother Tongue: Multilingual Captioning of
Cardiac Signals
|
Cardiac signals, such as the electrocardiogram, convey a significant amount
of information about the health status of a patient which is typically
summarized by a clinician in the form of a clinical report, a cumbersome
process that is prone to errors. To streamline this routine process, we propose
a deep neural network capable of captioning cardiac signals; it receives a
cardiac signal as input and generates a clinical report as output. We extend
this further to generate multilingual reports. To that end, we create and make
publicly available a multilingual clinical report dataset. In the absence of
sufficient labelled data, deep neural networks can benefit from a warm-start,
or pre-training, procedure in which parameters are first learned in an
arbitrary task. We propose such a task in the form of discriminative
multilingual pre-training where tokens from clinical reports are randomly
replaced with those from other languages and the network is tasked with
predicting the language of all tokens. We show that our method performs on par
with state-of-the-art pre-training methods such as MLM, ELECTRA, and MARGE,
while simultaneously generating diverse and plausible clinical reports. We also
demonstrate that multilingual models can outperform their monolingual
counterparts, informally terming this beneficial phenomenon as the blessing of
multilinguality.
| 2,021 |
Computation and Language
|
Conceptual similarity and communicative need shape colexification: an
experimental study
|
Colexification refers to the phenomenon of multiple meanings sharing one word
in a language. Cross-linguistic lexification patterns have been shown to be
largely predictable, as similar concepts are often colexified. We test a recent
claim that, beyond this general tendency, communicative needs play an important
role in shaping colexification patterns. We approach this question by means of
a series of human experiments, using an artificial language communication game
paradigm. Our results across four experiments match the previous
cross-linguistic findings: all other things being equal, speakers do prefer to
colexify similar concepts. However, we also find evidence supporting the
communicative need hypothesis: when faced with a frequent need to distinguish
similar pairs of meanings, speakers adjust their colexification preferences to
maintain communicative efficiency, and avoid colexifying those similar meanings
which need to be distinguished in communication. This research provides further
evidence to support the argument that languages are shaped by the needs and
preferences of their speakers.
| 2,021 |
Computation and Language
|
TextEssence: A Tool for Interactive Analysis of Semantic Shifts Between
Corpora
|
Embeddings of words and concepts capture syntactic and semantic regularities
of language; however, they have seen limited use as tools to study
characteristics of different corpora and how they relate to one another. We
introduce TextEssence, an interactive system designed to enable comparative
analysis of corpora using embeddings. TextEssence includes visual,
neighbor-based, and similarity-based modes of embedding analysis in a
lightweight, web-based interface. We further propose a new measure of embedding
confidence based on nearest neighborhood overlap, to assist in identifying
high-quality embeddings for corpus analysis. A case study on COVID-19
scientific literature illustrates the utility of the system. TextEssence is
available from https://github.com/drgriffis/text-essence.
| 2,021 |
Computation and Language
|
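As an illustration of the neighbourhood-overlap idea mentioned in the TextEssence abstract above, one simple realization is to train the embeddings twice (e.g., with different random seeds) and score each word by the Jaccard overlap of its k nearest neighbours across the two runs; stable neighbourhoods suggest a trustworthy embedding. The Jaccard formulation and cosine similarity here are illustrative assumptions, not necessarily the exact measure used in the tool.

import numpy as np

def knn(embeddings: np.ndarray, idx: int, k: int) -> set:
    """Indices of the k nearest neighbours of row `idx` by cosine similarity."""
    vecs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = vecs @ vecs[idx]
    sims[idx] = -np.inf                      # exclude the word itself
    return set(np.argsort(-sims)[:k])

def neighbourhood_confidence(emb_a: np.ndarray, emb_b: np.ndarray,
                             idx: int, k: int = 10) -> float:
    """Jaccard overlap of a word's neighbourhoods across two embedding runs."""
    a, b = knn(emb_a, idx, k), knn(emb_b, idx, k)
    return len(a & b) / len(a | b)

rng = np.random.default_rng(0)
run_a, run_b = rng.normal(size=(100, 50)), rng.normal(size=(100, 50))
print(neighbourhood_confidence(run_a, run_b, idx=3))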
Attribute Alignment: Controlling Text Generation from Pre-trained
Language Models
|
Large language models benefit from training with a large amount of unlabeled
text, which gives them increasingly fluent and diverse generation capabilities.
However, using these models for text generation that takes into account target
attributes, such as sentiment polarity or specific topics, remains a challenge.
We propose a simple and flexible method for controlling text generation by
aligning disentangled attribute representations. In contrast to recent efforts
on training a discriminator to perturb the token level distribution for an
attribute, we use the same data to learn an alignment function to guide the
pre-trained, non-controlled language model to generate texts with the target
attribute without changing the original language model parameters. We evaluate
our method on sentiment- and topic-controlled generation, and show large
performance gains over previous methods while retaining fluency and diversity.
| 2,021 |
Computation and Language
|
Local Interpretations for Explainable Natural Language Processing: A
Survey
|
As the use of deep learning techniques has grown across various fields over
the past decade, complaints about the opaqueness of the black-box models have
increased, resulting in an increased focus on transparency in deep learning
models. This work investigates various methods to improve the interpretability
of deep neural networks for natural language processing (NLP) tasks, including
machine translation and sentiment analysis. We provide a comprehensive
discussion on the definition of the term "interpretability" and its
various aspects at the beginning of this work. The methods collected and
summarised in this survey are only associated with local interpretation and are
divided into three categories: 1) explaining the model's predictions through
related input features; 2) explaining through natural language explanation; 3)
probing the hidden states of models and word representations.
| 2,022 |
Computation and Language
|
Token-wise Curriculum Learning for Neural Machine Translation
|
Existing curriculum learning approaches to Neural Machine Translation (NMT)
require sampling sufficient amounts of "easy" samples from training data at the
early training stage. This is not always achievable for low-resource languages
where the amount of training data is limited. To address this limitation, we
propose a novel token-wise curriculum learning approach that creates sufficient
amounts of easy samples. Specifically, the model learns to predict a short
sub-sequence from the beginning part of each target sentence at the early stage
of training, and then the sub-sequence is gradually expanded as the training
progresses. Such a new curriculum design is inspired by the cumulative effect
of translation errors, which makes the latter tokens more difficult to predict
than the beginning ones. Extensive experiments show that our approach can
consistently outperform baselines on 5 language pairs, especially for
low-resource languages. Combining our approach with sentence-level methods
further improves the performance on high-resource languages.
| 2,021 |
Computation and Language
|
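A minimal sketch of the token-wise curriculum described in the abstract above: at early training steps only a short prefix of each target sentence is used as the label, and the visible prefix grows as training progresses. The linear schedule and the minimum fraction are illustrative assumptions.

from typing import List

def curriculum_target(target_tokens: List[str], step: int, total_steps: int,
                      min_frac: float = 0.2) -> List[str]:
    """Return the prefix of the target sentence used as the training label at `step`."""
    frac = min(1.0, min_frac + (1.0 - min_frac) * step / total_steps)
    keep = max(1, int(round(frac * len(target_tokens))))
    return target_tokens[:keep]

target = "the cat sat on the mat".split()
for step in (0, 2500, 5000, 10000):
    print(step, curriculum_target(target, step, total_steps=10000))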
Dependency Graph-to-String Statistical Machine Translation
|
We present graph-based translation models which translate source graphs into
target strings. Source graphs are constructed from dependency trees with extra
links so that non-syntactic phrases are connected. Inspired by phrase-based
models, we first introduce a translation model which segments a graph into a
sequence of disjoint subgraphs and generates a translation by combining
subgraph translations left-to-right using beam search. However, similar to
phrase-based models, this model is weak at phrase reordering. Therefore, we
further introduce a model based on a synchronous node replacement grammar which
learns recursive translation rules. We provide two implementations of the model
with different restrictions so that source graphs can be parsed efficiently.
Experiments on Chinese--English and German--English show that our graph-based
models are significantly better than corresponding sequence- and tree-based
baselines.
| 2,021 |
Computation and Language
|
Overprotective Training Environments Fall Short at Testing Time: Let
Models Contribute to Their Own Training
|
Despite important progress, conversational systems often generate dialogues
that sound unnatural to humans. We conjecture that the reason lies in their
different training and testing conditions: agents are trained in a controlled
"lab" setting but tested in the "wild". During training, they learn to generate
an utterance given the human dialogue history. On the other hand, during
testing, they must interact with each other, and hence deal with noisy data. We
propose to fill this gap by training the model with mixed batches containing
both samples of human and machine-generated dialogues. We assess the validity
of the proposed method on GuessWhat?!, a visual referential game.
| 2,021 |
Computation and Language
|
The Interplay of Task Success and Dialogue Quality: An in-depth
Evaluation in Task-Oriented Visual Dialogues
|
When training a model on referential dialogue guessing games, the best model
is usually chosen based on its task success. We show that in the popular
end-to-end approach, this choice prevents the model from learning to generate
linguistically richer dialogues, since the acquisition of language proficiency
takes longer than learning the guessing task. By comparing models playing
different games (GuessWhat, GuessWhich, and Mutual Friends), we show that this
discrepancy is model- and task-agnostic. We investigate whether and when better
language quality could lead to higher task success. We show that in GuessWhat,
models could increase their accuracy if they learn to ground, encode, and
decode also words that do not occur frequently in the training set.
| 2,021 |
Computation and Language
|
The Effectiveness of Morphology-aware Segmentation in Low-Resource
Neural Machine Translation
|
This paper evaluates the performance of several modern subword segmentation
methods in a low-resource neural machine translation setting. We compare
segmentations produced by applying BPE at the token or sentence level with
morphologically-based segmentations from LMVR and MORSEL. We evaluate
translation tasks between English and each of Nepali, Sinhala, and Kazakh, and
predict that using morphologically-based segmentation methods would lead to
better performance in this setting. However, compared to BPE, we find that no
consistent and reliable differences emerge between the segmentation methods.
While morphologically-based methods outperform BPE in a few cases, what
performs best tends to vary across tasks, and the performance of segmentation
methods is often statistically indistinguishable.
| 2,021 |
Computation and Language
|
Self-Supervised Test-Time Learning for Reading Comprehension
|
Recent work on unsupervised question answering has shown that models can be
trained with procedurally generated question-answer pairs and can achieve
performance competitive with supervised methods. In this work, we consider the
task of unsupervised reading comprehension and present a method that performs
"test-time learning" (TTL) on a given context (text passage), without requiring
training on large-scale human-authored datasets containing
"context-question-answer" triplets. This method operates directly on a
single test context, uses self-supervision to train models on synthetically
generated question-answer pairs, and then infers answers to unseen
human-authored questions for this context. Our method achieves accuracies
competitive with fully supervised methods and significantly outperforms current
unsupervised methods. TTL methods with a smaller model are also competitive
with the current state-of-the-art in unsupervised reading comprehension.
| 2,021 |
Computation and Language
|
Lawyers are Dishonest? Quantifying Representational Harms in Commonsense
Knowledge Resources
|
Warning: this paper contains content that may be offensive or upsetting.
Numerous natural language processing models have tried injecting commonsense
by using the ConceptNet knowledge base to improve performance on different
tasks. ConceptNet, however, is mostly crowdsourced from humans and may reflect
human biases such as "lawyers are dishonest." It is important that these biases
are not conflated with the notion of commonsense. We study this missing yet
important problem by first defining and quantifying biases in ConceptNet as two
types of representational harms: overgeneralization of polarized perceptions
and representation disparity. We find that ConceptNet contains severe biases
and disparities across four demographic categories. In addition, we analyze two
downstream models that use ConceptNet as a source for commonsense knowledge and
find the existence of biases in those models as well. We further propose a
filter-based bias-mitigation approach and examine its effectiveness. We show
that our mitigation approach can reduce the issues in both resource and models
but leads to a performance drop, leaving room for future work to build fairer
and stronger commonsense models.
| 2,021 |
Computation and Language
|
AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive
Summarization
|
State-of-the-art abstractive summarization models generally rely on extensive
labeled data, which lowers their generalization ability on domains where such
data are not available. In this paper, we present a study of domain adaptation
for the abstractive summarization task across six diverse target domains in a
low-resource setting. Specifically, we investigate the second phase of
pre-training on large-scale generative models under three different settings:
1) source domain pre-training; 2) domain-adaptive pre-training; and 3)
task-adaptive pre-training. Experiments show that the effectiveness of
pre-training is correlated with the similarity between the pre-training data
and the target domain task. Moreover, we find that continued pre-training can
lead to catastrophic forgetting in the pre-trained model, and that a learning
method with less forgetting can alleviate this issue. Furthermore, results
illustrate that a huge gap still exists between the low-resource and
high-resource settings, which highlights the need for more advanced domain
adaptation methods for the abstractive summarization task.
| 2,021 |
Computation and Language
|
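The "second phase of pre-training" described above amounts to continued
language-model training on raw target-domain text before fine-tuning on the
summarization task. The sketch below uses a small causal LM as a stand-in; the
choice of gpt2, the corpus file name, batch size, and learning rate are
assumptions for illustration, not the paper's setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper studies large generative summarizers
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# domain_corpus.txt is an assumed file: raw target-domain text, one document per line.
with open("domain_corpus.txt", encoding="utf-8") as f:
    docs = [line.strip() for line in f if line.strip()]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

batch_size, max_length = 4, 512
for i in range(0, len(docs), batch_size):
    enc = tokenizer(docs[i:i + batch_size], return_tensors="pt",
                    truncation=True, max_length=max_length, padding=True)
    labels = enc["input_ids"].clone()
    labels[enc["attention_mask"] == 0] = -100  # ignore padding in the LM loss
    loss = model(input_ids=enc["input_ids"],
                 attention_mask=enc["attention_mask"], labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

After this second phase, the model would be fine-tuned on the low-resource
summarization data; any forgetting-mitigation method (not shown here) would be
layered on top of this loop.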
Structural block driven - enhanced convolutional neural representation
for relation extraction
|
In this paper, we propose a novel lightweight relation extraction approach
based on structural block driven convolutional neural learning. Specifically,
we detect the essential sequential tokens associated with entities through
dependency analysis, termed a structural block, and encode only these blocks,
producing block-wise and inter-block-wise representations with multi-scale
CNNs. This serves to 1) eliminate the noise from irrelevant parts of a
sentence and 2) enrich the representation of the relevant blocks with both
block-wise and inter-block-wise semantics. Our method has the advantage of
being independent of long sentence context, since we only encode the
sequential tokens within a block boundary. Experiments on two datasets, i.e.,
SemEval2010 and KBP37, demonstrate the significant advantages of our method.
In particular, we achieve new state-of-the-art performance on the KBP37
dataset and comparable performance with the state of the art on the
SemEval2010 dataset.
| 2,021 |
Computation and Language
|
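The multi-scale CNN encoding mentioned above can be sketched as a set of 1D
convolutions with different kernel widths over a block's token embeddings,
max-pooled and concatenated. The code below is a generic multi-scale CNN in
PyTorch under assumed dimensions, not the authors' exact architecture, and the
preceding dependency-based block extraction step is omitted.

```python
import torch
import torch.nn as nn

class MultiScaleBlockEncoder(nn.Module):
    """Encode a (padded) block of token embeddings with multi-scale 1D CNNs."""

    def __init__(self, embed_dim=100, num_filters=64, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, kernel_size=k, padding=k - 1)
            for k in kernel_sizes
        )
        self.out_dim = num_filters * len(kernel_sizes)

    def forward(self, block_embeddings):
        # block_embeddings: (batch, block_len, embed_dim)
        x = block_embeddings.transpose(1, 2)      # (batch, embed_dim, block_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)           # (batch, out_dim)

encoder = MultiScaleBlockEncoder()
fake_block = torch.randn(8, 12, 100)   # 8 blocks of 12 tokens, 100-dim embeddings
print(encoder(fake_block).shape)       # torch.Size([8, 192])
```

For relation classification, block-wise and inter-block-wise vectors of this
kind would be concatenated and passed to a linear classifier over the relation
label set.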
NameRec*: Highly Accurate and Fine-grained Person Name Recognition
|
In this paper, we introduce the NameRec* task, which aims to do highly
accurate and fine-grained person name recognition. Traditional Named Entity
Recognition models perform well at recognizing well-formed person names in
text with consistent and complete syntax, such as news articles. However,
there is a rapidly growing number of scenarios, such as user-generated content
and academic homepages, where sentences have incomplete syntax and names
appear in varied forms. To address person name recognition in this context, we
propose a fine-grained annotation scheme based on anthroponymy. To take full
advantage of the fine-grained annotations, we propose a Co-guided Neural
Network (CogNN) for person name recognition. CogNN fully explores the
intra-sentence context and the rich training signals of name forms. To better
utilize the inter-sentence context and implicit relations, which are essential
for recognizing person names in long documents, we further propose an
Inter-sentence BERT Model (IsBERT). IsBERT has an overlapped input processor,
and an inter-sentence encoder with bidirectional overlapped contextual
embedding learning and multi-hop inference mechanisms. To benefit from
documents with varying amounts of context, we further propose an Adaptive
Inter-sentence BERT Model (Ada-IsBERT) that dynamically adjusts the
inter-sentence overlap ratio to each document. We conduct extensive
experiments to demonstrate the superiority of the proposed methods on both
academic homepages and news articles.
| 2,021 |
Computation and Language
|
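The overlapped input processor and the adaptive overlap ratio of Ada-IsBERT
can be illustrated with a simple sliding-window chunker: a document's tokens
are split into fixed-length windows whose stride is set by an overlap ratio,
which could in principle be chosen per document (e.g., larger overlap for
context-poor documents). The window size, the ratio values, and token-level
splitting below are assumptions for illustration only.

```python
def overlapped_windows(tokens, window_size=128, overlap_ratio=0.25):
    """Split a token sequence into overlapping windows.

    overlap_ratio is the fraction of each window shared with the next one;
    an adaptive variant could pick this ratio per document.
    """
    if not 0.0 <= overlap_ratio < 1.0:
        raise ValueError("overlap_ratio must be in [0, 1)")
    stride = max(1, int(window_size * (1.0 - overlap_ratio)))
    windows = []
    for start in range(0, len(tokens), stride):
        windows.append(tokens[start:start + window_size])
        if start + window_size >= len(tokens):
            break
    return windows

doc = [f"tok{i}" for i in range(300)]
high_overlap = overlapped_windows(doc, window_size=128, overlap_ratio=0.5)
low_overlap = overlapped_windows(doc, window_size=128, overlap_ratio=0.1)
print(len(high_overlap), len(low_overlap))  # more windows when overlap is larger
```

Each window would then be encoded by BERT, with the overlapping regions used
to propagate contextual information across windows; that encoding and the
multi-hop inference step are not shown here.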