Titles (stringlengths 6-220) | Abstracts (stringlengths 37-3.26k) | Years (int64, 1.99k-2.02k) | Categories (stringclasses, 1 value)
---|---|---|---|
SPARTA: Efficient Open-Domain Question Answering via Sparse Transformer
Matching Retrieval | We introduce SPARTA, a novel neural retrieval method that shows great promise
in performance, generalization, and interpretability for open-domain question
answering. Unlike many neural ranking methods that use dense vector nearest
neighbor search, SPARTA learns a sparse representation that can be efficiently
implemented as an Inverted Index. The resulting representation enables scalable
neural retrieval that does not require expensive approximate vector search and
leads to better performance than its dense counterpart. We validated our
approach on 4 open-domain question answering (OpenQA) tasks and 11 retrieval
question answering (ReQA) tasks. SPARTA achieves new state-of-the-art results
across a variety of open-domain question answering tasks on both English and
Chinese datasets, including open SQuAD, Natural Questions, and CMRC.
Analysis also confirms that the proposed method creates human interpretable
representation and allows flexible control over the trade-off between
performance and efficiency.
| 2,020 | Computation and Language |
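
The SPARTA entry above turns on one implementation detail: query-term scores are sparse, so retrieval can be served from a plain inverted index rather than an approximate nearest-neighbour store. Below is a minimal, hypothetical sketch of that serving pattern; the toy term weights and corpus are invented for illustration and are not the authors' model or code.

```python
from collections import defaultdict

# Toy precomputed sparse representations: doc_id -> {term: weight}.
# In SPARTA these weights would come from the trained sparse encoder;
# here they are made-up numbers.
doc_term_weights = {
    "d1": {"capital": 2.1, "france": 3.0, "paris": 2.7},
    "d2": {"capital": 1.8, "germany": 2.9, "berlin": 2.5},
}

# Build an inverted index: term -> list of (doc_id, weight).
inverted_index = defaultdict(list)
for doc_id, weights in doc_term_weights.items():
    for term, w in weights.items():
        inverted_index[term].append((doc_id, w))

def retrieve(query_terms, top_k=2):
    """Score documents by summing the indexed weights of the query terms."""
    scores = defaultdict(float)
    for term in query_terms:
        for doc_id, w in inverted_index.get(term, []):
            scores[doc_id] += w
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(retrieve(["capital", "france"]))  # "d1" should rank first
```

Only the lookup-and-sum step above has to be supported at serving time, which is why no approximate vector search is needed.
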
Mitigating Gender Bias for Neural Dialogue Generation with Adversarial
Learning | Dialogue systems play an increasingly important role in various aspects of
our daily life. It is evident from recent research that dialogue systems
trained on human conversation data are biased. In particular, they can produce
responses that reflect people's gender prejudice. Many debiasing methods have
been developed for various NLP tasks, such as word embedding. However, they are
not directly applicable to dialogue systems because they are likely to force
dialogue models to generate similar responses for different genders. This
greatly degrades the diversity of the generated responses and immensely hurts
the performance of the dialogue models. In this paper, we propose a novel
adversarial learning framework, Debiased-Chat, to train dialogue models free from
gender bias while preserving their performance. Extensive experiments on two
real-world conversation datasets show that our framework significantly reduces
gender bias in dialogue models while maintaining the response quality. The
implementation of the proposed framework is released.
| 2,020 | Computation and Language |
A Simple and Efficient Ensemble Classifier Combining Multiple Neural
Network Models on Social Media Datasets in Vietnamese | Text classification is a popular topic of natural language processing, which
has currently attracted numerous research efforts worldwide. The significant
increase of data on social media demands considerable attention from researchers to
analyze such data. There are various studies in this field in many languages,
but few for the Vietnamese language. Therefore, this study aims to classify
Vietnamese texts on social media from three different Vietnamese benchmark
datasets. Advanced deep learning models are used and optimized in this study,
including CNN, LSTM, and their variants. We also implement BERT, which has
not previously been applied to these datasets. Our experiments find a suitable model for
classification tasks on each specific dataset. To take advantage of single
models, we propose an ensemble model, combining the highest-performance models.
Our single models reach positive results on each dataset. Moreover, our
ensemble model achieves the best performance on all three datasets. We reach
an F1-score of 86.96% on the HSD-VLSP dataset, 65.79% on the UIT-VSMEC dataset,
and 92.79% and 89.70% for sentiment and topic classification on the UIT-VSFC
dataset, respectively. Our models therefore achieve better performance
compared to previous studies on these datasets.
| 2,020 | Computation and Language |
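
The ensemble described in the abstract above combines the strongest single models; one common realisation is soft voting over each model's predicted class probabilities. The sketch below illustrates that generic pattern only; the models, weights, and data are placeholders, not the paper's setup.

```python
import numpy as np

def soft_vote(prob_matrices, weights=None):
    """Average class-probability matrices from several models.

    prob_matrices: list of arrays, each of shape (n_samples, n_classes).
    weights: optional per-model weights (e.g., validation F1 scores).
    """
    stacked = np.stack(prob_matrices)              # (n_models, n_samples, n_classes)
    avg = np.average(stacked, axis=0, weights=weights)
    return avg.argmax(axis=1)                      # predicted class per sample

# Toy example with two "models" over 3 samples and 2 classes.
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p2 = np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]])
print(soft_vote([p1, p2], weights=[0.6, 0.4]))     # -> [0 1 1]
```
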
Reactive Supervision: A New Method for Collecting Sarcasm Data | Sarcasm detection is an important task in affective computing, requiring
large amounts of labeled data. We introduce reactive supervision, a novel data
collection method that utilizes the dynamics of online conversations to
overcome the limitations of existing data collection techniques. We use the new
method to create and release a first-of-its-kind large dataset of tweets with
sarcasm perspective labels and new contextual features. The dataset is expected
to advance sarcasm detection research. Our method can be adapted to other
affective computing domains, thus opening up new research opportunities.
| 2,020 | Computation and Language |
What Disease does this Patient Have? A Large-scale Open Domain Question
Answering Dataset from Medical Exams | Open domain question answering (OpenQA) tasks have been recently attracting
more and more attention from the natural language processing (NLP) community.
In this work, we present the first free-form multiple-choice OpenQA dataset for
solving medical problems, MedQA, collected from the professional medical board
exams. It covers three languages: English, simplified Chinese, and traditional
Chinese, and contains 12,723, 34,251, and 14,123 questions for the three
languages, respectively. We implement both rule-based and popular neural
methods by sequentially combining a document retriever and a machine
comprehension model. Through experiments, we find that even the current best
method achieves only 36.7%, 42.0%, and 70.1% test accuracy on the
English, traditional Chinese, and simplified Chinese questions, respectively.
We expect MedQA to present great challenges to existing OpenQA systems and hope
that it can serve as a platform to promote much stronger OpenQA models from the
NLP community in the future.
| 2,020 | Computation and Language |
Deep Transformers with Latent Depth | The Transformer model has achieved state-of-the-art performance in many
sequence modeling tasks. However, how to leverage model capacity with large or
variable depths is still an open challenge. We present a probabilistic
framework to automatically learn which layer(s) to use by learning the
posterior distributions of layer selection. As an extension of this framework,
we propose a novel method to train one shared Transformer network for
multilingual machine translation with different layer selection posteriors for
each language pair. The proposed method alleviates the vanishing gradient issue
and enables stable training of deep Transformers (e.g. 100 layers). We evaluate
on WMT English-German machine translation and masked language modeling tasks,
where our method outperforms existing approaches for training deeper
Transformers. Experiments on multilingual machine translation demonstrate that
this approach can effectively leverage increased model capacity and bring
universal improvement for both many-to-one and one-to-many translation with
diverse language pairs.
| 2,020 | Computation and Language |
Neural Baselines for Word Alignment | Word alignments identify translational correspondences between words in a
parallel sentence pair and are used, for instance, to learn bilingual
dictionaries, to train statistical machine translation systems, or to perform
quality estimation. In most areas of natural language processing, neural
network models nowadays constitute the preferred approach, a situation that
might also apply to word alignment models. In this work, we study and
comprehensively evaluate neural models for unsupervised word alignment for four
language pairs, contrasting several variants of neural models. We show that in
most settings, neural versions of the IBM-1 and hidden Markov models vastly
outperform their discrete counterparts. We also analyze typical alignment
errors of the baselines that our models overcome to illustrate both the benefits
and the limitations of these new models for morphologically rich languages.
| 2,020 | Computation and Language |
Generative latent neural models for automatic word alignment | Word alignments identify translational correspondences between words in a
parallel sentence pair and are used, for instance, to learn bilingual
dictionaries, to train statistical machine translation systems or to perform
quality estimation. Variational autoencoders have recently been used in various
areas of natural language processing to learn, in an unsupervised way, latent
representations that are useful for language generation tasks. In this paper,
we study these models for the task of word alignment and propose and assess
several evolutions of the vanilla variational autoencoder. We demonstrate that
these techniques can yield competitive results as compared to Giza++ and to a
strong neural network alignment system for two language pairs.
| 2,020 | Computation and Language |
Incomplete Utterance Rewriting as Semantic Segmentation | In recent years, the task of incomplete utterance rewriting has attracted
considerable attention. Previous works usually cast it as a machine translation task and
employ a sequence-to-sequence architecture with a copy mechanism. In this
paper, we present a novel and extensive approach, which formulates it as a
semantic segmentation task. Instead of generating from scratch, such a
formulation introduces edit operations and shapes the problem as prediction of
a word-level edit matrix. Benefiting from being able to capture both local and
global information, our approach achieves state-of-the-art performance on
several public datasets. Furthermore, our approach is four times faster than
the standard approach in inference.
| 2,020 | Computation and Language |
Knowledge-Aware Procedural Text Understanding with Multi-Stage Training | Procedural text describes dynamic state changes during a step-by-step natural
process (e.g., photosynthesis). In this work, we focus on the task of
procedural text understanding, which aims to comprehend such documents and
track entities' states and locations during a process. Although recent
approaches have achieved substantial progress, their results are far behind
human performance. Two challenges, the difficulty of commonsense reasoning and
data insufficiency, still remain unsolved, which require the incorporation of
external knowledge bases. Previous works on external knowledge injection
usually rely on noisy web mining tools and heuristic rules with limited
applicable scenarios. In this paper, we propose a novel KnOwledge-Aware
proceduraL text understAnding (KOALA) model, which effectively leverages
multiple forms of external knowledge in this task. Specifically, we retrieve
informative knowledge triples from ConceptNet and perform knowledge-aware
reasoning while tracking the entities. Besides, we employ a multi-stage
training schema which fine-tunes the BERT model over unlabeled data collected
from Wikipedia before further fine-tuning it on the final model. Experimental
results on two procedural text datasets, ProPara and Recipes, verify the
effectiveness of the proposed methods, in which our model achieves
state-of-the-art performance in comparison to various baselines.
| 2,021 | Computation and Language |
Energy-Based Reranking: Improving Neural Machine Translation Using
Energy-Based Models | The discrepancy between maximum likelihood estimation (MLE) and task measures
such as BLEU score has been studied before for autoregressive neural machine
translation (NMT) and resulted in alternative training algorithms (Ranzato et
al., 2016; Norouzi et al., 2016; Shen et al., 2016; Wu et al., 2018). However,
MLE training remains the de facto approach for autoregressive NMT because of
its computational efficiency and stability. Despite this mismatch between the
training objective and task measure, we notice that the samples drawn from an
MLE-trained NMT model support the desired distribution -- there are samples
with much higher BLEU scores compared to the beam decoding output. To benefit
from this observation, we train an energy-based model to mimic the behavior of
the task measure (i.e., the energy-based model assigns lower energy to samples
with higher BLEU scores), which results in a re-ranking algorithm based on
the samples drawn from NMT: energy-based re-ranking (EBR). We use both marginal
energy models (over target sentence) and joint energy models (over both source
and target sentences). Our EBR with the joint energy model consistently
improves the performance of the Transformer-based NMT: +4 BLEU points on
IWSLT'14 German-English, +3.0 BLEU points on Sinhala-English, and +1.2 BLEU on
WMT'16 English-German tasks.
| 2,021 | Computation and Language |
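
At inference time, the EBR abstract above reduces to a simple loop: sample candidate translations from the NMT model, score each with the trained energy network, and keep the lowest-energy candidate. The sketch below shows that loop with a toy stand-in energy model; the real marginal or joint energy network and the sampling procedure are not specified here and are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Toy "energy model": embeds token ids and maps the mean embedding to a scalar.
# In EBR this would be a trained marginal or joint energy network.
class ToyEnergyModel(nn.Module):
    def __init__(self, vocab_size=100, dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids):                      # (batch, seq_len)
        return self.head(self.emb(token_ids).mean(dim=1)).squeeze(-1)

def rerank(candidate_id_lists, energy_model):
    """Return candidates sorted by ascending energy (lower = preferred)."""
    with torch.no_grad():
        energies = [energy_model(torch.tensor([ids])).item()
                    for ids in candidate_id_lists]
    order = sorted(range(len(energies)), key=lambda i: energies[i])
    return [(candidate_id_lists[i], energies[i]) for i in order]

model = ToyEnergyModel()
samples = [[5, 7, 9], [5, 8, 9, 2], [3, 1]]            # sampled NMT outputs (token ids)
print(rerank(samples, model)[0])                       # lowest-energy candidate
```
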
Dissecting Lottery Ticket Transformers: Structural and Behavioral Study
of Sparse Neural Machine Translation | Recent work on the lottery ticket hypothesis has produced highly sparse
Transformers for NMT while maintaining BLEU. However, it is unclear how such
pruning techniques affect a model's learned representations. By probing
Transformers with more and more low-magnitude weights pruned away, we find that
complex semantic information is first to be degraded. Analysis of internal
activations reveals that higher layers diverge most over the course of pruning,
gradually becoming less complex than their dense counterparts. Meanwhile, early
layers of sparse models begin to perform more encoding. Attention mechanisms
remain remarkably consistent as sparsity increases.
| 2,020 | Computation and Language |
Augmented Natural Language for Generative Sequence Labeling | We propose a generative framework for joint sequence labeling and
sentence-level classification. Our model performs multiple sequence labeling
tasks at once using a single, shared natural language output space. Unlike
prior discriminative methods, our model naturally incorporates label semantics
and shares knowledge across tasks. Our framework is general purpose, performing
well on few-shot, low-resource, and high-resource tasks. We demonstrate these
advantages on popular named entity recognition, slot labeling, and intent
classification benchmarks. We set a new state-of-the-art for few-shot slot
labeling, improving substantially upon the previous 5-shot ($75.0\% \rightarrow
90.9\%$) and 1-shot ($70.4\% \rightarrow 81.0\%$) state-of-the-art results.
Furthermore, our model generates large improvements ($46.27\% \rightarrow
63.83\%$) in low-resource slot labeling over a BERT baseline by incorporating
label semantics. We also maintain competitive results on high-resource tasks,
performing within two points of the state-of-the-art on all tasks and setting a
new state-of-the-art on the SNIPS dataset.
| 2,020 | Computation and Language |
Zero-shot Multi-Domain Dialog State Tracking Using Descriptive Rules | In this work, we present a framework for incorporating descriptive logical
rules in state-of-the-art neural networks, enabling them to learn how to handle
unseen labels without the introduction of any new training data. The rules are
integrated into existing networks without modifying their architecture, through
an additional term in the network's loss function that penalizes states of the
network that do not obey the designed rules. As a case study, the framework
is applied to an existing neural-based Dialog State Tracker. Our experiments
demonstrate that the inclusion of logical rules allows the prediction of unseen
labels, without deteriorating the predictive capacity of the original system.
| 2,020 | Computation and Language |
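
The abstract above adds a loss term that penalises network states violating designed logical rules. Below is a minimal sketch of that idea, assuming simple implication rules over label probabilities; the rule form, penalty weight, and toy classifier are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rule_violation_penalty(probs, implies):
    """Penalize predictions that violate simple implication rules.

    probs:   (batch, n_labels) predicted label probabilities.
    implies: list of (a, b) pairs meaning "label a implies label b";
             the penalty grows when p(a) exceeds p(b).
    """
    penalty = probs.new_zeros(())
    for a, b in implies:
        penalty = penalty + F.relu(probs[:, a] - probs[:, b]).mean()
    return penalty

# Toy usage: 2 examples, 3 labels, one rule "label 0 implies label 2".
logits = torch.randn(2, 3, requires_grad=True)
probs = torch.sigmoid(logits)
targets = torch.tensor([[1., 0., 1.], [0., 1., 0.]])

task_loss = F.binary_cross_entropy(probs, targets)
loss = task_loss + 0.5 * rule_violation_penalty(probs, implies=[(0, 2)])
loss.backward()
print(float(loss))
```
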
Graph-based Multi-hop Reasoning for Long Text Generation | Long text generation is an important but challenging task. The main problem
lies in learning sentence-level semantic dependencies, which traditional
generative models often suffer from. To address this problem, we propose a
Multi-hop Reasoning Generation (MRG) approach that incorporates multi-hop
reasoning over a knowledge graph to learn semantic dependencies among
sentences. MRG consists of two parts: a graph-based multi-hop reasoning module
and a path-aware sentence realization module. The reasoning module is
responsible for searching skeleton paths from a knowledge graph to imitate the
imagination process in the human writing for semantic transfer. Based on the
inferred paths, the sentence realization module then generates a complete
sentence. Unlike previous black-box models, MRG explicitly infers the skeleton
path, which provides explanatory views to understand how the proposed model
works. We conduct experiments on three representative tasks, including story
generation, review generation, and product description generation. Automatic
and manual evaluations show that our proposed method can generate more
informative and coherent long text than strong baselines, such as pre-trained
models (e.g., GPT-2) and knowledge-enhanced models.
| 2,020 | Computation and Language |
Pchatbot: A Large-Scale Dataset for Personalized Chatbot | Natural language dialogue systems have attracted great attention recently. As many
dialogue models are data-driven, high-quality datasets are essential to these
systems. In this paper, we introduce Pchatbot, a large-scale dialogue dataset
that contains two subsets collected from Weibo and Judicial forums
respectively. To adapt the raw dataset to dialogue systems, we elaborately
normalize the raw dataset via processes such as anonymization, deduplication,
segmentation, and filtering. The scale of Pchatbot is significantly larger than
existing Chinese datasets, which might benefit data-driven models. In addition,
current dialogue datasets for personalized chatbots usually contain several
persona sentences or attributes. Different from existing datasets, Pchatbot
provides anonymized user IDs and timestamps for both posts and responses. This
enables the development of personalized dialogue models that directly learn
implicit user personality from the user's dialogue history. Our preliminary
experimental study benchmarks several state-of-the-art dialogue models to
provide a comparison for future work. The dataset can be publicly accessed at
Github.
| 2,021 | Computation and Language |
A Diagnostic Study of Explainability Techniques for Text Classification | Recent developments in machine learning have introduced models that approach
human performance at the cost of increased architectural complexity. Efforts to
make the rationales behind the models' predictions transparent have inspired an
abundance of new explainability techniques. Provided with an already trained
model, they compute saliency scores for the words of an input instance.
However, there exists no definitive guide on (i) how to choose such a technique
given a particular application task and model architecture, and (ii) the
benefits and drawbacks of using each such technique. In this paper, we develop
a comprehensive list of diagnostic properties for evaluating existing
explainability techniques. We then employ the proposed list to compare a set of
diverse explainability techniques on downstream text classification tasks and
neural network architectures. We also compare the saliency scores assigned by
the explainability techniques with human annotations of salient input regions
to find relations between a model's performance and the agreement of its
rationales with human ones. Overall, we find that the gradient-based
explanations perform best across tasks and model architectures, and we present
further insights into the properties of the reviewed explainability techniques.
| 2,020 | Computation and Language |
Learning to Match Jobs with Resumes from Sparse Interaction Data using
Multi-View Co-Teaching Network | With the ever-increasing growth of online recruitment data, job-resume
matching has become an important task to automatically match jobs with suitable
resumes. This task is typically cast as a supervised text matching problem.
Supervised learning is powerful when the labeled data is sufficient. However,
on online recruitment platforms, job-resume interaction data is sparse and
noisy, which affects the performance of job-resume match algorithms. To
alleviate these problems, in this paper, we propose a novel multi-view
co-teaching network from sparse interaction data for job-resume matching. Our
network consists of two major components, namely text-based matching model and
relation-based matching model. The two parts capture semantic compatibility in
two different views, and complement each other. In order to address the
challenges from sparse and noisy data, we design two specific strategies to
combine the two components. First, two components share the learned parameters
or representations, so that the original representations of each component can
be enhanced. More importantly, we adopt a co-teaching mechanism to reduce the
influence of noise in training data. The core idea is to let the two components
help each other by selecting more reliable training instances. The two
strategies focus on representation enhancement and data enhancement,
respectively. Compared with pure text-based matching models, the proposed
approach is able to learn better data representations from limited or even
sparse interaction data and is more resistant to noise in training data.
Experimental results demonstrate that our model is able to outperform
state-of-the-art methods for job-resume matching.
| 2,020 | Computation and Language |
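
The co-teaching mechanism mentioned in the abstract above is a known training pattern: two models each select their smallest-loss (most reliable) examples and hand them to the other for its update. The sketch below shows one generic step of that pattern; the tiny linear models, keep ratio, and random data are placeholders rather than the paper's text-based and relation-based components.

```python
import torch
import torch.nn.functional as F

def co_teaching_step(model_a, model_b, opt_a, opt_b, x, y, keep_ratio=0.8):
    """One co-teaching update: each model is trained only on the examples
    its peer considers most reliable (smallest loss)."""
    loss_a = F.cross_entropy(model_a(x), y, reduction="none")
    loss_b = F.cross_entropy(model_b(x), y, reduction="none")

    k = max(1, int(keep_ratio * len(y)))
    idx_for_b = torch.topk(-loss_a, k).indices   # A picks clean examples for B
    idx_for_a = torch.topk(-loss_b, k).indices   # B picks clean examples for A

    opt_a.zero_grad()
    F.cross_entropy(model_a(x[idx_for_a]), y[idx_for_a]).backward()
    opt_a.step()

    opt_b.zero_grad()
    F.cross_entropy(model_b(x[idx_for_b]), y[idx_for_b]).backward()
    opt_b.step()

# Toy usage with two tiny classifiers over random features.
model_a = torch.nn.Linear(8, 3)
model_b = torch.nn.Linear(8, 3)
opt_a = torch.optim.SGD(model_a.parameters(), lr=0.1)
opt_b = torch.optim.SGD(model_b.parameters(), lr=0.1)
x, y = torch.randn(32, 8), torch.randint(0, 3, (32,))
co_teaching_step(model_a, model_b, opt_a, opt_b, x, y)
```
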
Reducing Quantity Hallucinations in Abstractive Summarization | It is well-known that abstractive summaries are subject to
hallucination---including material that is not supported by the original text.
While summaries can be made hallucination-free by limiting them to general
phrases, such summaries would fail to be very informative. Alternatively, one
can try to avoid hallucinations by verifying that any specific entities in the
summary appear in the original text in a similar context. This is the approach
taken by our system, Herman. The system learns to recognize and verify quantity
entities (dates, numbers, sums of money, etc.) in a beam-worth of abstractive
summaries produced by state-of-the-art models, in order to up-rank those
summaries whose quantity terms are supported by the original text. Experimental
results demonstrate that the ROUGE scores of such up-ranked summaries have a
higher Precision than summaries that have not been up-ranked, without a
comparable loss in Recall, resulting in higher F$_1$. Preliminary human
evaluation of up-ranked vs. original summaries shows people's preference for
the former.
| 2,020 | Computation and Language |
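
Herman, as described above, verifies quantity entities in candidate summaries against the source text before up-ranking. The snippet below is a deliberately crude, rule-based stand-in for that learned verifier; the regex quantity matcher, substring support check, and toy beam are illustrative assumptions, not the paper's model.

```python
import re

QUANTITY = re.compile(r"\$?\d+(?:[.,]\d+)*%?")   # crude matcher for numbers, sums, percents

def supported_quantities(summary, source):
    """Fraction of quantity-like tokens in the summary that also occur in the source."""
    quantities = QUANTITY.findall(summary)
    if not quantities:
        return 1.0                                # nothing to verify
    return sum(q in source for q in quantities) / len(quantities)

def uprank(beam, source):
    """Re-order a beam of candidate summaries, preferring supported quantities."""
    return sorted(beam, key=lambda s: supported_quantities(s, source), reverse=True)

source = "The company reported revenue of $4.2 billion in 2019, up 8% year on year."
beam = [
    "Revenue rose 12% to $5 billion in 2019.",    # hallucinated quantities
    "Revenue rose 8% to $4.2 billion in 2019.",   # supported quantities
]
print(uprank(beam, source)[0])
```
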
Similarity Detection Pipeline for Crawling a Topic Related Fake News
Corpus | Fake news detection is a challenging task aiming to reduce human time and
effort to check the truthfulness of news. Automated approaches to combat fake
news, however, are limited by the lack of labeled benchmark datasets,
especially in languages other than English. Moreover, many publicly available
corpora have specific limitations that make them difficult to use. To address
this problem, our contribution is threefold. First, we propose a new, publicly
available German topic related corpus for fake news detection. To the best of
our knowledge, this is the first corpus of its kind. Second, we
developed a pipeline for crawling similar news articles. As our third
contribution, we conduct different learning experiments to detect fake news.
The best performance was achieved using sentence level embeddings from SBERT in
combination with a Bi-LSTM (k=0.88).
| 2,021 | Computation and Language |
Identifying Automatically Generated Headlines using Transformers | False information spread via the internet and social media influences public
opinion and user activity, while generative models enable fake content to be
generated faster and more cheaply than had previously been possible. In the not
so distant future, identifying fake content generated by deep learning models
will play a key role in protecting users from misinformation. To this end, a
dataset containing human and computer-generated headlines was created and a
user study indicated that humans were only able to identify the fake headlines
in 47.8% of the cases. However, the most accurate automatic approach,
transformers, achieved an overall accuracy of 85.7%, indicating that content
generated from language models can be filtered out accurately.
| 2,021 | Computation and Language |
Aspects of Terminological and Named Entity Knowledge within Rule-Based
Machine Translation Models for Under-Resourced Neural Machine Translation
Scenarios | Rule-based machine translation is a machine translation paradigm where
linguistic knowledge is encoded by an expert in the form of rules that
translate text from source to target language. While this approach grants
extensive control over the output of the system, the cost of formalising the
needed linguistic knowledge is much higher than training a corpus-based system,
where a machine learning approach is used to automatically learn to translate
from examples. In this paper, we describe different approaches to leverage the
information contained in rule-based machine translation systems to improve a
corpus-based one, namely, a neural machine translation model, with a focus on a
low-resource scenario. Three different kinds of information were used:
morphological information, named entities and terminology. In addition to
evaluating the general performance of the system, we systematically analysed
the performance of the proposed approaches when dealing with the targeted
phenomena. Our results suggest that the proposed models have limited ability to
learn from external information, and most approaches do not significantly alter
the results of the automatic evaluation, but our preliminary qualitative
evaluation shows that in certain cases the hypotheses generated by our system
exhibit favourable behaviour, such as preserving the use of the passive voice.
| 2,020 | Computation and Language |
Injecting Entity Types into Entity-Guided Text Generation | Recent successes in deep generative modeling have led to significant advances
in natural language generation (NLG). Incorporating entities into neural
generation models has demonstrated great improvements by helping to infer the
summary topic and to generate coherent content. To enhance the role of entities
in NLG, in this paper, we aim to model the entity type in the decoding phase to
generate contextual words accurately. We develop a novel NLG model to produce a
target sequence based on a given list of entities. Our model has a multi-step
decoder that injects the entity types into the process of entity mention
generation. Experiments on two public news datasets demonstrate type injection
performs better than existing type embedding concatenation baselines.
| 2,021 | Computation and Language |
PIN: A Novel Parallel Interactive Network for Spoken Language
Understanding | Spoken Language Understanding (SLU) is an essential part of the spoken
dialogue system, which typically consists of intent detection (ID) and slot
filling (SF) tasks. Recently, recurrent neural network (RNN) based methods have
achieved state-of-the-art results for SLU. It is noted that, in the existing
RNN-based approaches, ID and SF tasks are often jointly modeled to utilize the
correlation information between them. However, so far, efforts to obtain
better performance by supporting bidirectional and explicit
information exchange between ID and SF have not been well studied. In addition, few
studies attempt to capture local context information to enhance the
performance of SF. Motivated by these findings, in this paper, Parallel
Interactive Network (PIN) is proposed to model the mutual guidance between ID
and SF. Specifically, given an utterance, a Gaussian self-attentive encoder is
introduced to generate the context-aware feature embedding of the utterance
which is able to capture local context information. Taking the feature
embedding of the utterance, Slot2Intent module and Intent2Slot module are
developed to capture the bidirectional information flow for ID and SF tasks.
Finally, a cooperation mechanism is constructed to fuse the information
obtained from the Slot2Intent and Intent2Slot modules to further reduce the
prediction bias. Experiments on two benchmark datasets, i.e., SNIPS and
ATIS, demonstrate the effectiveness of our approach, which achieves a
competitive result with state-of-the-art models. More encouragingly, by using
the feature embedding of the utterance generated by the pre-trained language
model BERT, our method achieves the state-of-the-art among all comparison
approaches.
| 2,020 | Computation and Language |
DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented
Dialogue | A long-standing goal of task-oriented dialogue research is the ability to
flexibly adapt dialogue models to new domains. To progress research in this
direction, we introduce DialoGLUE (Dialogue Language Understanding Evaluation),
a public benchmark consisting of 7 task-oriented dialogue datasets covering 4
distinct natural language understanding tasks, designed to encourage dialogue
research in representation-based transfer, domain adaptation, and
sample-efficient task learning. We release several strong baseline models,
demonstrating performance improvements over a vanilla BERT architecture and
state-of-the-art results on 5 out of 7 tasks, by pre-training on a large
open-domain dialogue corpus and task-adaptive self-supervised training. Through
the DialoGLUE benchmark, the baseline methods, and our evaluation scripts, we
hope to facilitate progress towards the goal of developing more general
task-oriented dialogue models.
| 2,020 | Computation and Language |
Non-Pharmaceutical Intervention Discovery with Topic Modeling | We consider the task of discovering categories of non-pharmaceutical
interventions during the evolving COVID-19 pandemic. We explore topic modeling
on two corpora with national and international scope. These models discover
existing categories when compared with human intervention labels, while reducing
the human effort needed.
| 2,020 | Computation and Language |
Visual Pivoting for (Unsupervised) Entity Alignment | This work studies the use of visual semantic representations to align
entities in heterogeneous knowledge graphs (KGs). Images are natural components
of many existing KGs. By combining visual knowledge with other auxiliary
information, we show that the proposed new approach, EVA, creates a holistic
entity representation that provides strong signals for cross-graph entity
alignment. In addition, previous entity alignment methods require human-labelled
seed alignments, which restricts availability. EVA provides a completely
unsupervised solution by leveraging the visual similarity of entities to create
an initial seed dictionary (visual pivots). Experiments on benchmark data sets
DBP15k and DWY15k show that EVA offers state-of-the-art performance on both
monolingual and cross-lingual entity alignment tasks. Furthermore, we discover
that images are particularly useful to align long-tail KG entities, which
inherently lack the structural contexts necessary for capturing the
correspondences.
| 2,020 | Computation and Language |
Conversational Semantic Parsing | The structured representation for semantic parsing in task-oriented assistant
systems is geared towards simple understanding of one-turn queries. Due to the
limitations of the representation, the session-based properties such as
co-reference resolution and context carryover are processed downstream in a
pipelined system. In this paper, we propose a semantic representation for such
task-oriented conversational systems that can represent concepts such as
co-reference and context carryover, enabling comprehensive understanding of
queries in a session. We release a new session-based, compositional
task-oriented parsing dataset of 20k sessions consisting of 60k utterances.
Unlike Dialog State Tracking Challenges, the queries in the dataset have
compositional forms. We propose a new family of Seq2Seq models for the
session-based parsing above, which achieve better or comparable performance to
the current state-of-the-art on ATIS, SNIPS, TOP and DSTC2. Notably, we improve
the best known results on DSTC2 by up to 5 points for slot-carryover.
| 2,020 | Computation and Language |
Learning Knowledge Bases with Parameters for Task-Oriented Dialogue
Systems | Task-oriented dialogue systems are either modularized with separate dialogue
state tracking (DST) and management steps or end-to-end trainable. In either
case, the knowledge base (KB) plays an essential role in fulfilling user
requests. Modularized systems rely on DST to interact with the KB, which is
expensive in terms of annotation and inference time. End-to-end systems use the
KB directly as input, but they cannot scale when the KB is larger than a few
hundred entries. In this paper, we propose a method to embed the KB, of any
size, directly into the model parameters. The resulting model does not require
any DST or template responses, nor the KB as input, and it can dynamically
update its KB via fine-tuning. We evaluate our solution in five task-oriented
dialogue datasets with small, medium, and large KB size. Our experiments show
that end-to-end models can effectively embed knowledge bases in their
parameters and achieve competitive performance in all evaluated datasets.
| 2,020 | Computation and Language |
Improve Transformer Models with Better Relative Position Embeddings | Transformer architectures rely on explicit position encodings in order to
preserve a notion of word order. In this paper, we argue that existing work
does not fully utilize position information. For example, the initial proposal
of a sinusoid embedding is fixed and not learnable. In this paper, we first
review absolute position embeddings and existing methods for relative position
embeddings. We then propose new techniques that encourage increased interaction
between query, key and relative position embeddings in the self-attention
mechanism. Our most promising approach is a generalization of the absolute
position embedding, improving results on SQuAD1.1 compared to previous position
embedding approaches. In addition, we address the inductive property of
whether a position embedding can be robust enough to handle long sequences. We
demonstrate empirically that our relative position embedding method is
reasonably generalized and robust from the inductive perspective. Finally, we
show that our proposed method can be adopted as a near drop-in replacement for
improving the accuracy of large models with a small computational budget.
| 2,020 | Computation and Language |
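
The relative-position abstract above centres on letting queries, keys, and relative-position embeddings interact inside self-attention. The sketch below shows a generic additive query-position term in the style of earlier relative-position work (Shaw et al., 2018), not the specific generalisations proposed in the paper; the shapes and clipping distance are illustrative assumptions.

```python
import torch

def attention_scores_with_relative_positions(q, k, rel_emb, max_dist=4):
    """Self-attention scores with an additive query-relative-position term.

    q, k:     (seq_len, d) query and key matrices for one head.
    rel_emb:  (2 * max_dist + 1, d) embeddings for clipped relative distances.
    Returns   (seq_len, seq_len) unnormalized attention scores.
    """
    seq_len, d = q.shape
    content = q @ k.T                                        # query-key interaction
    pos = torch.arange(seq_len)
    rel = (pos[None, :] - pos[:, None]).clamp(-max_dist, max_dist) + max_dist
    position = torch.einsum("id,ijd->ij", q, rel_emb[rel])   # query-position interaction
    return (content + position) / d ** 0.5

q = torch.randn(5, 8)
k = torch.randn(5, 8)
rel_emb = torch.randn(9, 8)                                  # distances -4..4
print(attention_scores_with_relative_positions(q, k, rel_emb).shape)  # (5, 5)
```
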
Leader: Prefixing a Length for Faster Word Vector Serialization | Two competing file formats have become the de facto standards for
distributing pre-trained word embeddings. Both are named after the most popular
pre-trained embeddings that are distributed in that format. The GloVe format is
an entirely text based format that suffers from huge file sizes and slow reads,
and the word2vec format is a smaller binary format that mixes a textual
representation of words with a binary representation of the vectors themselves.
Both formats have problems that we solve with a new format we call the Leader
format. We include a word length prefix for faster reads while maintaining the
smaller file size a binary format offers. We also created a minimalist library
to facilitate the reading and writing of various word vector formats, as well
as tools for converting pre-trained embeddings to our new Leader format.
| 2,020 | Computation and Language |
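
The Leader entry above is essentially a file-format proposal: keep vectors binary, but prefix each word with its byte length so readers avoid the delimiter scan the word2vec format requires. The exact byte layout is not given in the abstract, so the sketch below only illustrates the length-prefix idea with an assumed header.

```python
import struct
import numpy as np

def write_vectors(path, vectors, dim):
    """Write word vectors with a length prefix before each word (illustrative layout)."""
    with open(path, "wb") as f:
        f.write(struct.pack("<ii", len(vectors), dim))         # header: count, dim
        for word, vec in vectors.items():
            encoded = word.encode("utf-8")
            f.write(struct.pack("<i", len(encoded)))           # word length prefix
            f.write(encoded)
            f.write(np.asarray(vec, dtype=np.float32).tobytes())

def read_vectors(path):
    vectors = {}
    with open(path, "rb") as f:
        count, dim = struct.unpack("<ii", f.read(8))
        for _ in range(count):
            (word_len,) = struct.unpack("<i", f.read(4))
            word = f.read(word_len).decode("utf-8")            # no byte-by-byte scan needed
            vectors[word] = np.frombuffer(f.read(4 * dim), dtype=np.float32)
    return vectors

write_vectors("toy.leader", {"cat": [0.1, 0.2], "dog": [0.3, 0.4]}, dim=2)
print(read_vectors("toy.leader")["cat"])
```

With the prefix, each record becomes two fixed-size reads (length, then payload), which is where the speed-up over scanning for delimiters comes from.
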
Double Graph Based Reasoning for Document-level Relation Extraction | Document-level relation extraction aims to extract relations among entities
within a document. Different from sentence-level relation extraction, it
requires reasoning over multiple sentences across a document. In this paper, we
propose Graph Aggregation-and-Inference Network (GAIN) featuring double graphs.
GAIN first constructs a heterogeneous mention-level graph (hMG) to model
complex interaction among different mentions across the document. It also
constructs an entity-level graph (EG), based on which we propose a novel path
reasoning mechanism to infer relations between entities. Experiments on the
public dataset, DocRED, show GAIN achieves a significant performance
improvement (2.85 on F1) over the previous state-of-the-art. Our code is
available at https://github.com/DreamInvoker/GAIN .
| 2,020 | Computation and Language |
Neural Retrieval for Question Answering with Cross-Attention Supervised
Data Augmentation | Neural models that independently project questions and answers into a shared
embedding space allow for efficient continuous space retrieval from large
corpora. Independently computing embeddings for questions and answers results
in late fusion of information related to matching questions to their answers.
While critical for efficient retrieval, late fusion underperforms models that
make use of early fusion (e.g., a BERT based classifier with cross-attention
between question-answer pairs). We present a supervised data mining method
using an accurate early fusion model to improve the training of an efficient
late fusion retrieval model. We first train an accurate classification model
with cross-attention between questions and answers. The accurate
cross-attention model is then used to annotate additional passages in order to
generate weighted training examples for a neural retrieval model. The resulting
retrieval model with additional data significantly outperforms retrieval models
directly trained with gold annotations on Precision at $N$ (P@N) and Mean
Reciprocal Rank (MRR).
| 2,020 | Computation and Language |
A Simple but Tough-to-Beat Data Augmentation Approach for Natural
Language Understanding and Generation | Adversarial training has been shown effective at endowing the learned
representations with stronger generalization ability. However, it typically
requires expensive computation to determine the direction of the injected
perturbations. In this paper, we introduce a set of simple yet effective data
augmentation strategies dubbed cutoff, where part of the information within an
input sentence is erased to yield its restricted views (during the fine-tuning
stage). Notably, this process relies merely on stochastic sampling and thus
adds little computational overhead. A Jensen-Shannon Divergence consistency
loss is further utilized to incorporate these augmented samples into the
training objective in a principled manner. To verify the effectiveness of the
proposed strategies, we apply cutoff to both natural language understanding and
generation problems. On the GLUE benchmark, it is demonstrated that cutoff, in
spite of its simplicity, performs on par or better than several competitive
adversarial-based approaches. We further extend cutoff to machine translation
and observe significant gains in BLEU scores (based upon the Transformer Base
model). Moreover, cutoff consistently outperforms adversarial training and
achieves state-of-the-art results on the IWSLT2014 German-English dataset.
| 2,020 | Computation and Language |
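
Cutoff, as described above, erases part of an input to produce a restricted view and ties the predictions on the original and augmented views together with a Jensen-Shannon consistency loss. The sketch below illustrates one span-cutoff variant with a toy classifier; the encoder, cutoff granularity, and loss weight are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn.functional as F

def token_cutoff(input_ids, mask_token_id, cutoff_ratio=0.15):
    """Erase a random contiguous span of tokens to create a restricted view."""
    ids = input_ids.clone()
    seq_len = ids.size(1)
    span = max(1, int(cutoff_ratio * seq_len))
    start = torch.randint(0, seq_len - span + 1, (1,)).item()
    ids[:, start:start + span] = mask_token_id
    return ids

def js_consistency(p_logits, q_logits):
    """Jensen-Shannon divergence between predictions on original and cutoff views."""
    p = F.softmax(p_logits, dim=-1)
    q = F.softmax(q_logits, dim=-1)
    m = 0.5 * (p + q)
    return 0.5 * (F.kl_div(m.log(), p, reduction="batchmean")
                  + F.kl_div(m.log(), q, reduction="batchmean"))

# Toy usage with a bag-of-embeddings classifier standing in for a fine-tuned encoder.
vocab, n_classes, mask_id = 100, 3, 0
model = torch.nn.Sequential(torch.nn.EmbeddingBag(vocab, 32), torch.nn.Linear(32, n_classes))
input_ids = torch.randint(1, vocab, (4, 20))
labels = torch.randint(0, n_classes, (4,))

logits = model(input_ids)
logits_cut = model(token_cutoff(input_ids, mask_id))
loss = F.cross_entropy(logits, labels) + 1.0 * js_consistency(logits, logits_cut)
loss.backward()
print(float(loss))
```
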
SynSetExpan: An Iterative Framework for Joint Entity Set Expansion and
Synonym Discovery | Entity set expansion and synonym discovery are two critical NLP tasks.
Previous studies accomplish them separately, without exploring their
interdependencies. In this work, we hypothesize that these two tasks are
tightly coupled because two synonymous entities tend to have similar
likelihoods of belonging to various semantic classes. This motivates us to
design SynSetExpan, a novel framework that enables two tasks to mutually
enhance each other. SynSetExpan uses a synonym discovery model to include
popular entities' infrequent synonyms into the set, which boosts the set
expansion recall. Meanwhile, the set expansion model, being able to determine
whether an entity belongs to a semantic class, can generate pseudo training
data to fine-tune the synonym discovery model towards better accuracy. To
facilitate the research on studying the interplays of these two tasks, we
create the first large-scale Synonym-Enhanced Set Expansion (SE2) dataset via
crowdsourcing. Extensive experiments on the SE2 dataset and previous benchmarks
demonstrate the effectiveness of SynSetExpan for both entity set expansion and
synonym discovery tasks.
| 2,020 | Computation and Language |
HINT3: Raising the bar for Intent Detection in the Wild | Intent Detection systems in the real world are exposed to complexities of
imbalanced datasets containing varying perception of intent, unintended
correlations and domain-specific aberrations. To facilitate benchmarking which
can reflect near real-world scenarios, we introduce 3 new datasets created from
live chatbots in diverse domains. Unlike most existing datasets that are
crowdsourced, our datasets contain real user queries received by the chatbots
and facilitate penalising unwanted correlations grasped during the training
process. We evaluate 4 NLU platforms and a BERT based classifier and find that
performance saturates at inadequate levels on test sets because all systems
latch on to unintended patterns in training data.
| 2,020 | Computation and Language |
GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing | We present GraPPa, an effective pre-training approach for table semantic
parsing that learns a compositional inductive bias in the joint representations
of textual and tabular data. We construct synthetic question-SQL pairs over
high-quality tables via a synchronous context-free grammar (SCFG) induced from
existing text-to-SQL datasets. We pre-train our model on the synthetic data
using a novel text-schema linking objective that predicts the syntactic role of
a table field in the SQL for each question-SQL pair. To maintain the model's
ability to represent real-world data, we also include masked language modeling
(MLM) over several existing table-and-language datasets to regularize the
pre-training process. On four popular fully supervised and weakly supervised
table semantic parsing benchmarks, GraPPa significantly outperforms
RoBERTa-large as the feature representation layers and establishes new
state-of-the-art results on all of them.
| 2,021 | Computation and Language |
Fake News Spreader Detection on Twitter using Character N-Grams.
Notebook for PAN at CLEF 2020 | The authors of fake news often use facts from verified news sources and mix
them with misinformation to create confusion and provoke unrest among the
readers. The spread of fake news can thereby have serious implications for our
society. Fake news can sway political elections, push down stock prices, or damage the
reputations of corporations or public figures. Several websites have taken on
the mission of checking rumors and allegations, but are often not fast enough
to check the content of all the news being disseminated. Especially social
media websites have offered an easy platform for the fast propagation of
information. Towards limiting fake news from being propagated among social
media users, the task of this year's PAN 2020 challenge lays the focus on the
fake news spreaders. The aim of the task is to determine whether it is possible
to discriminate authors that have shared fake news in the past from those that
have never done it. In this notebook, we describe our profiling system for the
fake news detection task on Twitter. For this, we conduct different feature
extraction techniques and learning experiments from a multilingual perspective,
namely English and Spanish. Our final submitted systems use character n-grams
as features in combination with a linear SVM for English and Logistic
Regression for the Spanish language. Our submitted models achieve an overall
accuracy of 73% and 79% on the English and Spanish official test set,
respectively. Our experiments show that it is difficult to reliably
differentiate fake news spreaders on Twitter from users who share credible
information, leaving room for further investigation. Our model ranked 3rd out
of 72 competitors.
| 2,020 | Computation and Language |
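
The submitted English system above is described as character n-grams fed to a linear SVM, which maps almost directly onto a standard scikit-learn pipeline. The sketch below shows that pipeline shape with invented placeholder feeds and labels; the real features come from the PAN 2020 author-profiling corpus, and the exact n-gram range is an assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder training data: one concatenated tweet feed per author, with
# 1 = known fake-news spreader, 0 = not. Real inputs come from the PAN corpus.
feeds = [
    "breaking!!! shocking cure they don't want you to know about...",
    "city council approves new budget for public libraries.",
    "share before they delete this!!! the truth about vaccines",
    "weather service issues flood warning for the weekend.",
]
labels = [1, 0, 1, 0]

# Character n-grams (here 2-5) with TF-IDF weighting, fed to a linear SVM.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5), min_df=1),
    LinearSVC(),
)
model.fit(feeds, labels)
print(model.predict(["unbelievable!!! the media is hiding this story"]))
```
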
Utility is in the Eye of the User: A Critique of NLP Leaderboards | Benchmarks such as GLUE have helped drive advances in NLP by incentivizing
the creation of more accurate models. While this leaderboard paradigm has been
remarkably successful, a historical focus on performance-based evaluation has
been at the expense of other qualities that the NLP community values in models,
such as compactness, fairness, and energy efficiency. In this opinion paper, we
study the divergence between what is incentivized by leaderboards and what is
useful in practice through the lens of microeconomic theory. We frame both the
leaderboard and NLP practitioners as consumers and the benefit they get from a
model as its utility to them. With this framing, we formalize how leaderboards
-- in their current form -- can be poor proxies for the NLP community at large.
For example, a highly inefficient model would provide less utility to
practitioners but not to a leaderboard, since it is a cost that only the former
must bear. To allow practitioners to better estimate a model's utility to them,
we advocate for more transparency on leaderboards, such as the reporting of
statistics that are of practical concern (e.g., model size, energy efficiency,
and inference latency).
| 2,021 | Computation and Language |
Sequence-to-Sequence Learning for Indonesian Automatic Question
Generator | Automatic question generation is defined as the task of automating the
creation of questions from various kinds of textual data. Research on automatic
question generators (AQG) has been conducted for more than 10 years, mainly
focused on factoid questions. In all these studies, the state of the art is
attained using a sequence-to-sequence approach. However, AQG systems for
Indonesian have not yet been studied intensively. In this work we construct an
Indonesian automatic question generator, adapting the architecture from some
previous works. In summary, we used sequence-to-sequence approach using BiGRU,
BiLSTM, and Transformer with additional linguistic features, copy mechanism,
and coverage mechanism. Since there is no large and popular public Indonesian
dataset for question generation, we translated the SQuAD v2.0 factoid question
answering dataset, with the Indonesian TyDiQA dev set used additionally for testing. The
system achieved BLEU1, BLEU2, BLEU3, BLEU4, and ROUGE-L scores of 38.35, 20.96,
10.68, 5.78, and 43.4 for SQuAD, and 39.9, 20.78, 10.26, 6.31, and 44.13 for
TyDiQA, respectively. The system performed well when the expected answers are
named entities and are syntactically close to the context explaining them.
Additionally, from a native Indonesian speaker's perspective, the best questions generated
by our best models in their best cases are acceptable and reasonably useful.
| 2,020 | Computation and Language |
Utterance-level Dialogue Understanding: An Empirical Study | The recent abundance of conversational data on the Web and elsewhere calls
for effective NLP systems for dialog understanding. Complete utterance-level
understanding often requires context understanding, defined by nearby
utterances. In recent years, a number of approaches have been proposed for
various utterance-level dialogue understanding tasks. Most of these approaches
account for the context for effective understanding. In this paper, we explore
and quantify the role of context for different aspects of a dialogue, namely
emotion, intent, and dialogue act identification, using state-of-the-art dialog
understanding methods as baselines. Specifically, we employ various
perturbations to distort the context of a given utterance and study its impact
on the different tasks and baselines. This provides us with insights into the
fundamental contextual controlling factors of different aspects of a dialogue.
Such insights can inspire more effective dialogue understanding models, and
provide support for future text generation approaches. The implementation
pertaining to this work is available at
https://github.com/declare-lab/dialogue-understanding.
| 2,020 | Computation and Language |
Aligning Intraobserver Agreement by Transitivity | Annotation reproducibility and accuracy rely on good consistency within
annotators. We propose a novel method for measuring within annotator
consistency or annotator Intraobserver Agreement (IA). The proposed approach is
based on transitivity, a measure that has been thoroughly studied in the
context of rational decision-making. The transitivity measure, in contrast with
the commonly used test-retest strategy for annotator IA, is less sensitive to
the several types of bias introduced by the test-retest strategy. We present a
representation theorem to the effect that relative judgement data that meet
transitivity can be mapped to a scale (in terms of measurement theory). We also
discuss a further application of transitivity as part of data collection design
for addressing the problem of the quadratic complexity of data collection of
relative judgements.
| 2,020 | Computation and Language |
CokeBERT: Contextual Knowledge Selection and Embedding towards Enhanced
Pre-Trained Language Models | Several recent efforts have been devoted to enhancing pre-trained language
models (PLMs) by utilizing extra heterogeneous knowledge in knowledge graphs
(KGs) and achieved consistent improvements on various knowledge-driven NLP
tasks. However, most of these knowledge-enhanced PLMs embed static sub-graphs
of KGs ("knowledge context"), regardless of that the knowledge required by PLMs
may change dynamically according to specific text ("textual context"). In this
paper, we propose a novel framework named Coke to dynamically select contextual
knowledge and embed knowledge context according to textual context for PLMs,
which can avoid the effect of redundant and ambiguous knowledge in KGs that
cannot match the input text. Our experimental results show that Coke
outperforms various baselines on typical knowledge-driven NLP tasks, indicating
the effectiveness of utilizing dynamic knowledge context for language
understanding. Besides the performance improvements, the dynamically selected
knowledge in Coke can describe the semantics of text-related knowledge in a
more interpretable form than the conventional PLMs. Our source code and
datasets will be available to provide more details for Coke.
| 2,021 | Computation and Language |
Neural Topic Modeling with Cycle-Consistent Adversarial Training | Advances on deep generative models have attracted significant research
interest in neural topic modeling. The recently proposed Adversarial-neural
Topic Model models topics with an adversarially trained generator network and
employs a Dirichlet prior to capture the semantic patterns in latent topics. It
is effective in discovering coherent topics but unable to infer topic
distributions for given documents or utilize available document labels. To
overcome such limitations, we propose Topic Modeling with Cycle-consistent
Adversarial Training (ToMCAT) and its supervised version sToMCAT. ToMCAT
employs a generator network to interpret topics and an encoder network to infer
document topics. Adversarial training and cycle-consistent constraints are used
to encourage the generator and the encoder to produce realistic samples that
coordinate with each other. sToMCAT extends ToMCAT by incorporating document
labels into the topic modeling process to help discover more coherent topics.
The effectiveness of the proposed models is evaluated on
unsupervised/supervised topic modeling and text classification. The
experimental results show that our models can produce both coherent and
informative topics, outperforming a number of competitive baselines.
| 2,020 | Computation and Language |
Neural Topic Modeling by Incorporating Document Relationship Graph | Graph Neural Networks (GNNs) that capture the relationships between graph
nodes via message passing have been a hot research direction in the natural
language processing community. In this paper, we propose Graph Topic Model
(GTM), a GNN based neural topic model that represents a corpus as a document
relationship graph. Documents and words in the corpus become nodes in the graph
and are connected based on document-word co-occurrences. By introducing the
graph structure, the relationships between documents are established through
their shared words and thus the topical representation of a document is
enriched by aggregating information from its neighboring nodes using graph
convolution. Extensive experiments on three datasets were conducted and the
results demonstrate the effectiveness of the proposed approach.
| 2,020 | Computation and Language |
Building Legal Case Retrieval Systems with Lexical Matching and
Summarization using A Pre-Trained Phrase Scoring Model | We present our method for tackling the legal case retrieval task of the
Competition on Legal Information Extraction/Entailment 2019. Our approach is
based on the idea that summarization is important for retrieval. On one hand,
we adopt a summarization-based model called encoded summarization, which encodes
a given document into a continuous vector space that embeds the summary
properties of the document. We utilize the resources of COLIEE 2018, on which we
train the document representation model. On the other hand, we extract lexical
features on different parts of a given query and its candidates. We observe
that by comparing different parts of the query and its candidates, we can
achieve better performance. Furthermore, the combination of the lexical
features with latent features by the summarization-based method achieves even
better performance. We have achieved the state-of-the-art result for the task
on the benchmark of the competition.
| 2,020 | Computation and Language |
Improving Low Compute Language Modeling with In-Domain Embedding
Initialisation | Many NLP applications, such as biomedical data and technical support, have
10-100 million tokens of in-domain data and limited computational resources for
learning from it. How should we train a language model in this scenario? Most
language modeling research considers either a small dataset with a closed
vocabulary (like the standard 1 million token Penn Treebank), or the whole web
with byte-pair encoding. We show that for our target setting in English,
initialising and freezing input embeddings using in-domain data can improve
language model performance by providing a useful representation of rare words,
and this pattern holds across several different domains. In the process, we
show that the standard convention of tying input and output embeddings does not
improve perplexity when initializing with embeddings trained on in-domain data.
| 2,020 | Computation and Language |
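
The recipe in the abstract above is operationally simple: train word vectors on the in-domain corpus, copy them into the language model's input embedding, and freeze that matrix (and, per the finding above, do not tie it to the output embedding). Below is a minimal PyTorch sketch of that initialisation step, with a placeholder dictionary standing in for the in-domain vectors.

```python
import torch
import torch.nn as nn

def build_frozen_embedding(vocab, pretrained, dim):
    """Initialise an input embedding from in-domain vectors and freeze it.

    vocab:      list of words in the language model's vocabulary.
    pretrained: dict word -> vector trained on in-domain text (e.g., word2vec).
    """
    weight = torch.randn(len(vocab), dim) * 0.1            # fallback for OOV words
    for i, word in enumerate(vocab):
        if word in pretrained:
            weight[i] = torch.tensor(pretrained[word])
    return nn.Embedding.from_pretrained(weight, freeze=True)

# Toy usage; in practice `pretrained` would come from embeddings trained on
# the 10-100 million tokens of in-domain data the abstract describes.
vocab = ["<unk>", "protein", "kinase", "binds"]
pretrained = {"protein": [0.2, 0.1, 0.0], "kinase": [0.1, 0.3, 0.2]}
emb = build_frozen_embedding(vocab, pretrained, dim=3)
print(emb(torch.tensor([1, 2])))                           # frozen in-domain rows
```
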
A Survey on Semantic Parsing from the perspective of Compositionality | Different from previous surveys on semantic parsing (Kamath and Das, 2018)
and knowledge base question answering (KBQA) (Chakraborty et al., 2019; Zhu et
al., 2019; Hoffner et al., 2017), we take a different perspective on the
study of semantic parsing. Specifically, we focus on (a) meaning
composition from syntactic structure (Partee, 1975), and (b) the ability of
semantic parsers to handle lexical variation given the context of a knowledge
base (KB). In the following sections, after an introduction to the field of
semantic parsing and its uses in KBQA, we describe meaning representation
using the grammar formalism CCG (Steedman, 1996). We discuss semantic
composition using formal languages in Section 2. In Section 3 we consider
systems that use formal languages, e.g. $\lambda$-calculus (Steedman, 1996) and
$\lambda$-DCS (Liang, 2013). Sections 4 and 5 consider semantic parsers that use
structured languages for logical forms. Section 6 covers the benchmark
datasets ComplexQuestions (Bao et al., 2016) and GraphQuestions (Su et al.,
2016), which can be used to evaluate semantic parsers on their ability to answer
complex questions that are highly compositional in nature.
| 2,021 | Computation and Language |
Parsing with Multilingual BERT, a Small Corpus, and a Small Treebank | Pretrained multilingual contextual representations have shown great success,
but due to the limits of their pretraining data, their benefits do not apply
equally to all language varieties. This presents a challenge for language
varieties unfamiliar to these models, whose labeled \emph{and unlabeled} data
is too limited to train a monolingual model effectively. We propose the use of
additional language-specific pretraining and vocabulary augmentation to adapt
multilingual models to low-resource settings. Using dependency parsing of four
diverse low-resource language varieties as a case study, we show that these
methods significantly improve performance over baselines, especially in the
lowest-resource cases, and demonstrate the importance of the relationship
between such models' pretraining data and target language varieties.
| 2,020 | Computation and Language |
Contrastive Distillation on Intermediate Representations for Language
Model Compression | Existing language model compression methods mostly use a simple L2 loss to
distill knowledge in the intermediate representations of a large BERT model to
a smaller one. Although widely used, this objective by design assumes that all
the dimensions of hidden representations are independent, failing to capture
important structural knowledge in the intermediate layers of the teacher
network. To achieve better distillation efficacy, we propose Contrastive
Distillation on Intermediate Representations (CoDIR), a principled knowledge
distillation framework where the student is trained to distill knowledge
through intermediate layers of the teacher via a contrastive objective. By
learning to distinguish a positive sample from a large set of negative samples,
CoDIR facilitates the student's exploitation of rich information in the teacher's
hidden layers. CoDIR can be readily applied to compress large-scale language
models in both pre-training and finetuning stages, and achieves superb
performance on the GLUE benchmark, outperforming state-of-the-art compression
methods.
| 2,020 | Computation and Language |
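To make the idea of a contrastive objective over intermediate layers concrete, the hedged sketch below computes an InfoNCE-style loss that pulls a student layer's pooled representation towards the teacher's representation of the same input and away from in-batch negatives. The mean pooling, matching hidden sizes, and temperature are illustrative assumptions; the actual CoDIR formulation may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_layer_loss(student_hidden, teacher_hidden, temperature=0.1):
    """InfoNCE-style loss between pooled student and teacher layer outputs.

    student_hidden, teacher_hidden: (batch, seq_len, dim) intermediate states.
    The teacher state for the same example is the positive; other examples in
    the batch serve as negatives.
    """
    s = F.normalize(student_hidden.mean(dim=1), dim=-1)   # (batch, dim)
    t = F.normalize(teacher_hidden.mean(dim=1), dim=-1)   # (batch, dim)
    logits = s @ t.T / temperature                         # (batch, batch)
    labels = torch.arange(s.size(0))                       # diagonal = positives
    return F.cross_entropy(logits, labels)

student = torch.randn(8, 32, 256)
teacher = torch.randn(8, 32, 256)
print(contrastive_layer_loss(student, teacher).item())
```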
Visually-Grounded Planning without Vision: Language Models Infer
Detailed Plans from High-level Instructions | The recently proposed ALFRED challenge task aims for a virtual robotic agent
to complete complex multi-step everyday tasks in a virtual home environment
from high-level natural language directives, such as "put a hot piece of bread
on a plate". Currently, the best-performing models are able to complete less
than 5% of these tasks successfully. In this work we focus on modeling the
translation problem of converting natural language directives into detailed
multi-step sequences of actions that accomplish those goals in the virtual
environment. We empirically demonstrate that it is possible to generate gold
multi-step plans from language directives alone without any visual input in 26%
of unseen cases. When a small amount of visual information is incorporated,
namely the starting location in the virtual environment, our best-performing
GPT-2 model successfully generates gold command sequences in 58% of cases. Our
results suggest that contextualized language models may provide strong visual
semantic planning modules for grounded virtual agents.
| 2,020 | Computation and Language |
Abusive Language Detection and Characterization of Twitter Behavior | In this work, abusive language detection in online content is performed using
a Bidirectional Recurrent Neural Network (BiRNN) method. The main objective is
to focus on various forms of abusive behavior on Twitter and to detect whether
speech is abusive or not. The results for various abusive behaviors in social
media are compared with Convolutional Neural Network (CNN) and Recurrent Neural
Network (RNN) methods, showing that the proposed BiRNN is a better deep
learning model for automatic abusive speech detection.
| 2,020 | Computation and Language |
TEST_POSITIVE at W-NUT 2020 Shared Task-3: Joint Event Multi-task
Learning for Slot Filling in Noisy Text | The goal of the COVID-19 event extraction competition on Twitter is to develop
systems that can automatically extract related events from tweets. The built
system should identify different pre-defined slots for each event, in order to
answer important questions (e.g., Who is tested positive? What is the age of
the person? Where is he/she?). To tackle these challenges, we propose the Joint
Event Multi-task Learning (JOELIN) model. Through a unified global learning
framework, we make use of all the training data across different events to
learn and fine-tune the language model. Moreover, we implement a type-aware
post-processing procedure using named entity recognition (NER) to further
filter the predictions. JOELIN outperforms the BERT baseline by 17.2% in micro
F1.
| 2,020 | Computation and Language |
Cross-lingual Alignment Methods for Multilingual BERT: A Comparative
Study | Multilingual BERT (mBERT) has shown reasonable capability for zero-shot
cross-lingual transfer when fine-tuned on downstream tasks. Since mBERT is not
pre-trained with explicit cross-lingual supervision, transfer performance can
further be improved by aligning mBERT with cross-lingual signal. Prior work
proposes several approaches to align contextualised embeddings. In this paper
we analyse how different forms of cross-lingual supervision and various
alignment methods influence the transfer capability of mBERT in zero-shot
setting. Specifically, we compare parallel corpora vs. dictionary-based
supervision and rotational vs. fine-tuning based alignment methods. We evaluate
the performance of different alignment methodologies across eight languages on
two tasks: Named Entity Recognition and Semantic Slot Filling. In addition, we
propose a novel normalisation method which consistently improves the
performance of rotation-based alignment including a notable 3% F1 improvement
for distant and typologically dissimilar languages. Importantly, we identify
the biases of the alignment methods with respect to the type of task and
proximity to the transfer language. We also find that supervision from a
parallel corpus is generally superior to dictionary alignments.
| 2,020 | Computation and Language |
INSPIRED: Toward Sociable Recommendation Dialog Systems | In recommendation dialogs, humans commonly disclose their preferences and make
recommendations in a friendly manner. However, this is a challenge when
developing a sociable recommendation dialog system, due to the lack of dialog
dataset annotated with such sociable strategies. Therefore, we present
INSPIRED, a new dataset of 1,001 human-human dialogs for movie recommendation
with measures for successful recommendations. To better understand how humans
make recommendations in communication, we design an annotation scheme related
to recommendation strategies based on social science theories and annotate
these dialogs. Our analysis shows that sociable recommendation strategies, such
as sharing personal opinions or communicating with encouragement, more
frequently lead to successful recommendations. Based on our dataset, we train
end-to-end recommendation dialog systems with and without our strategy labels.
In both automatic and human evaluation, our model with strategy incorporation
outperforms the baseline model. This work is a first step for building sociable
recommendation dialog systems with a basis of social science theories.
| 2,020 | Computation and Language |
NatCat: Weakly Supervised Text Classification with Naturally Annotated
Resources | We describe NatCat, a large-scale resource for text classification
constructed from three data sources: Wikipedia, Stack Exchange, and Reddit.
NatCat consists of document-category pairs derived from manual curation that
occurs naturally within online communities. To demonstrate its usefulness, we
build general purpose text classifiers by training on NatCat and evaluate them
on a suite of 11 text classification tasks (CatEval), reporting large
improvements compared to prior work. We benchmark different modeling choices
and resource combinations and show how tasks benefit from particular NatCat
data sources.
| 2,021 | Computation and Language |
MaP: A Matrix-based Prediction Approach to Improve Span Extraction in
Machine Reading Comprehension | Span extraction is an essential problem in machine reading comprehension.
Most of the existing algorithms predict the start and end positions of an
answer span in the given corresponding context by generating two probability
vectors. In this paper, we propose a novel approach that extends the
probability vector to a probability matrix. Such a matrix can cover more
start-end position pairs. Specifically, for each possible start index, the
method generates an end probability vector. In addition, we propose a
sampling-based training strategy to address the computational cost and memory
issue in the matrix training phase. We evaluate our method on SQuAD 1.1 and
three other question answering benchmarks. Leveraging the most competitive
models BERT and BiDAF as the backbone, our proposed approach can get consistent
improvements in all datasets, demonstrating the effectiveness of the proposed
method.
| 2,020 | Computation and Language |
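A small sketch of the matrix-based prediction idea: instead of two independent probability vectors, score every (start, end) pair and pick the best valid span. The shapes, the maximum span length, and the masking details are assumptions for illustration.

```python
import torch

def best_span_from_matrix(start_logits, end_matrix_logits, max_span_len=30):
    """Pick the highest-probability (start, end) pair from a score matrix.

    start_logits:       (seq_len,)          one score per start position.
    end_matrix_logits:  (seq_len, seq_len)  row i holds end scores given start i.
    """
    seq_len = start_logits.size(0)
    # Joint log-probability of each (start, end) pair.
    scores = start_logits.log_softmax(-1).unsqueeze(1) + \
             end_matrix_logits.log_softmax(-1)
    # Mask spans that end before they start or exceed the length limit.
    idx = torch.arange(seq_len)
    invalid = (idx.unsqueeze(1) > idx.unsqueeze(0)) | \
              (idx.unsqueeze(0) - idx.unsqueeze(1) >= max_span_len)
    scores = scores.masked_fill(invalid, float("-inf"))
    flat = scores.argmax()
    return divmod(flat.item(), seq_len)  # (start, end)

start, end = best_span_from_matrix(torch.randn(50), torch.randn(50, 50))
print(start, end)
```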
Ethically Collecting Multi-Modal Spontaneous Conversations with People
that have Cognitive Impairments | In order to make spoken dialogue systems (such as Amazon Alexa or Google
Assistant) more accessible and naturally interactive for people with cognitive
impairments, appropriate data must be obtainable. Recordings of multi-modal
spontaneous conversations with vulnerable user groups are scarce however and
this valuable data is challenging to collect. Researchers that call for this
data are commonly inexperienced in ethical and legal issues around working with
vulnerable participants. Additionally, standard recording equipment is insecure
and should not be used to capture sensitive data. We spent a year consulting
experts on how to ethically capture and share recordings of multi-modal
spontaneous conversations with vulnerable user groups. In this paper we provide
guidance, collated from these experts, on how to ethically collect such data
and we present a new system - "CUSCO" - to capture, transport and exchange
sensitive data securely. This framework is intended to be easily followed and
implemented to encourage further publications of similar corpora. Using this
guide and secure recording system, researchers can review and refine their
ethical measures.
| 2,020 | Computation and Language |
Generation of lyrics lines conditioned on music audio clips | We present a system for generating novel lyrics lines conditioned on music
audio. A bimodal neural network model learns to generate lines conditioned on
any given short audio clip. The model consists of a spectrogram variational
autoencoder (VAE) and a text VAE. Both automatic and human evaluations
demonstrate effectiveness of our model in generating lines that have an
emotional impact matching a given audio clip. The system is intended to serve
as a creativity tool for songwriters.
| 2,020 | Computation and Language |
Development of Word Embeddings for Uzbek Language | In this paper, we share the process of developing word embeddings for the
Cyrillic variant of the Uzbek language. The result of our work is the first
publicly available set of word vectors trained on the word2vec, GloVe, and
fastText algorithms using a high-quality web crawl corpus developed in-house.
The developed word embeddings can be used in many natural language processing
downstream tasks.
| 2,020 | Computation and Language |
End-to-End Spoken Language Understanding Without Full Transcripts | An essential component of spoken language understanding (SLU) is slot
filling: representing the meaning of a spoken utterance using semantic entity
labels. In this paper, we develop end-to-end (E2E) spoken language
understanding systems that directly convert speech input to semantic entities
and investigate if these E2E SLU models can be trained solely on semantic
entity annotations without word-for-word transcripts. Training such models is
very useful as they can drastically reduce the cost of data collection. We
created two types of such speech-to-entities models, a CTC model and an
attention-based encoder-decoder model, by adapting models trained originally
for speech recognition. Given that our experiments involve speech input, these
systems need to recognize both the entity label and words representing the
entity value correctly. For our speech-to-entities experiments on the ATIS
corpus, both the CTC and attention models showed impressive ability to skip
non-entity words: there was little degradation when trained on just entities
versus full transcripts. We also explored the scenario where the entities are
in an order not necessarily related to spoken order in the utterance. With its
ability to do re-ordering, the attention model did remarkably well, achieving
only about 2% degradation in speech-to-bag-of-entities F1 score.
| 2,020 | Computation and Language |
Multiple Word Embeddings for Increased Diversity of Representation | Most state-of-the-art models in natural language processing (NLP) are neural
models built on top of large, pre-trained, contextual language models that
generate representations of words in context and are fine-tuned for the task at
hand. The improvements afforded by these "contextual embeddings" come with a
high computational cost. In this work, we explore a simple technique that
substantially and consistently improves performance over a strong baseline with
negligible increase in run time. We concatenate multiple pre-trained embeddings
to strengthen our representation of words. We show that this concatenation
technique works across many tasks, datasets, and model types. We analyze
aspects of pre-trained embedding similarity and vocabulary coverage and find
that the representational diversity between different pre-trained embeddings is
the driving force of why this technique works. We provide open source
implementations of our models in both TensorFlow and PyTorch.
| 2,020 | Computation and Language |
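An illustrative sketch of the concatenation technique described above, assuming two frozen pre-trained embedding tables that share a vocabulary index; the table sizes and names (GloVe-like, fastText-like) are made up for the example.

```python
import torch
import torch.nn as nn

class ConcatEmbeddings(nn.Module):
    """Look up a token in several frozen pre-trained tables and concatenate."""

    def __init__(self, pretrained_tables):
        super().__init__()
        self.tables = nn.ModuleList(
            [nn.Embedding.from_pretrained(t, freeze=True) for t in pretrained_tables]
        )

    def forward(self, token_ids):
        return torch.cat([table(token_ids) for table in self.tables], dim=-1)

# Two hypothetical pre-trained tables over the same 10k-word vocabulary.
glove_like = torch.randn(10_000, 100)
fasttext_like = torch.randn(10_000, 300)
emb = ConcatEmbeddings([glove_like, fasttext_like])
print(emb(torch.tensor([[1, 5, 42]])).shape)  # (1, 3, 400)
```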
Can Automatic Post-Editing Improve NMT? | Automatic post-editing (APE) aims to improve machine translations, thereby
reducing human post-editing effort. APE has had notable success when used with
statistical machine translation (SMT) systems but has not been as successful
over neural machine translation (NMT) systems. This has raised questions on the
relevance of APE task in the current scenario. However, the training of APE
models has been heavily reliant on large-scale artificial corpora combined with
only limited human post-edited data. We hypothesize that APE models have been
underperforming in improving NMT translations due to the lack of adequate
supervision. To ascertain our hypothesis, we compile a larger corpus of human
post-edits of English to German NMT. We empirically show that a state-of-art
neural APE model trained on this corpus can significantly improve a strong
in-domain NMT system, challenging the current understanding in the field. We
further investigate the effects of varying training data sizes, using
artificial training data, and domain specificity for the APE task. We release
this new corpus under CC BY-NC-SA 4.0 license at
https://github.com/shamilcm/pedra.
| 2,020 | Computation and Language |
Towards Improved Model Design for Authorship Identification: A Survey on
Writing Style Understanding | Authorship identification tasks, which rely heavily on linguistic styles,
have always been an important part of Natural Language Understanding (NLU)
research. While other tasks based on linguistic style understanding benefit
from deep learning methods, these methods have not behaved as well as
traditional machine learning methods in many authorship-based tasks. With these
tasks becoming more and more challenging, however, traditional machine learning
methods based on handcrafted feature sets are already approaching their
performance limits. Thus, in order to inspire future applications of deep
learning methods in authorship-based tasks in ways that benefit the extraction
of stylistic features, we survey authorship-based tasks and other tasks related
to writing style understanding. We first describe our survey results on the
current state of research in both sets of tasks and summarize existing
achievements and problems in authorship-related tasks. We then describe
outstanding methods in style-related tasks in general and analyze how they are
used in combination in the top-performing models. We are optimistic about the
applicability of these models to authorship-based tasks and hope our survey
will help advance research in this field.
| 2,020 | Computation and Language |
Towards a Multi-modal, Multi-task Learning based Pre-training Framework
for Document Representation Learning | Recent approaches in literature have exploited the multi-modal information in
documents (text, layout, image) to serve specific downstream document tasks.
However, they are limited by their (i) inability to learn cross-modal
representations across text, layout and image dimensions for documents and (ii)
inability to process multi-page documents. Pre-training techniques have been
shown in Natural Language Processing (NLP) domain to learn generic textual
representations from large unlabelled datasets, applicable to various
downstream NLP tasks. In this paper, we propose a multi-task learning-based
framework that utilizes a combination of self-supervised and supervised
pre-training tasks to learn a generic document representation applicable to
various downstream document tasks. Specifically, we introduce Document Topic
Modelling and Document Shuffle Prediction as novel pre-training tasks to learn
rich image representations along with the text and layout representations for
documents. We utilize the Longformer network architecture as the backbone to
encode the multi-modal information from multi-page documents in an end-to-end
fashion. We showcase the applicability of our pre-training framework on a
variety of different real-world document tasks such as document classification,
document information extraction, and document retrieval. We evaluate our
framework on different standard document datasets and conduct exhaustive
experiments to compare performance against various ablations of our framework
and state-of-the-art baselines.
| 2,022 | Computation and Language |
LEBANONUPRISING: a thorough study of Lebanese tweets | Recent studies have shown huge interest in sentiment analysis on social networks.
Twitter, which is a microblogging service, can be a great source of information
on how the users feel about a certain topic, or what their opinion is regarding
a social, economic and even political matter. On October 17, Lebanon witnessed
the start of a revolution; the LebanonUprising hashtag became viral on Twitter.
A dataset consisting of 100,000 tweets was collected between 18 and 21
October. In this paper, we conducted a sentiment analysis study for the tweets
in spoken Lebanese Arabic related to the LebanonUprising hashtag using
different machine learning algorithms. The dataset was manually annotated to
measure the precision and recall metrics and to compare between the different
algorithms. Furthermore, the work completed in this paper provides two more
contributions. The first is related to building a Lebanese to Modern Standard
Arabic mapping dictionary that was used for the preprocessing of the tweets and
the second is an attempt to move from sentiment analysis to emotion detection
using emojis, and the two emotions we tried to predict were the "sarcastic" and
"funny" emotions. We built a training set from the tweets collected in October
2019 and then we used this set to predict sentiments and emotions of the tweets
we collected between May and August 2020. The analysis we conducted shows the
variation in sentiments, emotions and users between the two datasets. The
results we obtained seem satisfactory especially considering that there was no
previous or similar work done involving Lebanese Arabic tweets, to our
knowledge.
| 2,020 | Computation and Language |
Neural RST-based Evaluation of Discourse Coherence | This paper evaluates the utility of Rhetorical Structure Theory (RST) trees
and relations in discourse coherence evaluation. We show that incorporating
silver-standard RST features can increase accuracy when classifying coherence.
We demonstrate this through our tree-recursive neural model, namely
RST-Recursive, which takes advantage of the text's RST features produced by a
state of the art RST parser. We evaluate our approach on the Grammarly Corpus
for Discourse Coherence (GCDC) and show that when ensembled with the current
state of the art, we can achieve the new state of the art accuracy on this
benchmark. Furthermore, when deployed alone, RST-Recursive achieves competitive
accuracy while having 62% fewer parameters.
| 2,020 | Computation and Language |
Cross-lingual Spoken Language Understanding with Regularized
Representation Alignment | Despite the promising results of current cross-lingual models for spoken
language understanding systems, they still suffer from imperfect cross-lingual
representation alignments between the source and target languages, which makes
the performance sub-optimal. To cope with this issue, we propose a
regularization approach to further align word-level and sentence-level
representations across languages without any external resource. First, we
regularize the representation of user utterances based on their corresponding
labels. Second, we regularize the latent variable model (Liu et al., 2019) by
leveraging adversarial training to disentangle the latent variables.
Experiments on the cross-lingual spoken language understanding task show that
our model outperforms current state-of-the-art methods in both few-shot and
zero-shot scenarios, and our model, trained on a few-shot setting with only 3\%
of the target language training data, achieves comparable performance to the
supervised training with all the training data.
| 2,020 | Computation and Language |
Dilated Convolutional Attention Network for Medical Code Assignment from
Clinical Text | Medical code assignment, which predicts medical codes from clinical texts, is
a fundamental task of intelligent medical information systems. The emergence of
deep models in natural language processing has boosted the development of
automatic assignment methods. However, recent advanced neural architectures
with flat convolutions or multi-channel feature concatenation ignore the
sequential causal constraint within a text sequence and may not learn
meaningful clinical text representations, especially for lengthy clinical notes
with long-term sequential dependency. This paper proposes a Dilated
Convolutional Attention Network (DCAN), integrating dilated convolutions,
residual connections, and label attention, for medical code assignment. It
adopts dilated convolutions to capture complex medical patterns with a
receptive field which increases exponentially with dilation size. Experiments
on a real-world clinical dataset empirically show that our model improves the
state of the art.
| 2,021 | Computation and Language |
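The sketch below illustrates the core ingredient this abstract relies on: stacked dilated 1D convolutions whose receptive field grows exponentially with depth. The channel sizes and number of layers are illustrative assumptions, and the label attention component is omitted.

```python
import torch
import torch.nn as nn

class DilatedConvStack(nn.Module):
    """Stack of 1D convolutions with exponentially increasing dilation."""

    def __init__(self, channels=128, kernel_size=3, num_layers=4):
        super().__init__()
        layers = []
        for i in range(num_layers):
            dilation = 2 ** i  # 1, 2, 4, 8, ... -> exponentially wider context
            layers.append(nn.Conv1d(channels, channels, kernel_size,
                                    dilation=dilation,
                                    padding=dilation * (kernel_size - 1) // 2))
            layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, x):          # x: (batch, channels, seq_len)
        return self.net(x) + x     # residual connection

x = torch.randn(2, 128, 512)       # e.g. an embedded clinical note
print(DilatedConvStack()(x).shape) # (2, 128, 512)
```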
Learning Hard Retrieval Decoder Attention for Transformers | The Transformer translation model is based on the multi-head attention
mechanism, which can be parallelized easily. The multi-head attention network
performs the scaled dot-product attention function in parallel, empowering the
model by jointly attending to information from different representation
subspaces at different positions. In this paper, we present an approach to
learning a hard retrieval attention where an attention head only attends to one
token in the sentence rather than all tokens. The matrix multiplication between
attention probabilities and the value sequence in the standard scaled
dot-product attention can thus be replaced by a simple and efficient retrieval
operation. We show that our hard retrieval attention mechanism is 1.43 times
faster in decoding, while preserving translation quality on a wide range of
machine translation tasks when used in the decoder self- and cross-attention
networks.
| 2,021 | Computation and Language |
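A hedged sketch of the hard retrieval idea: rather than a softmax-weighted sum over all value vectors, each query position gathers the single value with the highest attention score. It shows only the retrieval operation itself; how the paper handles the non-differentiable argmax during training is not covered here.

```python
import torch

def hard_retrieval_attention(q, k, v):
    """Each query attends to exactly one key: the argmax of the scaled scores.

    q, k, v: (batch, heads, seq_len, head_dim)
    """
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5  # (b, h, q_len, k_len)
    best = scores.argmax(dim=-1)                           # (b, h, q_len)
    # Gather the selected value vector instead of computing a weighted sum.
    index = best.unsqueeze(-1).expand(-1, -1, -1, v.size(-1))
    return torch.gather(v, 2, index)

q = torch.randn(2, 8, 10, 64)
k = torch.randn(2, 8, 12, 64)
v = torch.randn(2, 8, 12, 64)
print(hard_retrieval_attention(q, k, v).shape)  # (2, 8, 10, 64)
```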
RDSGAN: Rank-based Distant Supervision Relation Extraction with
Generative Adversarial Framework | Distant supervision has been widely used for relation extraction but suffers
from the noisy labeling problem. Neural network models have been proposed to
denoise with an attention mechanism but cannot eliminate noisy data due to its
non-zero weights. Hard decision methods have been proposed to remove
wrongly-labeled instances from the positive set, though they cause loss of
useful information contained in the removed instances. In this paper, we
propose a novel generative neural framework named
RDSGAN (Rank-based Distant Supervision GAN) which automatically generates valid
instances for distant supervision relation extraction. Our framework combines
soft attention and hard decision to learn the distribution of true positive
instances via adversarial training and selects valid instances conforming to
the distribution via rank-based distant supervision, which addresses the false
positive problem. Experimental results show the superiority of our framework
over strong baselines.
| 2,020 | Computation and Language |
A Vietnamese Dataset for Evaluating Machine Reading Comprehension | Over 97 million people in the world speak Vietnamese as their native language.
However, there are few research studies on machine reading comprehension
(MRC) for Vietnamese, the task of understanding a text and answering questions
related to it. Due to the lack of benchmark datasets for Vietnamese, we present
the Vietnamese Question Answering Dataset (UIT-ViQuAD), a new dataset for the
low-resource language Vietnamese to evaluate MRC models. This dataset
comprises over 23,000 human-generated question-answer pairs based on 5,109
passages of 174 Vietnamese articles from Wikipedia. In particular, we propose a
new process of dataset creation for Vietnamese MRC. Our in-depth analyses
illustrate that our dataset requires abilities beyond simple reasoning like
word matching and demands single-sentence and multiple-sentence inferences.
Besides, we conduct experiments on state-of-the-art MRC methods for English and
Chinese as the first experimental models on UIT-ViQuAD. We also estimate human
performance on the dataset and compare it to the experimental results of
powerful machine learning models. As a result, the substantial differences
between human performance and the best model performance on the dataset
indicate that improvements can be made on UIT-ViQuAD in future research. Our
dataset is freely available on our website to encourage the research community
to overcome challenges in Vietnamese MRC.
| 2,020 | Computation and Language |
Point-of-Interest Type Inference from Social Media Text | Physical places help shape how we perceive the experiences we have there. For
the first time, we study the relationship between social media text and the
type of the place from where it was posted, whether a park, restaurant, or
someplace else. To facilitate this, we introduce a novel data set of
$\sim$200,000 English tweets published from 2,761 different points-of-interest
in the U.S., enriched with place type information. We train classifiers to
predict the type of the location a tweet was sent from that reach a macro F1 of
43.67 across eight classes and uncover the linguistic markers associated with
each type of place. The ability to predict semantic place information from a
tweet has applications in recommendation systems, personalization services and
cultural geography.
| 2,020 | Computation and Language |
Bridging Information-Seeking Human Gaze and Machine Reading
Comprehension | In this work, we analyze how human gaze during reading comprehension is
conditioned on the given reading comprehension question, and whether this
signal can be beneficial for machine reading comprehension. To this end, we
collect a new eye-tracking dataset with a large number of participants engaging
in a multiple choice reading comprehension task. Our analysis of this data
reveals increased fixation times over parts of the text that are most relevant
for answering the question. Motivated by this finding, we propose making
automated reading comprehension more human-like by mimicking human
information-seeking reading behavior during reading comprehension. We
demonstrate that this approach leads to performance gains on multiple choice
question answering in English for a state-of-the-art reading comprehension
model.
| 2,020 | Computation and Language |
BERT for Monolingual and Cross-Lingual Reverse Dictionary | Reverse dictionary is the task of finding the proper target word given a word
description. In this paper, we incorporate BERT into this task.
However, since BERT is based on the byte-pair-encoding (BPE) subword encoding,
it is nontrivial to make BERT generate a word given the description. We propose
a simple but effective method to make BERT generate the target word for this
specific task. In addition, the cross-lingual reverse dictionary is the task of
finding the proper target word described in another language. Previous models have
to keep two different word embeddings and learn to align these embeddings.
Nevertheless, by using the Multilingual BERT (mBERT), we can efficiently
conduct the cross-lingual reverse dictionary with one subword embedding, and
the alignment between languages is not necessary. More importantly, mBERT can
achieve remarkable cross-lingual reverse dictionary performance even without
the parallel corpus, which means it can conduct the cross-lingual reverse
dictionary with only corresponding monolingual data. Code is publicly available
at https://github.com/yhcc/BertForRD.git.
| 2,020 | Computation and Language |
A Tale of Two Linkings: Dynamically Gating between Schema Linking and
Structural Linking for Text-to-SQL Parsing | In Text-to-SQL semantic parsing, selecting the correct entities (tables and
columns) for the generated SQL query is both crucial and challenging; the
parser is required to connect the natural language (NL) question and the SQL
query to the structured knowledge in the database. We formulate two linking
processes to address this challenge: schema linking which links explicit NL
mentions to the database and structural linking which links the entities in the
output SQL with their structural relationships in the database schema.
Intuitively, the effectiveness of these two linking processes changes based on
the entity being generated, thus we propose to dynamically choose between them
using a gating mechanism. Integrating the proposed method with two graph neural
network-based semantic parsers together with BERT representations demonstrates
substantial gains in parsing accuracy on the challenging Spider dataset.
Analyses show that our proposed method helps to enhance the structure of the
model output when generating complicated SQL queries and offers more
explainable predictions.
| 2,020 | Computation and Language |
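A minimal sketch of a gating mechanism of the kind this abstract describes: a learned scalar gate mixes a schema-linking score and a structural-linking score at each decoding step. The vector sizes and the exact inputs to the gate are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LinkingGate(nn.Module):
    """Dynamically mix schema-linking and structural-linking entity scores."""

    def __init__(self, hidden_dim=256):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, decoder_state, schema_scores, structural_scores):
        # decoder_state: (batch, hidden); *_scores: (batch, num_entities)
        g = torch.sigmoid(self.gate(decoder_state))  # (batch, 1)
        return g * schema_scores + (1 - g) * structural_scores

gate = LinkingGate()
state = torch.randn(4, 256)
schema = torch.randn(4, 30)       # scores over candidate tables/columns
structural = torch.randn(4, 30)
print(gate(state, schema, structural).shape)  # (4, 30)
```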
On Romanization for Model Transfer Between Scripts in Neural Machine
Translation | Transfer learning is a popular strategy to improve the quality of
low-resource machine translation. For an optimal transfer of the embedding
layer, the child and parent model should share a substantial part of the
vocabulary. This is not the case when transferring to languages with a
different script. We explore the benefit of romanization in this scenario. Our
results show that romanization entails information loss and is thus not always
superior to simpler vocabulary transfer methods, but can improve the transfer
between related languages with different scripts. We compare two romanization
tools and find that they exhibit different degrees of information loss, which
affects translation quality. Finally, we extend romanization to the target
side, showing that this can be a successful strategy when coupled with a simple
deromanization model.
| 2,020 | Computation and Language |
AbuseAnalyzer: Abuse Detection, Severity and Target Prediction for Gab
Posts | While extensive popularity of online social media platforms has made
information dissemination faster, it has also resulted in widespread online
abuse of different types like hate speech, offensive language, sexist and
racist opinions, etc. Detection and curtailment of such abusive content is
critical for avoiding its psychological impact on victim communities, and
thereby preventing hate crimes. Previous works have focused on classifying user
posts into various forms of abusive behavior. But there has hardly been any
focus on estimating the severity of abuse and the target. In this paper, we
present a first-of-its-kind dataset with 7601 posts from Gab which looks at
online abuse from the perspective of presence of abuse, severity and target of
abusive behavior. We also propose a system to address these tasks, obtaining an
accuracy of ~80% for abuse presence, ~82% for abuse target prediction, and ~65%
for abuse severity prediction.
| 2,020 | Computation and Language |
Multi-document Summarization with Maximal Marginal Relevance-guided
Reinforcement Learning | While neural sequence learning methods have made significant progress in
single-document summarization (SDS), they produce unsatisfactory results on
multi-document summarization (MDS). We observe two major challenges when
adapting SDS advances to MDS: (1) MDS involves larger search space and yet more
limited training data, setting obstacles for neural methods to learn adequate
representations; (2) MDS needs to resolve higher information redundancy among
the source documents, which SDS methods are less effective to handle. To close
the gap, we present RL-MMR, Maximal Marginal Relevance-guided Reinforcement
Learning for MDS, which unifies advanced neural SDS methods and statistical
measures used in classical MDS. RL-MMR casts MMR guidance on fewer promising
candidates, which restrains the search space and thus leads to better
representation learning. Additionally, the explicit redundancy measure in MMR
helps the neural representation of the summary to better capture redundancy.
Extensive experiments demonstrate that RL-MMR achieves state-of-the-art
performance on benchmark MDS datasets. In particular, we show the benefits of
incorporating MMR into end-to-end learning when adapting SDS to MDS in terms of
both learning effectiveness and efficiency.
| 2,020 | Computation and Language |
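For reference, the classical Maximal Marginal Relevance criterion that RL-MMR builds on can be written as a simple greedy selection rule balancing query relevance against redundancy with already selected sentences. The sketch below uses TF-IDF cosine similarity purely for illustration; the similarity functions and lambda value are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mmr_select(sentences, query, k=3, lam=0.7):
    """Greedy MMR: pick sentences relevant to the query but not redundant."""
    vec = TfidfVectorizer().fit(sentences + [query])
    S = vec.transform(sentences)
    q = vec.transform([query])
    relevance = cosine_similarity(S, q).ravel()
    selected, remaining = [], list(range(len(sentences)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            redundancy = max(
                (cosine_similarity(S[i], S[j])[0, 0] for j in selected),
                default=0.0,
            )
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return [sentences[i] for i in selected]

docs = ["the bill passed the senate", "senators approved the bill",
        "markets rallied after the vote", "the vote lifted stock markets"]
print(mmr_select(docs, "senate bill vote", k=2))
```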
Interactive Re-Fitting as a Technique for Improving Word Embeddings | Word embeddings are a fixed, distributional representation of the context of
words in a corpus learned from word co-occurrences. While word embeddings have
proven to have many practical uses in natural language processing tasks, they
reflect the attributes of the corpus upon which they are trained. Recent work
has demonstrated that post-processing of word embeddings to apply information
found in lexical dictionaries can improve their quality. We build on this
post-processing technique by making it interactive. Our approach makes it
possible for humans to adjust portions of a word embedding space by moving sets
of words closer to one another. One motivating use case for this capability is
to enable users to identify and reduce the presence of bias in word embeddings.
Our approach allows users to trigger selective post-processing as they interact
with and assess potential bias in word embeddings.
| 2,020 | Computation and Language |
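A small sketch of a retrofitting-style update of the kind this post-processing builds on: words the user groups together are pulled towards the centroid of their group, with a weight controlling how far they move. The update rule and the example words are simplified assumptions, not the tool's exact algorithm.

```python
import numpy as np

def refit_group(embeddings, group, alpha=0.5):
    """Move the vectors of words in `group` towards their common centroid.

    embeddings: dict word -> np.ndarray, modified in place.
    group: words the user judged to be similar.
    alpha: 0 keeps the originals, 1 collapses the group onto its centroid.
    """
    vectors = np.stack([embeddings[w] for w in group])
    centroid = vectors.mean(axis=0)
    for w in group:
        embeddings[w] = (1 - alpha) * embeddings[w] + alpha * centroid
    return embeddings

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["nurse", "doctor", "engineer"]}
refit_group(emb, ["nurse", "doctor", "engineer"], alpha=0.3)
```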
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked
Language Models | Pretrained language models, especially masked language models (MLMs) have
seen success across many NLP tasks. However, there is ample evidence that they
use the cultural biases that are undoubtedly present in the corpora they are
trained on, implicitly creating harm with biased representations. To measure
some forms of social bias in language models against protected demographic
groups in the US, we introduce the Crowdsourced Stereotype Pairs benchmark
(CrowS-Pairs). CrowS-Pairs has 1508 examples that cover stereotypes dealing
with nine types of bias, like race, religion, and age. In CrowS-Pairs a model
is presented with two sentences: one that is more stereotyping and another that
is less stereotyping. The data focuses on stereotypes about historically
disadvantaged groups and contrasts them with advantaged groups. We find that
all three of the widely-used MLMs we evaluate substantially favor sentences
that express stereotypes in every category in CrowS-Pairs. As work on building
less biased models advances, this dataset can be used as a benchmark to
evaluate progress.
| 2,020 | Computation and Language |
Learning from Mistakes: Combining Ontologies via Self-Training for
Dialogue Generation | Natural language generators (NLGs) for task-oriented dialogue typically take
a meaning representation (MR) as input. They are trained end-to-end with a
corpus of MR/utterance pairs, where the MRs cover a specific set of dialogue
acts and domain attributes. Creation of such datasets is labor-intensive and
time-consuming. Therefore, dialogue systems for new domain ontologies would
benefit from using data for pre-existing ontologies. Here we explore, for the
first time, whether it is possible to train an NLG for a new larger ontology
using existing training sets for the restaurant domain, where each set is based
on a different ontology. We create a new, larger combined ontology, and then
train an NLG to produce utterances covering it. For example, if one dataset has
attributes for family-friendly and rating information, and the other has
attributes for decor and service, our aim is an NLG for the combined ontology
that can produce utterances that realize values for family-friendly, rating,
decor and service. Initial experiments with a baseline neural
sequence-to-sequence model show that this task is surprisingly challenging. We
then develop a novel self-training method that identifies (errorful) model
outputs, automatically constructs a corrected MR input to form a new (MR,
utterance) training pair, and then repeatedly adds these new instances back
into the training data. We then test the resulting model on a new test set. The
result is a self-trained model whose performance is an absolute 75.4%
improvement over the baseline model. We also report a human qualitative
evaluation of the final model showing that it achieves high naturalness,
semantic coherence and grammaticality.
| 2,020 | Computation and Language |
Examining the rhetorical capacities of neural language models | Recently, neural language models (LMs) have demonstrated impressive abilities
in generating high-quality discourse. While many recent papers have analyzed
the syntactic aspects encoded in LMs, there has been no analysis to date of the
inter-sentential, rhetorical knowledge. In this paper, we propose a method that
quantitatively evaluates the rhetorical capacities of neural LMs. We examine
the capacities of neural LMs understanding the rhetoric of discourse by
evaluating their abilities to encode a set of linguistic features derived from
Rhetorical Structure Theory (RST). Our experiments show that BERT-based LMs
outperform other Transformer LMs, revealing the richer discourse knowledge in
their intermediate layer representations. In addition, GPT-2 and XLNet
apparently encode less rhetorical knowledge, and we suggest an explanation
drawing from linguistic philosophy. Our method shows an avenue towards
quantifying the rhetorical capacities of neural LMs.
| 2,020 | Computation and Language |
A Compare Aggregate Transformer for Understanding Document-grounded
Dialogue | Unstructured documents serving as external knowledge of the dialogues help to
generate more informative responses. Previous research focused on knowledge
selection (KS) in the document with dialogue. However, dialogue history that is
not related to the current dialogue may introduce noise in the KS processing.
In this paper, we propose a Compare Aggregate Transformer (CAT) to jointly
denoise the dialogue context and aggregate the document information for
response generation. We designed two different comparison mechanisms to reduce
noise (before and during decoding). In addition, we propose two metrics for
evaluating document utilization efficiency based on word overlap. Experimental
results on the CMUDoG dataset show that the proposed CAT model outperforms the
state-of-the-art approach and strong baselines.
| 2,020 | Computation and Language |
Improving Vietnamese Named Entity Recognition from Speech Using Word
Capitalization and Punctuation Recovery Models | Studies on the Named Entity Recognition (NER) task have shown outstanding
results that reach human parity on input texts with correct text formattings,
such as with proper punctuation and capitalization. However, such conditions
are not available in applications where the input is speech, because the text
is generated by a speech recognition (ASR) system, which does not consider
text formatting. In this paper, we (1) present the first Vietnamese speech
dataset for the NER task, and (2) the first publicly available pre-trained
large-scale monolingual language model for Vietnamese, which achieves a new
state-of-the-art for the Vietnamese NER task, improving the absolute F1 score
by 1.3% compared to the latest study. Finally, (3) we propose a new pipeline
for NER from speech that overcomes the text formatting problem by introducing
a text capitalization and punctuation recovery model (CaPu) into the pipeline.
The model takes input text from an ASR system and performs both tasks at the
same time, producing proper text formatting that helps to improve NER
performance. Experimental results indicate that the CaPu model improves
F1-score by nearly 4%.
| 2,020 | Computation and Language |
WeChat Neural Machine Translation Systems for WMT20 | We participate in the WMT 2020 shared news translation task on Chinese to
English. Our system is based on the Transformer (Vaswani et al., 2017a) with
effective variants and the DTMT (Meng and Zhang, 2019) architecture. In our
experiments, we employ data selection, several synthetic data generation
approaches (i.e., back-translation, knowledge distillation, and iterative
in-domain knowledge transfer), advanced fine-tuning approaches and self-BLEU
based model ensembling. Our constrained Chinese-to-English system achieves a 36.9
case-sensitive BLEU score, which is the highest among all submissions.
| 2,020 | Computation and Language |
Joint Persian Word Segmentation Correction and Zero-Width Non-Joiner
Recognition Using BERT | Words are properly segmented in the Persian writing system; in practice,
however, these writing rules are often neglected, resulting in single words
being written disjointedly and multiple words written without any white spaces
between them. This paper addresses the problems of word segmentation and
zero-width non-joiner (ZWNJ) recognition in Persian, which we approach jointly
as a sequence labeling problem. We achieved a macro-averaged F1-score of 92.40%
on a carefully collected corpus of 500 sentences with a high level of
difficulty.
| 2,020 | Computation and Language |
Phonemer at WNUT-2020 Task 2: Sequence Classification Using COVID
Twitter BERT and Bagging Ensemble Technique based on Plurality Voting | This paper presents the approach that we employed to tackle the EMNLP
WNUT-2020 Shared Task 2 : Identification of informative COVID-19 English
Tweets. The task is to develop a system that automatically identifies whether
an English Tweet related to the novel coronavirus (COVID-19) is informative or
not. We solve the task in three stages. The first stage involves pre-processing
the dataset by filtering only relevant information. This is followed by
experimenting with multiple deep learning models like CNNs, RNNs and
Transformer based models. In the last stage, we propose an ensemble of the best
model trained on different subsets of the provided dataset. Our final approach
achieved an F1-score of 0.9037 and we were ranked sixth overall with F1-score
as the evaluation criterion.
| 2,020 | Computation and Language |
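A minimal sketch of the bagging-with-plurality-voting step described above: several models trained on different subsets each predict a label, and the most common label wins. The models and data here are placeholders, not the task's actual systems.

```python
from collections import Counter

def plurality_vote(predictions_per_model):
    """predictions_per_model: list of label lists, one list per model."""
    ensembled = []
    for labels in zip(*predictions_per_model):
        ensembled.append(Counter(labels).most_common(1)[0][0])
    return ensembled

# Three hypothetical models' predictions on four tweets (1 = informative).
model_a = [1, 0, 1, 1]
model_b = [1, 1, 1, 0]
model_c = [0, 0, 1, 1]
print(plurality_vote([model_a, model_b, model_c]))  # [1, 0, 1, 1]
```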
CoLAKE: Contextualized Language and Knowledge Embedding | With the emerging branch of incorporating factual knowledge into pre-trained
language models such as BERT, most existing models consider shallow, static,
and separately pre-trained entity embeddings, which limits the performance
gains of these models. Few works explore the potential of deep contextualized
knowledge representation when injecting knowledge. In this paper, we propose
the Contextualized Language and Knowledge Embedding (CoLAKE), which jointly
learns contextualized representation for both language and knowledge with the
extended MLM objective. Instead of injecting only entity embeddings, CoLAKE
extracts the knowledge context of an entity from large-scale knowledge bases.
To handle the heterogeneity of knowledge context and language context, we
integrate them in a unified data structure, word-knowledge graph (WK graph).
CoLAKE is pre-trained on large-scale WK graphs with the modified Transformer
encoder. We conduct experiments on knowledge-driven tasks, knowledge probing
tasks, and language understanding tasks. Experimental results show that CoLAKE
outperforms previous counterparts on most of the tasks. Besides, CoLAKE
achieves surprisingly high performance on our synthetic task called
word-knowledge graph completion, which shows the superiority of simultaneously
contextualizing language and knowledge representation.
| 2,020 | Computation and Language |
"Did you really mean what you said?" : Sarcasm Detection in
Hindi-English Code-Mixed Data using Bilingual Word Embeddings | With the increased use of social media platforms by people across the world,
many new interesting NLP problems have come into existence. One such being the
detection of sarcasm in the social media texts. We present a corpus of tweets
for training custom word embeddings and a Hinglish dataset labelled for sarcasm
detection. We propose a deep learning based approach to address the issue of
sarcasm detection in Hindi-English code mixed tweets using bilingual word
embeddings derived from FastText and Word2Vec approaches. We experimented with
various deep learning models, including CNNs, LSTMs, Bi-directional LSTMs (with
and without attention). Our deep learning models outperformed all
state-of-the-art results, with attention-based Bi-directional LSTMs giving the
best performance, exhibiting an accuracy of 78.49%.
| 2,020 | Computation and Language |
Detecting White Supremacist Hate Speech using Domain Specific Word
Embedding with Deep Learning and BERT | White supremacists embrace a radical ideology that considers white people
superior to people of other races. The critical influence of these groups is no
longer limited to social media; they also have a significant effect on society
in many ways by promoting racial hatred and violence. White supremacist hate
speech is one of the most recently observed types of harmful content on social
media. Traditional channels of reporting hate speech have proved inadequate due
to the tremendous explosion of information, and therefore, it is necessary to
find an automatic way to detect such speech in a timely manner. This research
investigates the viability of automatically detecting white supremacist hate
speech on Twitter using deep learning and natural language processing
techniques. We used two approaches in our experiments. The first approach uses
domain-specific embeddings, extracted from a white supremacist corpus in order
to capture the meaning of white supremacist slang, with a bidirectional Long
Short-Term Memory (LSTM) deep learning model; this approach reached a 0.74890
F1-score. The second approach uses one of the most recent language models,
BERT, which provides state-of-the-art results on most NLP tasks; it reached a
0.79605 F1-score. Both approaches were tested on a balanced dataset, given that
our experiments were based on textual data only. The dataset combines a dataset
created from Twitter and a Stormfront dataset compiled from that white
supremacist forum.
| 2,020 | Computation and Language |
How LSTM Encodes Syntax: Exploring Context Vectors and Semi-Quantization
on Natural Text | Long Short-Term Memory recurrent neural network (LSTM) is widely used and
known to capture informative long-term syntactic dependencies. However, how
such information is reflected in its internal vectors for natural text has not
yet been sufficiently investigated. We analyze them by learning a language
model where syntactic structures are implicitly given. We empirically show that
the context update vectors, i.e. outputs of internal gates, are approximately
quantized to binary or ternary values to help the language model to count the
depth of nesting accurately, as Suzgun et al. (2019) recently show for
synthetic Dyck languages. For some dimensions in the context vector, we show
that their activations are highly correlated with the depth of phrase
structures, such as VP and NP. Moreover, with an $L_1$ regularization, we also
found that it can accurately predict whether a word is inside a phrase
structure or not from a small number of components of the context vector. Even
for the case of learning from raw text, context vectors are shown to still
correlate well with the phrase structures. Finally, we show that natural
clusters of the functional words and the parts of speech that trigger phrases
are represented in a small but principal subspace of the context-update vector
of LSTM.
| 2,020 | Computation and Language |
Citation Sentiment Changes Analysis | Metrics for measuring citation sentiment changes are introduced. Citation
sentiment changes can be observed from global citation sentiment sequences
(GCSSs). With respect to a cited paper, the citation sentiment sequences were
analysed across a collection of citing papers ordered by publication time. For
analysing GCSSs, Eddy Dissipation Rate (EDR) was adopted, with the hypothesis
that differences in GCSS patterns can be spotted by an EDR-based method.
Preliminary evidence showed that the EDR-based method holds the potential for
analysing a publication's impact in a time-series fashion.
| 2,020 | Computation and Language |
A Survey on Explainability in Machine Reading Comprehension | This paper presents a systematic review of benchmarks and approaches for
explainability in Machine Reading Comprehension (MRC). We present how the
representation and inference challenges evolved and the steps which were taken
to tackle these challenges. We also present the evaluation methodologies to
assess the performance of explainable systems. In addition, we identify
persisting open research questions and highlight critical directions for future
work.
| 2,020 | Computation and Language |
Evaluating Multilingual BERT for Estonian | Recently, large pre-trained language models, such as BERT, have reached
state-of-the-art performance in many natural language processing tasks, but for
many languages, including Estonian, BERT models are not yet available. However,
there exist several multilingual BERT models that can handle multiple languages
simultaneously and that have been trained also on Estonian data. In this paper,
we evaluate four multilingual models -- multilingual BERT, multilingual
distilled BERT, XLM and XLM-RoBERTa -- on several NLP tasks including POS and
morphological tagging, NER and text classification. Our aim is to establish a
comparison between these multilingual BERT models and the existing baseline
neural models for these tasks. Our results show that multilingual BERT models
can generalise well on different Estonian NLP tasks, outperforming all baseline
models for POS and morphological tagging and text classification, and reaching
a level comparable with the best baseline for NER, with XLM-RoBERTa achieving
the highest results compared with other multilingual models.
| 2,021 | Computation and Language |
Towards Question-Answering as an Automatic Metric for Evaluating the
Content Quality of a Summary | A desirable property of a reference-based evaluation metric that measures the
content quality of a summary is that it should estimate how much information
that summary has in common with a reference. Traditional text overlap based
metrics such as ROUGE fail to achieve this because they are limited to matching
tokens, either lexically or via embeddings. In this work, we propose a metric
to evaluate the content quality of a summary using question-answering (QA).
QA-based methods directly measure a summary's information overlap with a
reference, making them fundamentally different than text overlap metrics. We
demonstrate the experimental benefits of QA-based metrics through an analysis
of our proposed metric, QAEval. QAEval out-performs current state-of-the-art
metrics on most evaluations using benchmark datasets, while being competitive
on others due to limitations of state-of-the-art models. Through a careful
analysis of each component of QAEval, we identify its performance bottlenecks
and estimate that its potential upper-bound performance surpasses all other
automatic metrics, approaching that of the gold-standard Pyramid Method.
| 2,021 | Computation and Language |
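A hedged sketch of the QA-based evaluation pipeline described above: questions are generated from the reference, answered against the candidate summary, and the fraction answered correctly becomes the score. The `generate_questions` and `answer` functions are hypothetical placeholders standing in for learned QG and QA models, and exact-match scoring is a simplification.

```python
def qa_based_score(candidate_summary, reference_summary,
                   generate_questions, answer):
    """Score a summary by how many reference-derived questions it can answer.

    generate_questions(text) -> list of (question, gold_answer) pairs.
    answer(question, context) -> predicted answer string (or None).
    Both are assumed to be wrappers around learned QG/QA models.
    """
    qa_pairs = generate_questions(reference_summary)
    if not qa_pairs:
        return 0.0
    correct = 0
    for question, gold in qa_pairs:
        prediction = answer(question, candidate_summary)
        # Exact match for simplicity; token-level F1 is a common alternative.
        if prediction is not None and prediction.strip().lower() == gold.strip().lower():
            correct += 1
    return correct / len(qa_pairs)
```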
LiveQA: A Question Answering Dataset over Sports Live | In this paper, we introduce LiveQA, a new question answering dataset
constructed from play-by-play live broadcast. It contains 117k multiple-choice
questions written by human commentators for over 1,670 NBA games, which are
collected from the Chinese Hupu (https://nba.hupu.com/games) website. Derived
from the characteristics of sports games, LiveQA can potentially test the
reasoning ability across timeline-based live broadcasts, which is challenging
compared to the existing datasets. In LiveQA, the questions require
understanding the timeline, tracking events or doing mathematical computations.
Our preliminary experiments show that the dataset introduces a challenging
problem for question answering models, and a strong baseline model only
achieves an accuracy of 53.1\% and cannot beat the dominant option rule. We
release the code and data of this paper for future research.
| 2,020 | Computation and Language |
ISAAQ -- Mastering Textbook Questions with Pre-trained Transformers and
Bottom-Up and Top-Down Attention | Textbook Question Answering is a complex task in the intersection of Machine
Comprehension and Visual Question Answering that requires reasoning with
multimodal information from text and diagrams. For the first time, this paper
taps on the potential of transformer language models and bottom-up and top-down
attention to tackle the language and visual understanding challenges this task
entails. Rather than training a language-visual transformer from scratch we
rely on pre-trained transformers, fine-tuning and ensembling. We add bottom-up
and top-down attention to identify regions of interest corresponding to diagram
constituents and their relationships, improving the selection of relevant
visual information for each question and answer options. Our system ISAAQ
reports unprecedented success in all TQA question types, with accuracies of
81.36%, 71.11% and 55.12% on true/false, text-only and diagram multiple choice
questions. ISAAQ also demonstrates its broad applicability, obtaining
state-of-the-art results in other demanding datasets.
| 2,020 | Computation and Language |
Understanding tables with intermediate pre-training | Table entailment, the binary classification task of finding if a sentence is
supported or refuted by the content of a table, requires parsing language and
table structure as well as numerical and discrete reasoning. While there is
extensive work on textual entailment, table entailment is less well studied. We
adapt TAPAS (Herzig et al., 2020), a table-based BERT model, to recognize
entailment. Motivated by the benefits of data augmentation, we create a
balanced dataset of millions of automatically created training examples which
are learned in an intermediate step prior to fine-tuning. This new data is not
only useful for table entailment, but also for SQA (Iyyer et al., 2017), a
sequential table QA task. To be able to use long examples as input of BERT
models, we evaluate table pruning techniques as a pre-processing step to
drastically improve the training and prediction efficiency at a moderate drop
in accuracy. The different methods set the new state-of-the-art on the TabFact
(Chen et al., 2020) and SQA datasets.
| 2,020 | Computation and Language |
Interpreting Graph Neural Networks for NLP With Differentiable Edge
Masking | Graph neural networks (GNNs) have become a popular approach to integrating
structural inductive biases into NLP models. However, there has been little
work on interpreting them, and specifically on understanding which parts of the
graphs (e.g. syntactic trees or co-reference structures) contribute to a
prediction. In this work, we introduce a post-hoc method for interpreting the
predictions of GNNs which identifies unnecessary edges. Given a trained GNN
model, we learn a simple classifier that, for every edge in every layer,
predicts if that edge can be dropped. We demonstrate that such a classifier can
be trained in a fully differentiable fashion, employing stochastic gates and
encouraging sparsity through the expected $L_0$ norm. We use our technique as
an attribution method to analyze GNN models for two tasks -- question answering
and semantic role labeling -- providing insights into the information flow in
these models. We show that we can drop a large proportion of edges without
deteriorating the performance of the model, while we can analyse the remaining
edges for interpreting model predictions.
| 2,022 | Computation and Language |
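The sketch below shows a hard-concrete style stochastic gate with an expected-L0 penalty, the general mechanism commonly used for this kind of differentiable edge masking; the constants follow the standard hard-concrete parameterisation, but treat the exact formulation used in the paper as an assumption.

```python
import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    """Stochastic near-binary gate with a differentiable expected L0 penalty."""

    def __init__(self, num_gates, beta=0.5, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(num_gates))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
        s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        s = s * (self.zeta - self.gamma) + self.gamma   # stretch beyond [0, 1]
        return s.clamp(0.0, 1.0)                        # gate values in [0, 1]

    def expected_l0(self):
        # Probability that each gate is non-zero (the sparsity penalty).
        return torch.sigmoid(
            self.log_alpha
            - self.beta * torch.log(torch.tensor(-self.gamma / self.zeta))
        ).sum()

gates = HardConcreteGate(num_gates=100)  # e.g. one gate per graph edge
mask = gates()                           # multiply edge messages by this mask
loss_sparsity = gates.expected_l0()      # add to the task loss with a weight
print(mask.shape, loss_sparsity.item())
```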