Titles
stringlengths 6
220
| Abstracts
stringlengths 37
3.26k
| Years
int64 1.99k
2.02k
| Categories
stringclasses 1
value |
---|---|---|---|
NUIG-Shubhanker@Dravidian-CodeMix-FIRE2020: Sentiment Analysis of
Code-Mixed Dravidian text using XLNet
|
Social media has penetrated into multilingual societies, however most of them
use English to be a preferred language for communication. So it looks natural
for them to mix their cultural language with English during conversations
resulting in abundance of multilingual data, call this code-mixed data,
available in todays' world.Downstream NLP tasks using such data is challenging
due to the semantic nature of it being spread across multiple languages.One
such Natural Language Processing task is sentiment analysis, for this we use an
auto-regressive XLNet model to perform sentiment analysis on code-mixed
Tamil-English and Malayalam-English datasets.
| 2,020 |
Computation and Language
|
Response Selection for Multi-Party Conversations with Dynamic Topic
Tracking
|
While participants in a multi-party multi-turn conversation simultaneously
engage in multiple conversation topics, existing response selection methods are
developed mainly focusing on a two-party single-conversation scenario. Hence,
the prolongation and transition of conversation topics are ignored by current
methods. In this work, we frame response selection as a dynamic topic tracking
task to match the topic between the response and relevant conversation context.
With this new formulation, we propose a novel multi-task learning framework
that supports efficient encoding through large pretrained models with only two
utterances at once to perform dynamic topic disentanglement and response
selection. We also propose Topic-BERT an essential pretraining step to embed
topic information into BERT with self-supervised learning. Experimental results
on the DSTC-8 Ubuntu IRC dataset show state-of-the-art results in response
selection and topic disentanglement tasks outperforming existing methods by a
good margin.
| 2,020 |
Computation and Language
|
Hierarchical Poset Decoding for Compositional Generalization in Language
|
We formalize human language understanding as a structured prediction task
where the output is a partially ordered set (poset). Current encoder-decoder
architectures do not take the poset structure of semantics into account
properly, thus suffering from poor compositional generalization ability. In
this paper, we propose a novel hierarchical poset decoding paradigm for
compositional generalization in language. Intuitively: (1) the proposed
paradigm enforces partial permutation invariance in semantics, thus avoiding
overfitting to bias ordering information; (2) the hierarchical mechanism allows
to capture high-level structures of posets. We evaluate our proposed decoder on
Compositional Freebase Questions (CFQ), a large and realistic natural language
question answering dataset that is specifically designed to measure
compositional generalization. Results show that it outperforms current
decoders.
| 2,020 |
Computation and Language
|
Where's the Question? A Multi-channel Deep Convolutional Neural Network
for Question Identification in Textual Data
|
In most clinical practice settings, there is no rigorous reviewing of the
clinical documentation, resulting in inaccurate information captured in the
patient medical records. The gold standard in clinical data capturing is
achieved via "expert-review", where clinicians can have a dialogue with a
domain expert (reviewers) and ask them questions about data entry rules.
Automatically identifying "real questions" in these dialogues could uncover
ambiguities or common problems in data capturing in a given clinical setting.
In this study, we proposed a novel multi-channel deep convolutional neural
network architecture, namely Quest-CNN, for the purpose of separating real
questions that expect an answer (information or help) about an issue from
sentences that are not questions, as well as from questions referring to an
issue mentioned in a nearby sentence (e.g., can you clarify this?), which we
will refer as "c-questions". We conducted a comprehensive performance
comparison analysis of the proposed multi-channel deep convolutional neural
network against other deep neural networks. Furthermore, we evaluated the
performance of traditional rule-based and learning-based methods for detecting
question sentences. The proposed Quest-CNN achieved the best F1 score both on a
dataset of data entry-review dialogue in a dialysis care setting, and on a
general domain dataset.
| 2,021 |
Computation and Language
|
Fine-Tuning Pre-trained Language Model with Weak Supervision: A
Contrastive-Regularized Self-Training Approach
|
Fine-tuned pre-trained language models (LMs) have achieved enormous success
in many natural language processing (NLP) tasks, but they still require
excessive labeled data in the fine-tuning stage. We study the problem of
fine-tuning pre-trained LMs using only weak supervision, without any labeled
data. This problem is challenging because the high capacity of LMs makes them
prone to overfitting the noisy labels generated by weak supervision. To address
this problem, we develop a contrastive self-training framework, COSINE, to
enable fine-tuning LMs with weak supervision. Underpinned by contrastive
regularization and confidence-based reweighting, this contrastive self-training
framework can gradually improve model fitting while effectively suppressing
error propagation. Experiments on sequence, token, and sentence pair
classification tasks show that our model outperforms the strongest baseline by
large margins on 7 benchmarks in 6 tasks, and achieves competitive performance
with fully-supervised fine-tuning methods.
| 2,021 |
Computation and Language
|
Update Frequently, Update Fast: Retraining Semantic Parsing Systems in a
Fraction of Time
|
Currently used semantic parsing systems deployed in voice assistants can
require weeks to train. Datasets for these models often receive small and
frequent updates, data patches. Each patch requires training a new model. To
reduce training time, one can fine-tune the previously trained model on each
patch, but naive fine-tuning exhibits catastrophic forgetting - degradation of
the model performance on the data not represented in the data patch. In this
work, we propose a simple method that alleviates catastrophic forgetting and
show that it is possible to match the performance of a model trained from
scratch in less than 10% of a time via fine-tuning. The key to achieving this
is supersampling and EWC regularization. We demonstrate the effectiveness of
our method on multiple splits of the Facebook TOP and SNIPS datasets.
| 2,021 |
Computation and Language
|
Tokenization Repair in the Presence of Spelling Errors
|
We consider the following tokenization repair problem: Given a natural
language text with any combination of missing or spurious spaces, correct
these. Spelling errors can be present, but it's not part of the problem to
correct them. For example, given: "Tispa per isabout token izaionrep air",
compute "Tis paper is about tokenizaion repair". We identify three key
ingredients of high-quality tokenization repair, all missing from previous
work: deep language models with a bidirectional component, training the models
on text with spelling errors, and making use of the space information already
present. Our methods also improve existing spell checkers by fixing not only
more tokenization errors but also more spelling errors: once it is clear which
characters form a word, it is much easier for them to figure out the correct
word. We provide six benchmarks that cover three use cases (OCR errors, text
extraction from PDF, human errors) and the cases of partially correct space
information and all spaces missing. We evaluate our methods against the best
existing methods and a non-trivial baseline. We provide full reproducibility
under https://ad.cs.uni-freiburg.de/publications .
| 2,021 |
Computation and Language
|
Understanding Neural Abstractive Summarization Models via Uncertainty
|
An advantage of seq2seq abstractive summarization models is that they
generate text in a free-form manner, but this flexibility makes it difficult to
interpret model behavior. In this work, we analyze summarization decoders in
both blackbox and whitebox ways by studying on the entropy, or uncertainty, of
the model's token-level predictions. For two strong pre-trained models, PEGASUS
and BART on two summarization datasets, we find a strong correlation between
low prediction entropy and where the model copies tokens rather than generating
novel text. The decoder's uncertainty also connects to factors like sentence
position and syntactic distance between adjacent pairs of tokens, giving a
sense of what factors make a context particularly selective for the model's
next output token. Finally, we study the relationship of decoder uncertainty
and attention behavior to understand how attention gives rise to these observed
effects in the model. We show that uncertainty is a useful perspective for
analyzing summarization and text generation models more broadly.
| 2,020 |
Computation and Language
|
Compressive Summarization with Plausibility and Salience Modeling
|
Compressive summarization systems typically rely on a crafted set of
syntactic rules to determine what spans of possible summary sentences can be
deleted, then learn a model of what to actually delete by optimizing for
content selection (ROUGE). In this work, we propose to relax the rigid
syntactic constraints on candidate spans and instead leave compression
decisions to two data-driven criteria: plausibility and salience. Deleting a
span is plausible if removing it maintains the grammaticality and factuality of
a sentence, and spans are salient if they contain important information from
the summary. Each of these is judged by a pre-trained Transformer model, and
only deletions that are both plausible and not salient can be applied. When
integrated into a simple extraction-compression pipeline, our method achieves
strong in-domain results on benchmark summarization datasets, and human
evaluation shows that the plausibility model generally selects for grammatical
and factual deletions. Furthermore, the flexibility of our approach allows it
to generalize cross-domain: our system fine-tuned on only 500 samples from a
new domain can match or exceed an in-domain extractive model trained on much
more data.
| 2,020 |
Computation and Language
|
Improving Natural Language Processing Tasks with Human Gaze-Guided
Neural Attention
|
A lack of corpora has so far limited advances in integrating human gaze data
as a supervisory signal in neural attention mechanisms for natural language
processing(NLP). We propose a novel hybrid text saliency model(TSM) that, for
the first time, combines a cognitive model of reading with explicit human gaze
supervision in a single machine learning framework. On four different corpora
we demonstrate that our hybrid TSM duration predictions are highly correlated
with human gaze ground truth. We further propose a novel joint modeling
approach to integrate TSM predictions into the attention layer of a network
designed for a specific upstream NLP task without the need for any
task-specific human gaze data. We demonstrate that our joint model outperforms
the state of the art in paraphrase generation on the Quora Question Pairs
corpus by more than 10% in BLEU-4 and achieves state of the art performance for
sentence compression on the challenging Google Sentence Compression corpus. As
such, our work introduces a practical approach for bridging between data-driven
and cognitive models and demonstrates a new way to integrate human gaze-guided
neural attention into NLP tasks.
| 2,020 |
Computation and Language
|
Explicit Alignment Objectives for Multilingual Bidirectional Encoders
|
Pre-trained cross-lingual encoders such as mBERT (Devlin et al., 2019) and
XLMR (Conneau et al., 2020) have proven to be impressively effective at
enabling transfer-learning of NLP systems from high-resource languages to
low-resource languages. This success comes despite the fact that there is no
explicit objective to align the contextual embeddings of words/sentences with
similar meanings across languages together in the same space. In this paper, we
present a new method for learning multilingual encoders, AMBER (Aligned
Multilingual Bidirectional EncodeR). AMBER is trained on additional parallel
data using two explicit alignment objectives that align the multilingual
representations at different granularities. We conduct experiments on zero-shot
cross-lingual transfer learning for different tasks including sequence tagging,
sentence retrieval and sentence classification. Experimental results show that
AMBER obtains gains of up to 1.1 average F1 score on sequence tagging and up to
27.3 average accuracy on retrieval over the XLMR-large model which has 3.2x the
parameters of AMBER. Our code and models are available at
http://github.com/junjiehu/amber.
| 2,021 |
Computation and Language
|
CXP949 at WNUT-2020 Task 2: Extracting Informative COVID-19 Tweets --
RoBERTa Ensembles and The Continued Relevance of Handcrafted Features
|
This paper presents our submission to Task 2 of the Workshop on Noisy
User-generated Text. We explore improving the performance of a pre-trained
transformer-based language model fine-tuned for text classification through an
ensemble implementation that makes use of corpus level information and a
handcrafted feature. We test the effectiveness of including the aforementioned
features in accommodating the challenges of a noisy data set centred on a
specific subject outside the remit of the pre-training data. We show that
inclusion of additional features can improve classification results and achieve
a score within 2 points of the top performing team.
| 2,020 |
Computation and Language
|
What is More Likely to Happen Next? Video-and-Language Future Event
Prediction
|
Given a video with aligned dialogue, people can often infer what is more
likely to happen next. Making such predictions requires not only a deep
understanding of the rich dynamics underlying the video and dialogue, but also
a significant amount of commonsense knowledge. In this work, we explore whether
AI models are able to learn to make such multimodal commonsense next-event
predictions. To support research in this direction, we collect a new dataset,
named Video-and-Language Event Prediction (VLEP), with 28,726 future event
prediction examples (along with their rationales) from 10,234 diverse TV Show
and YouTube Lifestyle Vlog video clips. In order to promote the collection of
non-trivial challenging examples, we employ an adversarial
human-and-model-in-the-loop data collection procedure. We also present a strong
baseline incorporating information from video, dialogue, and commonsense
knowledge. Experiments show that each type of information is useful for this
challenging task, and that compared to the high human performance on VLEP, our
model provides a good starting point but leaves large room for future work. Our
dataset and code are available at:
https://github.com/jayleicn/VideoLanguageFuturePred
| 2,020 |
Computation and Language
|
GSum: A General Framework for Guided Neural Abstractive Summarization
|
Neural abstractive summarization models are flexible and can produce coherent
summaries, but they are sometimes unfaithful and can be difficult to control.
While previous studies attempt to provide different types of guidance to
control the output and increase faithfulness, it is not clear how these
strategies compare and contrast to each other. In this paper, we propose a
general and extensible guided summarization framework (GSum) that can
effectively take different kinds of external guidance as input, and we perform
experiments across several different varieties. Experiments demonstrate that
this model is effective, achieving state-of-the-art performance according to
ROUGE on 4 popular summarization datasets when using highlighted sentences as
guidance. In addition, we show that our guided model can generate more faithful
summaries and demonstrate how different types of guidance generate
qualitatively different summaries, lending a degree of controllability to the
learned models.
| 2,021 |
Computation and Language
|
MAST: Multimodal Abstractive Summarization with Trimodal Hierarchical
Attention
|
This paper presents MAST, a new model for Multimodal Abstractive Text
Summarization that utilizes information from all three modalities -- text,
audio and video -- in a multimodal video. Prior work on multimodal abstractive
text summarization only utilized information from the text and video
modalities. We examine the usefulness and challenges of deriving information
from the audio modality and present a sequence-to-sequence trimodal
hierarchical attention-based model that overcomes these challenges by letting
the model pay more attention to the text modality. MAST outperforms the current
state of the art model (video-text) by 2.51 points in terms of Content F1 score
and 1.00 points in terms of Rouge-L score on the How2 dataset for multimodal
language understanding.
| 2,020 |
Computation and Language
|
Montague Grammar Induction
|
We propose a computational modeling framework for inducing combinatory
categorial grammars from arbitrary behavioral data. This framework provides the
analyst fine-grained control over the assumptions that the induced grammar
should conform to: (i) what the primitive types are; (ii) how complex types are
constructed; (iii) what set of combinators can be used to combine types; and
(iv) whether (and to what) the types of some lexical items should be fixed. In
a proof-of-concept experiment, we deploy our framework for use in
distributional analysis. We focus on the relationship between
s(emantic)-selection and c(ategory)-selection, using as input a lexicon-scale
acceptability judgment dataset focused on English verbs' syntactic distribution
(the MegaAcceptability dataset) and enforcing standard assumptions from the
semantics literature on the induced grammar.
| 2,020 |
Computation and Language
|
Inferring symmetry in natural language
|
We present a methodological framework for inferring symmetry of verb
predicates in natural language. Empirical work on predicate symmetry has taken
two main approaches. The feature-based approach focuses on linguistic features
pertaining to symmetry. The context-based approach denies the existence of
absolute symmetry but instead argues that such inference is context dependent.
We develop methods that formalize these approaches and evaluate them against a
novel symmetry inference sentence (SIS) dataset comprised of 400 naturalistic
usages of literature-informed verbs spanning the spectrum of
symmetry-asymmetry. Our results show that a hybrid transfer learning model that
integrates linguistic features with contextualized language models most
faithfully predicts the empirical data. Our work integrates existing approaches
to symmetry in natural language and suggests how symmetry inference can improve
systematicity in state-of-the-art language models.
| 2,020 |
Computation and Language
|
Generating Diverse Translation from Model Distribution with Dropout
|
Despite the improvement of translation quality, neural machine translation
(NMT) often suffers from the lack of diversity in its generation. In this
paper, we propose to generate diverse translations by deriving a large number
of possible models with Bayesian modelling and sampling models from them for
inference. The possible models are obtained by applying concrete dropout to the
NMT model and each of them has specific confidence for its prediction, which
corresponds to a posterior model distribution under specific training data in
the principle of Bayesian modeling. With variational inference, the posterior
model distribution can be approximated with a variational distribution, from
which the final models for inference are sampled. We conducted experiments on
Chinese-English and English-German translation tasks and the results shows that
our method makes a better trade-off between diversity and accuracy.
| 2,020 |
Computation and Language
|
DiDi's Machine Translation System for WMT2020
|
This paper describes DiDi AI Labs' submission to the WMT2020 news translation
shared task. We participate in the translation direction of Chinese->English.
In this direction, we use the Transformer as our baseline model, and integrate
several techniques for model enhancement, including data filtering, data
selection, back-translation, fine-tuning, model ensembling, and re-ranking. As
a result, our submission achieves a BLEU score of $36.6$ in Chinese->English.
| 2,020 |
Computation and Language
|
RocketQA: An Optimized Training Approach to Dense Passage Retrieval for
Open-Domain Question Answering
|
In open-domain question answering, dense passage retrieval has become a new
paradigm to retrieve relevant passages for finding answers. Typically, the
dual-encoder architecture is adopted to learn dense representations of
questions and passages for semantic matching. However, it is difficult to
effectively train a dual-encoder due to the challenges including the
discrepancy between training and inference, the existence of unlabeled
positives and limited training data. To address these challenges, we propose an
optimized training approach, called RocketQA, to improving dense passage
retrieval. We make three major technical contributions in RocketQA, namely
cross-batch negatives, denoised hard negatives and data augmentation. The
experiment results show that RocketQA significantly outperforms previous
state-of-the-art models on both MSMARCO and Natural Questions. We also conduct
extensive experiments to examine the effectiveness of the three strategies in
RocketQA. Besides, we demonstrate that the performance of end-to-end QA can be
improved based on our RocketQA retriever.
| 2,021 |
Computation and Language
|
Lexicon-constrained Copying Network for Chinese Abstractive
Summarization
|
Copy mechanism allows sequence-to-sequence models to choose words from the
input and put them directly into the output, which is finding increasing use in
abstractive summarization. However, since there is no explicit delimiter in
Chinese sentences, most existing models for Chinese abstractive summarization
can only perform character copy, resulting in inefficient. To solve this
problem, we propose a lexicon-constrained copying network that models
multi-granularity in both encoder and decoder. On the source side, words and
characters are aggregated into the same input memory using a Transformerbased
encoder. On the target side, the decoder can copy either a character or a
multi-character word at each time step, and the decoding process is guided by a
word-enhanced search algorithm that facilitates the parallel computation and
encourages the model to copy more words. Moreover, we adopt a word selector to
integrate keyword information. Experiments results on a Chinese social media
dataset show that our model can work standalone or with the word selector. Both
forms can outperform previous character-based models and achieve competitive
performances.
| 2,021 |
Computation and Language
|
Unsupervised Natural Language Inference via Decoupled Multimodal
Contrastive Learning
|
We propose to solve the natural language inference problem without any
supervision from the inference labels via task-agnostic multimodal pretraining.
Although recent studies of multimodal self-supervised learning also represent
the linguistic and visual context, their encoders for different modalities are
coupled. Thus they cannot incorporate visual information when encoding plain
text alone. In this paper, we propose Multimodal Aligned Contrastive Decoupled
learning (MACD) network. MACD forces the decoupled text encoder to represent
the visual information via contrastive learning. Therefore, it embeds visual
knowledge even for plain text inference. We conducted comprehensive experiments
over plain text inference datasets (i.e. SNLI and STS-B). The unsupervised MACD
even outperforms the fully-supervised BiLSTM and BiLSTM+ELMO on STS-B.
| 2,020 |
Computation and Language
|
Coarse-to-Fine Pre-training for Named Entity Recognition
|
More recently, Named Entity Recognition hasachieved great advances aided by
pre-trainingapproaches such as BERT. However, currentpre-training techniques
focus on building lan-guage modeling objectives to learn a gen-eral
representation, ignoring the named entity-related knowledge. To this end, we
proposea NER-specific pre-training framework to in-ject coarse-to-fine
automatically mined entityknowledge into pre-trained models. Specifi-cally, we
first warm-up the model via an en-tity span identification task by training it
withWikipedia anchors, which can be deemed asgeneral-typed entities. Then we
leverage thegazetteer-based distant supervision strategy totrain the model
extract coarse-grained typedentities. Finally, we devise a
self-supervisedauxiliary task to mine the fine-grained namedentity knowledge
via clustering.Empiricalstudies on three public NER datasets demon-strate that
our framework achieves significantimprovements against several pre-trained
base-lines, establishing the new state-of-the-art per-formance on three
benchmarks. Besides, weshow that our framework gains promising re-sults without
using human-labeled trainingdata, demonstrating its effectiveness in label-few
and low-resource scenarios
| 2,020 |
Computation and Language
|
Collaborative Training of GANs in Continuous and Discrete Spaces for
Text Generation
|
Applying generative adversarial networks (GANs) to text-related tasks is
challenging due to the discrete nature of language. One line of research
resolves this issue by employing reinforcement learning (RL) and optimizing the
next-word sampling policy directly in a discrete action space. Such methods
compute the rewards from complete sentences and avoid error accumulation due to
exposure bias. Other approaches employ approximation techniques that map the
text to continuous representation in order to circumvent the non-differentiable
discrete process. Particularly, autoencoder-based methods effectively produce
robust representations that can model complex discrete structures. In this
paper, we propose a novel text GAN architecture that promotes the collaborative
training of the continuous-space and discrete-space methods. Our method employs
an autoencoder to learn an implicit data manifold, providing a learning
objective for adversarial training in a continuous space. Furthermore, the
complete textual output is directly evaluated and updated via RL in a discrete
space. The collaborative interplay between the two adversarial trainings
effectively regularize the text representations in different spaces. The
experimental results on three standard benchmark datasets show that our model
substantially outperforms state-of-the-art text GANs with respect to quality,
diversity, and global consistency.
| 2,020 |
Computation and Language
|
WNUT-2020 Task 2: Identification of Informative COVID-19 English Tweets
|
In this paper, we provide an overview of the WNUT-2020 shared task on the
identification of informative COVID-19 English Tweets. We describe how we
construct a corpus of 10K Tweets and organize the development and evaluation
phases for this task. In addition, we also present a brief summary of results
obtained from the final system evaluation submissions of 55 teams, finding that
(i) many systems obtain very high performance, up to 0.91 F1 score, (ii) the
majority of the submissions achieve substantially higher results than the
baseline fastText (Joulin et al., 2017), and (iii) fine-tuning pre-trained
language models on relevant language data followed by supervised training
performs well in this task.
| 2,020 |
Computation and Language
|
Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for
Pairwise Sentence Scoring Tasks
|
There are two approaches for pairwise sentence scoring: Cross-encoders, which
perform full-attention over the input pair, and Bi-encoders, which map each
input independently to a dense vector space. While cross-encoders often achieve
higher performance, they are too slow for many practical use cases.
Bi-encoders, on the other hand, require substantial training data and
fine-tuning over the target task to achieve competitive performance. We present
a simple yet efficient data augmentation strategy called Augmented SBERT, where
we use the cross-encoder to label a larger set of input pairs to augment the
training data for the bi-encoder. We show that, in this process, selecting the
sentence pairs is non-trivial and crucial for the success of the method. We
evaluate our approach on multiple tasks (in-domain) as well as on a domain
adaptation task. Augmented SBERT achieves an improvement of up to 6 points for
in-domain and of up to 37 points for domain adaptation tasks compared to the
original bi-encoder performance.
| 2,021 |
Computation and Language
|
Unsupervised Extractive Summarization by Pre-training Hierarchical
Transformers
|
Unsupervised extractive document summarization aims to select important
sentences from a document without using labeled summaries during training.
Existing methods are mostly graph-based with sentences as nodes and edge
weights measured by sentence similarities. In this work, we find that
transformer attentions can be used to rank sentences for unsupervised
extractive summarization. Specifically, we first pre-train a hierarchical
transformer model using unlabeled documents only. Then we propose a method to
rank sentences using sentence-level self-attentions and pre-training
objectives. Experiments on CNN/DailyMail and New York Times datasets show our
model achieves state-of-the-art performance on unsupervised summarization. We
also find in experiments that our model is less dependent on sentence
positions. When using a linear combination of our model and a recent
unsupervised model explicitly modeling sentence positions, we obtain even
better results.
| 2,020 |
Computation and Language
|
SIGTYP 2020 Shared Task: Prediction of Typological Features
|
Typological knowledge bases (KBs) such as WALS (Dryer and Haspelmath, 2013)
contain information about linguistic properties of the world's languages. They
have been shown to be useful for downstream applications, including
cross-lingual transfer learning and linguistic probing. A major drawback
hampering broader adoption of typological KBs is that they are sparsely
populated, in the sense that most languages only have annotations for some
features, and skewed, in that few features have wide coverage. As typological
features often correlate with one another, it is possible to predict them and
thus automatically populate typological KBs, which is also the focus of this
shared task. Overall, the task attracted 8 submissions from 5 teams, out of
which the most successful methods make use of such feature correlations.
However, our error analysis reveals that even the strongest submitted systems
struggle with predicting feature values for languages where few features are
known.
| 2,020 |
Computation and Language
|
Training Flexible Depth Model by Multi-Task Learning for Neural Machine
Translation
|
The standard neural machine translation model can only decode with the same
depth configuration as training. Restricted by this feature, we have to deploy
models of various sizes to maintain the same translation latency, because the
hardware conditions on different terminal devices (e.g., mobile phones) may
vary greatly. Such individual training leads to increased model maintenance
costs and slower model iterations, especially for the industry. In this work,
we propose to use multi-task learning to train a flexible depth model that can
adapt to different depth configurations during inference. Experimental results
show that our approach can simultaneously support decoding in 24 depth
configurations and is superior to the individual training and another flexible
depth model training method -- LayerDrop.
| 2,020 |
Computation and Language
|
It's not Greek to mBERT: Inducing Word-Level Translations from
Multilingual BERT
|
Recent works have demonstrated that multilingual BERT (mBERT) learns rich
cross-lingual representations, that allow for transfer across languages. We
study the word-level translation information embedded in mBERT and present two
simple methods that expose remarkable translation capabilities with no
fine-tuning. The results suggest that most of this information is encoded in a
non-linear way, while some of it can also be recovered with purely linear
tools. As part of our analysis, we test the hypothesis that mBERT learns
representations which contain both a language-encoding component and an
abstract, cross-lingual component, and explicitly identify an empirical
language-identity subspace within mBERT representations.
| 2,020 |
Computation and Language
|
Multi-task Learning of Negation and Speculation for Targeted Sentiment
Classification
|
The majority of work in targeted sentiment analysis has concentrated on
finding better methods to improve the overall results. Within this paper we
show that these models are not robust to linguistic phenomena, specifically
negation and speculation. In this paper, we propose a multi-task learning
method to incorporate information from syntactic and semantic auxiliary tasks,
including negation and speculation scope detection, to create English-language
models that are more robust to these phenomena. Further we create two challenge
datasets to evaluate model performance on negated and speculative samples. We
find that multi-task models and transfer learning via language modelling can
improve performance on these challenge datasets, but the overall performances
indicate that there is still much room for improvement. We release both the
datasets and the source code at
https://github.com/jerbarnes/multitask_negation_for_targeted_sentiment.
| 2,021 |
Computation and Language
|
Detecting ESG topics using domain-specific language models and data
augmentation approaches
|
Despite recent advances in deep learning-based language modelling, many
natural language processing (NLP) tasks in the financial domain remain
challenging due to the paucity of appropriately labelled data. Other issues
that can limit task performance are differences in word distribution between
the general corpora - typically used to pre-train language models - and
financial corpora, which often exhibit specialized language and symbology.
Here, we investigate two approaches that may help to mitigate these issues.
Firstly, we experiment with further language model pre-training using large
amounts of in-domain data from business and financial news. We then apply
augmentation approaches to increase the size of our dataset for model
fine-tuning. We report our findings on an Environmental, Social and Governance
(ESG) controversies dataset and demonstrate that both approaches are beneficial
to accuracy in classification tasks.
| 2,020 |
Computation and Language
|
QA2Explanation: Generating and Evaluating Explanations for Question
Answering Systems over Knowledge Graph
|
In the era of Big Knowledge Graphs, Question Answering (QA) systems have
reached a milestone in their performance and feasibility. However, their
applicability, particularly in specific domains such as the biomedical domain,
has not gained wide acceptance due to their "black box" nature, which hinders
transparency, fairness, and accountability of QA systems. Therefore, users are
unable to understand how and why particular questions have been answered,
whereas some others fail. To address this challenge, in this paper, we develop
an automatic approach for generating explanations during various stages of a
pipeline-based QA system. Our approach is a supervised and automatic approach
which considers three classes (i.e., success, no answer, and wrong answer) for
annotating the output of involved QA components. Upon our prediction, a
template explanation is chosen and integrated into the output of the
corresponding component. To measure the effectiveness of the approach, we
conducted a user survey as to how non-expert users perceive our generated
explanations. The results of our study show a significant increase in the four
dimensions of the human factor from the Human-computer interaction community.
| 2,020 |
Computation and Language
|
From Talk to Action with Accountability: Monitoring the Public
Discussion of Policy Makers with Deep Neural Networks and Topic Modelling
|
Decades of research on climate have provided a consensus that human activity
has changed the climate and we are currently heading into a climate crisis.
While public discussion and research efforts on climate change mitigation have
increased, potential solutions need to not only be discussed but also
effectively deployed. For preventing mismanagement and holding policy makers
accountable, transparency and degree of information about government processes
have been shown to be crucial. However, currently the quantity of information
about climate change discussions and the range of sources make it increasingly
difficult for the public and civil society to maintain an overview to hold
politicians accountable.
In response, we propose a multi-source topic aggregation system (MuSTAS)
which processes policy makers speech and rhetoric from several publicly
available sources into an easily digestible topic summary. MuSTAS uses novel
multi-source hybrid latent Dirichlet allocation to model topics from a variety
of documents. This topic digest will serve the general public and civil society
in assessing where, how, and when politicians talk about climate and climate
policies, enabling them to hold politicians accountable for their actions to
mitigate climate change and lack thereof.
| 2,021 |
Computation and Language
|
Vector-Vector-Matrix Architecture: A Novel Hardware-Aware Framework for
Low-Latency Inference in NLP Applications
|
Deep neural networks have become the standard approach to building reliable
Natural Language Processing (NLP) applications, ranging from Neural Machine
Translation (NMT) to dialogue systems. However, improving accuracy by
increasing the model size requires a large number of hardware computations,
which can slow down NLP applications significantly at inference time. To
address this issue, we propose a novel vector-vector-matrix architecture
(VVMA), which greatly reduces the latency at inference time for NMT. This
architecture takes advantage of specialized hardware that has low-latency
vector-vector operations and higher-latency vector-matrix operations. It also
reduces the number of parameters and FLOPs for virtually all models that rely
on efficient matrix multipliers without significantly impacting accuracy. We
present empirical results suggesting that our framework can reduce the latency
of sequence-to-sequence and Transformer models used for NMT by a factor of
four. Finally, we show evidence suggesting that our VVMA extends to other
domains, and we discuss novel hardware for its efficient use.
| 2,020 |
Computation and Language
|
Delaying Interaction Layers in Transformer-based Encoders for Efficient
Open Domain Question Answering
|
Open Domain Question Answering (ODQA) on a large-scale corpus of documents
(e.g. Wikipedia) is a key challenge in computer science. Although
transformer-based language models such as Bert have shown on SQuAD the ability
to surpass humans for extracting answers in small passages of text, they suffer
from their high complexity when faced to a much larger search space. The most
common way to tackle this problem is to add a preliminary Information Retrieval
step to heavily filter the corpus and only keep the relevant passages. In this
paper, we propose a more direct and complementary solution which consists in
applying a generic change in the architecture of transformer-based models to
delay the attention between subparts of the input and allow a more efficient
management of computations. The resulting variants are competitive with the
original models on the extractive task and allow, on the ODQA setting, a
significant speedup and even a performance improvement in many cases.
| 2,020 |
Computation and Language
|
Multi-Adversarial Learning for Cross-Lingual Word Embeddings
|
Generative adversarial networks (GANs) have succeeded in inducing
cross-lingual word embeddings -- maps of matching words across languages --
without supervision. Despite these successes, GANs' performance for the
difficult case of distant languages is still not satisfactory. These
limitations have been explained by GANs' incorrect assumption that source and
target embedding spaces are related by a single linear mapping and are
approximately isomorphic. We assume instead that, especially across distant
languages, the mapping is only piece-wise linear, and propose a
multi-adversarial learning method. This novel method induces the seed
cross-lingual dictionary through multiple mappings, each induced to fit the
mapping for one subspace. Our experiments on unsupervised bilingual lexicon
induction show that this method improves performance over previous
single-mapping methods, especially for distant languages.
| 2,021 |
Computation and Language
|
An efficient representation of chronological events in medical texts
|
In this work we addressed the problem of capturing sequential information
contained in longitudinal electronic health records (EHRs). Clinical notes,
which is a particular type of EHR data, are a rich source of information and
practitioners often develop clever solutions how to maximise the sequential
information contained in free-texts. We proposed a systematic methodology for
learning from chronological events available in clinical notes. The proposed
methodological {\it path signature} framework creates a non-parametric
hierarchical representation of sequential events of any type and can be used as
features for downstream statistical learning tasks. The methodology was
developed and externally validated using the largest in the UK secondary care
mental health EHR data on a specific task of predicting survival risk of
patients diagnosed with Alzheimer's disease. The signature-based model was
compared to a common survival random forest model. Our results showed a
15.4$\%$ increase of risk prediction AUC at the time point of 20 months after
the first admission to a specialist memory clinic and the signature method
outperformed the baseline mixed-effects model by 13.2 $\%$.
| 2,020 |
Computation and Language
|
Adaptive Feature Selection for End-to-End Speech Translation
|
Information in speech signals is not evenly distributed, making it an
additional challenge for end-to-end (E2E) speech translation (ST) to learn to
focus on informative features. In this paper, we propose adaptive feature
selection (AFS) for encoder-decoder based E2E ST. We first pre-train an ASR
encoder and apply AFS to dynamically estimate the importance of each encoded
speech feature to SR. A ST encoder, stacked on top of the ASR encoder, then
receives the filtered features from the (frozen) ASR encoder. We take L0DROP
(Zhang et al., 2020) as the backbone for AFS, and adapt it to sparsify speech
features with respect to both temporal and feature dimensions. Results on
LibriSpeech En-Fr and MuST-C benchmarks show that AFS facilitates learning of
ST by pruning out ~84% temporal features, yielding an average translation gain
of ~1.3-1.6 BLEU and a decoding speedup of ~1.4x. In particular, AFS reduces
the performance gap compared to the cascade baseline, and outperforms it on
LibriSpeech En-Fr with a BLEU score of 18.56 (without data augmentation)
| 2,020 |
Computation and Language
|
Detecting Objectifying Language in Online Professor Reviews
|
Student reviews often make reference to professors' physical appearances.
Until recently RateMyProfessors.com, the website of this study's focus, used a
design feature to encourage a "hot or not" rating of college professors. In the
wake of recent #MeToo and #TimesUp movements, social awareness of the
inappropriateness of these reviews has grown; however, objectifying comments
remain and continue to be posted in this online context. We describe two
supervised text classifiers for detecting objectifying commentary in professor
reviews. We then ensemble these classifiers and use the resulting model to
track objectifying commentary at scale. We measure correlations between
objectifying commentary, changes to the review website interface, and teacher
gender across a ten-year period.
| 2,020 |
Computation and Language
|
Mischief: A Simple Black-Box Attack Against Transformer Architectures
|
We introduce Mischief, a simple and lightweight method to produce a class of
human-readable, realistic adversarial examples for language models. We perform
exhaustive experimentations of our algorithm on four transformer-based
architectures, across a variety of downstream tasks, as well as under varying
concentrations of said examples. Our findings show that the presence of
Mischief-generated adversarial samples in the test set significantly degrades
(by up to $20\%$) the performance of these models with respect to their
reported baselines. Nonetheless, we also demonstrate that, by including similar
examples in the training set, it is possible to restore the baseline scores on
the adversarial test set. Moreover, for certain tasks, the models trained with
Mischief set show a modest increase on performance with respect to their
original, non-adversarial baseline.
| 2,020 |
Computation and Language
|
Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf
Language Models
|
Publicly available, large pretrained LanguageModels (LMs) generate text with
remarkable quality, but only sequentially from left to right. As a result, they
are not immediately applicable to generation tasks that break the
unidirectional assumption, such as paraphrasing or text-infilling,
necessitating task-specific supervision.
In this paper, we present Reflective Decoding, a novel unsupervised algorithm
that allows for direct application of unidirectional LMs to non-sequential
tasks. Our 2-step approach requires no supervision or even parallel corpora,
only two off-the-shelf pretrained LMs in opposite directions: forward and
backward. First, in the contextualization step, we use LMs to generate
ensembles of past and future contexts which collectively capture the input
(e.g. the source sentence for paraphrasing). Second, in the reflection step, we
condition on these "context ensembles", generating outputs that are compatible
with them. Comprehensive empirical results demonstrate that Reflective Decoding
outperforms strong unsupervised baselines on both paraphrasing and abductive
text infilling, significantly narrowing the gap between unsupervised and
supervised methods. Reflective Decoding surpasses multiple supervised baselines
on various metrics including human evaluation.
| 2,021 |
Computation and Language
|
Generating Fact Checking Summaries for Web Claims
|
We present SUMO, a neural attention-based approach that learns to establish
the correctness of textual claims based on evidence in the form of text
documents (e.g., news articles or Web documents). SUMO further generates an
extractive summary by presenting a diversified set of sentences from the
documents that explain its decision on the correctness of the textual claim.
Prior approaches to address the problem of fact checking and evidence
extraction have relied on simple concatenation of claim and document word
embeddings as an input to claim driven attention weight computation. This is
done so as to extract salient words and sentences from the documents that help
establish the correctness of the claim. However, this design of claim-driven
attention does not capture the contextual information in documents properly. We
improve on the prior art by using improved claim and title guided hierarchical
attention to model effective contextual cues. We show the efficacy of our
approach on datasets concerning political, healthcare, and environmental
issues.
| 2,020 |
Computation and Language
|
Linguistically-Informed Transformations (LIT): A Method for
Automatically Generating Contrast Sets
|
Although large-scale pretrained language models, such as BERT and RoBERTa,
have achieved superhuman performance on in-distribution test sets, their
performance suffers on out-of-distribution test sets (e.g., on contrast sets).
Building contrast sets often re-quires human-expert annotation, which is
expensive and hard to create on a large scale. In this work, we propose a
Linguistically-Informed Transformation (LIT) method to automatically generate
contrast sets, which enables practitioners to explore linguistic phenomena of
interests as well as compose different phenomena. Experimenting with our method
on SNLI and MNLI shows that current pretrained language models, although being
claimed to contain sufficient linguistic knowledge, struggle on our
automatically generated contrast sets. Furthermore, we improve models'
performance on the contrast sets by apply-ing LIT to augment the training data,
without affecting performance on the original data.
| 2,020 |
Computation and Language
|
Substance over Style: Document-Level Targeted Content Transfer
|
Existing language models excel at writing from scratch, but many real-world
scenarios require rewriting an existing document to fit a set of constraints.
Although sentence-level rewriting has been fairly well-studied, little work has
addressed the challenge of rewriting an entire document coherently. In this
work, we introduce the task of document-level targeted content transfer and
address it in the recipe domain, with a recipe as the document and a dietary
restriction (such as vegan or dairy-free) as the targeted constraint. We
propose a novel model for this task based on the generative pre-trained
language model (GPT-2) and train on a large number of roughly-aligned recipe
pairs (https://github.com/microsoft/document-level-targeted-content-transfer).
Both automatic and human evaluations show that our model out-performs existing
methods by generating coherent and diverse rewrites that obey the constraint
while remaining close to the original document. Finally, we analyze our model's
rewrites to assess progress toward the goal of making language generation more
attuned to constraints that are substantive rather than stylistic.
| 2,020 |
Computation and Language
|
Multimodal Speech Recognition with Unstructured Audio Masking
|
Visual context has been shown to be useful for automatic speech recognition
(ASR) systems when the speech signal is noisy or corrupted. Previous work,
however, has only demonstrated the utility of visual context in an unrealistic
setting, where a fixed set of words are systematically masked in the audio. In
this paper, we simulate a more realistic masking scenario during model
training, called RandWordMask, where the masking can occur for any word
segment. Our experiments on the Flickr 8K Audio Captions Corpus show that
multimodal ASR can generalize to recover different types of masked words in
this unstructured masking setting. Moreover, our analysis shows that our models
are capable of attending to the visual signal when the audio signal is
corrupted. These results show that multimodal ASR systems can leverage the
visual signal in more generalized noisy scenarios.
| 2,020 |
Computation and Language
|
Cross-Lingual Relation Extraction with Transformers
|
Relation extraction (RE) is one of the most important tasks in information
extraction, as it provides essential information for many NLP applications. In
this paper, we propose a cross-lingual RE approach that does not require any
human annotation in a target language or any cross-lingual resources. Building
upon unsupervised cross-lingual representation learning frameworks, we develop
several deep Transformer based RE models with a novel encoding scheme that can
effectively encode both entity location and entity type information. Our RE
models, when trained with English data, outperform several deep neural network
based English RE models. More importantly, our models can be applied to perform
zero-shot cross-lingual RE, achieving the state-of-the-art cross-lingual RE
performance on two datasets (68-89% of the accuracy of the supervised
target-language RE model). The high cross-lingual transfer efficiency without
requiring additional training data or cross-lingual resources shows that our RE
models are especially useful for low-resource languages.
| 2,020 |
Computation and Language
|
CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for
Natural Language Understanding
|
Data augmentation has been demonstrated as an effective strategy for
improving model generalization and data efficiency. However, due to the
discrete nature of natural language, designing label-preserving transformations
for text data tends to be more challenging. In this paper, we propose a novel
data augmentation framework dubbed CoDA, which synthesizes diverse and
informative augmented examples by integrating multiple transformations
organically. Moreover, a contrastive regularization objective is introduced to
capture the global relationship among all the data samples. A momentum encoder
along with a memory bank is further leveraged to better estimate the
contrastive loss. To verify the effectiveness of the proposed framework, we
apply CoDA to Transformer-based models on a wide range of natural language
understanding tasks. On the GLUE benchmark, CoDA gives rise to an average
improvement of 2.2% while applied to the RoBERTa-large model. More importantly,
it consistently exhibits stronger results relative to several competitive data
augmentation and adversarial training base-lines (including the low-resource
settings). Extensive experiments show that the proposed contrastive objective
can be flexibly combined with various data augmentation approaches to further
boost their performance, highlighting the wide applicability of the CoDA
framework.
| 2,020 |
Computation and Language
|
Example-Driven Intent Prediction with Observers
|
A key challenge of dialog systems research is to effectively and efficiently
adapt to new domains. A scalable paradigm for adaptation necessitates the
development of generalizable models that perform well in few-shot settings. In
this paper, we focus on the intent classification problem which aims to
identify user intents given utterances addressed to the dialog system. We
propose two approaches for improving the generalizability of utterance
classification models: (1) observers and (2) example-driven training. Prior
work has shown that BERT-like models tend to attribute a significant amount of
attention to the [CLS] token, which we hypothesize results in diluted
representations. Observers are tokens that are not attended to, and are an
alternative to the [CLS] token as a semantic representation of utterances.
Example-driven training learns to classify utterances by comparing to examples,
thereby using the underlying encoder as a sentence similarity model. These
methods are complementary; improving the representation through observers
allows the example-driven model to better measure sentence similarities. When
combined, the proposed methods attain state-of-the-art results on three intent
prediction datasets (\textsc{banking77}, \textsc{clinc150}, \textsc{hwu64}) in
both the full data and few-shot (10 examples per intent) settings. Furthermore,
we demonstrate that the proposed approach can transfer to new intents and
across datasets without any additional training.
| 2,021 |
Computation and Language
|
Factual Error Correction for Abstractive Summarization Models
|
Neural abstractive summarization systems have achieved promising progress,
thanks to the availability of large-scale datasets and models pre-trained with
self-supervised methods. However, ensuring the factual consistency of the
generated summaries for abstractive summarization systems is a challenge. We
propose a post-editing corrector module to address this issue by identifying
and correcting factual errors in generated summaries. The neural corrector
model is pre-trained on artificial examples that are created by applying a
series of heuristic transformations on reference summaries. These
transformations are inspired by an error analysis of state-of-the-art
summarization model outputs. Experimental results show that our model is able
to correct factual errors in summaries generated by other neural summarization
models and outperforms previous models on factual consistency evaluation on the
CNN/DailyMail dataset. We also find that transferring from artificial error
correction to downstream settings is still very challenging.
| 2,021 |
Computation and Language
|
A Corpus for English-Japanese Multimodal Neural Machine Translation with
Comparable Sentences
|
Multimodal neural machine translation (NMT) has become an increasingly
important area of research over the years because additional modalities, such
as image data, can provide more context to textual data. Furthermore, the
viability of training multimodal NMT models without a large parallel corpus
continues to be investigated due to low availability of parallel sentences with
images, particularly for English-Japanese data. However, this void can be
filled with comparable sentences that contain bilingual terms and parallel
phrases, which are naturally created through media such as social network posts
and e-commerce product descriptions. In this paper, we propose a new multimodal
English-Japanese corpus with comparable sentences that are compiled from
existing image captioning datasets. In addition, we supplement our comparable
sentences with a smaller parallel corpus for validation and test purposes. To
test the performance of this comparable sentence translation scenario, we train
several baseline NMT models with our comparable corpus and evaluate their
English-Japanese translation performance. Due to low translation scores in our
baseline experiments, we believe that current multimodal NMT models are not
designed to effectively utilize comparable sentence data. Despite this, we hope
for our corpus to be used to further research into multimodal NMT with
comparable sentences.
| 2,020 |
Computation and Language
|
Incorporate Semantic Structures into Machine Translation Evaluation via
UCCA
|
Copying mechanism has been commonly used in neural paraphrasing networks and
other text generation tasks, in which some important words in the input
sequence are preserved in the output sequence. Similarly, in machine
translation, we notice that there are certain words or phrases appearing in all
good translations of one source text, and these words tend to convey important
semantic information. Therefore, in this work, we define words carrying
important semantic meanings in sentences as semantic core words. Moreover, we
propose an MT evaluation approach named Semantically Weighted Sentence
Similarity (SWSS). It leverages the power of UCCA to identify semantic core
words, and then calculates sentence similarity scores on the overlap of
semantic core words. Experimental results show that SWSS can consistently
improve the performance of popular MT evaluation metrics which are based on
lexical similarity.
| 2,020 |
Computation and Language
|
RiSAWOZ: A Large-Scale Multi-Domain Wizard-of-Oz Dataset with Rich
Semantic Annotations for Task-Oriented Dialogue Modeling
|
In order to alleviate the shortage of multi-domain data and to capture
discourse phenomena for task-oriented dialogue modeling, we propose RiSAWOZ, a
large-scale multi-domain Chinese Wizard-of-Oz dataset with Rich Semantic
Annotations. RiSAWOZ contains 11.2K human-to-human (H2H) multi-turn
semantically annotated dialogues, with more than 150K utterances spanning over
12 domains, which is larger than all previous annotated H2H conversational
datasets. Both single- and multi-domain dialogues are constructed, accounting
for 65% and 35%, respectively. Each dialogue is labeled with comprehensive
dialogue annotations, including dialogue goal in the form of natural language
description, domain, dialogue states and acts at both the user and system side.
In addition to traditional dialogue annotations, we especially provide
linguistic annotations on discourse phenomena, e.g., ellipsis and coreference,
in dialogues, which are useful for dialogue coreference and ellipsis resolution
tasks. Apart from the fully annotated dataset, we also present a detailed
description of the data collection procedure, statistics and analysis of the
dataset. A series of benchmark models and results are reported, including
natural language understanding (intent detection & slot filling), dialogue
state tracking and dialogue context-to-text generation, as well as coreference
and ellipsis resolution, which facilitate the baseline comparison for future
research on this corpus.
| 2,020 |
Computation and Language
|
Drink Bleach or Do What Now? Covid-HeRA: A Study of Risk-Informed Health
Decision Making in the Presence of COVID-19 Misinformation
|
Given the widespread dissemination of inaccurate medical advice related to
the 2019 coronavirus pandemic (COVID-19), such as fake remedies, treatments and
prevention suggestions, misinformation detection has emerged as an open problem
of high importance and interest for the research community. Several works study
health misinformation detection, yet little attention has been given to the
perceived severity of misinformation posts. In this work, we frame health
misinformation as a risk assessment task. More specifically, we study the
severity of each misinformation story and how readers perceive this severity,
i.e., how harmful a message believed by the audience can be and what type of
signals can be used to recognize potentially malicious fake news and detect
refuted claims. To address our research questions, we introduce a new benchmark
dataset, accompanied by detailed data analysis. We evaluate several traditional
and state-of-the-art models and show there is a significant gap in performance
when applying traditional misinformation classification models to this task. We
conclude with open challenges and future directions.
| 2,022 |
Computation and Language
|
CUSATNLP@HASOC-Dravidian-CodeMix-FIRE2020:Identifying Offensive Language
from ManglishTweets
|
With the popularity of social media, communications through blogs, Facebook,
Twitter, and other plat-forms have increased. Initially, English was the only
medium of communication. Fortunately, now we can communicate in any language.
It has led to people using English and their own native or mother tongue
language in a mixed form. Sometimes, comments in other languages have English
transliterated format or other cases; people use the intended language scripts.
Identifying sentiments and offensive content from such code mixed tweets is a
necessary task in these times. We present a working model submitted for Task2
of the sub-track HASOC Offensive Language Identification- DravidianCodeMix in
Forum for Information Retrieval Evaluation, 2020. It is a message level
classification task. An embedding model-based classifier identifies offensive
and not offensive comments in our approach. We applied this method in the
Manglish dataset provided along with the sub-track.
| 2,020 |
Computation and Language
|
ArCOV19-Rumors: Arabic COVID-19 Twitter Dataset for Misinformation
Detection
|
In this paper we introduce ArCOV19-Rumors, an Arabic COVID-19 Twitter dataset
for misinformation detection composed of tweets containing claims from 27th
January till the end of April 2020. We collected 138 verified claims, mostly
from popular fact-checking websites, and identified 9.4K relevant tweets to
those claims. Tweets were manually-annotated by veracity to support research on
misinformation detection, which is one of the major problems faced during a
pandemic. ArCOV19-Rumors supports two levels of misinformation detection over
Twitter: verifying free-text claims (called claim-level verification) and
verifying claims expressed in tweets (called tweet-level verification). Our
dataset covers, in addition to health, claims related to other topical
categories that were influenced by COVID-19, namely, social, politics, sports,
entertainment, and religious. Moreover, we present benchmarking results for
tweet-level verification on the dataset. We experimented with SOTA models of
versatile approaches that either exploit content, user profiles features,
temporal features and propagation structure of the conversational threads for
tweet verification.
| 2,021 |
Computation and Language
|
Active Testing: An Unbiased Evaluation Method for Distantly Supervised
Relation Extraction
|
Distant supervision has been a widely used method for neural relation
extraction for its convenience of automatically labeling datasets. However,
existing works on distantly supervised relation extraction suffer from the low
quality of test set, which leads to considerable biased performance evaluation.
These biases not only result in unfair evaluations but also mislead the
optimization of neural relation extraction. To mitigate this problem, we
propose a novel evaluation method named active testing through utilizing both
the noisy test set and a few manual annotations. Experiments on a widely used
benchmark show that our proposed approach can yield approximately unbiased
evaluations for distantly supervised relation extractors.
| 2,020 |
Computation and Language
|
Consistency and Coherency Enhanced Story Generation
|
Story generation is a challenging task, which demands to maintain consistency
of the plots and characters throughout the story. Previous works have shown
that GPT2, a large-scale language model, has achieved good performance on story
generation. However, we observe that several serious issues still exist in the
stories generated by GPT2 which can be categorized into two folds: consistency
and coherency. In terms of consistency, on one hand, GPT2 cannot guarantee the
consistency of the plots explicitly. On the other hand, the generated stories
usually contain coreference errors. In terms of coherency, GPT2 does not take
account of the discourse relations between sentences of stories directly. To
enhance the consistency and coherency of the generated stories, we propose a
two-stage generation framework, where the first stage is to organize the story
outline which depicts the story plots and events, and the second stage is to
expand the outline into a complete story. Therefore the plots consistency can
be controlled and guaranteed explicitly. In addition, coreference supervision
signals are incorporated to reduce coreference errors and improve the
coreference consistency. Moreover, we design an auxiliary task of discourse
relation modeling to improve the coherency of the generated stories.
Experimental results on a story dataset show that our model outperforms the
baseline approaches in terms of both automatic metrics and human evaluation.
| 2,020 |
Computation and Language
|
Knowledge-Grounded Dialogue Generation with Pre-trained Language Models
|
We study knowledge-grounded dialogue generation with pre-trained language
models. To leverage the redundant external knowledge under capacity constraint,
we propose equipping response generation defined by a pre-trained language
model with a knowledge selection module, and an unsupervised approach to
jointly optimizing knowledge selection and response generation with unlabeled
dialogues. Empirical results on two benchmarks indicate that our model can
significantly outperform state-of-the-art methods in both automatic evaluation
and human judgment.
| 2,020 |
Computation and Language
|
HABERTOR: An Efficient and Effective Deep Hatespeech Detector
|
We present our HABERTOR model for detecting hatespeech in large scale
user-generated content. Inspired by the recent success of the BERT model, we
propose several modifications to BERT to enhance the performance on the
downstream hatespeech classification task. HABERTOR inherits BERT's
architecture, but is different in four aspects: (i) it generates its own
vocabularies and is pre-trained from the scratch using the largest scale
hatespeech dataset; (ii) it consists of Quaternion-based factorized components,
resulting in a much smaller number of parameters, faster training and
inferencing, as well as less memory usage; (iii) it uses our proposed
multi-source ensemble heads with a pooling layer for separate input sources, to
further enhance its effectiveness; and (iv) it uses a regularized adversarial
training with our proposed fine-grained and adaptive noise magnitude to enhance
its robustness. Through experiments on the large-scale real-world hatespeech
dataset with 1.4M annotated comments, we show that HABERTOR works better than
15 state-of-the-art hatespeech detection methods, including fine-tuning
Language Models. In particular, comparing with BERT, our HABERTOR is 4~5 times
faster in the training/inferencing phase, uses less than 1/3 of the memory, and
has better performance, even though we pre-train it by using less than 1% of
the number of words. Our generalizability analysis shows that HABERTOR
transfers well to other unseen hatespeech datasets and is a more efficient and
effective alternative to BERT for the hatespeech classification.
| 2,020 |
Computation and Language
|
Question Answering over Knowledge Base using Language Model Embeddings
|
Knowledge Base, represents facts about the world, often in some form of
subsumption ontology, rather than implicitly, embedded in procedural code, the
way a conventional computer program does. While there is a rapid growth in
knowledge bases, it poses a challenge of retrieving information from them.
Knowledge Base Question Answering is one of the promising approaches for
extracting substantial knowledge from Knowledge Bases. Unlike web search,
Question Answering over a knowledge base gives accurate and concise results,
provided that natural language questions can be understood and mapped precisely
to an answer in the knowledge base. However, some of the existing
embedding-based methods for knowledge base question answering systems ignore
the subtle correlation between the question and the Knowledge Base (e.g.,
entity types, relation paths, and context) and suffer from the Out Of
Vocabulary problem. In this paper, we focused on using a pre-trained language
model for the Knowledge Base Question Answering task. Firstly, we used Bert
base uncased for the initial experiments. We further fine-tuned these
embeddings with a two-way attention mechanism from the knowledge base to the
asked question and from the asked question to the knowledge base answer
aspects. Our method is based on a simple Convolutional Neural Network
architecture with a Multi-Head Attention mechanism to represent the asked
question dynamically in multiple aspects. Our experimental results show the
effectiveness and the superiority of the Bert pre-trained language model
embeddings for question answering systems on knowledge bases over other
well-known embedding methods.
| 2,020 |
Computation and Language
|
Mixed-Lingual Pre-training for Cross-lingual Summarization
|
Cross-lingual Summarization (CLS) aims at producing a summary in the target
language for an article in the source language. Traditional solutions employ a
two-step approach, i.e. translate then summarize or summarize then translate.
Recently, end-to-end models have achieved better results, but these approaches
are mostly limited by their dependence on large-scale labeled data. We propose
a solution based on mixed-lingual pre-training that leverages both
cross-lingual tasks such as translation and monolingual tasks like masked
language models. Thus, our model can leverage the massive monolingual data to
enhance its modeling of language. Moreover, the architecture has no
task-specific components, which saves memory and increases optimization
efficiency. We show in experiments that this pre-training scheme can
effectively boost the performance of cross-lingual summarization. In Neural
Cross-Lingual Summarization (NCLS) dataset, our model achieves an improvement
of 2.82 (English to Chinese) and 1.15 (Chinese to English) ROUGE-1 scores over
state-of-the-art results.
| 2,020 |
Computation and Language
|
Towards Data Distillation for End-to-end Spoken Conversational Question
Answering
|
In spoken question answering, QA systems are designed to answer questions
from contiguous text spans within the related speech transcripts. However, the
most natural way that human seek or test their knowledge is via human
conversations. Therefore, we propose a new Spoken Conversational Question
Answering task (SCQA), aiming at enabling QA systems to model complex dialogues
flow given the speech utterances and text corpora. In this task, our main
objective is to build a QA system to deal with conversational questions both in
spoken and text forms, and to explore the plausibility of providing more cues
in spoken documents with systems in information gathering. To this end, instead
of adopting automatically generated speech transcripts with highly noisy data,
we propose a novel unified data distillation approach, DDNet, which directly
fuse audio-text features to reduce the misalignment between automatic speech
recognition hypotheses and the reference transcriptions. In addition, to
evaluate the capacity of QA systems in a dialogue-style interaction, we
assemble a Spoken Conversational Question Answering (Spoken-CoQA) dataset with
more than 120k question-answer pairs. Experiments demonstrate that our proposed
method achieves superior performance in spoken conversational question
answering.
| 2,020 |
Computation and Language
|
Rethinking Document-level Neural Machine Translation
|
This paper does not aim at introducing a novel model for document-level
neural machine translation. Instead, we head back to the original Transformer
model and hope to answer the following question: Is the capacity of current
models strong enough for document-level translation? Interestingly, we observe
that the original Transformer with appropriate training techniques can achieve
strong results for document translation, even with a length of 2000 words. We
evaluate this model and several recent approaches on nine document-level
datasets and two sentence-level datasets across six languages. Experiments show
that document-level Transformer models outperforms sentence-level ones and many
previous methods in a comprehensive set of metrics, including BLEU, four
lexical indices, three newly proposed assistant linguistic indicators, and
human evaluation.
| 2,022 |
Computation and Language
|
hinglishNorm -- A Corpus of Hindi-English Code Mixed Sentences for Text
Normalization
|
We present hinglishNorm -- a human annotated corpus of Hindi-English
code-mixed sentences for text normalization task. Each sentence in the corpus
is aligned to its corresponding human annotated normalized form. To the best of
our knowledge, there is no corpus of Hindi-English code-mixed sentences for
text normalization task that is publicly available. Our work is the first
attempt in this direction. The corpus contains 13494 parallel segments.
Further, we present baseline normalization results on this corpus. We obtain a
Word Error Rate (WER) of 15.55, BiLingual Evaluation Understudy (BLEU) score of
71.2, and Metric for Evaluation of Translation with Explicit ORdering (METEOR)
score of 0.50.
| 2,020 |
Computation and Language
|
Querent Intent in Multi-Sentence Questions
|
Multi-sentence questions (MSQs) are sequences of questions connected by
relations which, unlike sequences of standalone questions, need to be answered
as a unit. Following Rhetorical Structure Theory (RST), we recognise that
different "question discourse relations" between the subparts of MSQs reflect
different speaker intents, and consequently elicit different answering
strategies. Correctly identifying these relations is therefore a crucial step
in automatically answering MSQs. We identify five different types of MSQs in
English, and define five novel relations to describe them. We extract over
162,000 MSQs from Stack Exchange to enable future research. Finally, we
implement a high-precision baseline classifier based on surface features.
| 2,020 |
Computation and Language
|
Towards Interpreting BERT for Reading Comprehension Based QA
|
BERT and its variants have achieved state-of-the-art performance in various
NLP tasks. Since then, various works have been proposed to analyze the
linguistic information being captured in BERT. However, the current works do
not provide an insight into how BERT is able to achieve near human-level
performance on the task of Reading Comprehension based Question Answering. In
this work, we attempt to interpret BERT for RCQA. Since BERT layers do not have
predefined roles, we define a layer's role or functionality using Integrated
Gradients. Based on the defined roles, we perform a preliminary analysis across
all layers. We observed that the initial layers focus on query-passage
interaction, whereas later layers focus more on contextual understanding and
enhancing the answer prediction. Specifically for quantifier questions (how
much/how many), we notice that BERT focuses on confusing words (i.e., on other
numerical quantities in the passage) in the later layers, but still manages to
predict the answer correctly. The fine-tuning and analysis scripts will be
publicly available at https://github.com/iitmnlp/BERT-Analysis-RCQA .
| 2,020 |
Computation and Language
|
Explaining and Improving Model Behavior with k Nearest Neighbor
Representations
|
Interpretability techniques in NLP have mainly focused on understanding
individual predictions using attention visualization or gradient-based saliency
maps over tokens. We propose using k nearest neighbor (kNN) representations to
identify training examples responsible for a model's predictions and obtain a
corpus-level understanding of the model's behavior. Apart from
interpretability, we show that kNN representations are effective at uncovering
learned spurious associations, identifying mislabeled examples, and improving
the fine-tuned model's performance. We focus on Natural Language Inference
(NLI) as a case study and experiment with multiple datasets. Our method deploys
backoff to kNN for BERT and RoBERTa on examples with low model confidence
without any update to the model parameters. Our results indicate that the kNN
approach makes the finetuned model more robust to adversarial inputs.
| 2,020 |
Computation and Language
|
Unsupervised Neural Machine Translation for Low-Resource Domains via
Meta-Learning
|
Unsupervised machine translation, which utilizes unpaired monolingual corpora
as training data, has achieved comparable performance against supervised
machine translation. However, it still suffers from data-scarce domains. To
address this issue, this paper presents a novel meta-learning algorithm for
unsupervised neural machine translation (UNMT) that trains the model to adapt
to another domain by utilizing only a small amount of training data. We assume
that domain-general knowledge is a significant factor in handling data-scarce
domains. Hence, we extend the meta-learning algorithm, which utilizes knowledge
learned from high-resource domains, to boost the performance of low-resource
UNMT. Our model surpasses a transfer learning-based approach by up to 2-4 BLEU
scores. Extensive experimental results show that our proposed algorithm is
pertinent for fast adaptation and consistently outperforms other baseline
models.
| 2,021 |
Computation and Language
|
UoB at SemEval-2020 Task 1: Automatic Identification of Novel Word
Senses
|
Much as the social landscape in which languages are spoken shifts, language
too evolves to suit the needs of its users. Lexical semantic change analysis is
a burgeoning field of semantic analysis which aims to trace changes in the
meanings of words over time. This paper presents an approach to lexical
semantic change detection based on Bayesian word sense induction suitable for
novel word sense identification. This approach is used for a submission to
SemEval-2020 Task 1, which shows the approach to be capable of the SemEval
task. The same approach is also applied to a corpus gleaned from 15 years of
Twitter data, the results of which are then used to identify words which may be
instances of slang.
| 2,020 |
Computation and Language
|
Incorporating Count-Based Features into Pre-Trained Models for Improved
Stance Detection
|
The explosive growth and popularity of Social Media has revolutionised the
way we communicate and collaborate. Unfortunately, this same ease of accessing
and sharing information has led to an explosion of misinformation and
propaganda. Given that stance detection can significantly aid in veracity
prediction, this work focuses on boosting automated stance detection, a task on
which pre-trained models have been extremely successful on, as on several other
tasks. This work shows that the task of stance detection can benefit from
feature based information, especially on certain under performing classes,
however, integrating such features into pre-trained models using ensembling is
challenging. We propose a novel architecture for integrating features with
pre-trained models that address these challenges and test our method on the
RumourEval 2019 dataset. This method achieves state-of-the-art results with an
F1-score of 63.94 on the test set.
| 2,020 |
Computation and Language
|
Chart-to-Text: Generating Natural Language Descriptions for Charts by
Adapting the Transformer Model
|
Information visualizations such as bar charts and line charts are very
popular for exploring data and communicating insights. Interpreting and making
sense of such visualizations can be challenging for some people, such as those
who are visually impaired or have low visualization literacy. In this work, we
introduce a new dataset and present a neural model for automatically generating
natural language summaries for charts. The generated summaries provide an
interpretation of the chart and convey the key insights found within that
chart. Our neural model is developed by extending the state-of-the-art model
for the data-to-text generation task, which utilizes a transformer-based
encoder-decoder architecture. We found that our approach outperforms the base
model on a content selection metric by a wide margin (55.42% vs. 8.49%) and
generates more informative, concise, and coherent summaries.
| 2,020 |
Computation and Language
|
Knowledge-guided Open Attribute Value Extraction with Reinforcement
Learning
|
Open attribute value extraction for emerging entities is an important but
challenging task. A lot of previous works formulate the problem as a
\textit{question-answering} (QA) task. While the collections of articles from
web corpus provide updated information about the emerging entities, the
retrieved texts can be noisy, irrelevant, thus leading to inaccurate answers.
Effectively filtering out noisy articles as well as bad answers is the key to
improving extraction accuracy. Knowledge graph (KG), which contains rich, well
organized information about entities, provides a good resource to address the
challenge. In this work, we propose a knowledge-guided reinforcement learning
(RL) framework for open attribute value extraction. Informed by relevant
knowledge in KG, we trained a deep Q-network to sequentially compare extracted
answers to improve extraction accuracy. The proposed framework is applicable to
different information extraction system. Our experimental results show that our
method outperforms the baselines by 16.5 - 27.8\%.
| 2,020 |
Computation and Language
|
SciSummPip: An Unsupervised Scientific Paper Summarization Pipeline
|
The Scholarly Document Processing (SDP) workshop is to encourage more efforts
on natural language understanding of scientific task. It contains three shared
tasks and we participate in the LongSumm shared task. In this paper, we
describe our text summarization system, SciSummPip, inspired by SummPip (Zhao
et al., 2020) that is an unsupervised text summarization system for
multi-document in news domain. Our SciSummPip includes a transformer-based
language model SciBERT (Beltagy et al., 2019) for contextual sentence
representation, content selection with PageRank (Page et al., 1999), sentence
graph construction with both deep and linguistic information, sentence graph
clustering and within-graph summary generation. Our work differs from previous
method in that content selection and a summary length constraint is applied to
adapt to the scientific domain. The experiment results on both training dataset
and blind test dataset show the effectiveness of our method, and we empirically
verify the robustness of modules used in SciSummPip with BERTScore (Zhang et
al., 2019a).
| 2,020 |
Computation and Language
|
Infusing Sequential Information into Conditional Masked Translation
Model with Self-Review Mechanism
|
Non-autoregressive models generate target words in a parallel way, which
achieve a faster decoding speed but at the sacrifice of translation accuracy.
To remedy a flawed translation by non-autoregressive models, a promising
approach is to train a conditional masked translation model (CMTM), and refine
the generated results within several iterations. Unfortunately, such approach
hardly considers the \textit{sequential dependency} among target words, which
inevitably results in a translation degradation. Hence, instead of solely
training a Transformer-based CMTM, we propose a Self-Review Mechanism to infuse
sequential information into it. Concretely, we insert a left-to-right mask to
the same decoder of CMTM, and then induce it to autoregressively review whether
each generated word from CMTM is supposed to be replaced or kept. The
experimental results (WMT14 En$\leftrightarrow$De and WMT16
En$\leftrightarrow$Ro) demonstrate that our model uses dramatically less
training computations than the typical CMTM, as well as outperforms several
state-of-the-art non-autoregressive models by over 1 BLEU. Through knowledge
distillation, our model even surpasses a typical left-to-right Transformer
model, while significantly speeding up decoding.
| 2,020 |
Computation and Language
|
Auto-Encoding Variational Bayes for Inferring Topics and Visualization
|
Visualization and topic modeling are widely used approaches for text
analysis. Traditional visualization methods find low-dimensional
representations of documents in the visualization space (typically 2D or 3D)
that can be displayed using a scatterplot. In contrast, topic modeling aims to
discover topics from text, but for visualization, one needs to perform a
post-hoc embedding using dimensionality reduction methods. Recent approaches
propose using a generative model to jointly find topics and visualization,
allowing the semantics to be infused in the visualization space for a
meaningful interpretation. A major challenge that prevents these methods from
being used practically is the scalability of their inference algorithms. We
present, to the best of our knowledge, the first fast Auto-Encoding Variational
Bayes based inference method for jointly inferring topics and visualization.
Since our method is black box, it can handle model changes efficiently with
little mathematical rederivation effort. We demonstrate the efficiency and
effectiveness of our method on real-world large datasets and compare it with
existing baselines.
| 2,020 |
Computation and Language
|
Multi-hop Question Generation with Graph Convolutional Network
|
Multi-hop Question Generation (QG) aims to generate answer-related questions
by aggregating and reasoning over multiple scattered evidence from different
paragraphs. It is a more challenging yet under-explored task compared to
conventional single-hop QG, where the questions are generated from the sentence
containing the answer or nearby sentences in the same paragraph without complex
reasoning. To address the additional challenges in multi-hop QG, we propose
Multi-Hop Encoding Fusion Network for Question Generation (MulQG), which does
context encoding in multiple hops with Graph Convolutional Network and encoding
fusion via an Encoder Reasoning Gate. To the best of our knowledge, we are the
first to tackle the challenge of multi-hop reasoning over paragraphs without
any sentence-level information. Empirical results on HotpotQA dataset
demonstrate the effectiveness of our method, in comparison with baselines on
automatic evaluation metrics. Moreover, from the human evaluation, our proposed
model is able to generate fluent questions with high completeness and
outperforms the strongest baseline by 20.8% in the multi-hop evaluation. The
code is publicly available at
https://github.com/HLTCHKUST/MulQG}{https://github.com/HLTCHKUST/MulQG .
| 2,021 |
Computation and Language
|
Dimsum @LaySumm 20: BART-based Approach for Scientific Document
Summarization
|
Lay summarization aims to generate lay summaries of scientific papers
automatically. It is an essential task that can increase the relevance of
science for all of society. In this paper, we build a lay summary generation
system based on the BART model. We leverage sentence labels as extra
supervision signals to improve the performance of lay summarization. In the
CL-LaySumm 2020 shared task, our model achieves 46.00\% Rouge1-F1 score.
| 2,020 |
Computation and Language
|
Query-aware Tip Generation for Vertical Search
|
As a concise form of user reviews, tips have unique advantages to explain the
search results, assist users' decision making, and further improve user
experience in vertical search scenarios. Existing work on tip generation does
not take query into consideration, which limits the impact of tips in search
scenarios. To address this issue, this paper proposes a query-aware tip
generation framework, integrating query information into encoding and
subsequent decoding processes. Two specific adaptations of Transformer and
Recurrent Neural Network (RNN) are proposed. For Transformer, the query impact
is incorporated into the self-attention computation of both the encoder and the
decoder. As for RNN, the query-aware encoder adopts a selective network to
distill query-relevant information from the review, while the query-aware
decoder integrates the query information into the attention computation during
decoding. The framework consistently outperforms the competing methods on both
public and real-world industrial datasets. Last but not least, online
deployment experiments on Dianping demonstrate the advantage of the proposed
framework for tip generation as well as its online business values.
| 2,020 |
Computation and Language
|
Global Attention for Name Tagging
|
Many name tagging approaches use local contextual information with much
success, but fail when the local context is ambiguous or limited. We present a
new framework to improve name tagging by utilizing local, document-level, and
corpus-level contextual information. We retrieve document-level context from
other sentences within the same document and corpus-level context from
sentences in other topically related documents. We propose a model that learns
to incorporate document-level and corpus-level contextual information alongside
local contextual information via global attentions, which dynamically weight
their respective contextual information, and gating mechanisms, which determine
the influence of this information. Extensive experiments on benchmark datasets
show the effectiveness of our approach, which achieves state-of-the-art results
for Dutch, German, and Spanish on the CoNLL-2002 and CoNLL-2003 datasets.
| 2,020 |
Computation and Language
|
BERTnesia: Investigating the capture and forgetting of knowledge in BERT
|
Probing complex language models has recently revealed several insights into
linguistic and semantic patterns found in the learned representations. In this
paper, we probe BERT specifically to understand and measure the relational
knowledge it captures. We utilize knowledge base completion tasks to probe
every layer of pre-trained as well as fine-tuned BERT (ranking, question
answering, NER). Our findings show that knowledge is not just contained in
BERT's final layers. Intermediate layers contribute a significant amount
(17-60%) to the total knowledge found. Probing intermediate layers also reveals
how different types of knowledge emerge at varying rates. When BERT is
fine-tuned, relational knowledge is forgotten but the extent of forgetting is
impacted by the fine-tuning objective but not the size of the dataset. We found
that ranking models forget the least and retain more knowledge in their final
layer. We release our code on github to repeat the experiments.
| 2,021 |
Computation and Language
|
Understanding Unnatural Questions Improves Reasoning over Text
|
Complex question answering (CQA) over raw text is a challenging task. A
prominent approach to this task is based on the programmer-interpreter
framework, where the programmer maps the question into a sequence of reasoning
actions which is then executed on the raw text by the interpreter. Learning an
effective CQA model requires large amounts of human-annotated data,consisting
of the ground-truth sequence of reasoning actions, which is time-consuming and
expensive to collect at scale. In this paper, we address the challenge of
learning a high-quality programmer (parser) by projecting natural
human-generated questions into unnatural machine-generated questions which are
more convenient to parse. We firstly generate synthetic (question,action
sequence) pairs by a data generator, and train a semantic parser that
associates synthetic questions with their corresponding action sequences. To
capture the diversity when applied tonatural questions, we learn a projection
model to map natural questions into their most similar unnatural questions for
which the parser can work well. Without any natural training data, our
projection model provides high-quality action sequences for the CQA task.
Experimental results show that the QA model trained exclusively with synthetic
data generated by our method outperforms its state-of-the-art counterpart
trained on human-labeled data.
| 2,020 |
Computation and Language
|
The RELX Dataset and Matching the Multilingual Blanks for Cross-Lingual
Relation Classification
|
Relation classification is one of the key topics in information extraction,
which can be used to construct knowledge bases or to provide useful information
for question answering. Current approaches for relation classification are
mainly focused on the English language and require lots of training data with
human annotations. Creating and annotating a large amount of training data for
low-resource languages is impractical and expensive. To overcome this issue, we
propose two cross-lingual relation classification models: a baseline model
based on Multilingual BERT and a new multilingual pretraining setup, which
significantly improves the baseline with distant supervision. For evaluation,
we introduce a new public benchmark dataset for cross-lingual relation
classification in English, French, German, Spanish, and Turkish, called RELX.
We also provide the RELX-Distant dataset, which includes hundreds of thousands
of sentences with relations from Wikipedia and Wikidata collected by distant
supervision for these languages. Our code and data are available at:
https://github.com/boun-tabi/RELX
| 2,020 |
Computation and Language
|
Revisiting Modularized Multilingual NMT to Meet Industrial Demands
|
The complete sharing of parameters for multilingual translation (1-1) has
been the mainstream approach in current research. However, degraded performance
due to the capacity bottleneck and low maintainability hinders its extensive
adoption in industries. In this study, we revisit the multilingual neural
machine translation model that only share modules among the same languages (M2)
as a practical alternative to 1-1 to satisfy industrial requirements. Through
comprehensive experiments, we identify the benefits of multi-way training and
demonstrate that the M2 can enjoy these benefits without suffering from the
capacity bottleneck. Furthermore, the interlingual space of the M2 allows
convenient modification of the model. By leveraging trained modules, we find
that incrementally added modules exhibit better performance than singly trained
models. The zero-shot performance of the added modules is even comparable to
supervised models. Our findings suggest that the M2 can be a competent
candidate for multilingual translation in industries.
| 2,020 |
Computation and Language
|
Unsupervised Pretraining for Neural Machine Translation Using Elastic
Weight Consolidation
|
This work presents our ongoing research of unsupervised pretraining in neural
machine translation (NMT). In our method, we initialize the weights of the
encoder and decoder with two language models that are trained with monolingual
data and then fine-tune the model on parallel data using Elastic Weight
Consolidation (EWC) to avoid forgetting of the original language modeling
tasks. We compare the regularization by EWC with the previous work that focuses
on regularization by language modeling objectives. The positive result is that
using EWC with the decoder achieves BLEU scores similar to the previous work.
However, the model converges 2-3 times faster and does not require the original
unlabeled training data during the fine-tuning stage. In contrast, the
regularization using EWC is less effective if the original and new tasks are
not closely related. We show that initializing the bidirectional NMT encoder
with a left-to-right language model and forcing the model to remember the
original left-to-right language modeling task limits the learning capacity of
the encoder for the whole bidirectional context.
| 2,020 |
Computation and Language
|
Unsupervised Expressive Rules Provide Explainability and Assist Human
Experts Grasping New Domains
|
Approaching new data can be quite deterrent; you do not know how your
categories of interest are realized in it, commonly, there is no labeled data
at hand, and the performance of domain adaptation methods is unsatisfactory.
Aiming to assist domain experts in their first steps into a new task over a
new corpus, we present an unsupervised approach to reveal complex rules which
cluster the unexplored corpus by its prominent categories (or facets).
These rules are human-readable, thus providing an important ingredient which
has become in short supply lately - explainability. Each rule provides an
explanation for the commonality of all the texts it clusters together.
We present an extensive evaluation of the usefulness of these rules in
identifying target categories, as well as a user study which assesses their
interpretability.
| 2,020 |
Computation and Language
|
Diving Deep into Context-Aware Neural Machine Translation
|
Context-aware neural machine translation (NMT) is a promising direction to
improve the translation quality by making use of the additional context, e.g.,
document-level translation, or having meta-information. Although there exist
various architectures and analyses, the effectiveness of different
context-aware NMT models is not well explored yet. This paper analyzes the
performance of document-level NMT models on four diverse domains with a varied
amount of parallel document-level bilingual data. We conduct a comprehensive
set of experiments to investigate the impact of document-level NMT. We find
that there is no single best approach to document-level NMT, but rather that
different architectures come out on top on different tasks. Looking at
task-specific problems, such as pronoun resolution or headline translation, we
find improvements in the context-aware systems, even in cases where the
corpus-level metrics like BLEU show no significant improvement. We also show
that document-level back-translation significantly helps to compensate for the
lack of document-level bi-texts.
| 2,020 |
Computation and Language
|
Heads-up! Unsupervised Constituency Parsing via Self-Attention Heads
|
Transformer-based pre-trained language models (PLMs) have dramatically
improved the state of the art in NLP across many tasks. This has led to
substantial interest in analyzing the syntactic knowledge PLMs learn. Previous
approaches to this question have been limited, mostly using test suites or
probes. Here, we propose a novel fully unsupervised parsing approach that
extracts constituency trees from PLM attention heads. We rank transformer
attention heads based on their inherent properties, and create an ensemble of
high-ranking heads to produce the final tree. Our method is adaptable to
low-resource languages, as it does not rely on development sets, which can be
expensive to annotate. Our experiments show that the proposed method often
outperform existing approaches if there is no development set present. Our
unsupervised parser can also be used as a tool to analyze the grammars PLMs
learn implicitly. For this, we use the parse trees induced by our method to
train a neural PCFG and compare it to a grammar derived from a human-annotated
treebank.
| 2,020 |
Computation and Language
|
Cold-start Active Learning through Self-supervised Language Modeling
|
Active learning strives to reduce annotation costs by choosing the most
critical examples to label. Typically, the active learning strategy is
contingent on the classification model. For instance, uncertainty sampling
depends on poorly calibrated model confidence scores. In the cold-start
setting, active learning is impractical because of model instability and data
scarcity. Fortunately, modern NLP provides an additional source of information:
pre-trained language models. The pre-training loss can find examples that
surprise the model and should be labeled for efficient fine-tuning. Therefore,
we treat the language modeling loss as a proxy for classification uncertainty.
With BERT, we develop a simple strategy based on the masked language modeling
loss that minimizes labeling costs for text classification. Compared to other
baselines, our approach reaches higher accuracy within less sampling iterations
and computation time.
| 2,020 |
Computation and Language
|
Better Distractions: Transformer-based Distractor Generation and
Multiple Choice Question Filtering
|
For the field of education, being able to generate semantically correct and
educationally relevant multiple choice questions (MCQs) could have a large
impact. While question generation itself is an active research topic,
generating distractors (the incorrect multiple choice options) receives much
less attention. A missed opportunity, since there is still a lot of room for
improvement in this area. In this work, we train a GPT-2 language model to
generate three distractors for a given question and text context, using the
RACE dataset. Next, we train a BERT language model to answer MCQs, and use this
model as a filter, to select only questions that can be answered and therefore
presumably make sense. To evaluate our work, we start by using text generation
metrics, which show that our model outperforms earlier work on distractor
generation (DG) and achieves state-of-the-art performance. Also, by calculating
the question answering ability, we show that larger base models lead to better
performance. Moreover, we conducted a human evaluation study, which confirmed
the quality of the generated questions, but showed no statistically significant
effect of the QA filter.
| 2,020 |
Computation and Language
|
Drug Repurposing for COVID-19 via Knowledge Graph Completion
|
Objective: To discover candidate drugs to repurpose for COVID-19 using
literature-derived knowledge and knowledge graph completion methods. Methods:
We propose a novel, integrative, and neural network-based literature-based
discovery (LBD) approach to identify drug candidates from both PubMed and
COVID-19-focused research literature. Our approach relies on semantic triples
extracted using SemRep (via SemMedDB). We identified an informative subset of
semantic triples using filtering rules and an accuracy classifier developed on
a BERT variant, and used this subset to construct a knowledge graph. Five SOTA,
neural knowledge graph completion algorithms were used to predict drug
repurposing candidates. The models were trained and assessed using a time
slicing approach and the predicted drugs were compared with a list of drugs
reported in the literature and evaluated in clinical trials. These models were
complemented by a discovery pattern-based approach. Results: Accuracy
classifier based on PubMedBERT achieved the best performance (F1= 0.854) in
classifying semantic predications. Among five knowledge graph completion
models, TransE outperformed others (MR = 0.923, Hits@1=0.417). Some known drugs
linked to COVID-19 in the literature were identified, as well as some candidate
drugs that have not yet been studied. Discovery patterns enabled generation of
plausible hypotheses regarding the relationships between the candidate drugs
and COVID-19. Among them, five highly ranked and novel drugs (paclitaxel, SB
203580, alpha 2-antiplasmin, pyrrolidine dithiocarbamate, and butylated
hydroxytoluene) with their mechanistic explanations were further discussed.
Conclusion: We show that an LBD approach can be feasible for discovering drug
candidates for COVID-19, and for generating mechanistic explanations. Our
approach can be generalized to other diseases as well as to other clinical
questions.
| 2,021 |
Computation and Language
|
Incorporating Terminology Constraints in Automatic Post-Editing
|
Users of machine translation (MT) may want to ensure the use of specific
lexical terminologies. While there exist techniques for incorporating
terminology constraints during inference for MT, current APE approaches cannot
ensure that they will appear in the final translation. In this paper, we
present both autoregressive and non-autoregressive models for lexically
constrained APE, demonstrating that our approach enables preservation of 95% of
the terminologies and also improves translation quality on English-German
benchmarks. Even when applied to lexically constrained MT output, our approach
is able to improve preservation of the terminologies. However, we show that our
models do not learn to copy constraints systematically and suggest a simple
data augmentation technique that leads to improved performance and robustness.
| 2,020 |
Computation and Language
|
An Empirical Study for Vietnamese Constituency Parsing with Pre-training
|
In this work, we use a span-based approach for Vietnamese constituency
parsing. Our method follows the self-attention encoder architecture and a chart
decoder using a CKY-style inference algorithm. We present analyses of the
experiment results of the comparison of our empirical method using pre-training
models XLM-Roberta and PhoBERT on both Vietnamese datasets VietTreebank and
NIIVTB1. The results show that our model with XLM-Roberta archived the
significantly F1-score better than other pre-training models, VietTreebank at
81.19% and NIIVTB1 at 85.70%.
| 2,020 |
Computation and Language
|
Adaptive Attentional Network for Few-Shot Knowledge Graph Completion
|
Few-shot Knowledge Graph (KG) completion is a focus of current research,
where each task aims at querying unseen facts of a relation given its few-shot
reference entity pairs. Recent attempts solve this problem by learning static
representations of entities and references, ignoring their dynamic properties,
i.e., entities may exhibit diverse roles within task relations, and references
may make different contributions to queries. This work proposes an adaptive
attentional network for few-shot KG completion by learning adaptive entity and
reference representations. Specifically, entities are modeled by an adaptive
neighbor encoder to discern their task-oriented roles, while references are
modeled by an adaptive query-aware aggregator to differentiate their
contributions. Through the attention mechanism, both entities and references
can capture their fine-grained semantic meanings, and thus render more
expressive representations. This will be more predictive for knowledge
acquisition in the few-shot scenario. Evaluation in link prediction on two
public datasets shows that our approach achieves new state-of-the-art results
with different few-shot sizes.
| 2,020 |
Computation and Language
|
PySBD: Pragmatic Sentence Boundary Disambiguation
|
In this paper, we present a rule-based sentence boundary disambiguation
Python package that works out-of-the-box for 22 languages. We aim to provide a
realistic segmenter which can provide logical sentences even when the format
and domain of the input text is unknown. In our work, we adapt the Golden Rules
Set (a language-specific set of sentence boundary exemplars) originally
implemented as a ruby gem - pragmatic_segmenter - which we ported to Python
with additional improvements and functionality. PySBD passes 97.92% of the
Golden Rule Set exemplars for English, an improvement of 25% over the next best
open-source Python tool.
| 2,020 |
Computation and Language
|
Summary-Oriented Question Generation for Informational Queries
|
Users frequently ask simple factoid questions for question answering (QA)
systems, attenuating the impact of myriad recent works that support more
complex questions. Prompting users with automatically generated suggested
questions (SQs) can improve user understanding of QA system capabilities and
thus facilitate more effective use. We aim to produce self-explanatory
questions that focus on main document topics and are answerable with variable
length passages as appropriate. We satisfy these requirements by using a
BERT-based Pointer-Generator Network trained on the Natural Questions (NQ)
dataset. Our model shows SOTA performance of SQ generation on the NQ dataset
(20.1 BLEU-4). We further apply our model on out-of-domain news articles,
evaluating with a QA system due to the lack of gold questions and demonstrate
that our model produces better SQs for news articles -- with further
confirmation via a human evaluation.
| 2,021 |
Computation and Language
|
Subtitles to Segmentation: Improving Low-Resource Speech-to-Text
Translation Pipelines
|
In this work, we focus on improving ASR output segmentation in the context of
low-resource language speech-to-text translation. ASR output segmentation is
crucial, as ASR systems segment the input audio using purely acoustic
information and are not guaranteed to output sentence-like segments. Since most
MT systems expect sentences as input, feeding in longer unsegmented passages
can lead to sub-optimal performance. We explore the feasibility of using
datasets of subtitles from TV shows and movies to train better ASR segmentation
models. We further incorporate part-of-speech (POS) tag and dependency label
information (derived from the unsegmented ASR outputs) into our segmentation
model. We show that this noisy syntactic information can improve model
accuracy. We evaluate our models intrinsically on segmentation quality and
extrinsically on downstream MT performance, as well as downstream tasks
including cross-lingual information retrieval (CLIR) tasks and human relevance
assessments. Our model shows improved performance on downstream tasks for
Lithuanian and Bulgarian.
| 2,020 |
Computation and Language
|
Technical Question Answering across Tasks and Domains
|
Building automatic technical support system is an important yet challenge
task. Conceptually, to answer a user question on a technical forum, a human
expert has to first retrieve relevant documents, and then read them carefully
to identify the answer snippet. Despite huge success the researchers have
achieved in coping with general domain question answering (QA), much less
attentions have been paid for investigating technical QA. Specifically,
existing methods suffer from several unique challenges (i) the question and
answer rarely overlaps substantially and (ii) very limited data size. In this
paper, we propose a novel framework of deep transfer learning to effectively
address technical QA across tasks and domains. To this end, we present an
adjustable joint learning approach for document retrieval and reading
comprehension tasks. Our experiments on the TechQA demonstrates superior
performance compared with state-of-the-art methods.
| 2,021 |
Computation and Language
|
Adversarial Training for Code Retrieval with Question-Description
Relevance Regularization
|
Code retrieval is a key task aiming to match natural and programming
languages. In this work, we propose adversarial learning for code retrieval,
that is regularized by question-description relevance. First, we adapt a simple
adversarial learning technique to generate difficult code snippets given the
input question, which can help the learning of code retrieval that faces
bi-modal and data-scarce challenges. Second, we propose to leverage
question-description relevance to regularize adversarial learning, such that a
generated code snippet should contribute more to the code retrieval training
loss, only if its paired natural language description is predicted to be less
relevant to the user given question. Experiments on large-scale code retrieval
datasets of two programming languages show that our adversarial learning method
is able to improve the performance of state-of-the-art models. Moreover, using
an additional duplicate question prediction model to regularize adversarial
learning further improves the performance, and this is more effective than
using the duplicated questions in strong multi-task learning baselines
| 2,020 |
Computation and Language
|
Cross-Lingual Transfer in Zero-Shot Cross-Language Entity Linking
|
Cross-language entity linking grounds mentions in multiple languages to a
single-language knowledge base. We propose a neural ranking architecture for
this task that uses multilingual BERT representations of the mention and the
context in a neural network. We find that the multilingual ability of BERT
leads to robust performance in monolingual and multilingual settings.
Furthermore, we explore zero-shot language transfer and find surprisingly
robust performance. We investigate the zero-shot degradation and find that it
can be partially mitigated by a proposed auxiliary training objective, but that
the remaining error can best be attributed to domain shift rather than language
transfer.
| 2,021 |
Computation and Language
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.