Titles | Abstracts | Years | Categories
---|---|---|---|
Inferring COVID-19 Biological Pathways from Clinical Phenotypes via
Topological Analysis
|
COVID-19 has caused thousands of deaths around the world and also resulted in
a large international economic disruption. Identifying the pathways associated
with this illness can help medical researchers to better understand the
properties of the condition. This process can be carried out by analyzing the
medical records. It is crucial to develop tools and models that can aid
researchers with this process in a timely manner. However, medical records are
often unstructured clinical notes, which poses significant challenges for
developing automated systems. In this article, we propose a pipeline to aid
practitioners in analyzing clinical notes and revealing the pathways associated
with this disease. Our pipeline relies on topological properties and consists
of three steps: 1) pre-processing the clinical notes to extract the salient
concepts, 2) constructing a feature space of the patients to characterize the
extracted concepts, and finally, 3) leveraging the topological properties to
distill the available knowledge and visualize the result. Our experiments on a
publicly available dataset of COVID-19 clinical notes show that our pipeline
can indeed extract meaningful pathways.
| 2,022 |
Computation and Language
|
Single versus Multiple Annotation for Named Entity Recognition of
Mutations
|
The focus of this paper is to address the knowledge acquisition bottleneck
for Named Entity Recognition (NER) of mutations, by analysing different
approaches to building manually annotated data. We first address the impact of
using a single annotator versus two annotators, in order to measure whether
multiple annotators are required. Once we evaluate the performance loss when
using a single annotator, we apply different methods to sample the training
data for second annotation, aiming at improving the quality of the dataset
without requiring a full pass. We use held-out double-annotated data to build
two scenarios with different types of rankings: similarity-based and
confidence-based. We evaluate both approaches on (i) their ability to identify training
instances that are erroneous (cases where single-annotator labels differ from
double-annotation after discussion), and (ii) on Mutation NER performance for
state-of-the-art classifiers after integrating the fixes at different
thresholds.
| 2,021 |
Computation and Language
|
UniSpeech: Unified Speech Representation Learning with Labeled and
Unlabeled Data
|
In this paper, we propose a unified pre-training approach called UniSpeech to
learn speech representations with both unlabeled and labeled data, in which
supervised phonetic CTC learning and phonetically-aware contrastive
self-supervised learning are conducted in a multi-task learning manner. The
resultant representations can capture information more correlated with phonetic
structures and improve the generalization across languages and domains. We
evaluate the effectiveness of UniSpeech for cross-lingual representation
learning on the public CommonVoice corpus. The results show that UniSpeech
outperforms self-supervised pretraining and supervised transfer learning for
speech recognition by a maximum of 13.4% and 17.8% relative phone error rate
reductions respectively (averaged over all testing languages). The
transferability of UniSpeech is also demonstrated on a domain-shift speech
recognition task, i.e., a relative word error rate reduction of 6% against the
previous approach.
| 2,021 |
Computation and Language
|
Situation and Behavior Understanding by Trope Detection on Films
|
The human ability to exercise deep cognitive skills is crucial for the development of
various real-world applications that process diverse and abundant user-generated
input. While recent progress in deep learning and natural language processing
has enabled learning systems to reach human performance on some benchmarks
requiring only shallow semantics, such abilities still remain challenging for even
modern contextual embedding models, as pointed out by many recent studies.
Existing machine comprehension datasets assume sentence-level input, lack
causal or motivational inferences, or can be answered by exploiting
question-answer bias. Here, we present a challenging novel task, trope
detection on films, in an effort to create a situation and behavior
understanding for machines. Tropes are storytelling devices that are frequently
used as ingredients in recipes for creative works. Compared to existing movie
tag prediction tasks, tropes are more sophisticated, as they can vary widely,
from a moral concept to a series of circumstances, and are embedded with
motivations and cause-and-effect relations. We introduce a new dataset, Tropes in Movie
Synopses (TiMoS), with 5623 movie synopses and 95 different tropes collected
from a Wikipedia-style database, TVTropes. We present a multi-stream
comprehension network (MulCom) leveraging multi-level attention of words,
sentences, and role relations. Experimental results demonstrate that modern
models, including BERT contextual embeddings, movie tag prediction systems, and
relational networks, achieve at most 37% of human performance (23.97/64.87) in
terms of F1 score. Our MulCom outperforms all modern baselines by 1.5 to 5.0
F1 points and 1.5 to 3.0 mean average precision (mAP) points. We also provide
a detailed analysis and human evaluation to pave the way for future research.
| 2,021 |
Computation and Language
|
Challenges for Computational Lexical Semantic Change
|
The computational study of lexical semantic change (LSC) has taken off in the
past few years and we are seeing increasing interest in the field, from both
computational sciences and linguistics. Most of the research so far has focused
on methods for modelling and detecting semantic change using large diachronic
textual data, with the majority of the approaches employing neural embeddings.
While methods that offer easy modelling of diachronic text are one of the main
reasons for the spiking interest in LSC, neural models leave many aspects of
the problem unsolved. The field has several open and complex challenges. In
this chapter, we aim to describe the most important of these challenges and
outline future directions.
| 2,021 |
Computation and Language
|
Towards Facilitating Empathic Conversations in Online Mental Health
Support: A Reinforcement Learning Approach
|
Online peer-to-peer support platforms enable conversations between millions
of people who seek and provide mental health support. If successful, web-based
mental health conversations could improve access to treatment and reduce the
global disease burden. Psychologists have repeatedly demonstrated that empathy,
the ability to understand and feel the emotions and experiences of others, is a
key component leading to positive outcomes in supportive conversations.
However, recent studies have shown that highly empathic conversations are rare
in online mental health platforms.
In this paper, we work towards improving empathy in online mental health
support conversations. We introduce a new task of empathic rewriting which aims
to transform low-empathy conversational posts to higher empathy. Learning such
transformations is challenging and requires a deep understanding of empathy
while maintaining conversation quality through text fluency and specificity to
the conversational context. Here we propose PARTNER, a deep reinforcement
learning agent that learns to make sentence-level edits to posts in order to
increase the expressed level of empathy while maintaining conversation quality.
Our RL agent leverages a policy network, based on a transformer language model
adapted from GPT-2, which performs the dual task of generating candidate
empathic sentences and adding those sentences at appropriate positions. During
training, we reward transformations that increase empathy in posts while
maintaining text fluency, context specificity and diversity. Through a
combination of automatic and human evaluation, we demonstrate that PARTNER
successfully generates more empathic, specific, and diverse responses and
outperforms NLP methods from related tasks like style transfer and empathic
dialogue generation. Our work has direct implications for facilitating empathic
conversations on web-based platforms.
| 2,021 |
Computation and Language
|
Towards Confident Machine Reading Comprehension
|
There has been considerable progress on academic benchmarks for the Reading
Comprehension (RC) task with State-of-the-Art models closing the gap with human
performance on extractive question answering. Datasets such as SQuAD 2.0 & NQ
have also introduced an auxiliary task requiring models to predict when a
question has no answer in the text. However, in production settings, it is also
necessary to provide confidence estimates for the performance of the underlying
RC model at both answer extraction and "answerability" detection. We propose a
novel post-prediction confidence estimation model, which we call Mr.C (short
for Mr. Confident), that can be trained to improve a system's ability to
refrain from making incorrect predictions with improvements of up to 4 points
as measured by Area Under the Curve (AUC) scores. Mr.C can benefit from a novel
white-box feature that leverages the underlying RC model's gradients.
Performance prediction is particularly important in cases of domain shift (as
measured by training RC models on SQuAD 2.0 and evaluating on NQ), where Mr.C
not only improves AUC, but also traditional answerability prediction (as
measured by a 5 point improvement in F1).
| 2,021 |
Computation and Language
|
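As an illustration of the idea in the entry above, the sketch below trains a separate post-prediction confidence model on features of an RC system's outputs, including a gradient-norm style white-box feature, and scores it with AUC. This is a minimal, hypothetical example with synthetic data, not the paper's Mr.C implementation; the feature set is an assumption.

```python
# Minimal sketch (not the paper's Mr.C): a post-prediction confidence model
# trained on features of an RC system's outputs, evaluated with AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-question features exported by an RC model:
# answer span probability, span length, and a "white-box" gradient norm.
n = 1000
features = np.column_stack([
    rng.uniform(0, 1, n),        # span probability
    rng.integers(1, 20, n),      # span length (tokens)
    rng.gamma(2.0, 1.0, n),      # gradient norm w.r.t. the input embeddings
])
correct = (features[:, 0] + rng.normal(0, 0.3, n) > 0.6).astype(int)  # toy labels

# Train the confidence estimator on held-out predictions, evaluate with AUC.
split = n // 2
clf = LogisticRegression(max_iter=1000).fit(features[:split], correct[:split])
confidence = clf.predict_proba(features[split:])[:, 1]
print("AUC:", roc_auc_score(correct[split:], confidence))
```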
WeChat AI & ICT's Submission for DSTC9 Interactive Dialogue Evaluation
Track
|
We participate in the DSTC9 Interactive Dialogue Evaluation Track (Gunasekara
et al. 2020) sub-task 1 (Knowledge Grounded Dialogue) and sub-task 2
(Interactive Dialogue). In sub-task 1, we employ a pre-trained language model
to generate topic-related responses and propose a response ensemble method for
response selection. In sub-task 2, we propose a novel Dialogue Planning Model
(DPM) to capture conversation flow in the interaction with humans. We also
design an integrated open-domain dialogue system containing pre-processing,
a dialogue model, a scoring model, and post-processing, which can generate fluent,
coherent, consistent, and human-like responses. We tie for 1st on human ratings and
also obtain the highest METEOR and BERTScore in sub-task 1, and rank 3rd on
interactive human evaluation in sub-task 2.
| 2,021 |
Computation and Language
|
Divide and Conquer: An Ensemble Approach for Hostile Post Detection in
Hindi
|
Recently, the NLP community has started showing interest in the challenging
task of Hostile Post Detection. This paper presents our system for the
Shared Task at Constraint2021 on "Hostile Post Detection in Hindi". The data
for this shared task is provided in the Hindi Devanagari script and was collected
from Twitter and Facebook. It is a multi-label multi-class classification
problem where each data instance is annotated with one or more of the five
classes: fake, hate, offensive, defamation, and non-hostile. We propose a
two-level architecture made up of BERT-based classifiers and statistical
classifiers to solve this problem. Our team, 'Albatross', scored a coarse-grained
hostility F1 score of 0.9709 on the Hostile Post Detection in Hindi subtask
and secured 2nd rank out of 45 teams. Our submissions are ranked 2nd
and 3rd out of a total of 156 submissions, with coarse-grained hostility F1
scores of 0.9709 and 0.9703 respectively. Our fine-grained scores are also very
encouraging and can be improved with further fine-tuning. The code is publicly
available.
| 2,021 |
Computation and Language
|
The Challenges of Persian User-generated Textual Content: A Machine
Learning-Based Approach
|
In recent years, many research papers and studies have been published on
the development of effective approaches that benefit from large amounts of
user-generated content and build intelligent predictive models on top of it.
This research applies machine learning-based approaches to tackle the hurdles
that come with Persian user-generated textual content. Unfortunately, there is
still inadequate research in exploiting machine learning approaches to
classify/cluster Persian text. Further, analyzing Persian text suffers from a
lack of resources; specifically from datasets and text manipulation tools.
Since the syntax and semantics of the Persian language are different from
English and other languages, the available resources from these languages are
not instantly usable for Persian. In addition, recognition of nouns and
pronouns, part-of-speech tagging, finding word boundaries, stemming, and
character manipulation for the Persian language are still unsolved issues that
require further study. Therefore, efforts have been made in this research to
address some of these challenges. The presented approach uses
machine-translated datasets to conduct sentiment analysis for the Persian
language. Finally, the dataset is evaluated with different classifiers
and feature engineering approaches. The results of the experiments have shown
promising state-of-the-art performance in comparison with previous efforts; the
best classifier was a Support Vector Machine, which achieved a precision of
91.22%, recall of 91.71%, and F1 score of 91.46%.
| 2,021 |
Computation and Language
|
A survey of joint intent detection and slot-filling models in natural
language understanding
|
Intent classification and slot filling are two critical tasks for natural
language understanding. Traditionally the two tasks have been deemed to proceed
independently. However, more recently, joint models for intent classification
and slot filling have achieved state-of-the-art performance, and have proved
that there exists a strong relationship between the two tasks. This article is
a compilation of past work in natural language understanding, especially joint
intent classification and slot filling. We observe three milestones in this
research so far: Intent detection to identify the speaker's intention, slot
filling to label each word token in the speech/text, and finally, joint intent
classification and slot filling tasks. In this article, we describe trends,
approaches, issues, datasets, and evaluation metrics in intent classification and
slot filling. We also discuss representative performance values, describe
shared tasks, and provide pointers to future work, as given in prior works. To
interpret the state-of-the-art trends, we provide multiple tables that describe
and summarise past research along different dimensions, including the types of
features, base approaches, and dataset domain used.
| 2,021 |
Computation and Language
|
Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation
|
Although pre-trained language models such as BERT have achieved appealing
performance in a wide range of natural language processing tasks, they are
computationally expensive to deploy in real-time applications. A typical
method is to adopt knowledge distillation to compress these large pre-trained
models (teacher models) to small student models. However, for a target domain
with scarce training data, the teacher can hardly pass useful knowledge to the
student, which yields performance degradation for the student models. To tackle
this problem, we propose a method to learn to augment for data-scarce domain
BERT knowledge distillation, by learning a cross-domain manipulation scheme
that automatically augments the target with the help of resource-rich source
domains. Specifically, the proposed method generates samples acquired from a
stationary distribution near the target data and adopts a reinforced selector
to automatically refine the augmentation strategy according to the performance
of the student. Extensive experiments demonstrate that the proposed method
significantly outperforms state-of-the-art baselines on four different tasks,
and for the data-scarce domains, the compressed student models even perform
better than the original large teacher model, with much fewer parameters (only
${\sim}13.3\%$) when only a few labeled examples are available.
| 2,021 |
Computation and Language
|
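The entry above builds on teacher-student knowledge distillation. The sketch below shows only the generic distillation objective such compression typically relies on, a temperature-softened KL term plus a hard-label cross-entropy; the paper's cross-domain augmentation and reinforced selector are not shown, and the temperature and mixing weight are assumptions.

```python
# Minimal sketch of a generic knowledge-distillation loss for compressing a
# teacher into a student. T (temperature) and alpha are illustrative choices.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL between temperature-softened teacher and student.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits for a 3-class task.
student = torch.randn(8, 3, requires_grad=True)
teacher = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```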
Classifying Scientific Publications with BERT -- Is Self-Attention a
Feature Selection Method?
|
We investigate the self-attention mechanism of BERT in a fine-tuning scenario
for the classification of scientific articles over a taxonomy of research
disciplines. We observe how self-attention focuses on words that are highly
related to the domain of the article. Particularly, a small subset of
vocabulary words tends to receive most of the attention. We compare and
evaluate the subset of the most attended words with feature selection methods
normally used for text classification in order to characterize self-attention
as a possible feature selection approach. Using ConceptNet as ground truth, we
also find that attended words are more related to the research fields of the
articles. However, conventional feature selection methods are still a better
option to learn classifiers from scratch. This result suggests that, while
self-attention identifies domain-relevant terms, the discriminatory information
in BERT is encoded in the contextualized outputs and the classification layer.
It also raises the question of whether injecting feature selection methods into the
self-attention mechanism could further optimize single sequence classification
using transformers.
| 2,021 |
Computation and Language
|
Active Learning for Sequence Tagging with Deep Pre-trained Models and
Bayesian Uncertainty Estimates
|
Annotating training data for sequence tagging of texts is usually very
time-consuming. Recent advances in transfer learning for natural language
processing in conjunction with active learning open the possibility to
significantly reduce the necessary annotation budget. We are the first to
thoroughly investigate this powerful combination for the sequence tagging task.
We conduct an extensive empirical study of various Bayesian uncertainty
estimation methods and Monte Carlo dropout options for deep pre-trained models
in the active learning framework and find the best combinations for different
types of models. In addition, we demonstrate that to acquire instances during
active learning, a full-size Transformer can be substituted with a distilled
version, which yields better computational performance and reduces obstacles
for applying deep active learning in practice.
| 2,021 |
Computation and Language
|
Can Taxonomy Help? Improving Semantic Question Matching using Question
Taxonomy
|
In this paper, we propose a hybrid technique for semantic question matching.
It uses our proposed two-layered taxonomy for English questions by augmenting
state-of-the-art deep learning models with question classes obtained from a
deep learning based question classifier. Experiments performed on three
open-domain datasets demonstrate the effectiveness of our proposed approach. We
achieve state-of-the-art results on the partial ordering question ranking (POQR)
benchmark dataset. Our empirical analysis shows that coupling standard
distributional features (provided by the question encoder) with knowledge from
taxonomy is more effective than either deep learning (DL) or taxonomy-based
knowledge alone.
| 2,021 |
Computation and Language
|
Word Alignment by Fine-tuning Embeddings on Parallel Corpora
|
Word alignment over parallel corpora has a wide variety of applications,
including learning translation lexicons, cross-lingual transfer of language
processing tools, and automatic evaluation or analysis of translation outputs.
The great majority of past work on word alignment has relied on performing
unsupervised learning on parallel texts. Recently, however, other work has
demonstrated that pre-trained contextualized word embeddings derived from
multilingually trained language models (LMs) prove an attractive alternative,
achieving competitive results on the word alignment task even in the absence of
explicit training on parallel data. In this paper, we examine methods to marry
the two approaches: leveraging pre-trained LMs but fine-tuning them on parallel
text with objectives designed to improve alignment quality, and proposing
methods to effectively extract alignments from these fine-tuned models. We
perform experiments on five language pairs and demonstrate that our model can
consistently outperform previous state-of-the-art models of all varieties. In
addition, we demonstrate that we are able to train multilingual word aligners
that can obtain robust performance on different language pairs. Our aligner,
AWESOME (Aligning Word Embedding Spaces of Multilingual Encoders), with
pre-trained models is available at https://github.com/neulab/awesome-align
| 2,021 |
Computation and Language
|
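For readers unfamiliar with embedding-based alignment, the sketch below shows the generic recipe the entry above starts from: build a similarity matrix between contextualized source and target token vectors and keep mutual-argmax pairs. It is an illustrative sketch with random vectors standing in for multilingual LM outputs, not awesome-align's exact extraction procedure.

```python
# Minimal sketch of extracting word alignments from contextual embeddings
# by bidirectional argmax over a cosine similarity matrix.
import numpy as np

def align(src_vecs: np.ndarray, tgt_vecs: np.ndarray):
    # Cosine similarity matrix between source and target token vectors.
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sim = src @ tgt.T
    # Keep (i, j) only if j is i's best target AND i is j's best source.
    fwd = sim.argmax(axis=1)   # best target for each source token
    bwd = sim.argmax(axis=0)   # best source for each target token
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

rng = np.random.default_rng(0)
src_vecs = rng.normal(size=(5, 768))   # 5 source tokens (stand-in for LM outputs)
tgt_vecs = rng.normal(size=(6, 768))   # 6 target tokens
print(align(src_vecs, tgt_vecs))
```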
Data-to-text Generation by Splicing Together Nearest Neighbors
|
We propose to tackle data-to-text generation tasks by directly splicing
together retrieved segments of text from "neighbor" source-target pairs. Unlike
recent work that conditions on retrieved neighbors but generates text
token-by-token, left-to-right, we learn a policy that directly manipulates
segments of neighbor text, by inserting or replacing them in partially
constructed generations. Standard techniques for training such a policy require
an oracle derivation for each generation, and we prove that finding the
shortest such derivation can be reduced to parsing under a particular weighted
context-free grammar. We find that policies learned in this way perform on par
with strong baselines in terms of automatic and human evaluation, but allow for
more interpretable and controllable generation.
| 2,021 |
Computation and Language
|
Zero-shot Generalization in Dialog State Tracking through Generative
Question Answering
|
Dialog State Tracking (DST), an integral part of modern dialog systems, aims
to track user preferences and constraints (slots) in task-oriented dialogs. In
real-world settings with constantly changing services, DST systems must
generalize to new domains and unseen slot types. Existing methods for DST do
not generalize well to new slot names and many require known ontologies of slot
types and values for inference. We introduce a novel ontology-free framework
that supports natural language queries for unseen constraints and slots in
multi-domain task-oriented dialogs. Our approach is based on generative
question-answering using a conditional language model pre-trained on
substantive English sentences. Our model improves joint goal accuracy in
zero-shot domain adaptation settings by up to 9% (absolute) over the previous
state-of-the-art on the MultiWOZ 2.1 dataset.
| 2,021 |
Computation and Language
|
Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual
Retrieval
|
Pretrained multilingual text encoders based on neural Transformer
architectures, such as multilingual BERT (mBERT) and XLM, have achieved strong
performance on a myriad of language understanding tasks. Consequently, they
have been adopted as a go-to paradigm for multilingual and cross-lingual
representation learning and transfer, rendering cross-lingual word embeddings
(CLWEs) effectively obsolete. However, questions remain as to what extent this
finding generalizes 1) to unsupervised settings and 2) to ad-hoc cross-lingual
IR (CLIR) tasks. Therefore, in this work we present a systematic empirical
study focused on the suitability of the state-of-the-art multilingual encoders
for cross-lingual document and sentence retrieval tasks across a large number
of language pairs. In contrast to supervised language understanding, our
results indicate that for unsupervised document-level CLIR -- a setup with no
relevance judgments for IR-specific fine-tuning -- pretrained encoders fail to
significantly outperform models based on CLWEs. For sentence-level CLIR, we
demonstrate that state-of-the-art performance can be achieved. However, the
peak performance is not reached using the general-purpose multilingual text
encoders `off-the-shelf', but rather by relying on their variants that have been
further specialized for sentence understanding tasks.
| 2,021 |
Computation and Language
|
ParaSCI: A Large Scientific Paraphrase Dataset for Longer Paraphrase
Generation
|
We propose ParaSCI, the first large-scale paraphrase dataset in the
scientific field, including 33,981 paraphrase pairs from ACL (ParaSCI-ACL) and
316,063 pairs from arXiv (ParaSCI-arXiv). Digging into the characteristics and
common patterns of scientific papers, we construct this dataset through
intra-paper and inter-paper methods, such as collecting citations to the same
paper or aggregating definitions of scientific terms. To take advantage of
partially paraphrased sentences, we propose PDBERT as a general paraphrase
discovery method. The major advantages of paraphrases in ParaSCI lie in their
prominent length and textual diversity, which are complementary to existing
paraphrase datasets. ParaSCI obtains satisfactory results on human evaluation
and downstream tasks, especially long paraphrase generation.
| 2,021 |
Computation and Language
|
Content Selection Network for Document-grounded Retrieval-based Chatbots
|
Grounding human-machine conversation in a document is an effective way to
improve the performance of retrieval-based chatbots. However, only a part of
the document content may be relevant to selecting the appropriate response at
a given round. It is thus crucial to select the part of the document content relevant to
the current conversation context. In this paper, we propose a document content
selection network (CSN) to perform explicit selection of relevant document
contents, and filter out the irrelevant parts. We show in experiments on two
public document-grounded conversation datasets that CSN can effectively help
select the document contents relevant to the conversation context, and that it
produces better results than the state-of-the-art approaches. Our code and
datasets are available at https://github.com/DaoD/CSN.
| 2,021 |
Computation and Language
|
Adv-OLM: Generating Textual Adversaries via OLM
|
Deep learning models are susceptible to adversarial examples that have
imperceptible perturbations in the original input, resulting in adversarial
attacks against these models. Analysis of these attacks on state-of-the-art
transformers in NLP can help improve the robustness of these models against
such adversarial inputs. In this paper, we present Adv-OLM, a black-box attack
method that adapts the idea of Occlusion and Language Models (OLM) to the
current state-of-the-art attack methods. OLM is used to rank the words of a
sentence, which are later substituted using word replacement strategies. We
experimentally show that our approach outperforms other attack methods for
several text classification tasks.
| 2,021 |
Computation and Language
|
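The entry above adapts occlusion-based word ranking. A minimal sketch of that ranking step follows: score each word by the drop in the victim classifier's confidence when the word is masked, then substitute the highest-scoring words first. The toy scoring function is a stand-in for a real black-box model, not the OLM formulation itself.

```python
# Minimal sketch of occlusion-based word ranking for a black-box attack:
# words whose masking hurts the model's confidence most are attacked first.
from typing import Callable, List, Tuple

def rank_words(tokens: List[str], predict_proba: Callable[[List[str]], float],
               mask: str = "[MASK]") -> List[Tuple[str, float]]:
    base = predict_proba(tokens)
    drops = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + [mask] + tokens[i + 1:]
        drops.append((tokens[i], base - predict_proba(occluded)))
    # Highest confidence drop first; these words are substituted first.
    return sorted(drops, key=lambda x: x[1], reverse=True)

# Toy victim model: "confidence" is just the fraction of positive cue words.
def toy_model(tokens: List[str]) -> float:
    cues = {"great", "wonderful", "loved"}
    return sum(t in cues for t in tokens) / max(len(tokens), 1)

print(rank_words("i loved this great movie".split(), toy_model))
```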
Validating Label Consistency in NER Data Annotation
|
Data annotation plays a crucial role in ensuring that named entity
recognition (NER) systems are trained with the right information to learn
from. Producing the most accurate labels is a challenge due to the complexity
involved with annotation. Label inconsistency between multiple subsets of data
annotation (e.g., training set and test set, or multiple training subsets) is
an indicator of label mistakes. In this work, we present an empirical method to
explore the relationship between label (in-)consistency and NER model
performance. It can be used to validate the label consistency (or catch the
inconsistency) in multiple sets of NER data annotation. In experiments, our
method identified the label inconsistency of test data in SCIERC and CoNLL03
datasets (with 26.7% and 5.4% label mistakes). It validated the consistency in
the corrected version of both datasets.
| 2,021 |
Computation and Language
|
Multi-sense embeddings through a word sense disambiguation process
|
Natural Language Understanding has seen an increasing number of publications
in the last few years, especially after robust word embeddings models became
prominent, when they proved themselves able to capture and represent semantic
relationships from massive amounts of data. Nevertheless, traditional models
often fall short in intrinsic issues of linguistics, such as polysemy and
homonymy. Any expert system that makes use of natural language at its core can
be affected by a weak semantic representation of text, resulting in inaccurate
outcomes based on poor decisions. To mitigate such issues, we propose a novel
approach called Most Suitable Sense Annotation (MSSA), that disambiguates and
annotates each word by its specific sense, considering the semantic effects of
its context. Our approach brings three main contributions to the semantic
representation scenario: (i) an unsupervised technique that disambiguates and
annotates words by their senses, (ii) a multi-sense embeddings model that can
be extended to any traditional word embeddings algorithm, and (iii) a recurrent
methodology that allows our models to be re-used and their representations
refined. We test our approach on six different benchmarks for the word
similarity task, showing that our approach can produce state-of-the-art results
and outperforms several more complex state-of-the-art systems.
| 2,019 |
Computation and Language
|
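A minimal sketch of the general multi-sense idea described above: tag each ambiguous word with a context-dependent sense and then train any standard word-embedding model on the sense-tagged tokens. The toy overlap-based disambiguator and tiny sense inventory are assumptions for illustration, not the paper's MSSA algorithm.

```python
# Minimal sketch: annotate ambiguous words with a sense chosen from context
# (toy gloss-overlap disambiguation), then train embeddings on sense tags.
SENSES = {
    "bank": {"bank#finance": {"money", "loan", "account"},
             "bank#river": {"water", "river", "shore"}},
}

def tag_senses(tokens):
    tagged = []
    for i, tok in enumerate(tokens):
        if tok in SENSES:
            context = set(tokens[max(0, i - 5):i + 6])
            # Pick the sense whose gloss words overlap the context most.
            best = max(SENSES[tok], key=lambda s: len(SENSES[tok][s] & context))
            tagged.append(best)
        else:
            tagged.append(tok)
    return tagged

corpus = [
    "she opened an account at the bank to deposit money".split(),
    "they sat on the bank and watched the river water flow".split(),
]
sense_corpus = [tag_senses(sent) for sent in corpus]
print(sense_corpus)
# sense_corpus can now be fed to any word-embedding trainer (word2vec, GloVe,
# fastText), yielding one vector per word sense instead of per surface form.
```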
Distilling Large Language Models into Tiny and Effective Students using
pQRNN
|
Large pre-trained multilingual models like mBERT and XLM-R achieve state-of-the-art
results on language understanding tasks. However, they are not well suited
for latency-critical applications on either servers or edge devices. It is
important to reduce the memory and compute resources required by these models.
To this end, we propose pQRNN, a projection-based embedding-free neural encoder
that is tiny and effective for natural language processing tasks. Without
pre-training, pQRNNs significantly outperform LSTM models with pre-trained
embeddings despite being 140x smaller. With the same number of parameters, they
outperform transformer baselines thereby showcasing their parameter efficiency.
Additionally, we show that pQRNNs are effective student architectures for
distilling large pre-trained language models. We perform careful ablations
which study the effect of pQRNN parameters, data augmentation, and distillation
settings. On MTOP, a challenging multilingual semantic parsing dataset, pQRNN
students achieve 95.9\% of the performance of an mBERT teacher while being 350x
smaller. On mATIS, a popular parsing task, pQRNN students on average are able
to get to 97.1\% of the teacher while again being 350x smaller. Our strong
results suggest that our approach is great for latency-sensitive applications
while being able to leverage large mBERT-like models.
| 2,021 |
Computation and Language
|
Enriching Non-Autoregressive Transformer with Syntactic and
Semantic Structures for Neural Machine Translation
|
The non-autoregressive models have boosted the efficiency of neural machine
translation through parallelized decoding, at the cost of effectiveness when
compared with their autoregressive counterparts. In this paper, we claim that
the syntactic and semantic structures of natural language are critical for
non-autoregressive machine translation and can further improve the performance.
However, these structures are rarely considered in the existing
non-autoregressive models. Inspired by this intuition, we propose to
incorporate the explicit syntactic and semantic structures of languages into a
non-autoregressive Transformer, for the task of neural machine translation.
Moreover, we also consider the intermediate latent alignment within target
sentences to better learn the long-term token dependencies. Experimental
results on two real-world datasets (i.e., WMT14 En-De and WMT16 En-Ro) show
that our model achieves significantly faster speed while maintaining
translation quality when compared with several state-of-the-art
non-autoregressive models.
| 2,021 |
Computation and Language
|
Knowledge Graph Completion with Text-aided Regularization
|
Knowledge Graph Completion is a task of expanding the knowledge graph/base
through estimating possible entities, or proper nouns, that can be connected
using a set of predefined relations, or verb/predicates describing
interconnections of two things. Generally, we describe this problem as adding
new edges to a current network of vertices and edges. Traditional approaches
mainly focus on using the existing graphical information that is intrinsic to
the graph and train the corresponding embeddings to describe the information;
however, we think that the corpora related to the entities should also
contain information that can positively influence the embeddings to make better
predictions. In our project, we try numerous ways of using extracted or raw
textual information to help existing KG embedding frameworks reach better
prediction results, by means of adding a similarity function to the
regularization term of the loss function. Results have shown that we have made
decent improvements over baseline KG embedding methods.
| 2,021 |
Computation and Language
|
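As a concrete illustration of adding a similarity function to the regularization part of the loss, the sketch below combines a TransE-style margin loss with a term that keeps entity embeddings close to pre-computed embeddings of their associated text. Shapes, the margin, and the weighting factor are assumptions, not the authors' settings.

```python
# Minimal sketch: TransE margin loss plus a text-similarity regularizer that
# pulls entity embeddings toward embeddings of their textual descriptions.
import torch
import torch.nn.functional as F

n_ent, n_rel, dim = 100, 10, 50
ent = torch.nn.Embedding(n_ent, dim)
rel = torch.nn.Embedding(n_rel, dim)
text = torch.randn(n_ent, dim)          # pre-computed text embeddings of entities

def loss_fn(pos, neg, lam=0.1, margin=1.0):
    # pos, neg: LongTensors of shape (batch, 3) holding (head, relation, tail).
    def score(t):                        # TransE score: ||h + r - t||
        return (ent(t[:, 0]) + rel(t[:, 1]) - ent(t[:, 2])).norm(dim=-1)
    rank_loss = F.relu(margin + score(pos) - score(neg)).mean()
    # Text-aided regularization: entity vectors should stay close (cosine-wise)
    # to the embeddings of their associated descriptions.
    ents = torch.cat([pos[:, 0], pos[:, 2]])
    reg = 1 - F.cosine_similarity(ent(ents), text[ents], dim=-1).mean()
    return rank_loss + lam * reg

pos = torch.randint(0, n_ent, (32, 3)); pos[:, 1] = torch.randint(0, n_rel, (32,))
neg = pos.clone(); neg[:, 2] = torch.randint(0, n_ent, (32,))
print(float(loss_fn(pos, neg)))
```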
CMSAOne@Dravidian-CodeMix-FIRE2020: A Meta Embedding and Transformer
model for Code-Mixed Sentiment Analysis on Social Media Text
|
Code-mixing(CM) is a frequently observed phenomenon that uses multiple
languages in an utterance or sentence. CM is mostly practiced on various social
media platforms and in informal conversations. Sentiment analysis (SA) is a
fundamental step in NLP and is well studied for monolingual text.
Code-mixing adds a challenge to sentiment analysis due to its non-standard
representations. This paper proposes a meta embedding with a transformer method
for sentiment analysis on the Dravidian code-mixed dataset. In our method, we
used meta embeddings to capture rich text representations. We used the proposed
method for the Task: "Sentiment Analysis for Dravidian Languages in Code-Mixed
Text", and it achieved an F1 score of $0.58$ and $0.66$ for the given Dravidian
code-mixed datasets. The code is provided on GitHub at
https://github.com/suman101112/fire-2020-Dravidian-CodeMix.
| 2,021 |
Computation and Language
|
HASOCOne@FIRE-HASOC2020: Using BERT and Multilingual BERT models for
Hate Speech Detection
|
Hateful and toxic content has become a significant concern in today's world
due to the exponential rise of social media. The increase in hate speech and
harmful content motivated researchers to dedicate substantial efforts to the
challenging direction of hateful content identification. In this task, we
propose an approach to automatically classify hate speech and offensive
content. We have used the datasets obtained from FIRE 2019 and 2020 shared
tasks. We perform experiments by taking advantage of transfer learning models.
We observed that the pre-trained BERT model and the multilingual-BERT model
gave the best results. The code is made publicly available at
https://github.com/suman101112/hasoc-fire-2020.
| 2,021 |
Computation and Language
|
Does a Hybrid Neural Network based Feature Selection Model Improve Text
Classification?
|
Text classification is a fundamental problem in the field of natural language
processing. Text classification mainly focuses on giving more importance to all
the relevant features that help classify the textual data. Apart from these,
the text can have redundant or highly correlated features. These features
increase the complexity of the classification algorithm. Thus, many
dimensionality reduction methods were proposed with the traditional machine
learning classifiers. The use of dimensionality reduction methods with machine
learning classifiers has achieved good results. In this paper, we propose a
hybrid feature selection method for obtaining relevant features by combining
various filter-based feature selection methods and a fastText classifier. We then
present three ways of implementing a feature selection and neural network
pipeline. We observed a reduction in training time when feature selection
methods are used along with neural networks. We also observed a slight increase
in accuracy on some datasets.
| 2,021 |
Computation and Language
|
Multilingual Pre-Trained Transformers and Convolutional NN
Classification Models for Technical Domain Identification
|
In this paper, we present a transfer learning system to perform technical
domain identification on multilingual text data. We have submitted two runs:
one uses the transformer model BERT, and the other uses XLM-RoBERTa with a
CNN model for text classification. These models allowed us to identify the
domain of the given sentences for the ICON 2020 shared Task, TechDOfication:
Technical Domain Identification. Our system ranked best on subtasks
1d and 1g of the given TechDOfication dataset.
| 2,021 |
Computation and Language
|
Unsupervised Technical Domain Terms Extraction using Term Extractor
|
Terminology extraction, also known as term extraction, is a subtask of
information extraction. The goal of terminology extraction is to extract
relevant words or phrases from a given corpus automatically. This paper focuses
on an unsupervised automated domain term extraction method that considers
chunking, preprocessing, and ranking domain-specific terms using relevance and
cohesion functions for ICON 2020 shared task 2: TermTraction.
| 2,021 |
Computation and Language
|
Enhanced word embeddings using multi-semantic representation through
lexical chains
|
The relationship between words in a sentence often tells us more about the
underlying semantic content of a document than its actual words, individually.
In this work, we propose two novel algorithms, called Flexible Lexical Chain II
and Fixed Lexical Chain II. These algorithms combine the semantic relations
derived from lexical chains, prior knowledge from lexical databases, and the
robustness of the distributional hypothesis in word embeddings as building
blocks forming a single system. In short, our approach has three main
contributions: (i) a set of techniques that fully integrate word embeddings and
lexical chains; (ii) a more robust semantic representation that considers the
latent relation between words in a document; and (iii) lightweight word
embeddings models that can be extended to any natural language task. We intend
to assess the knowledge of pre-trained models to evaluate their robustness in
the document classification task. The proposed techniques are tested against
seven word embeddings algorithms using five different machine learning
classifiers over six scenarios in the document classification task. Our results
show that the integration of lexical chains and word embedding representations
sustains state-of-the-art results, even against more complex systems.
| 2,020 |
Computation and Language
|
Lexical semantic change for Ancient Greek and Latin
|
Change and its precondition, variation, are inherent in languages. Over time,
new words enter the lexicon, others become obsolete, and existing words acquire
new senses. Associating a word's correct meaning in its historical context is a
central challenge in diachronic research. Historical corpora of classical
languages, such as Ancient Greek and Latin, typically come with rich metadata,
and existing models are limited by their inability to exploit contextual
information beyond the document timestamp. While embedding-based methods
feature among the current state-of-the-art systems, they are lacking in
interpretative power. In contrast, Bayesian models provide explicit and
interpretable representations of semantic change phenomena. In this chapter we
build on GASC, a recent computational approach to semantic change based on a
dynamic Bayesian mixture model. In this model, the evolution of word senses
over time is based not only on distributional information of lexical nature,
but also on text genres. We provide a systematic comparison of dynamic Bayesian
mixture models for semantic change with state-of-the-art embedding-based
models. On top of providing a full description of meaning change over time, we
show that Bayesian mixture models are highly competitive approaches to detect
binary semantic change in both Ancient Greek and Latin.
| 2,021 |
Computation and Language
|
Evaluation Discrepancy Discovery: A Sentence Compression Case-study
|
Reliable evaluation protocols are of utmost importance for reproducible NLP
research. In this work, we show that sometimes neither metric nor conventional
human evaluation is sufficient to draw conclusions about system performance.
Using sentence compression as an example task, we demonstrate how a system can
game a well-established dataset to achieve state-of-the-art results. In
contrast with the results reported in previous work that showed correlation
between human judgements and metric scores, our manual analysis of
state-of-the-art system outputs demonstrates that high metric scores may only
indicate a better fit to the data, but not better outputs, as perceived by
humans.
| 2,021 |
Computation and Language
|
A multi-perspective combined recall and rank framework for Chinese
procedure terminology normalization
|
Medical terminology normalization aims to map clinical mentions to
terminologies that come from a knowledge base, which plays an important role in
analyzing Electronic Health Records (EHR) and many downstream tasks. In this
paper, we focus on Chinese procedure terminology normalization. The expressions
of terminologies are varied, and one medical mention may be linked to multiple
terminologies. Previous studies explore methods such as multi-class
classification or learning to rank (LTR) to sort the terminologies by literal
and semantic information. However, this information is inadequate for finding
the right terminologies, particularly in multi-implication cases. In this work,
we propose a combined recall and rank framework to solve the above problems.
This framework is composed of a multi-task candidate generator (MTCG), a
keyword-attentive ranker (KAR), and a fusion block (FB). MTCG is utilized to
predict the mention implication number and recall candidates with semantic
similarity. KAR is based on BERT with a keyword-attentive mechanism which
focuses on keywords such as procedure sites and procedure types. FB merges the
similarity scores from MTCG and KAR to sort the terminologies from different
perspectives. Detailed experimental analysis shows that our proposed framework
achieves a remarkable improvement in both performance and efficiency.
| 2,021 |
Computation and Language
|
The heads hypothesis: A unifying statistical approach towards
understanding multi-headed attention in BERT
|
Multi-headed attention is a mainstay of transformer-based models.
Different methods have been proposed to classify the role of each attention
head based on the relations between tokens which have high pair-wise attention.
These roles include syntactic (tokens with some syntactic relation), local
(nearby tokens), block (tokens in the same sentence) and delimiter (the special
[CLS], [SEP] tokens). There are two main challenges with existing methods for
classification: (a) there are no standard scores across studies or across
functional roles, and (b) these scores are often average quantities measured
across sentences without capturing statistical significance. In this work, we
formalize a simple yet effective score that generalizes to all the roles of
attention heads and employs hypothesis testing on this score for robust
inference. This provides us with the right lens to systematically analyze attention
heads and confidently comment on many commonly posed questions on analyzing the
BERT model. In particular, we comment on the co-location of multiple functional
roles in the same attention head, the distribution of attention heads across
layers, and the effect of fine-tuning for specific NLP tasks on these functional
roles.
| 2,021 |
Computation and Language
|
Streaming Models for Joint Speech Recognition and Translation
|
Using end-to-end models for speech translation (ST) has increasingly been the
focus of the ST community. These models condense the previously cascaded
systems by directly converting sound waves into translated text. However,
cascaded models have the advantage of including automatic speech recognition
output, useful for a variety of practical ST systems that often display
transcripts to the user alongside the translations. To bridge this gap, recent
work has shown initial progress toward the feasibility of end-to-end models
producing both of these outputs. However, all previous work has only looked at
this problem from the consecutive perspective, leaving uncertainty on whether
these approaches are effective in the more challenging streaming setting. We
develop an end-to-end streaming ST model based on a re-translation approach and
compare against standard cascading approaches. We also introduce a novel
inference method for the joint case, interleaving both transcript and
translation in generation and removing the need to use separate decoders. Our
evaluation across a range of metrics capturing accuracy, latency, and
consistency shows that our end-to-end models are statistically similar to
cascading models, while having half the number of parameters. We also find that
both systems provide strong translation quality at low latency, keeping 99% of
consecutive quality at a lag of just under a second.
| 2,021 |
Computation and Language
|
Extracting Lifestyle Factors for Alzheimer's Disease from Clinical Notes
Using Deep Learning with Weak Supervision
|
Since no effective therapies exist for Alzheimer's disease (AD), prevention
has become more critical through lifestyle factor changes and interventions.
Analyzing electronic health records (EHR) of patients with AD can help us
better understand lifestyle's effect on AD. However, lifestyle information is
typically stored in clinical narratives. Thus, the objective of the study was
to demonstrate the feasibility of natural language processing (NLP) models to
classify lifestyle factors (e.g., physical activity and excessive diet) from
clinical texts. We automatically generated labels for the training data by
using a rule-based NLP algorithm. We conducted weak supervision for pre-trained
Bidirectional Encoder Representations from Transformers (BERT) models on the
weakly labeled training corpus. These models include the BERT base model,
PubMedBERT(abstracts + full text), PubMedBERT(only abstracts), Unified Medical
Language System (UMLS) BERT, Bio BERT, and Bio-clinical BERT. We performed two
case studies: physical activity and excessive diet, in order to validate the
effectiveness of BERT models in classifying lifestyle factors for AD. These
models were compared on the developed Gold Standard Corpus (GSC) on the two
case studies. The PubMedBERT(Abs) model achieved the best performance for
physical activity, with its precision, recall, and F-1 scores of 0.96, 0.96,
and 0.96, respectively. Regarding classifying excessive diet, the Bio BERT
model showed the highest performance with perfect precision, recall, and F-1
scores. The proposed approach leveraging weak supervision could significantly
increase the sample size, which is required for training the deep learning
models. The study also demonstrates the effectiveness of BERT models for
extracting lifestyle factors for Alzheimer's disease from clinical notes.
| 2,021 |
Computation and Language
|
Beyond Domain APIs: Task-oriented Conversational Modeling with
Unstructured Knowledge Access Track in DSTC9
|
Most prior work on task-oriented dialogue systems is restricted to a limited
coverage of domain APIs, while users oftentimes have domain related requests
that are not covered by the APIs. This challenge track aims to expand the
coverage of task-oriented dialogue systems by incorporating external
unstructured knowledge sources. We define three tasks: knowledge-seeking turn
detection, knowledge selection, and knowledge-grounded response generation. We
introduce the datasets and the neural baseline models for the three tasks. The
challenge track received a total of 105 entries from 24 participating teams. In
the evaluation results, the ensemble methods with different large-scale
pretrained language models achieved high performance, with improved knowledge
selection capability and better generalization to unseen data.
| 2,021 |
Computation and Language
|
Censorship of Online Encyclopedias: Implications for NLP Models
|
While artificial intelligence provides the backbone for many tools people use
around the world, recent work has drawn attention to the fact that the algorithms
powering AI are not free of politics, stereotypes, and bias. While most work in
this area has focused on the ways in which AI can exacerbate existing
inequalities and discrimination, very little work has studied how governments
actively shape training data. We describe how censorship has affected the
development of Wikipedia corpora, text data which are regularly used for
pre-trained inputs into NLP algorithms. We show that word embeddings trained on
Baidu Baike, an online Chinese encyclopedia, have very different associations
between adjectives and a range of concepts about democracy, freedom, collective
action, equality, and people and historical events in China than its regularly
blocked but uncensored counterpart, Chinese-language Wikipedia. We examine the
implications of these discrepancies by studying their use in downstream AI
applications. Our paper shows how government repression, censorship, and
self-censorship may impact training data and the applications that draw from
them.
| 2,021 |
Computation and Language
|
Drug and Disease Interpretation Learning with Biomedical Entity
Representation Transformer
|
Concept normalization in free-form texts is a crucial step in every
text-mining pipeline. Neural architectures based on Bidirectional Encoder
Representations from Transformers (BERT) have achieved state-of-the-art results
in the biomedical domain. In the context of drug discovery and development,
clinical trials are necessary to establish the efficacy and safety of drugs. We
investigate the effectiveness of transferring concept normalization from the
general biomedical domain to the clinical trials domain in a zero-shot setting
with an absence of labeled data. We propose a simple and effective two-stage
neural approach based on fine-tuned BERT architectures. In the first stage, we
train a metric learning model that optimizes relative similarity of mentions
and concepts via triplet loss. The model is trained on available labeled
corpora of scientific abstracts to obtain vector embeddings of concept names
and entity mentions from texts. In the second stage, we find the closest
concept name representation in an embedding space to a given clinical mention.
We evaluated several models, including state-of-the-art architectures, on a
dataset of abstracts and a real-world dataset of trial records with
interventions and conditions mapped to drug and disease terminologies.
Extensive experiments validate the effectiveness of our approach in knowledge
transfer from the scientific literature to clinical trials.
| 2,021 |
Computation and Language
|
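A minimal sketch of the two-stage recipe described above: metric learning with a triplet loss so mentions land near their concept names, then zero-shot linking by nearest concept in the embedding space. The linear "encoder" is a placeholder for a fine-tuned BERT, and all data here is synthetic.

```python
# Minimal sketch: (1) triplet-loss metric learning over mention/concept pairs,
# (2) nearest-neighbor concept linking in the resulting embedding space.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
encoder = torch.nn.Linear(128, 64)        # placeholder for a BERT encoder

def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Stage 1: one toy optimization step on (mention, correct concept, wrong concept).
mention, pos_concept, neg_concept = (torch.randn(16, 128) for _ in range(3))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss = triplet_loss(encoder(mention), encoder(pos_concept), encoder(neg_concept))
loss.backward()
opt.step()

# Stage 2: link a new clinical mention to its closest concept name.
concept_vecs = F.normalize(encoder(torch.randn(500, 128)), dim=-1)  # concept dictionary
query = F.normalize(encoder(torch.randn(1, 128)), dim=-1)           # clinical mention
best_concept = (query @ concept_vecs.T).argmax(dim=-1)
print("linked to concept index:", int(best_concept))
```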
$k$-Neighbor Based Curriculum Sampling for Sequence Prediction
|
Multi-step ahead prediction in language models is challenging due to the
discrepancy between training and test time processes. At test time, a sequence
predictor is required to make predictions given past predictions as the input,
instead of the past targets that are provided during training. This difference,
known as exposure bias, can lead to the compounding of errors along a generated
sequence at test time. To improve generalization in neural language models and
address compounding errors, we propose \textit{Nearest-Neighbor Replacement
Sampling} -- a curriculum learning-based method that gradually changes an
initially deterministic teacher policy to a stochastic policy. A token at a
given time-step is replaced with a sampled nearest neighbor of the past target
with a truncated probability proportional to the cosine similarity between the
original word and its top $k$ most similar words. This allows the learner to
explore alternatives when the current policy provided by the teacher is
sub-optimal or difficult to learn from. The proposed method is straightforward,
online and requires little additional memory requirements. We report our
findings on two language modelling benchmarks and find that the proposed method
further improves performance when used in conjunction with scheduled sampling.
| 2,021 |
Computation and Language
|
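A minimal sketch of nearest-neighbor replacement sampling as described above: with some probability, the gold input token is swapped for one of its top-k cosine neighbors, sampled in proportion to similarity. The embedding table, truncation, and annealing schedule are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of nearest-neighbor replacement sampling for a sequence model.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "a", "cat", "dog", "sat", "ran", "mat", "rug"]
emb = rng.normal(size=(len(vocab), 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def replace_token(token_id: int, k: int = 3, eps: float = 0.3) -> int:
    if rng.random() > eps:                        # teacher stays deterministic
        return token_id
    sims = emb @ emb[token_id]
    sims[token_id] = -np.inf                      # exclude the word itself
    top_k = np.argsort(sims)[-k:]
    probs = sims[top_k] - sims[top_k].min() + 1e-9  # truncated, non-negative
    probs /= probs.sum()
    return int(rng.choice(top_k, p=probs))

# eps would be annealed upward as training progresses, turning the initially
# deterministic teacher policy into a stochastic one (the curriculum).
gold = [vocab.index(w) for w in ["the", "cat", "sat"]]
print([vocab[replace_token(t, eps=0.5)] for t in gold])
```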
BERT Transformer model for Detecting Arabic GPT2 Auto-Generated Tweets
|
During the last two decades, we have progressively turned to the Internet and
social media to find news, engage in conversations, and share opinions. Recently,
OpenAI has developed a machine learning system called GPT-2, for Generative
Pre-trained Transformer-2, which can produce deepfake texts. It can generate
blocks of text based on brief writing prompts that look like they were written
by humans, facilitating the spread of false or auto-generated text. In line with
this progress, and in order to counteract potential dangers, several methods
have been proposed for detecting text written by these language models. In
this paper, we propose a transfer learning based model that will be able to
detect if an Arabic sentence is written by humans or automatically generated by
bots. Our dataset is based on tweets from a previous work, which we have
crawled and extended using the Twitter API. We used GPT2-Small-Arabic to
generate fake Arabic Sentences. For evaluation, we compared different recurrent
neural network (RNN) word embeddings based baseline models, namely: LSTM,
BI-LSTM, GRU, and BI-GRU, with a transformer-based model. Our new
transfer-learning model has obtained an accuracy of up to 98%. To the best of our
knowledge, this work is the first study where AraBERT and GPT-2 were combined to
detect and classify the Arabic auto-generated texts.
| 2,020 |
Computation and Language
|
Effects of Pre- and Post-Processing on type-based Embeddings in Lexical
Semantic Change Detection
|
Lexical semantic change detection is a new and innovative research field. The
optimal fine-tuning of models including pre- and post-processing is largely
unclear. We optimize existing models by (i) pre-training on large corpora and
refining on diachronic target corpora tackling the notorious small data
problem, and (ii) applying post-processing transformations that have been shown
to improve performance on synchronic tasks. Our results provide a guide for the
application and optimization of lexical semantic change detection models across
various learning scenarios.
| 2,021 |
Computation and Language
|
Slot Self-Attentive Dialogue State Tracking
|
An indispensable component in task-oriented dialogue systems is the dialogue
state tracker, which keeps track of users' intentions in the course of
conversation. The typical approach towards this goal is to fill in multiple
pre-defined slots that are essential to complete the task. Although various
dialogue state tracking methods have been proposed in recent years, most of
them predict the value of each slot separately and fail to consider the
correlations among slots. In this paper, we propose a slot self-attention
mechanism that can learn the slot correlations automatically. Specifically, a
slot-token attention is first utilized to obtain slot-specific features from
the dialogue context. Then a stacked slot self-attention is applied on these
features to learn the correlations among slots. We conduct comprehensive
experiments on two multi-domain task-oriented dialogue datasets, including
MultiWOZ 2.0 and MultiWOZ 2.1. The experimental results demonstrate that our
approach achieves state-of-the-art performance on both datasets, verifying the
necessity and effectiveness of taking slot correlations into consideration.
| 2,021 |
Computation and Language
|
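A minimal sketch of the two attention stages described above: slot-token attention extracts slot-specific features from the dialogue context, and stacked self-attention over those features models slot correlations. Dimensions, head counts, and layer counts are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of slot-token attention followed by slot self-attention.
import torch
import torch.nn as nn

d_model, n_slots, ctx_len = 64, 30, 120
slot_queries = nn.Parameter(torch.randn(n_slots, d_model))  # one query per slot
slot_token_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
slot_self_attn = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)

def encode_slots(context_states: torch.Tensor) -> torch.Tensor:
    # context_states: (batch, ctx_len, d_model) from a dialogue-context encoder.
    batch = context_states.size(0)
    queries = slot_queries.unsqueeze(0).expand(batch, -1, -1)
    # Stage 1: each slot attends over the dialogue context tokens.
    slot_feats, _ = slot_token_attn(queries, context_states, context_states)
    # Stage 2: stacked self-attention over slot features learns correlations.
    return slot_self_attn(slot_feats)            # (batch, n_slots, d_model)

context = torch.randn(2, ctx_len, d_model)       # stand-in for BERT outputs
print(encode_slots(context).shape)
```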
Analyzing Team Performance with Embeddings from Multiparty Dialogues
|
Good communication is indubitably the foundation of effective teamwork. Over
time teams develop their own communication styles and often exhibit
entrainment, a conversational phenomena in which humans synchronize their
linguistic choices. This paper examines the problem of predicting team
performance from embeddings learned from multiparty dialogues such that teams
with similar conflict scores lie close to one another in vector space.
Embeddings were extracted from three types of features: 1) dialogue acts, 2)
sentiment polarity, and 3) syntactic entrainment. Although all of these features can
be used to effectively predict team performance, their utility varies by the
teamwork phase. We separate the dialogues of players playing a cooperative game
into three stages: 1) early (knowledge building), 2) middle (problem-solving), and 3)
late (culmination). Unlike syntactic entrainment, both dialogue act and
sentiment embeddings are effective for classifying team performance, even
during the initial phase. This finding has potential ramifications for the
development of conversational agents that facilitate teaming.
| 2,021 |
Computation and Language
|
Towards Natural Language Question Answering over Earth Observation
Linked Data using Attention-based Neural Machine Translation
|
With an increase in Geospatial Linked Open Data being adopted and published
over the web, there is a need to develop intuitive interfaces and systems for
seamless and efficient exploratory analysis of such rich heterogeneous
multi-modal datasets. This work is geared towards improving the exploration
process of Earth Observation (EO) Linked Data by developing a natural language
interface to facilitate querying. Questions asked over Earth Observation Linked
Data have an inherent spatio-temporal dimension and can be represented using
GeoSPARQL. This paper seeks to study and analyze the use of RNN-based neural
machine translation with attention for transforming natural language questions
into GeoSPARQL queries. Specifically, it aims to assess the feasibility of a
neural approach for identifying and mapping spatial predicates in natural
language to GeoSPARQL's topology vocabulary extension, including the Egenhofer and
RCC8 relations. The queries can then be executed over a triple store to yield
answers for the natural language questions. A dataset consisting of mappings
from natural language questions to GeoSPARQL queries over the Corine Land
Cover (CLC) Linked Data has been created to train and validate the deep neural
network. From our experiments, it is evident that neural machine translation
with attention is a promising approach for the task of translating spatial
predicates in natural language questions to GeoSPARQL queries.
| 2,021 |
Computation and Language
|
Reproducibility, Replicability and Beyond: Assessing Production
Readiness of Aspect Based Sentiment Analysis in the Wild
|
With the exponential growth of online marketplaces and user-generated content
therein, aspect-based sentiment analysis has become more important than ever.
In this work, we critically review a representative sample of the models
published during the past six years through the lens of a practitioner, with an
eye towards deployment in production. First, our rigorous empirical evaluation
reveals poor reproducibility: an average 4-5% drop in test accuracy across the
sample. Second, to further bolster our confidence in empirical evaluation, we
report experiments on two challenging data slices, and observe a consistent
12-55% drop in accuracy. Third, we study the possibility of transfer across
domains and observe that as little as 10-25% of the domain-specific training
dataset, when used in conjunction with datasets from other domains within the
same locale, largely closes the gap between complete cross-domain and complete
in-domain predictive performance. Lastly, we open-source two large-scale
annotated review corpora from a large e-commerce portal in India in order to
aid the study of replicability and transfer, with the hope that it will fuel
further growth of the field.
| 2,021 |
Computation and Language
|
ARTH: Algorithm For Reading Text Handily -- An AI Aid for People having
Word Processing Issues
|
The objective of this project is to solve one of the major problems faced by
people having word-processing issues such as trauma or mild mental disability.
"ARTH" is short for Algorithm for Reading Text Handily. ARTH is a self-learning
set of algorithms that intelligently fulfils the need for "reading and
understanding the text effortlessly", adjusting to the needs of every user. The
project proceeds in two steps. In the first step, the algorithm identifies the
difficult words present in the text based on two features -- the number of
syllables and usage frequency -- using a clustering algorithm. After analyzing
the clusters, the algorithm labels them according to their difficulty level. In
the second step, the algorithm interacts with the user. It tests the user's
comprehension of the text and his/her vocabulary level through an automatically
generated quiz. Based on the results of this analysis, the algorithm identifies
the clusters that are difficult for the user, and the meanings of the perceived
difficult words are displayed next to them. The technology "ARTH" focuses on
reviving the joy of reading among people who have a poor vocabulary or other
word-processing issues.
| 2,021 |
Computation and Language
|
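A minimal sketch of the first ARTH step described above: cluster words by syllable count and usage frequency, then label the clusters by difficulty. The vowel-based syllable heuristic, the toy text and the number of clusters are assumptions for illustration.

```python
# Sketch of the first ARTH step: cluster words by syllable count and usage
# frequency, then rank clusters by difficulty. Heuristics are illustrative.
import re
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

def count_syllables(word: str) -> int:
    # crude heuristic: count groups of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

text = "reading effortlessly is enjoyable but some vocabulary is formidable"
tokens = re.findall(r"[a-z]+", text.lower())
freq = Counter(tokens)

words = sorted(freq)
features = np.array([[count_syllables(w), freq[w]] for w in words], dtype=float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

# Rank clusters by mean syllable count (proxy for difficulty); higher = harder.
order = np.argsort([features[kmeans.labels_ == c, 0].mean() for c in range(3)])
difficulty = {c: rank for rank, c in enumerate(order)}  # 0 = easiest
for w, c in zip(words, kmeans.labels_):
    print(w, "difficulty level:", difficulty[c])
```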
WebSRC: A Dataset for Web-Based Structural Reading Comprehension
|
Web search is an essential way for humans to obtain information, but it's
still a great challenge for machines to understand the contents of web pages.
In this paper, we introduce the task of structural reading comprehension (SRC)
on the web. Given a web page and a question about it, the task is to find the
answer from the web page. This task requires a system not only to understand
the semantics of texts but also the structure of the web page. Moreover, we
propose WebSRC, a novel Web-based Structural Reading Comprehension dataset.
WebSRC consists of 400K question-answer pairs, which are collected from 6.4K
web pages. Along with the QA pairs, corresponding HTML source code,
screenshots, and metadata are also provided in our dataset. Each question in
WebSRC requires a certain structural understanding of a web page to answer, and
the answer is either a text span on the web page or yes/no. We evaluate various
baselines on our dataset to show the difficulty of our task. We also
investigate the usefulness of structural information and visual features. Our
dataset and baselines are publicly available at
https://x-lance.github.io/WebSRC/.
| 2,021 |
Computation and Language
|
Training Multilingual Pre-trained Language Model with Byte-level
Subwords
|
Pre-trained language models have achieved great success in various
natural language understanding (NLU) tasks due to their capacity to capture
deep contextualized information in text by pre-training on large-scale corpora.
One of the fundamental components in pre-trained language models is the
vocabulary, especially for training multilingual models on many different
languages. In this technical report, we present our practices for training
multilingual pre-trained language models with BBPE (Byte-Level Byte Pair
Encoding). In our experiments, we adopted the architecture of NEZHA as the
underlying pre-trained language model and the results show that NEZHA trained
with byte-level subwords consistently outperforms Google multilingual BERT and
vanilla NEZHA by a notable margin in several multilingual NLU tasks. We release
the source code of our byte-level vocabulary building tools and the
multilingual pre-trained language models.
| 2,021 |
Computation and Language
|
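A small illustration of why a byte-level base vocabulary is attractive for multilingual pre-training: any string in any language decomposes into UTF-8 bytes, so the base alphabet has at most 256 symbols and no character is ever out of vocabulary. The sample strings below are arbitrary.

```python
# Illustration of the byte-level base vocabulary behind BBPE: every string in
# any language decomposes into UTF-8 bytes, so the base alphabet has at most
# 256 symbols and no out-of-vocabulary characters can occur.
samples = ["hello", "中文", "Привет", "नमस्ते"]

for text in samples:
    byte_ids = list(text.encode("utf-8"))
    print(f"{text!r:12} -> {len(byte_ids):2d} byte symbols: {byte_ids}")

# A BPE learner then merges frequent byte sequences into subwords on top of
# this fixed 256-symbol alphabet.
```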
Debiasing Pre-trained Contextualised Embeddings
|
In comparison to the numerous debiasing methods proposed for the static
non-contextualised word embeddings, the discriminative biases in contextualised
embeddings have received relatively little attention. We propose a fine-tuning
method that can be applied at token- or sentence-levels to debias pre-trained
contextualised embeddings. Our proposed method can be applied to any
pre-trained contextualised embedding model without requiring those models to be
retrained. Using gender bias as an illustrative example, we then conduct a
systematic study using several state-of-the-art (SoTA) contextualised
representations on multiple benchmark datasets to evaluate the level of biases
encoded in different contextualised embeddings before and after debiasing using
the proposed method. We find that applying token-level debiasing for all tokens
and across all layers of a contextualised embedding model produces the best
performance. Interestingly, we observe that there is a trade-off between
creating an accurate vs. unbiased contextualised embedding model, and different
contextualised embedding models respond differently to this trade-off.
| 2,021 |
Computation and Language
|
Dictionary-based Debiasing of Pre-trained Word Embeddings
|
Word embeddings trained on large corpora have been shown to encode high levels
of unfair discriminatory gender, racial, religious and ethnic biases.
In contrast, human-written dictionaries describe the meanings of words in a
concise, objective and unbiased manner.
We propose a method for debiasing pre-trained word embeddings using
dictionaries, without requiring access to the original training resources or
any knowledge regarding the word embedding algorithms used.
Unlike prior work, our proposed method does not require the types of biases
to be pre-defined in the form of word lists, and learns the constraints that
must be satisfied by unbiased word embeddings automatically from dictionary
definitions of the words.
Specifically, we learn an encoder to generate a debiased version of an input
word embedding such that it
(a) retains the semantics of the pre-trained word embeddings,
(b) agrees with the unbiased definition of the word according to the
dictionary, and
(c) remains orthogonal to the vector space spanned by any biased basis
vectors in the pre-trained word embedding space.
Experimental results on standard benchmark datasets show that the proposed
method can accurately remove unfair biases encoded in pre-trained word
embeddings, while preserving useful semantics.
| 2,021 |
Computation and Language
|
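A hedged PyTorch sketch of the three constraints (a)-(c) above expressed as a combined training loss for a debiasing encoder. The encoder architecture, the loss weights, and the way dictionary-definition vectors and biased basis vectors are obtained are all illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of constraints (a)-(c) as a combined loss for a debiasing encoder.
# Architecture, weights and input vectors are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 300
encoder = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

def debias_loss(w, dict_vec, bias_basis, weights=(1.0, 1.0, 1.0)):
    """w: pre-trained word vectors, dict_vec: encodings of the words'
    dictionary definitions, bias_basis: (k, dim) matrix of biased basis vectors."""
    d = encoder(w)
    semantic = F.mse_loss(d, w)                        # (a) retain semantics
    agreement = 1.0 - F.cosine_similarity(d, dict_vec, dim=-1).mean()  # (b)
    projection = d @ bias_basis.t()                    # components along bias basis
    orthogonal = (projection ** 2).sum(dim=-1).mean()  # (c) stay orthogonal
    a, b, c = weights
    return a * semantic + b * agreement + c * orthogonal

w = torch.randn(8, dim)           # pre-trained embeddings of a word batch
dict_vec = torch.randn(8, dim)    # dictionary-definition encodings (assumed given)
bias_basis = torch.randn(2, dim)  # biased directions (assumed given)
debias_loss(w, dict_vec, bias_basis).backward()
```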
On the Evolution of Word Order
|
Most natural languages have a predominant or fixed word order. For example, in
English the word order is usually Subject-Verb-Object. This work attempts to
explain this phenomenon as well as other typological findings regarding word
order from a functional perspective. In particular, we examine whether fixed
word order provides a functional advantage, explaining why these languages are
prevalent. To this end, we consider an evolutionary model of language and
demonstrate, both theoretically and using genetic algorithms, that a language
with a fixed word order is optimal. We also show that adding information to the
sentence, such as case markers and noun-verb distinction, reduces the need for
fixed word order, in accordance with the typological findings.
| 2,021 |
Computation and Language
|
WangchanBERTa: Pretraining transformer-based Thai Language Models
|
Transformer-based language models, more specifically BERT-based architectures,
have achieved state-of-the-art performance in many downstream tasks. However,
for a relatively low-resource language such as Thai, the choices of models are
limited to training a BERT-based model based on a much smaller dataset or
finetuning multi-lingual models, both of which yield suboptimal downstream
performance. Moreover, large-scale multi-lingual pretraining does not take into
account language-specific features for Thai. To overcome these limitations, we
pretrain a language model based on RoBERTa-base architecture on a large,
deduplicated, cleaned training set (78GB in total size), curated from diverse
domains of social media posts, news articles and other publicly available
datasets. Before subword tokenization, we apply text-processing rules specific
to Thai, most importantly preserving spaces, which mark important chunk and
sentence boundaries in Thai. We also experiment with word-level, syllable-level
and SentencePiece tokenization on a smaller dataset to explore the effects of
tokenization on downstream performance. Our
model wangchanberta-base-att-spm-uncased trained on the 78.5GB dataset
outperforms strong baselines (NBSVM, CRF and ULMFit) and multi-lingual models
(XLMR and mBERT) on both sequence classification and token classification tasks
in human-annotated, mono-lingual contexts.
| 2,021 |
Computation and Language
|
Does Dialog Length matter for Next Response Selection task? An Empirical
Study
|
In the last few years, the release of BERT, a multilingual transformer-based
model, has taken the NLP community by storm. BERT-based models have achieved
state-of-the-art results on various NLP tasks, including dialog tasks. One of
the limitations of BERT is its inability to handle long text sequences. By
default, BERT has a maximum wordpiece token sequence length of 512. Recently,
there has been renewed interest in tackling BERT's limitation in handling long
text sequences with the addition of new self-attention based architectures.
However, there has been little to no research on the impact of this limitation
with respect to dialog tasks. Dialog tasks are inherently different from other
NLP tasks due to: a) the presence of multiple utterances from multiple
speakers, which may be interlinked to each other across different turns and b)
longer length of dialogs. In this work, we empirically evaluate the impact of
dialog length on the performance of BERT model for the Next Response Selection
dialog task on four publicly available and one internal multi-turn dialog
datasets. We observe that there is little impact on performance with long
dialogs and even the simplest approach of truncating input works really well.
| 2,021 |
Computation and Language
|
Stereotype and Skew: Quantifying Gender Bias in Pre-trained and
Fine-tuned Language Models
|
This paper proposes two intuitive metrics, skew and stereotype, that quantify
and analyse the gender bias present in contextual language models when tackling
the WinoBias pronoun resolution task. We find evidence that gender stereotype
correlates approximately negatively with gender skew in out-of-the-box models,
suggesting that there is a trade-off between these two forms of bias. We
investigate two methods to mitigate bias. The first approach is an online
method which is effective at removing skew at the expense of stereotype. The
second, inspired by previous work on ELMo, involves the fine-tuning of BERT
using an augmented gender-balanced dataset. We show that this reduces both skew
and stereotype relative to its unaugmented fine-tuned counterpart. However, we
find that existing gender bias benchmarks do not fully probe professional bias
as pronoun resolution may be obfuscated by cross-correlations from other
manifestations of gender prejudice. Our code is available online, at
https://github.com/12kleingordon34/NLP_masters_project.
| 2,021 |
Computation and Language
|
A2P-MANN: Adaptive Attention Inference Hops Pruned Memory-Augmented
Neural Networks
|
In this work, to limit the number of required attention inference hops in
memory-augmented neural networks, we propose an online adaptive approach called
A2P-MANN. By exploiting a small neural network classifier, an adequate number
of attention inference hops for the input query is determined. The technique
results in elimination of a large number of unnecessary computations in
extracting the correct answer. In addition, to further lower computations in
A2P-MANN, we suggest pruning weights of the final FC (fully-connected) layers.
To this end, two pruning approaches, one with negligible accuracy loss and the
other with controllable loss on the final accuracy, are developed. The efficacy
of the technique is assessed by using the twenty question-answering (QA) tasks
of bAbI dataset. The analytical assessment reveals, on average, more than 42%
fewer computations compared to the baseline MANN at the cost of less than 1%
accuracy loss. In addition, when used along with the previously published
zero-skipping technique, a computation count reduction of up to 68% is
achieved. Finally, when the proposed approach (without zero-skipping) is
implemented on the CPU and GPU platforms, up to 43% runtime reduction is
achieved.
| 2,022 |
Computation and Language
|
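A minimal sketch of the adaptive-hop idea described above: a small classifier predicts how many attention inference hops to run for a given query, and the memory-network loop stops at that count. The shapes, the hop-count range and the simplified attention read are assumptions for illustration.

```python
# Sketch of adaptive hop counting: a small classifier predicts the number of
# attention hops for a query; the memory loop then runs only that many hops.
# Shapes and the memory/attention details are illustrative assumptions.
import torch
import torch.nn as nn

d, max_hops, mem_slots = 128, 3, 50

hop_classifier = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, max_hops))
memory = torch.randn(mem_slots, d)        # memory-augmented network's memory
query = torch.randn(1, d)                 # encoded input question

# Decide the number of hops for this query (1 .. max_hops).
n_hops = int(hop_classifier(query).argmax(dim=-1).item()) + 1

u = query
for _ in range(n_hops):                   # run only the predicted number of hops
    attn = torch.softmax(u @ memory.t(), dim=-1)   # attention over memory slots
    o = attn @ memory                              # read vector
    u = u + o                                      # update the controller state

answer_logits = u                          # fed to the (possibly pruned) FC layers
```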
Fast Sequence Generation with Multi-Agent Reinforcement Learning
|
Autoregressive sequence generation models have achieved state-of-the-art
performance in areas like machine translation and image captioning. These
models are autoregressive in that they generate each word by conditioning on
previously generated words, which leads to heavy latency during inference.
Recently, non-autoregressive decoding has been proposed in machine translation
to speed up the inference time by generating all words in parallel. Typically,
these models use the word-level cross-entropy loss to optimize each word
independently. However, such a learning process fails to consider the
sentence-level consistency, thus resulting in inferior generation quality of
these non-autoregressive models. In this paper, we propose a simple and
efficient model for Non-Autoregressive sequence Generation (NAG) with a novel
training paradigm: Counterfactuals-critical Multi-Agent Learning (CMAL). CMAL
formulates NAG as a multi-agent reinforcement learning system where element
positions in the target sequence are viewed as agents that learn to
cooperatively maximize a sentence-level reward. On MSCOCO image captioning
benchmark, our NAG method achieves performance comparable to state-of-the-art
autoregressive models while bringing a 13.9x decoding speedup. On the WMT14
EN-DE machine translation dataset, our method outperforms the cross-entropy
trained baseline by 6.0 BLEU points while achieving the greatest decoding
speedup of 17.46x.
| 2,021 |
Computation and Language
|
Does Head Label Help for Long-Tailed Multi-Label Text Classification
|
Multi-label text classification (MLTC) aims to annotate documents with the
most relevant labels from a number of candidate labels. In real applications,
the distribution of label frequency often exhibits a long tail, i.e., a few
labels are associated with a large number of documents (a.k.a. head labels),
while a large fraction of labels are associated with a small number of
documents (a.k.a. tail labels). To address the challenge of insufficient
training data on tail label classification, we propose a Head-to-Tail Network
(HTTN) to transfer the meta-knowledge from the data-rich head labels to
data-poor tail labels. The meta-knowledge is the mapping from few-shot network
parameters to many-shot network parameters, which aims to promote the
generalizability of tail classifiers. Extensive experimental results on three
benchmark datasets demonstrate that HTTN consistently outperforms the
state-of-the-art methods. The code and hyper-parameter settings are released
for reproducibility.
| 2,021 |
Computation and Language
|
A Novel Two-stage Framework for Extracting Opinionated Sentences from
News Articles
|
This paper presents a novel two-stage framework to extract opinionated
sentences from a given news article. In the first stage, a Naive Bayes
classifier utilizes local features to assign a score to each sentence; the
score signifies the probability that the sentence is opinionated. In the second
stage, we use this prior within the HITS (Hyperlink-Induced Topic Search)
schema to exploit the global structure of the article and relation between the
sentences. In the HITS schema, the opinionated sentences are treated as Hubs
and the facts around these opinions are treated as the Authorities. The
algorithm is implemented and evaluated against a set of manually marked data.
We show that using HITS significantly improves the precision over the baseline
Naive Bayes classifier. We also argue that the proposed method actually
discovers the underlying structure of the article, thus extracting various
opinions, grouped with supporting facts as well as other supporting opinions
from the article.
| 2,021 |
Computation and Language
|
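A small NumPy sketch of the second stage described above: HITS-style hub/authority iteration over a sentence graph, with hub scores weighted by each sentence's Naive Bayes opinion prior. The adjacency matrix and priors are toy values, and the exact weighting scheme is an assumption.

```python
# Sketch of the second stage: HITS-style hub/authority iteration over a
# sentence graph, with hub scores weighted by the Naive Bayes opinion prior.
import numpy as np

# adjacency[i, j]: strength of the link from sentence i to sentence j
adjacency = np.array([[0.0, 0.8, 0.3],
                      [0.2, 0.0, 0.9],
                      [0.5, 0.4, 0.0]])
prior = np.array([0.9, 0.2, 0.6])     # P(sentence is opinionated) from Naive Bayes

hubs = prior.copy()                    # opinionated sentences act as hubs
auth = np.ones_like(prior)             # supporting facts act as authorities

for _ in range(50):
    auth = adjacency.T @ hubs          # authorities gather weight from hubs
    hubs = prior * (adjacency @ auth)  # hubs gather weight, rescaled by the prior
    auth /= np.linalg.norm(auth)
    hubs /= np.linalg.norm(hubs)

print("opinionated-sentence (hub) scores:", hubs.round(3))
```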
RomeBERT: Robust Training of Multi-Exit BERT
|
BERT has achieved superior performance on Natural Language Understanding
(NLU) tasks. However, BERT possesses a large number of parameters and demands
certain resources to deploy. For acceleration, Dynamic Early Exiting for BERT
(DeeBERT) has been proposed recently, which incorporates multiple exits and
adopts a dynamic early-exit mechanism to ensure efficient inference. While
obtaining an efficiency-performance tradeoff, the performance of early exits
in multi-exit BERT is significantly worse than that of late exits. In this paper, we
leverage gradient regularized self-distillation for RObust training of
Multi-Exit BERT (RomeBERT), which can effectively solve the performance
imbalance problem between early and late exits. Moreover, the proposed RomeBERT
adopts a one-stage joint training strategy for multi-exits and the BERT
backbone while DeeBERT needs two stages that require more training time.
Extensive experiments on GLUE datasets are performed to demonstrate the
superiority of our approach. Our code is available at
https://github.com/romebert/RomeBERT.
| 2,021 |
Computation and Language
|
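A hedged sketch of one-stage joint training for a multi-exit encoder: every exit pays its own cross-entropy loss, and early exits are additionally distilled towards the final exit's softened distribution. The gradient-regularization component of RomeBERT is omitted here, and the temperature and weighting are assumptions.

```python
# Sketch of one-stage joint training with self-distillation for a multi-exit
# model: each exit has a cross-entropy term, early exits are also distilled
# towards the final exit. Gradient regularization is omitted; values assumed.
import torch
import torch.nn.functional as F

def multi_exit_loss(exit_logits, labels, temperature=2.0, alpha=0.5):
    """exit_logits: list of (batch, num_classes) tensors, one per exit,
    ordered from the earliest exit to the final exit."""
    final = exit_logits[-1]
    loss = F.cross_entropy(final, labels)
    teacher = F.softmax(final.detach() / temperature, dim=-1)
    for logits in exit_logits[:-1]:
        ce = F.cross_entropy(logits, labels)
        kd = F.kl_div(F.log_softmax(logits / temperature, dim=-1),
                      teacher, reduction="batchmean") * temperature ** 2
        loss = loss + alpha * ce + (1 - alpha) * kd
    return loss

logits = [torch.randn(4, 3, requires_grad=True) for _ in range(4)]
labels = torch.tensor([0, 2, 1, 0])
multi_exit_loss(logits, labels).backward()
```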
Belief-based Generation of Argumentative Claims
|
When engaging in argumentative discourse, skilled human debaters tailor
claims to the beliefs of the audience, to construct effective arguments.
Recently, the field of computational argumentation witnessed extensive effort
to address the automatic generation of arguments. However, existing approaches
do not perform any audience-specific adaptation. In this work, we aim to bridge
this gap by studying the task of belief-based claim generation: Given a
controversial topic and a set of beliefs, generate an argumentative claim
tailored to the beliefs. To tackle this task, we model people's prior
beliefs through their stances on controversial topics and extend
state-of-the-art text generation models to generate claims conditioned on the
beliefs. Our automatic evaluation confirms the ability of our approach to adapt
claims to a set of given beliefs. In a manual study, we additionally evaluate
the generated claims in terms of informativeness and their likelihood to be
uttered by someone with a respective belief. Our results reveal the limitations
of modeling users' beliefs based on their stances, but demonstrate the
potential of encoding beliefs into argumentative texts, laying the ground for
future exploration of audience reach.
| 2,021 |
Computation and Language
|
Knowledge Grounded Conversational Symptom Detection with Graph Memory
Networks
|
In this work, we propose a novel goal-oriented dialog task, automatic symptom
detection. We build a system that can interact with patients through dialog to
detect and collect clinical symptoms automatically, which can save a doctor's
time interviewing the patient. Given a set of explicit symptoms provided by the
patient to initiate a dialog for diagnosis, the system is trained to collect
implicit symptoms by asking questions, in order to gather more information for
making an accurate diagnosis. After receiving the patient's reply to each
question, the system also decides whether current information is enough for a
human doctor to make a diagnosis. To achieve this goal, we propose two neural
models and a training pipeline for the multi-step reasoning task. We also build
a knowledge graph as additional inputs to further improve model performance.
Experiments show that our model significantly outperforms the baseline by 4%,
discovering 67% of implicit symptoms on average with a limited number of
questions.
| 2,021 |
Computation and Language
|
Evaluating Models of Robust Word Recognition with Serial Reproduction
|
Spoken communication occurs in a "noisy channel" characterized by high levels
of environmental noise, variability within and between speakers, and lexical
and syntactic ambiguity. Given these properties of the received linguistic
input, robust spoken word recognition -- and language processing more generally
-- relies heavily on listeners' prior knowledge to evaluate whether candidate
interpretations of that input are more or less likely. Here we compare several
broad-coverage probabilistic generative language models in their ability to
capture human linguistic expectations. Serial reproduction, an experimental
paradigm where spoken utterances are reproduced by successive participants
similar to the children's game of "Telephone," is used to elicit a sample that
reflects the linguistic expectations of English-speaking adults. When we
evaluate a suite of probabilistic generative language models against the
yielded chains of utterances, we find that those models that make use of
abstract representations of preceding linguistic context (i.e., phrase
structure) best predict the changes made by people in the course of serial
reproduction. A logistic regression model predicting which words in an
utterance are most likely to be lost or changed in the course of spoken
transmission corroborates this result. We interpret these findings in light of
research highlighting the interaction of memory-based constraints and
representations in language processing.
| 2,021 |
Computation and Language
|
FakeFlow: Fake News Detection by Modeling the Flow of Affective
Information
|
Fake news articles often stir the readers' attention by means of emotional
appeals that arouse their feelings. Unlike in short news texts, authors of
longer articles can exploit such affective factors to manipulate readers by
adding exaggerations or fabricating events, in order to affect the readers'
emotions. To capture this, we propose in this paper to model the flow of
affective information in fake news articles using a neural architecture. The
proposed model, FakeFlow, learns this flow by combining topic and affective
information extracted from text. We evaluate the model's performance with
several experiments on four real-world datasets. The results show that FakeFlow
achieves superior results when compared against state-of-the-art methods, thus
confirming the importance of capturing the flow of the affective information in
news articles.
| 2,021 |
Computation and Language
|
MadDog: A Web-based System for Acronym Identification and Disambiguation
|
Acronyms and abbreviations are the short-form of longer phrases and they are
ubiquitously employed in various types of writing. Despite their usefulness to
save space in writing and the reader's time in reading, they also pose
challenges for understanding the text, especially if the acronym is not defined
in the text or if it is used far from its definition in long texts. To
alleviate this issue, there are considerable efforts both from the research
community and software developers to build systems for identifying acronyms and
finding their correct meanings in the text. However, none of the existing works
provides a unified solution that can process acronyms from various domains and
is publicly available. Thus, we provide the first web-based acronym
identification and disambiguation system which can process acronyms from
various domains including scientific, biomedical, and general domains. The
web-based system is publicly available at http://iq.cs.uoregon.edu:5000 and a
demo video is available at https://youtu.be/IkSh7LqI42M. The system source code
is also available at https://github.com/amirveyseh/MadDog.
| 2,021 |
Computation and Language
|
GP: Context-free Grammar Pre-training for Text-to-SQL Parsers
|
A new method for Text-to-SQL parsing, Grammar Pre-training (GP), is proposed
to decode the deep relations between a question and its database. Firstly, to
better utilize the information in the database, a random value is appended
after a question word that is recognized as a column name, and the new sentence
serves as the model input. Secondly, the initialization of the decoder vectors
is optimized with reference to the encoder output so that question information
is taken into account. Finally, a new approach called flooding level is adopted
to keep the training loss non-zero, which leads to better generalization. By
encoding the sentence with the GRAPPA and RAT-SQL models, we achieve better
performance on Spider, a cross-DB Text-to-SQL dataset (72.8 dev, 69.8 test).
Experiments show that our method converges more easily during training and is
highly robust.
| 2,021 |
Computation and Language
|
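The "flooding level" mentioned above refers to the flooding trick of keeping the training loss from falling below a small constant b, so the optimizer performs gradient ascent whenever the loss dips under b. A minimal PyTorch sketch follows, with an assumed flood level.

```python
# Sketch of the flooding trick: keep the training loss from falling below a
# flood level b, so gradients reverse once the loss dips under b.
# The flood level value is an assumption.
import torch

def flooded(loss: torch.Tensor, b: float = 0.1) -> torch.Tensor:
    # identical value and gradient when loss > b; reversed gradient when loss < b
    return (loss - b).abs() + b

raw_loss = torch.tensor(0.04, requires_grad=True)
flooded(raw_loss, b=0.1).backward()
print(raw_loss.grad)   # -1.0: the optimizer now pushes the loss back up towards b
```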
EGFI: Drug-Drug Interaction Extraction and Generation with Fusion of
Enriched Entity and Sentence Information
|
The rapidly growing literature accumulates diverse and comprehensive
biomedical knowledge that remains hidden and yet to be mined, such as drug
interactions. However, it is difficult to extract this heterogeneous knowledge
and to retrieve or even discover the latest, novel knowledge in an efficient
manner. To address this problem, we propose EGFI for extracting and
consolidating drug interactions
from large-scale medical literature text data. Specifically, EGFI consists of
two parts: classification and generation. In the classification part, EGFI
encompasses the language model BioBERT which has been comprehensively
pre-trained on biomedical corpora. In particular, we propose a multi-head
attention mechanism combined with a packed BiGRU to fuse multiple sources of
semantic information for rigorous context modeling. In the generation part, EGFI utilizes another
pre-trained language model BioGPT-2 where the generation sentences are selected
based on filtering rules. We evaluated the classification part on the "DDIs
2013" dataset and the "DTIs" dataset, achieving F1 scores of 0.842 and 0.720
respectively. Moreover, we applied the classification part to distinguish
high-quality generated sentences and verified the filtered sentences against
the existing ground truth. The generated sentences that are not recorded
in DrugBank and DDIs 2013 dataset also demonstrate the potential of EGFI to
identify novel drug relationships.
| 2,021 |
Computation and Language
|
CHOLAN: A Modular Approach for Neural Entity Linking on Wikipedia and
Wikidata
|
In this paper, we propose CHOLAN, a modular approach to target end-to-end
entity linking (EL) over knowledge bases. CHOLAN consists of a pipeline of two
transformer-based models integrated sequentially to accomplish the EL task. The
first transformer model identifies surface forms (entity mentions) in a given
text. For each mention, a second transformer model is employed to classify the
target entity from a predefined candidate list. The latter transformer is fed
by an enriched context captured from the sentence (i.e. local context), and
an entity description obtained from Wikipedia. Such external contexts have not
been used in state-of-the-art EL approaches. Our empirical study was conducted
on two well-known knowledge bases (i.e., Wikidata and Wikipedia). The empirical
results suggest that CHOLAN outperforms state-of-the-art approaches on standard
datasets such as CoNLL-AIDA, MSNBC, AQUAINT, ACE2004, and T-REx.
| 2,021 |
Computation and Language
|
Unsupervised Key-phrase Extraction and Clustering for Classification
Scheme in Scientific Publications
|
Several methods have been explored for automating parts of Systematic Mapping
(SM) and Systematic Review (SR) methodologies. Challenges typically revolve
around gaps in the semantic understanding of text, as well as the lack of domain
and background knowledge necessary to bridge those gaps. In this paper we
investigate possible ways of automating parts of the SM/SR process, i.e. that
of extracting keywords and key-phrases from scientific documents using
unsupervised methods, which are then used as a basis to construct the
corresponding Classification Scheme using semantic key-phrase clustering
techniques. Specifically, we explore the effect of an ensemble scoring measure
on key-phrase extraction, we explore semantic-network-based word embeddings for
representing phrase semantics, and finally we explore how clustering can be
used to group related key-phrases. The evaluation is conducted on a dataset of
publications pertaining to the domain of "Explainable AI", which we constructed
using standard publicly available digital libraries and sets of indexing terms
(keywords). Results show that the ensemble ranking score does improve
key-phrase extraction performance. Semantic-network-based word embeddings built
on the ConceptNet semantic network perform similarly to contextualized word
embeddings, but the former are computationally more efficient. Finally,
semantic key-phrase clustering at the term level can group similar terms
together in a way that is suitable for building a classification scheme.
| 2,021 |
Computation and Language
|
A Simple Disaster-Related Knowledge Base for Intelligent Agents
|
In this paper, we describe our efforts in establishing a simple knowledge
base by building a semantic network composed of concepts and word relationships
in the context of disasters in the Philippines. Our primary source of data is a
collection of news articles scraped from various Philippine news websites.
Using word embeddings, we extract semantically similar and co-occurring words
from an initial seed words list. We arrive at an expanded ontology with a total
of 450 word assertions. We let experts from the fields of linguistics,
disasters, and weather science evaluate our knowledge base and arrive at an
agreement rate of 64%. We then perform a time-based analysis of the
assertions to identify important semantic changes captured by the knowledge
base such as the (a) trend of roles played by human entities, (b) memberships
of human entities, and (c) common association of disaster-related words. The
context-specific knowledge base developed from this study can be adapted by
intelligent agents such as chat bots integrated in platforms such as Facebook
Messenger for answering disaster-related queries.
| 2,021 |
Computation and Language
|
Facilitating Terminology Translation with Target Lemma Annotations
|
Most of the recent work on terminology integration in machine translation has
assumed that terminology translations are given already inflected in forms that
are suitable for the target language sentence. In day-to-day work of
professional translators, however, it is seldom the case as translators work
with bilingual glossaries where terms are given in their dictionary forms;
finding the right target language form is part of the translation process. We
argue that the requirement for a priori specified target language forms is
unrealistic and impedes the practical applicability of previous work. In this
work, we propose to train machine translation systems using a source-side data
augmentation method that annotates randomly selected source language words with
their target language lemmas. We show that systems trained on such augmented
data are readily usable for terminology integration in real-life translation
scenarios. Our experiments on terminology translation into the morphologically
complex Baltic and Uralic languages show an improvement of up to 7 BLEU points
over baseline systems with no means for terminology integration and an average
improvement of 4 BLEU points over the previous work. Results of the human
evaluation indicate a 47.7% absolute improvement over the previous work in term
translation accuracy when translating into Latvian.
| 2,021 |
Computation and Language
|
SpanEmo: Casting Multi-label Emotion Classification as Span-prediction
|
Emotion recognition (ER) is an important task in Natural Language Processing
(NLP), due to its high impact in real-world applications from health and
well-being to author profiling, consumer analysis and security. Current
approaches to ER mainly classify emotions independently, without considering
that emotions can co-exist. Such approaches overlook potential ambiguities in
which multiple emotions overlap. We propose a new model "SpanEmo" casting
multi-label emotion classification as span-prediction, which can aid ER models
to learn associations between labels and words in a sentence. Furthermore, we
introduce a loss function focused on modelling multiple co-existing emotions in
the input sentence. Experiments performed on the SemEval2018 multi-label
emotion data over three language sets (i.e., English, Arabic and Spanish)
demonstrate our method's effectiveness. Finally, we present different analyses
that illustrate the benefits of our method in terms of improving the model
performance and learning meaningful associations between emotion classes and
words in the sentence.
| 2,021 |
Computation and Language
|
Cross-lingual Visual Pre-training for Multimodal Machine Translation
|
Pre-trained language models have been shown to improve performance in many
natural language tasks substantially. Although the early focus of such models
was single language pre-training, recent advances have resulted in
cross-lingual and visual pre-training methods. In this paper, we combine these
two approaches to learn visually-grounded cross-lingual representations.
Specifically, we extend the translation language modelling (Lample and Conneau,
2019) with masked region classification and perform pre-training with three-way
parallel vision & language corpora. We show that when fine-tuned for multimodal
machine translation, these models obtain state-of-the-art performance. We also
provide qualitative insights into the usefulness of the learned grounded
representations.
| 2,021 |
Computation and Language
|
RelWalk: A Latent Variable Model Approach to Knowledge Graph Embedding
|
Embedding entities and relations of a knowledge graph in a low-dimensional
space has shown impressive performance in predicting missing links between
entities. Although progress has been made, existing methods are
heuristically motivated and the theoretical understanding of such embeddings is
comparatively underdeveloped. This paper extends the random walk model (Arora
et al., 2016a) of word embeddings to Knowledge Graph Embeddings (KGEs) to
derive a scoring function that evaluates the strength of a relation R between
two entities h (head) and t (tail). Moreover, we show that marginal loss
minimisation, a popular objective used in much prior work in KGE, follows
naturally from the log-likelihood ratio maximisation under the probabilities
estimated from the KGEs according to our theoretical relationship. We propose a
learning objective motivated by the theoretical analysis to learn KGEs from a
given knowledge graph. Using the derived objective, accurate KGEs are learnt
from FB15K237 and WN18RR benchmark datasets, providing empirical evidence in
support of the theory.
| 2,021 |
Computation and Language
|
With Measured Words: Simple Sentence Selection for Black-Box
Optimization of Sentence Compression Algorithms
|
Sentence Compression is the task of generating a shorter, yet grammatical
version of a given sentence, preserving the essence of the original sentence.
This paper proposes a Black-Box Optimizer for Compression (B-BOC): given a
black-box compression algorithm and assuming not all sentences need be
compressed -- find the best candidates for compression in order to maximize
both compression rate and quality. Given a required compression ratio, we
consider two scenarios: (i) single-sentence compression, and (ii)
sentences-sequence compression. In the first scenario, our optimizer is trained
to predict how well each sentence could be compressed while meeting the
specified ratio requirement. In the latter, the desired compression ratio is
applied to a sequence of sentences (e.g., a paragraph) as a whole, rather than
on each individual sentence. To achieve that, we use B-BOC to assign an optimal
compression ratio to each sentence, then cast it as a Knapsack problem, which
we solve using bounded dynamic programming. We evaluate B-BOC on both scenarios
on three datasets, demonstrating that our optimizer improves both accuracy and
Rouge-F1-score compared to direct application of other compression algorithms.
| 2,021 |
Computation and Language
|
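A minimal sketch of the sentences-sequence scenario described above, cast as a knapsack-style dynamic program: pick one compression option per sentence so that the paragraph-level word budget is met while the predicted compression quality is maximised. Sentence lengths, candidate options and quality scores are toy assumptions standing in for the optimizer's predictions.

```python
# Sketch of the sentences-sequence scenario as a bounded knapsack-style DP:
# choose one compression option per sentence to meet the paragraph-level word
# budget while maximising predicted quality. All numbers are toy assumptions.
sent_lengths = [20, 15, 25]                 # words per sentence
target_ratio = 0.8                          # keep 80% of the words overall
budget = round(sum(sent_lengths) * (1 - target_ratio))  # words to remove: 12

# options[i]: (words_removed, predicted_quality) pairs for sentence i,
# as scored by a B-BOC-style quality predictor (0 removed = leave as is).
options = [
    [(0, 0.0), (4, 0.9), (8, 0.5)],
    [(0, 0.0), (3, 0.7), (6, 0.4)],
    [(0, 0.0), (5, 0.95), (10, 0.6)],
]

NEG = float("-inf")
dp = [NEG] * (budget + 1)          # dp[j]: best quality with j words removed (capped)
dp[0] = 0.0
for opts in options:
    new = [NEG] * (budget + 1)
    for j, q in enumerate(dp):
        if q == NEG:
            continue
        for removed, quality in opts:
            k = min(j + removed, budget)   # removing more than the budget is fine
            new[k] = max(new[k], q + quality)
    dp = new

print("best achievable quality meeting the budget:", dp[budget])
```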
Open-Mindedness and Style Coordination in Argumentative Discussions
|
Linguistic accommodation is the process in which speakers adjust their
accent, diction, vocabulary, and other aspects of language according to the
communication style of one another. Previous research has shown how linguistic
accommodation correlates with gaps in the power and status of the speakers and
the way it promotes approval and discussion efficiency. In this work, we
provide a novel perspective on this phenomenon, exploring its correlation with
the open-mindedness of a speaker rather than with her social status. We process
thousands of unstructured argumentative discussions that took place in Reddit's
Change My View (CMV) subreddit, demonstrating that open-mindedness relates to
the assumed role of a speaker in different contexts. On the discussion level,
we surprisingly find that discussions that reach agreement present lower levels
of accommodation.
| 2,021 |
Computation and Language
|
A Hybrid Approach to Measure Semantic Relatedness in Biomedical Concepts
|
Objective: This work aimed to demonstrate the effectiveness of a hybrid
approach based on Sentence BERT model and retrofitting algorithm to compute
relatedness between any two biomedical concepts. Materials and Methods: We
generated concept vectors by encoding concept preferred terms using ELMo, BERT,
and Sentence BERT models. We used BioELMo and Clinical ELMo. We used Ontology
Knowledge Free (OKF) models like PubMedBERT, BioBERT, BioClinicalBERT, and
Ontology Knowledge Injected (OKI) models like SapBERT, CoderBERT, KbBERT, and
UmlsBERT. We trained all the BERT models using Siamese network on SNLI and STSb
datasets to allow the models to learn more semantic information at the phrase
or sentence level so that they can represent multi-word concepts better.
Finally, to inject ontology relationship knowledge into concept vectors, we
used retrofitting algorithm and concepts from various UMLS relationships. We
evaluated our hybrid approach on four publicly available datasets which also
includes the recently released EHR-RelB dataset. EHR-RelB is the largest
publicly available relatedness dataset in which 89% of terms are multi-word
which makes it more challenging. Results: Sentence BERT models mostly
outperformed corresponding BERT models. The concept vectors generated using the
Sentence BERT model based on SapBERT and retrofitted using UMLS-related
concepts achieved the best results on all four datasets. Conclusions: Sentence
BERT models are more effective compared to BERT models in computing relatedness
scores in most of the cases. Injecting ontology knowledge into concept vectors
further enhances their quality and contributes to better relatedness scores.
| 2,021 |
Computation and Language
|
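A small NumPy sketch of the retrofitting step mentioned above, in the spirit of Faruqui et al. (2015): each concept vector is iteratively pulled towards its ontology neighbours (e.g. UMLS-related concepts) while staying close to its original embedding. The vectors, relation graph and weights are toy assumptions.

```python
# Sketch of retrofitting: pull each concept vector towards its ontology
# neighbours while keeping it close to its original embedding.
# Vectors, the relation graph and alpha/beta are toy assumptions.
import numpy as np

original = {"aspirin": np.array([1.0, 0.0]),
            "ibuprofen": np.array([0.9, 0.2]),
            "headache": np.array([0.0, 1.0])}
neighbours = {"aspirin": ["ibuprofen", "headache"],
              "ibuprofen": ["aspirin"],
              "headache": ["aspirin"]}

vectors = {c: v.copy() for c, v in original.items()}
alpha, beta = 1.0, 1.0
for _ in range(10):                                   # a few iterations suffice
    for concept, neighs in neighbours.items():
        if not neighs:
            continue
        neighbour_sum = sum(vectors[n] for n in neighs)
        vectors[concept] = (beta * original[concept] + alpha * neighbour_sum) / (
            beta + alpha * len(neighs))

print({c: v.round(3) for c, v in vectors.items()})
```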
A Trigger-Sense Memory Flow Framework for Joint Entity and Relation
Extraction
|
Joint entity and relation extraction framework constructs a unified model to
perform entity recognition and relation extraction simultaneously, which can
exploit the dependency between the two tasks to mitigate the error propagation
problem suffered by the pipeline model. Current efforts on joint entity and
relation extraction focus on enhancing the interaction between entity
recognition and relation extraction through parameter sharing, joint decoding,
or other ad-hoc tricks (e.g., modeled as a semi-Markov decision process, cast
as a multi-round reading comprehension task). However, there are still two
issues on the table. First, the interaction utilized by most methods is still
weak and uni-directional, which is unable to model the mutual dependency
between the two tasks. Second, relation triggers, which can help explain why
humans would extract a relation from a sentence, are ignored by most methods;
they are essential for relation extraction but overlooked. To this end, we
present a Trigger-Sense Memory Flow Framework (TriMF) for joint entity and
relation extraction. We build a memory module to remember category
representations learned in entity recognition and relation extraction tasks.
And based on it, we design a multi-level memory flow attention mechanism to
enhance the bi-directional interaction between entity recognition and relation
extraction. Moreover, without any human annotations, our model can enhance
relation trigger information in a sentence through a trigger sensor module,
which improves the model performance and makes model predictions with better
interpretation. Experimental results show that our proposed framework achieves
state-of-the-art results, improving the relation F1 to 52.44% (+3.2%) on
SciERC, 66.49% (+4.9%) on ACE05, 72.35% (+0.6%) on CoNLL04 and 80.66% (+2.3%)
on ADE.
| 2,021 |
Computation and Language
|
Process-Level Representation of Scientific Protocols with Interactive
Annotation
|
We develop Process Execution Graphs (PEG), a document-level representation of
real-world wet lab biochemistry protocols, addressing challenges such as
cross-sentence relations, long-range coreference, grounding, and implicit
arguments. We manually annotate PEGs in a corpus of complex lab protocols with
a novel interactive textual simulator that keeps track of entity traits and
semantic constraints during annotation. We use this data to develop
graph-prediction models, finding them to be good at entity identification and
local relation extraction, while our corpus facilitates further exploration of
challenging long-range relations.
| 2,021 |
Computation and Language
|
Learning From Revisions: Quality Assessment of Claims in Argumentation
at Scale
|
Assessing the quality of arguments and of the claims the arguments are
composed of has become a key task in computational argumentation. However, even
if different claims share the same stance on the same topic, their assessment
depends on the prior perception and weighting of the different aspects of the
topic being discussed. This renders it difficult to learn topic-independent
quality indicators. In this paper, we study claim quality assessment
irrespective of discussed aspects by comparing different revisions of the same
claim. We compile a large-scale corpus with over 377k claim revision pairs of
various types from kialo.com, covering diverse topics from politics, ethics,
entertainment, and others. We then propose two tasks: (a) assessing which claim
of a revision pair is better, and (b) ranking all versions of a claim by
quality. Our first experiments with embedding-based logistic regression and
transformer-based neural networks show promising results, suggesting that
learned indicators generalize well across topics. In a detailed error analysis,
we give insights into what quality dimensions of claims can be assessed
reliably. We provide the data and scripts needed to reproduce all results.
| 2,021 |
Computation and Language
|
TDMSci: A Specialized Corpus for Scientific Literature Entity Tagging of
Tasks Datasets and Metrics
|
Tasks, Datasets and Evaluation Metrics are important concepts for
understanding experimental scientific papers. However, most previous work on
information extraction for scientific literature mainly focuses on the
abstracts only, and does not treat datasets as a separate type of entity (Zadeh
and Schumann, 2016; Luan et al., 2018). In this paper, we present a new corpus
that contains domain expert annotations for Task (T), Dataset (D), Metric (M)
entities on 2,000 sentences extracted from NLP papers. We report experiment
results on TDM extraction using a simple data augmentation strategy and apply
our tagger to around 30,000 NLP papers from the ACL Anthology. The corpus is
made publicly available to the community for fostering research on scientific
publication summarization (Erera et al., 2019) and knowledge discovery.
| 2,021 |
Computation and Language
|
PAWLS: PDF Annotation With Labels and Structure
|
Adobe's Portable Document Format (PDF) is a popular way of distributing
view-only documents with a rich visual markup. This presents a challenge to NLP
practitioners who wish to use the information contained within PDF documents
for training models or data analysis, because annotating these documents is
difficult. In this paper, we present PDF Annotation with Labels and Structure
(PAWLS), a new annotation tool designed specifically for the PDF document
format. PAWLS is particularly suited for mixed-mode annotation and scenarios in
which annotators require extended context to annotate accurately. PAWLS
supports span-based textual annotation, N-ary relations and freeform,
non-textual bounding boxes, all of which can be exported in convenient formats
for training multi-modal machine learning models. A read-only PAWLS server is
available at https://pawls.apps.allenai.org/ and the source code is available
at https://github.com/allenai/pawls.
| 2,021 |
Computation and Language
|
Meta-Learning for Effective Multi-task and Multilingual Modelling
|
Natural language processing (NLP) tasks (e.g. question-answering in English)
benefit from knowledge of other tasks (e.g. named entity recognition in
English) and knowledge of other languages (e.g. question-answering in Spanish).
Such shared representations are typically learned in isolation, either across
tasks or across languages. In this work, we propose a meta-learning approach to
learn the interactions between both tasks and languages. We also investigate
the role of different sampling strategies used during meta-learning. We present
experiments on five different tasks and six different languages from the XTREME
multilingual benchmark dataset. Our meta-learned model clearly outperforms
competitive baseline models, including multi-task baselines. We also present
zero-shot evaluations on unseen target
languages to demonstrate the utility of our proposed model.
| 2,021 |
Computation and Language
|
The Power of Language: Understanding Sentiment Towards the Climate
Emergency using Twitter Data
|
Understanding how attitudes towards the Climate Emergency vary can hold the
key to driving policy changes for effective action to mitigate climate related
risk. The Oil and Gas industry accounts for a significant proportion of global
emissions, so it could be speculated that there is a relationship between
Crude Oil Futures and sentiment towards the Climate Emergency. Using Latent
Dirichlet Allocation for Topic Modelling on a bespoke Twitter dataset, this
study shows that it is possible to split the conversation surrounding the
Climate Emergency into 3 distinct topics. Forecasting Crude Oil Futures using
Seasonal AutoRegressive Integrated Moving Average Modelling gives promising
results with a root mean squared error of 0.196 and 0.209 on the training and
testing data respectively. Understanding variation in attitudes towards the
Climate Emergency yields inconclusive results, which could be improved using
spatio-temporal analysis methods such as Density-Based Clustering (DBSCAN).
| 2,021 |
Computation and Language
|
English Machine Reading Comprehension Datasets: A Survey
|
This paper surveys 60 English Machine Reading Comprehension datasets, with a
view to providing a convenient resource for other researchers interested in
this problem. We categorize the datasets according to their question and answer
form and compare them across various dimensions including size, vocabulary,
data source, method of creation, human performance level, and first question
word. Our analysis reveals that Wikipedia is by far the most common data source
and that there is a relative lack of why, when, and where questions across
datasets.
| 2,021 |
Computation and Language
|
Randomized Deep Structured Prediction for Discourse-Level Processing
|
Expressive text encoders such as RNNs and Transformer Networks have been at
the center of NLP models in recent work. Most of the effort has focused on
sentence-level tasks, capturing the dependencies between words in a single
sentence, or pairs of sentences. However, certain tasks, such as argumentation
mining, require accounting for longer texts and complicated structural
dependencies between them. Deep structured prediction is a general framework to
combine the complementary strengths of expressive neural encoders and
structured inference for highly structured domains. Nevertheless, when the need
arises to go beyond sentences, most work relies on combining the output scores
of independently trained classifiers. One of the main reasons for this is that
constrained inference comes at a high computational cost. In this paper, we
explore the use of randomized inference to alleviate this concern and show that
we can efficiently leverage deep structured prediction and expressive neural
encoders for a set of tasks involving complicated argumentative structures.
| 2,021 |
Computation and Language
|
PolyLM: Learning about Polysemy through Language Modeling
|
To avoid the "meaning conflation deficiency" of word embeddings, a number of
models have aimed to embed individual word senses. These methods at one time
performed well on tasks such as word sense induction (WSI), but they have since
been overtaken by task-specific techniques which exploit contextualized
embeddings. However, sense embeddings and contextualization need not be
mutually exclusive. We introduce PolyLM, a method which formulates the task of
learning sense embeddings as a language modeling problem, allowing
contextualization techniques to be applied. PolyLM is based on two underlying
assumptions about word senses: firstly, that the probability of a word
occurring in a given context is equal to the sum of the probabilities of its
individual senses occurring; and secondly, that for a given occurrence of a
word, one of its senses tends to be much more plausible in the context than the
others. We evaluate PolyLM on WSI, showing that it performs considerably better
than previous sense embedding techniques, and matches the current
state-of-the-art specialized WSI method despite having six times fewer
parameters. Code and pre-trained models are available at
https://github.com/AlanAnsell/PolyLM.
| 2,021 |
Computation and Language
|
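A numeric illustration of PolyLM's two assumptions stated above: the probability of a word occurring in a context equals the sum of the probabilities of its senses, and one sense tends to dominate for a given occurrence. The sense probabilities below are made up for illustration.

```python
# Numeric illustration of the two PolyLM assumptions: word probability equals
# the sum of its sense probabilities, and one sense dominates in context.
# The numbers are made up for illustration.

# P(sense | context) for the senses of "bank" in "she sat by the river bank"
sense_probs = {"bank_financial": 0.03, "bank_river": 0.91, "bank_tilt": 0.01}

p_word = sum(sense_probs.values())          # assumption 1: p(word) = sum over senses
dominant = max(sense_probs, key=sense_probs.get)
dominance = sense_probs[dominant] / p_word  # assumption 2: one sense dominates

print(f"p('bank' | context) = {p_word:.2f}")
print(f"most plausible sense: {dominant} ({dominance:.0%} of the word's mass)")
```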
A Digital Corpus of St. Lawrence Island Yupik
|
St. Lawrence Island Yupik (ISO 639-3: ess) is an endangered polysynthetic
language in the Inuit-Yupik language family indigenous to Alaska and Chukotka.
This work presents a step-by-step pipeline for the digitization of written
texts, and the first publicly available digital corpus for St. Lawrence Island
Yupik, created using that pipeline. This corpus has great potential for future
linguistic inquiry and research in NLP. It was also developed for use in Yupik
language education and revitalization, with a primary goal of enabling easy
access to Yupik texts by educators and by members of the Yupik community. A
secondary goal is to support development of language technology such as
spell-checkers, text-completion systems, interactive e-books, and language
learning apps for use by the Yupik community.
| 2,021 |
Computation and Language
|
El Volumen Louder Por Favor: Code-switching in Task-oriented Semantic
Parsing
|
Being able to parse code-switched (CS) utterances, such as Spanish+English or
Hindi+English, is essential to democratize task-oriented semantic parsing
systems for certain locales. In this work, we focus on Spanglish
(Spanish+English) and release a dataset, CSTOP, containing 5800 CS utterances
alongside their semantic parses. We examine the CS generalizability of various
Cross-lingual (XL) models and exhibit the advantage of pre-trained XL language
models when data for only one language is present. As such, we focus on
improving the pre-trained models for the case when only English corpus
alongside either zero or a few CS training instances are available. We propose
two data augmentation methods for the zero-shot and the few-shot settings:
fine-tune using translate-and-align and augment using a generation model
followed by match-and-filter. Combining the few-shot setting with the above
improvements decreases the initial 30-point accuracy gap between the zero-shot
and the full-data settings by two thirds.
| 2,021 |
Computation and Language
|
Application of Lexical Features Towards Improvement of Filipino
Readability Identification of Children's Literature
|
Proper identification of grade levels of children's reading materials is an
important step towards effective learning. Recent studies in readability
assessment for the English domain applied modern approaches in natural language
processing (NLP) such as machine learning (ML) techniques to automate the
process. There is also a need to extract the correct linguistic features when
modeling readability formulas. In the context of the Filipino language, limited
work has been done [1, 2], especially in considering the language's lexical
complexity as main features. In this paper, we explore the use of lexical
features towards improving the development of readability identification of
children's books written in Filipino. Results show that combining lexical
features (LEX) consisting of type-token ratio, lexical density, lexical
variation, and foreign word count with traditional features (TRAD) used by previous
works such as sentence length, average syllable length, polysyllabic words,
word, sentence, and phrase counts increased the performance of readability
models by almost a 5% margin (from 42% to 47.2%). Further analysis and ranking
of the most important features were shown to identify which features contribute
the most in terms of reading complexity.
| 2,021 |
Computation and Language
|
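A minimal sketch of two of the lexical (LEX) features named in the abstract above, type-token ratio and lexical density; the whitespace tokenizer and the short function-word list are placeholders and may differ from the paper's exact feature definitions.

```python
# Illustrative Filipino function words; a real implementation would use a fuller list.
FUNCTION_WORDS = {"ang", "ng", "sa", "at", "ay", "na", "mga"}

def type_token_ratio(tokens):
    # Unique word forms divided by total tokens.
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def lexical_density(tokens):
    # Share of content (non-function) words among all tokens.
    content = [t for t in tokens if t.lower() not in FUNCTION_WORDS]
    return len(content) / len(tokens) if tokens else 0.0

tokens = "Ang mga bata ay masayang naglalaro sa parke".lower().split()
print(type_token_ratio(tokens), lexical_density(tokens))
```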
Arabic aspect based sentiment analysis using bidirectional GRU based
models
|
Aspect-based sentiment analysis (ABSA) performs a fine-grained analysis
that defines the aspects of a given document or sentence and the sentiments
conveyed regarding each aspect. This level of analysis is the most detailed
version that is capable of exploring the nuanced viewpoints of the reviews. The
bulk of ABSA research focuses on English, with very little work available in
Arabic. Most previous work in Arabic has relied on conventional machine
learning methods that depend on scarce resources and tools for analyzing and
processing Arabic content, such as lexicons; the scarcity of these resources
presents an additional challenge. In order to address these challenges,
Deep Learning (DL)-based methods are proposed using two models based on Gated
Recurrent Units (GRU) neural networks for ABSA. The first is a DL model that
takes advantage of word and character representations by combining
bidirectional GRU, Convolutional Neural Network (CNN), and Conditional Random
Field (CRF), making up the BGRU-CNN-CRF model, to extract the main opinion
targets (OTE). The second is an interactive attention network based on
bidirectional GRU (IAN-BGRU) to identify sentiment polarity toward extracted
aspects. We evaluated our models using the benchmarked Arabic hotel reviews
dataset. The results indicate that the proposed methods outperform the
baselines on both tasks, with a 39.7% improvement in F1-score for opinion
target extraction (T2) and a 7.58% improvement in accuracy for aspect-based
sentiment polarity classification (T3), achieving an F1-score of 70.67% for T2
and an accuracy of 83.98% for T3.
| 2,021 |
Computation and Language
|
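For orientation, here is a simplified PyTorch sketch of a bidirectional-GRU sequence tagger of the kind used for opinion target extraction above; the CNN character encoder and CRF layer of the full BGRU-CNN-CRF model are omitted, and the dimensions and three-way BIO tag set are assumptions.

```python
import torch
import torch.nn as nn

class BiGRUTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, num_tags=3):  # B/I/O
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bigru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> per-token tag logits
        states, _ = self.bigru(self.embed(token_ids))
        return self.proj(states)

model = BiGRUTagger(vocab_size=10000)
print(model(torch.randint(0, 10000, (2, 12))).shape)  # torch.Size([2, 12, 3])
```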
RESPER: Computationally Modelling Resisting Strategies in Persuasive
Conversations
|
Modelling persuasion strategies as predictors of task outcome has several
real-world applications and has received considerable attention from the
computational linguistics community. However, previous research has failed to
account for the resisting strategies employed by an individual to foil such
persuasion attempts. Grounded in prior literature in cognitive and social
psychology, we propose a generalised framework for identifying resisting
strategies in persuasive conversations. We instantiate our framework on two
distinct datasets comprising persuasion and negotiation conversations. We also
leverage a hierarchical sequence-labelling neural architecture to infer the
aforementioned resisting strategies automatically. Our experiments reveal the
asymmetry of power roles in non-collaborative goal-directed conversations and
the benefits accrued from incorporating resisting strategies on the final
conversation outcome. We also investigate the effect of different resisting
strategies on the conversation outcome and glean insights that corroborate
past findings. We also make the code and the dataset of this work publicly
available at https://github.com/americast/resper.
| 2,021 |
Computation and Language
|
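Below is a hedged sketch of a hierarchical sequence-labelling setup of the kind the RESPER abstract describes: each utterance is encoded into a vector, and a conversation-level recurrent layer assigns one resisting-strategy label per utterance. The encoders, dimensions, and label count are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HierarchicalLabeller(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, utt_hidden=64, conv_hidden=64, num_labels=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.utt_gru = nn.GRU(emb_dim, utt_hidden, batch_first=True, bidirectional=True)
        self.conv_gru = nn.GRU(2 * utt_hidden, conv_hidden, batch_first=True, bidirectional=True)
        self.classify = nn.Linear(2 * conv_hidden, num_labels)

    def forward(self, conv_tokens):
        # conv_tokens: (num_utterances, max_tokens) for a single conversation
        _, h = self.utt_gru(self.embed(conv_tokens))       # h: (2, num_utt, utt_hidden)
        utt_vecs = torch.cat([h[0], h[1]], dim=-1)         # (num_utt, 2 * utt_hidden)
        conv_states, _ = self.conv_gru(utt_vecs.unsqueeze(0))
        return self.classify(conv_states.squeeze(0))       # one label distribution per utterance

model = HierarchicalLabeller(vocab_size=5000)
print(model(torch.randint(0, 5000, (6, 15))).shape)  # torch.Size([6, 8])
```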
Coloring the Black Box: What Synesthesia Tells Us about Character
Embeddings
|
In contrast to their word- or sentence-level counterparts, character
embeddings are still poorly understood. We aim at closing this gap with an
in-depth study of English character embeddings. For this, we use resources from
research on grapheme-color synesthesia -- a neuropsychological phenomenon where
letters are associated with colors. These resources give us insight into which
characters are similar for synesthetes and how characters are organized in
color space.
Comparing 10 different character embeddings, we ask: How similar are character
embeddings to a synesthete's perception of characters? And how similar are
character embeddings extracted from different models? We find that LSTMs agree
with humans more than transformers. Comparing across tasks, grapheme-to-phoneme
conversion results in the most human-like character embeddings. Finally, ELMo
embeddings differ from both humans and other models.
| 2,021 |
Computation and Language
|
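One simple way to quantify the comparison described above is to correlate pairwise distances in embedding space with pairwise distances in color space, in the style of representational similarity analysis; the random arrays below stand in for real character embeddings and per-letter color associations and are not the paper's protocol.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
letters = list("abcdefghijklmnopqrstuvwxyz")
char_embeddings = {c: rng.normal(size=50) for c in letters}  # placeholder embeddings
letter_colors = {c: rng.uniform(size=3) for c in letters}    # placeholder RGB associations

emb_dists, color_dists = [], []
for a, b in combinations(letters, 2):
    emb_dists.append(np.linalg.norm(char_embeddings[a] - char_embeddings[b]))
    color_dists.append(np.linalg.norm(letter_colors[a] - letter_colors[b]))

rho, p = spearmanr(emb_dists, color_dists)
print(f"Spearman correlation between embedding and color distances: {rho:.3f}")
```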
Representations for Question Answering from Documents with Tables and
Text
|
Tables in Web documents are pervasive and can be directly used to answer many
of the queries searched on the Web, motivating their integration in question
answering. Very often information presented in tables is succinct and hard to
interpret with standard language representations. On the other hand, tables
often appear within textual context, such as an article describing the table.
Using the information from an article as additional context can potentially
enrich table representations. In this work we aim to improve question answering
from tables by refining table representations based on information from
surrounding text. We also present an effective method to combine text and
table-based predictions for question answering from full documents, obtaining
significant improvements on the Natural Questions dataset.
| 2,021 |
Computation and Language
|
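A minimal, hypothetical sketch of combining a text-based and a table-based QA prediction for a document, in the spirit of the abstract above; the confidence-based selection rule and the table_bias parameter are illustrative assumptions, not the paper's combination method.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    answer: str
    score: float  # model confidence, assumed to be comparable across models

def combine(text_pred: Prediction, table_pred: Prediction, table_bias: float = 0.0) -> Prediction:
    # Optionally bias toward table answers; table_bias is a tunable assumption.
    return table_pred if table_pred.score + table_bias >= text_pred.score else text_pred

print(combine(Prediction("1969", 0.62), Prediction("July 20, 1969", 0.71)).answer)
```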
Generating Syntactically Controlled Paraphrases without Using Annotated
Parallel Pairs
|
Paraphrase generation plays an essential role in natural language processing
(NLP), and it has many downstream applications. However, training supervised
paraphrase models requires many annotated paraphrase pairs, which are usually
costly to obtain. On the other hand, the paraphrases generated by existing
unsupervised approaches are usually syntactically similar to the source
sentences and are limited in diversity. In this paper, we demonstrate that it
is possible to generate syntactically diverse paraphrases without the need for
annotated paraphrase pairs. We propose Syntactically controlled Paraphrase
Generator (SynPG), an encoder-decoder based model that learns to disentangle
the semantics and the syntax of a sentence from a collection of unannotated
texts. The disentanglement enables SynPG to control the syntax of output
paraphrases by manipulating the embedding in the syntactic space. Extensive
experiments using automatic metrics and human evaluation show that SynPG
performs better syntactic control than unsupervised baselines, while the
quality of the generated paraphrases is competitive. We also demonstrate that
the performance of SynPG is competitive or even better than supervised models
when the amount of unannotated data is large. Finally, we show that the syntactically
controlled paraphrases generated by SynPG can be utilized for data augmentation
to improve the robustness of NLP models.
| 2,021 |
Computation and Language
|
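As a rough schematic of the disentangling idea described above (not SynPG's actual design), the sketch below encodes semantics as an order-insensitive average of source word embeddings and syntax as the final state of a GRU over a linearized target parse, then decodes conditioned on both; all dimensions and component choices are assumptions.

```python
import torch
import torch.nn as nn

class DisentangledParaphraser(nn.Module):
    def __init__(self, vocab_size, syn_vocab_size, dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.syn_emb = nn.Embedding(syn_vocab_size, dim)
        self.syn_enc = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, 2 * dim, batch_first=True)
        self.out = nn.Linear(2 * dim, vocab_size)

    def forward(self, src_tokens, parse_tokens, tgt_tokens):
        sem = self.word_emb(src_tokens).mean(dim=1)                   # semantic code (B, dim)
        _, syn = self.syn_enc(self.syn_emb(parse_tokens))             # syntactic code (1, B, dim)
        init = torch.cat([sem, syn.squeeze(0)], dim=-1).unsqueeze(0)  # decoder initial state
        states, _ = self.decoder(self.word_emb(tgt_tokens), init)
        return self.out(states)                                       # per-step vocabulary logits

model = DisentangledParaphraser(vocab_size=8000, syn_vocab_size=100)
logits = model(torch.randint(0, 8000, (2, 10)),   # source sentence tokens
               torch.randint(0, 100, (2, 20)),    # linearized target parse tokens
               torch.randint(0, 8000, (2, 12)))   # shifted target tokens (teacher forcing)
print(logits.shape)  # torch.Size([2, 12, 8000])
```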
Low Resource Recognition and Linking of Biomedical Concepts from a Large
Ontology
|
Tools to explore scientific literature are essential for scientists,
especially in biomedicine, where about a million new papers are published every
year. Many such tools provide users the ability to search for specific entities
(e.g. proteins, diseases) by tracking their mentions in papers. PubMed, the
most well-known database of biomedical papers, relies on human curators to add
these annotations. This can take several weeks for new papers, and not all
papers get tagged. Machine learning models have been developed to facilitate
the semantic indexing of scientific papers. However, their performance on the
more comprehensive ontologies of biomedical concepts does not reach the levels
of typical entity recognition problems studied in NLP. In large part this is a
low-resource problem: the ontologies are large, there is a lack of descriptive
text defining most entities, and labeled data can only cover a small portion of
the ontology. In this paper, we develop a new model that
overcomes these challenges by (1) generalizing to entities unseen at training
time, and (2) incorporating linking predictions into the mention segmentation
decisions. Our approach achieves new state-of-the-art results for the UMLS
ontology in both traditional recognition/linking (+8 F1 pts) as well as
semantic indexing-based evaluation (+10 F1 pts).
| 2,021 |
Computation and Language
|
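For context, a common baseline for linking mentions to concepts that were unseen in training is character n-gram TF-IDF similarity between the mention string and ontology concept names; the toy concept list and the scikit-learn setup below are illustrative only, not the paper's model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for UMLS concept names (the real ontology has millions of entries).
concepts = {"C0011849": "diabetes mellitus", "C0020538": "hypertension",
            "C0004096": "asthma"}

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
concept_matrix = vectorizer.fit_transform(concepts.values())

def link(mention, top_k=1):
    sims = cosine_similarity(vectorizer.transform([mention]), concept_matrix)[0]
    return sorted(zip(concepts.keys(), sims), key=lambda x: -x[1])[:top_k]

print(link("diabetes"))  # highest-similarity concept id(s) with scores
```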
Evaluation of BERT and ALBERT Sentence Embedding Performance on
Downstream NLP Tasks
|
Contextualized representations from a pre-trained language model are central
to achieving high performance on downstream NLP tasks. The pre-trained BERT and
A Lite BERT (ALBERT) models can be fine-tuned to give state-of-the-art results
in sentence-pair regressions such as semantic textual similarity (STS) and
natural language inference (NLI). Although BERT-based models yield the [CLS]
token vector as a reasonable sentence embedding, the search for an optimal
sentence embedding scheme remains an active research area in computational
linguistics. This paper explores sentence embedding models for BERT and
ALBERT. In particular, we take a modified BERT network with siamese and triplet
network structures called Sentence-BERT (SBERT) and replace BERT with ALBERT to
create Sentence-ALBERT (SALBERT). We also experiment with an outer CNN
sentence-embedding network for SBERT and SALBERT. We evaluate performances of
all sentence-embedding models considered using the STS and NLI datasets. The
empirical results indicate that our CNN architecture improves ALBERT models
substantially more than BERT models for the STS benchmark. Despite significantly
fewer model parameters, ALBERT sentence embedding is highly competitive to BERT
in downstream NLP evaluations.
| 2,021 |
Computation and Language
|
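A minimal sketch of deriving sentence embeddings from a BERT-family encoder by mean-pooling token states, the pooling strategy commonly paired with SBERT-style models; the checkpoint name is a generic placeholder, and the paper's SBERT/SALBERT variants additionally train with siamese and triplet objectives on top of such embeddings.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        states = encoder(**batch).last_hidden_state       # (batch, seq_len, hidden)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (states * mask).sum(1) / mask.sum(1)           # masked mean pooling

a, b = embed(["A man is playing a guitar.", "Someone plays an instrument."])
print(torch.cosine_similarity(a, b, dim=0).item())        # STS-style similarity score
```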