Titles | Abstracts | Years | Categories
---|---|---|---|
A learning perspective on the emergence of abstractions: the curious
case of phonemes
|
In the present paper we use a range of modeling techniques to investigate
whether an abstract phone could emerge from exposure to speech sounds. In
effect, the study represents an attempt to operationalize a theoretical device
of Usage-based Linguistics: the emergence of an abstraction from language use.
Our quest focuses on the simplest of such hypothesized abstractions. We test
two opposing principles regarding the development of language knowledge in
linguistically untrained language users: Memory-Based Learning (MBL) and
Error-Correction Learning (ECL). A process of generalization underlies the
abstractions linguists operate with, and we probed whether MBL and ECL could
give rise to a type of language knowledge that resembles linguistic
abstractions. Each model was presented with a significant amount of
pre-processed speech produced by one speaker. We assessed the consistency or
stability of what these simple models have learned and their ability to give
rise to abstract categories. The two types of models fare differently with regard
to these tests. We show that ECL models can learn abstractions and that at
least part of the phone inventory and grouping into traditional types can be
reliably identified from the input.
| 2021 |
Computation and Language
|
Exploiting BERT to improve aspect-based sentiment analysis performance
on Persian language
|
Aspect-based sentiment analysis (ABSA) is a more fine-grained task in sentiment
analysis that identifies opinion polarity toward a certain aspect in a text.
This method is attracting more attention from the community because it provides
more thorough and useful information. However, there is little
language-specific research on the Persian language. The present research aims to
improve ABSA on the Persian Pars-ABSA dataset. This research shows the
potential of using a pre-trained BERT model and of sentence-pair input on an
ABSA task. The results indicate that employing the Pars-BERT pre-trained model
along with a natural language inference auxiliary sentence (NLI-M) could boost
the ABSA task accuracy up to 91%, which is 5.5% (absolute) higher than
state-of-the-art studies on the Pars-ABSA dataset.
| 2020 |
Computation and Language
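
A minimal sketch of the sentence-pair (NLI-M) input style described in the entry above, using the Hugging Face transformers library. This is not the authors' code: the checkpoint name, the auxiliary-sentence template and the label set are assumptions, and the untrained classification head makes the printed label arbitrary before fine-tuning.

```python
# Hedged sketch of sentence-pair (NLI-M) input for ABSA; checkpoint, template
# and labels are illustrative assumptions, not the paper's exact setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "HooshvareLab/bert-fa-base-uncased"   # assumed ParsBERT-style checkpoint
LABELS = ["negative", "neutral", "positive"]        # assumed label inventory

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

def classify_aspect(review: str, aspect: str) -> str:
    # NLI-M: the aspect becomes an auxiliary sentence paired with the review.
    auxiliary = f"What is the sentiment of {aspect}?"
    inputs = tokenizer(review, auxiliary, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Real inputs would be Persian; the head is untrained here, so the label is arbitrary.
print(classify_aspect("The food was great but the service was slow.", "service"))
```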
|
Linguistic Classification using Instance-Based Learning
|
Traditionally, linguists have organized the languages of the world into language
families modelled as trees. In this work we take a contrarian approach and
question the tree-based model, which is rather restrictive. For example, the
affinity that Sanskrit independently has with languages across the Indo-European
family is better illustrated using a network model. We can say the same
about the inter-relationships between languages in India, where the
inter-relationships are better discovered than assumed. To enable such a
discovery, in this paper we make use of instance-based learning techniques
to assign language labels to words. We vocalize each word and then classify it
using our custom linguistic distance metric, computed for the word relative to
training sets containing language labels. We construct the training sets by
making use of word clusters and assigning a language and category label to each
cluster. Further, we use clustering coefficients as a quality metric
for our research. We believe our work has the potential to usher in a new era
in linguistics. We have limited this work to important languages of India.
This work can be further strengthened by applying AdaBoost for classification
coupled with structural equivalence concepts from social network analysis.
| 2020 |
Computation and Language
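
A toy illustration of the instance-based idea in the entry above: each word is labelled with the language of its nearest training instance. Plain edit distance stands in for the authors' custom linguistic distance metric, and the tiny training dictionary of transliterated words is hypothetical.

```python
# Toy 1-nearest-neighbour language labeller; the distance metric and training
# words are stand-ins, not the paper's custom metric or clusters.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

TRAINING = {  # word -> language label (hypothetical cluster representatives)
    "pani": "Hindi", "jalam": "Sanskrit", "thanni": "Tamil", "neeru": "Kannada",
}

def classify(word: str) -> str:
    # Assign the label of the nearest training instance under the distance metric.
    nearest = min(TRAINING, key=lambda w: edit_distance(word, w))
    return TRAINING[nearest]

print(classify("paani"))  # closest to "pani" in this toy setup -> "Hindi"
```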
|
Data-Driven Regular Expressions Evolution for Medical Text
Classification Using Genetic Programming
|
In medical fields, text classification is one of the most important tasks
that can significantly reduce human workload through structured information
digitization and intelligent decision support. Despite the popularity of
learning-based text classification techniques, it is hard for humans to
understand or manually fine-tune the classification results for better
precision and recall, due to the black-box nature of learning. This study
proposes a novel regular-expression-based text classification method that makes
use of genetic programming (GP) to evolve regular expressions that can
classify a given medical text inquiry with satisfactory precision and recall,
while allowing humans to read the classifier and fine-tune it if
necessary. Given a seed population of regular expressions (which can be randomly
initialized or manually constructed by experts), our method evolves a
population of regular expressions according to a chosen fitness function, using a
novel regular expression syntax and a series of carefully chosen reproduction
operators. Our method is evaluated on real-life medical text inquiries from
an online healthcare provider and shows promising performance. More
importantly, our method generates classifiers that can be fully understood,
checked and updated by medical doctors, which is fundamentally crucial for
medical practice.
| 2020 |
Computation and Language
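
A minimal sketch of the evolutionary loop described in the entry above: mutate a population of regular expressions and keep those with the best F1 on labelled texts. The seed patterns, the single append-a-keyword mutation operator and the toy inquiries are illustrative, not the paper's syntax or reproduction operators.

```python
# Toy GP-style loop: mutate a population of regexes, keep the fittest by F1.
# Seed patterns, the mutation operator and the labelled texts are illustrative.
import random
import re

TEXTS = [("chest pain and shortness of breath", 1),
         ("routine appointment reschedule", 0),
         ("severe chest pain at night", 1),
         ("billing question about an invoice", 0)]

def f1(pattern: str) -> float:
    try:
        rx = re.compile(pattern)
    except re.error:
        return 0.0
    preds = [1 if rx.search(text) else 0 for text, _ in TEXTS]
    tp = sum(p and y for p, (_, y) in zip(preds, TEXTS))
    fp = sum(p and not y for p, (_, y) in zip(preds, TEXTS))
    fn = sum((not p) and y for p, (_, y) in zip(preds, TEXTS))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def mutate(pattern: str) -> str:
    # A single toy reproduction operator: append an alternative keyword.
    return pattern + "|" + random.choice(["pain", "severe", "night", "breath"])

population = ["breath", "invoice"]                 # seed population
for _ in range(20):                                # a few generations
    population += [mutate(random.choice(population)) for _ in range(4)]
    population = sorted(set(population), key=f1, reverse=True)[:4]

print(population[0], round(f1(population[0]), 2))  # fittest evolved regex
```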
|
Meta learning to classify intent and slot labels with noisy few shot
examples
|
Recently, deep learning has dominated many machine learning areas, including
spoken language understanding (SLU). However, deep learning models are
notorious for being data-hungry, and the heavily optimized models are usually
sensitive to the quality of the training examples provided and the consistency
between training and inference conditions. To improve the performance of SLU
models on tasks with noisy and low training resources, we propose a new SLU
benchmarking task: few-shot robust SLU, where SLU comprises two core problems,
intent classification (IC) and slot labeling (SL). We establish the task by
defining few-shot splits on three public IC/SL datasets, ATIS, SNIPS, and TOP,
and adding two types of natural noise (adaptation example missing/replacing
and modality mismatch) to the splits. We further propose a novel noise-robust
few-shot SLU model based on prototypical networks. We show the model
consistently outperforms the conventional fine-tuning baseline and another
popular meta-learning method, Model-Agnostic Meta-Learning (MAML), in terms of
achieving better IC accuracy and SL F1, and yielding smaller performance
variation when noise is present.
| 2020 |
Computation and Language
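
A minimal prototypical-network classification step, the building block of the noise-robust model named in the entry above: class prototypes are the means of support embeddings, and queries are assigned to the nearest prototype. The random tensors stand in for encoder outputs on real IC/SL data.

```python
# Minimal prototypical-network step: mean support embedding per class,
# queries assigned to the nearest prototype. Random tensors replace a real encoder.
import torch

def proto_classify(support_emb, support_labels, query_emb):
    """support_emb: [N, d]; support_labels: [N]; query_emb: [M, d]."""
    classes = support_labels.unique()
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in classes]
    )
    dists = torch.cdist(query_emb, prototypes)     # Euclidean distance to prototypes
    return classes[dists.argmin(dim=1)]            # nearest prototype's class

support = torch.randn(10, 16)                      # 10 support examples, dim 16
labels = torch.randint(0, 2, (10,))                # two toy intent classes
queries = torch.randn(3, 16)
print(proto_classify(support, labels, queries))
```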
|
Fake News Detection in Social Media using Graph Neural Networks and NLP
Techniques: A COVID-19 Use-case
|
The paper presents our solutions for the MediaEval 2020 task, namely FakeNews:
Corona Virus and 5G Conspiracy Multimedia Twitter-Data-Based Analysis. The task
aims to analyze tweets related to COVID-19 and 5G conspiracy theories to detect
misinformation spreaders. The task is composed of two sub-tasks, namely (i)
text-based and (ii) structure-based fake news detection. For the first sub-task,
we propose six different solutions relying on Bag of Words (BoW) and BERT
embeddings. Three of the methods address the binary classification task by
differentiating between 5G-conspiracy tweets and the rest of the COVID-19-related
tweets, while the others treat the task as a ternary classification problem. In the
ternary classification task, our BoW- and BERT-based methods obtained
F1-scores of 0.606 and 0.566 on the development set, respectively. On the binary
classification task, the BoW- and BERT-based solutions obtained average F1-scores
of 0.666 and 0.693, respectively. For the second sub-task, structure-based fake
news detection, we rely on Graph Neural Networks (GNNs), achieving an average
ROC of 0.95 on the development set.
| 2020 |
Computation and Language
|
Procode: the Swiss Multilingual Solution for Automatic Coding and
Recoding of Occupations and Economic Activities
|
Objective. Epidemiological studies require data that are in alignment with
the classifications established for occupations or economic activities. The
classifications usually include hundreds of codes and titles. Manual coding of
raw data may result in misclassification and be time-consuming. The goal was to
develop and test a web tool, named Procode, for coding free-text against
classifications and recoding between different classifications. Methods. Three
text classifiers, i.e. Complement Naive Bayes (CNB), Support Vector Machine
(SVM) and Random Forest Classifier (RFC), were investigated using k-fold
cross-validation. 30 000 free-texts with manually assigned classification codes
of the French classification of occupations (PCS) and the French classification
of activities (NAF) were available. For recoding, Procode integrated a workflow
that converts codes of one classification to another according to existing
crosswalks. Since this is a straightforward operation, only the recoding time
was measured. Results. Among the three investigated text classifiers, CNB
resulted in the best performance: the classifier accurately predicted
57-81% and 63-83% of classification codes for PCS and NAF, respectively. SVM led
to somewhat lower results (by 1-2%), while RFC coded accurately up to 30% of
the data. The coding operation required one minute per 10 000 records, while
the recoding was faster, i.e. 5-10 seconds. Conclusion. The algorithm
integrated in Procode showed satisfactory performance, since the tool had to
assign the right code by choosing among 500-700 different options. Based on
the results, the authors decided to implement CNB in Procode. In the future, if
another classifier shows superior performance, an update will include the
required modifications.
| 2020 |
Computation and Language
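
A small sketch of the kind of k-fold comparison described above, using scikit-learn's Complement Naive Bayes over TF-IDF features. The toy free-texts and codes are placeholders, not the PCS/NAF data, and the SVM/RFC branches are omitted.

```python
# Sketch of a k-fold comparison with Complement Naive Bayes over TF-IDF features.
# The toy free-texts and occupation codes are placeholders for the PCS/NAF data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import make_pipeline

texts = ["nurse in a public hospital", "emergency room nurse",
         "software developer", "web developer",
         "long-haul truck driver", "delivery van driver"]
codes = ["nurse", "nurse", "developer", "developer", "driver", "driver"]

pipeline = make_pipeline(TfidfVectorizer(), ComplementNB())
scores = cross_val_score(pipeline, texts, codes, cv=2)   # 2-fold accuracy
print(scores.mean())
```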
|
Regularizing Recurrent Neural Networks via Sequence Mixup
|
In this paper, we extend a class of celebrated regularization techniques
originally proposed for feed-forward neural networks, namely Input Mixup (Zhang
et al., 2017) and Manifold Mixup (Verma et al., 2018), to the realm of
Recurrent Neural Networks (RNNs). Our proposed methods are easy to implement and
have low computational complexity, while leveraging the performance of simple
neural architectures in a variety of tasks. We have validated our claims
through several experiments on real-world datasets, and also provide an
asymptotic theoretical analysis to further investigate the properties and
potential impacts of our proposed techniques. Applying sequence mixup to the
BiLSTM-CRF model (Huang et al., 2015) on the Named Entity Recognition task over
CoNLL-2003 data (Sang and De Meulder, 2003) considerably improved the F1 score
at test time and reduced the loss.
| 2020 |
Computation and Language
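
A generic Input Mixup step for a batch of sequences, in the spirit of the entry above: embedded token sequences and their one-hot label sequences are interpolated with a Beta-distributed coefficient. This is an illustration, not the authors' exact Sequence Mixup implementation.

```python
# Generic Input Mixup for sequence batches: interpolate embedded sequences and
# their one-hot label sequences with a Beta-distributed coefficient.
import torch

def sequence_mixup(embeddings, labels_onehot, alpha=0.2):
    """embeddings: [B, T, d]; labels_onehot: [B, T, C] (e.g. NER tag sequences)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(embeddings.size(0))       # pair each example with another
    mixed_x = lam * embeddings + (1 - lam) * embeddings[perm]
    mixed_y = lam * labels_onehot + (1 - lam) * labels_onehot[perm]
    return mixed_x, mixed_y

x = torch.randn(8, 20, 64)                                        # embedded sequences
y = torch.nn.functional.one_hot(torch.randint(0, 5, (8, 20)), 5).float()
mx, my = sequence_mixup(x, y)
print(mx.shape, my.shape)
```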
|
Disentangling Homophemes in Lip Reading using Perplexity Analysis
|
Automated lip reading using visemes as a classification schema has achieved
less success than approaches based on ASCII characters and words, largely due
to the problem of different words sharing identical visemes.
The Generative Pre-Training transformer is an effective autoregressive language
model used for many tasks in Natural Language Processing, including sentence
prediction and text classification.
This paper proposes a new application for this model and applies it in the
context of lip reading, where it serves as a language model to convert visual
speech in the form of visemes, to language in the form of words and sentences.
The network uses the search for optimal perplexity to perform the
viseme-to-word mapping and is thus a solution to the one-to-many mapping
problem that exists whereby various words that sound different when spoken look
identical on the lips. This paper proposes a method to tackle the one-to-many mapping
problem when performing automated lip reading using solely visual cues in two
separate scenarios: the first scenario is where the word boundary, that is, the
beginning and the ending of a word, is unknown; and the second scenario is
where the boundary is known.
Sentences from the benchmark BBC dataset "Lip Reading Sentences in the
Wild"(LRS2), are classified with a character error rate of 10.7% and a word
error rate of 18.0%. The main contribution of this paper is to propose a method
of predicting words through the use of perplexity analysis when only visual
cues are present, using an autoregressive language model.
| 2020 |
Computation and Language
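
The perplexity-based selection described above can be sketched as follows: score each word-sequence candidate for a viseme sequence with an autoregressive language model and keep the least perplexing one. GPT-2 stands in for the paper's model and the candidate pair is illustrative.

```python
# Perplexity-based candidate selection with GPT-2 standing in for the paper's
# language model; the visually similar candidates are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss          # mean token cross-entropy
    return float(torch.exp(loss))

candidates = ["where do you live", "where do you leave"]   # similar mouth shapes
print(min(candidates, key=perplexity))              # keep the least perplexing sequence
```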
|
Effect of Word Embedding Models on Hate and Offensive Speech Detection
|
Deep neural networks have been adopted successfully in hate speech detection
problems. Nevertheless, the effect of the word embedding models on the neural
network's performance has not been appropriately examined in the literature. In
our study, through different detection tasks, 2-class, 3-class, and 6-class
classification, we investigate the impact of both word embedding models and
neural network architectures on the predictive accuracy. Our focus is on the
Arabic language. We first train several word embedding models on a large-scale
unlabelled Arabic text corpus. Next, based on a dataset of Arabic hate and
offensive speech, for each detection task, we train several neural network
classifiers using the pre-trained word embedding models. This yields a
large number of learned models, which allows us to conduct an exhaustive
comparison. The empirical analysis demonstrates, on the one hand, the
superiority of the skip-gram models and, on the other hand, the superiority of
the CNN network across the three detection tasks.
| 2020 |
Computation and Language
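
The two embedding flavours compared above can be trained with gensim's Word2Vec, where sg=1 selects skip-gram and sg=0 selects CBOW (gensim 4 API). The two-sentence corpus is a placeholder for the large unlabelled Arabic corpus used in the study.

```python
# Skip-gram (sg=1) versus CBOW (sg=0) with gensim 4; the two-sentence corpus is
# a placeholder for the large unlabelled Arabic corpus used in the study.
from gensim.models import Word2Vec

corpus = [["هذا", "مثال", "توضيحي"], ["جملة", "أخرى", "للتدريب"]]   # toy tokenized corpus

skipgram = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)
cbow = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=0)

# The resulting .wv vectors would initialise the embedding layer of the
# downstream CNN/RNN hate-speech classifiers.
print(skipgram.wv["مثال"].shape, cbow.wv["مثال"].shape)
```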
|
Ensemble Distillation Approaches for Grammatical Error Correction
|
Ensemble approaches are commonly used techniques for improving a system by
combining multiple model predictions. Additionally, these schemes allow the
uncertainty, as well as the source of the uncertainty, to be derived for the
prediction. Unfortunately, these benefits come at a computational and memory
cost. To address this problem, ensemble distillation (EnD) and, more recently,
ensemble distribution distillation (EnDD) have been proposed; they compress the
ensemble into a single model, representing either the ensemble average
prediction or the prediction distribution, respectively. This paper examines the
application of both these distillation approaches to a sequence prediction
task, grammatical error correction (GEC). This is an important application area
for language learning tasks as it can yield highly useful feedback to the
learner. It is, however, more challenging than the standard tasks investigated
for distillation as the prediction of any grammatical correction to a word will
be highly dependent on both the input sequence and the generated output history
for the word. The performance of both EnD and EnDD is evaluated both on
publicly available GEC tasks and on a spoken language task.
| 2020 |
Computation and Language
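
A generic ensemble-distillation (EnD) training step: the student is trained to match the average of the ensemble members' predictive distributions with a KL-divergence loss. Models, logits and the temperature are placeholders; this is not the paper's GEC system.

```python
# Generic ensemble-distillation (EnD) loss: the student matches the average of
# the ensemble members' predictive distributions via KL divergence.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, ensemble_logits, temperature=1.0):
    """student_logits: [B, V]; ensemble_logits: [K, B, V] from K ensemble members."""
    with torch.no_grad():
        teacher = torch.softmax(ensemble_logits / temperature, dim=-1).mean(dim=0)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, teacher, reduction="batchmean")

student = torch.randn(4, 10, requires_grad=True)    # toy per-token logits
ensemble = torch.randn(3, 4, 10)                     # three ensemble members
loss = distillation_loss(student, ensemble)
loss.backward()
print(float(loss))
```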
|
Movie Summarization via Sparse Graph Construction
|
We summarize full-length movies by creating shorter videos containing their
most informative scenes. We explore the hypothesis that a summary can be
created by assembling scenes which are turning points (TPs), i.e., key events
in a movie that describe its storyline. We propose a model that identifies TP
scenes by building a sparse movie graph that represents relations between
scenes and is constructed using multimodal information. According to human
judges, the summaries created by our approach are more informative and
complete, and receive higher ratings, than the outputs of sequence-based models
and general-purpose summarization algorithms. The induced graphs are
interpretable, displaying different topology for different movie genres.
| 2020 |
Computation and Language
|
Sentiment analysis in Bengali via transfer learning using multi-lingual
BERT
|
Sentiment analysis (SA) in Bengali is challenging due to this Indo-Aryan
language's highly inflected properties, with more than 160 different inflected
forms for verbs, 36 different forms for nouns, and 24 different forms for
pronouns. The lack of standard labeled datasets in the Bengali domain makes the
task of SA even harder. In this paper, we present manually tagged 2-class and
3-class SA datasets in Bengali. We also demonstrate that the multi-lingual BERT
model with relevant extensions can be trained via transfer
learning over those novel datasets to improve the state-of-the-art performance
in sentiment classification tasks. This deep learning model achieves an
accuracy of 71% for 2-class sentiment classification, compared to the current
state-of-the-art accuracy of 68%. We also present the very first Bengali SA
classifier for the 3-class manually tagged dataset, and our proposed model
achieves an accuracy of 60%. We further use this model to analyze the
sentiment of public comments in an online daily newspaper. Our analysis shows
that people post negative comments on political or sports news more often,
while comments on religious articles tend to express positive sentiment. The dataset
and code are publicly available at
https://github.com/KhondokerIslam/Bengali_Sentiment.
| 2020 |
Computation and Language
|
Towards unsupervised phone and word segmentation using self-supervised
vector-quantized neural networks
|
We investigate segmenting and clustering speech into low-bitrate phone-like
sequences without supervision. We specifically constrain pretrained
self-supervised vector-quantized (VQ) neural networks so that blocks of
contiguous feature vectors are assigned to the same code, thereby giving a
variable-rate segmentation of the speech into discrete units. Two segmentation
methods are considered. In the first, features are greedily merged until a
prespecified number of segments is reached. The second uses dynamic
programming to optimize a squared error with a penalty term to encourage fewer
but longer segments. We show that these VQ segmentation methods can be used
without alteration across a wide range of tasks: unsupervised phone
segmentation, ABX phone discrimination, same-different word discrimination, and
as inputs to a symbolic word segmentation algorithm. The penalized dynamic
programming method generally performs best. While performance on individual
tasks is only comparable to the state-of-the-art in some cases, in all tasks a
reasonable competing approach is outperformed at a substantially lower bitrate.
| 2021 |
Computation and Language
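
A 1-D sketch of the penalised dynamic-programming segmentation idea mentioned above: boundaries are chosen to minimise within-segment squared error around the segment mean plus a per-segment penalty, so a larger penalty yields fewer but longer segments. The paper applies this to VQ feature sequences; the toy signal here is illustrative.

```python
# 1-D penalised dynamic-programming segmentation: minimise within-segment
# squared error around the segment mean plus a per-segment penalty.
import numpy as np

def dp_segment(x, penalty=1.0):
    n = len(x)
    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            seg = x[i:j]
            cost = ((seg - seg.mean()) ** 2).sum() + penalty
            if best[i] + cost < best[j]:
                best[j], back[j] = best[i] + cost, i
    bounds, j = [], n                       # recover boundaries from back-pointers
    while j > 0:
        bounds.append((back[j], j))
        j = back[j]
    return bounds[::-1]

signal = np.array([0.1, 0.2, 0.1, 2.0, 2.1, 1.9, 0.0, 0.1])
print(dp_segment(signal, penalty=0.5))      # larger penalty -> fewer, longer segments
```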
|
An End-to-End Solution for Named Entity Recognition in eCommerce Search
|
Named entity recognition (NER) is a critical step in modern search query
understanding. In the domain of eCommerce, identifying the key entities, such
as brand and product type, can help a search engine retrieve relevant products
and therefore offer an engaging shopping experience. Recent research shows
promising results on shared benchmark NER tasks using deep learning methods,
but there are still unique challenges in the industry regarding domain
knowledge, training data, and model production. This paper demonstrates an
end-to-end solution to address these challenges. The core of our solution is a
novel model training framework "TripleLearn" which iteratively learns from
three separate training datasets, instead of one training set as is
traditionally done. Using this approach, the best model lifts the F1 score from
69.5 to 93.3 on the holdout test data. In our offline experiments, TripleLearn
improved the model performance compared to traditional training approaches
which use a single set of training data. Moreover, in the online A/B test, we
see significant improvements in user engagement and revenue conversion. The
model has been live on homedepot.com for more than 9 months, boosting search
conversions and revenue. Beyond our application, this TripleLearn framework, as
well as the end-to-end process, is model-independent and problem-independent,
so it can be generalized to other industrial applications, especially in the
eCommerce industry, which has similar data foundations and problems.
| 2020 |
Computation and Language
|
Leveraging Transfer Learning for Reliable Intelligence Identification on
Vietnamese SNSs (ReINTEL)
|
This paper proposes several transformer-based approaches for Reliable
Intelligence Identification on Vietnamese social network sites at the VLSP 2020
evaluation campaign. We exploit both monolingual and multilingual
pre-trained models. In addition, we utilize an ensemble method to improve the
robustness of the different approaches. Our team achieved a score of 0.9378 on
the ROC-AUC metric on the private test set, which is competitive with other
participants.
| 2020 |
Computation and Language
|
A Practical Approach towards Causality Mining in Clinical Text using
Active Transfer Learning
|
Objective: Causality mining is an active research area, which requires the
application of state-of-the-art natural language processing techniques. In the
healthcare domain, medical experts create clinical text to overcome the
limitations of well-defined and schema-driven information systems. The objective
of this research work is to create a framework that can convert clinical text
into causal knowledge. Methods: A practical approach based on term expansion,
phrase generation, BERT-based phrase embedding and semantic matching, semantic
enrichment, expert verification, and model evolution has been used to construct
a comprehensive causality mining framework. This active-transfer-learning-based
framework, along with its supplementary services, is able to extract and enrich
causal relationships and their corresponding entities from clinical text.
Results: The multi-model transfer learning technique, when applied over multiple
iterations, gains performance improvements in accuracy and recall
while keeping precision constant. We also present a comparative analysis of
the presented techniques with their common alternatives, which demonstrates the
correctness of our approach and its ability to capture most causal
relationships. Conclusion: The presented framework has provided cutting-edge
results in the healthcare domain; however, it can be tweaked to
provide causality detection in other domains as well. Significance: The
presented framework is generic enough to be utilized in any domain; healthcare
services in particular can gain massive benefits due to the voluminous and
varied nature of their data. This causal knowledge extraction framework can be
used to summarize clinical text, create personas, discover medical knowledge,
and provide evidence for clinical decision making.
| 2021 |
Computation and Language
|
Automating Document Classification with Distant Supervision to Increase
the Efficiency of Systematic Reviews
|
Objective: Systematic reviews of scholarly documents often provide complete
and exhaustive summaries of literature relevant to a research question.
However, well-done systematic reviews are expensive, time-demanding, and
labor-intensive. Here, we propose an automatic document classification approach
to significantly reduce the effort in reviewing documents. Methods: We first
describe a manual document classification procedure that is used to curate a
pertinent training dataset and then propose three classifiers: a keyword-guided
method, a cluster analysis-based refined method, and a random forest approach
that utilizes a large set of feature tokens. As an example, this approach is
used to identify documents studying female sex workers that are assumed to
contain content relevant to either HIV or violence. We compare the performance
of the three classifiers by cross-validation and conduct a sensitivity analysis
on the portion of data utilized in training the model. Results: The random
forest approach provides the highest area under the curve (AUC) for both
receiver operating characteristic (ROC) and precision/recall (PR). Analyses of
precision and recall suggest that random forest could facilitate manually
reviewing 20% of the articles while capturing 80% of the relevant cases.
Finally, we found that a good classifier could be obtained using a relatively
small training sample size. Conclusions: In sum, the automated procedure for
document classification presented here could improve both the precision and
efficiency of systematic reviews, as well as facilitate live reviews, where
reviews are updated regularly.
| 2020 |
Computation and Language
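
A sketch of the third classifier described above: TF-IDF token features fed to a random forest, evaluated with cross-validated ROC AUC in scikit-learn. The documents and labels are placeholders for the systematic-review corpus.

```python
# TF-IDF features plus a random forest, scored with cross-validated ROC AUC.
# Documents and labels are placeholders for the systematic-review corpus.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

abstracts = ["HIV prevalence among female sex workers",
             "intimate partner violence and health outcomes",
             "HIV testing uptake in key populations",
             "urban transport planning survey",
             "national census income statistics",
             "road maintenance budget report"]
relevant = [1, 1, 1, 0, 0, 0]              # 1 = relevant to the review question

clf = make_pipeline(TfidfVectorizer(),
                    RandomForestClassifier(n_estimators=200, random_state=0))
print(cross_val_score(clf, abstracts, relevant, cv=3, scoring="roc_auc").mean())
```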
|
Large-scale Quantitative Evidence of Media Impact on Public Opinion
toward China
|
Do mass media influence people's opinion of other countries? Using BERT, a
deep neural network-based natural language processing model, we analyze a large
corpus of 267,907 China-related articles published by The New York Times since
1970. We then compare our output from The New York Times to a longitudinal data
set constructed from 101 cross-sectional surveys of the American public's views
on China. We find that the reporting of The New York Times on China in one year
explains 54% of the variance in American public opinion on China in the next.
Our result confirms hypothesized links between media and public opinion and
helps shed light on how mass media can influence public opinion of foreign
countries.
| 2021 |
Computation and Language
|
Modelling General Properties of Nouns by Selectively Averaging
Contextualised Embeddings
|
While the success of pre-trained language models has largely eliminated the
need for high-quality static word vectors in many NLP applications, such
vectors continue to play an important role in tasks where words need to be
modelled in the absence of linguistic context. In this paper, we explore how
the contextualised embeddings predicted by BERT can be used to produce
high-quality word vectors for such domains, in particular related to knowledge
base completion, where our focus is on capturing the semantic properties of
nouns. We find that a simple strategy of averaging the contextualised
embeddings of masked word mentions leads to vectors that outperform the static
word vectors learned by BERT, as well as those from standard word embedding
models, in property induction tasks. We notice in particular that masking
target words is critical to achieve this strong performance, as the resulting
vectors focus less on idiosyncratic properties and more on general semantic
properties. Inspired by this view, we propose a filtering strategy which is
aimed at removing the most idiosyncratic mention vectors, allowing us to obtain
further performance gains in property induction.
| 2021 |
Computation and Language
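
The masking-and-averaging strategy described above, sketched with transformers: replace each mention of the target noun with the mask token, take BERT's final hidden state at the mask position, and average over mentions. The example sentences are illustrative.

```python
# Mask each mention of the target noun, take BERT's hidden state at the [MASK]
# position, and average across mentions to get one static vector per noun.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def masked_mention_vector(sentences, target):
    vectors = []
    for sent in sentences:
        masked = sent.replace(target, tokenizer.mask_token)
        inputs = tokenizer(masked, return_tensors="pt")
        mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state
        vectors.append(hidden[0, mask_pos])
    return torch.stack(vectors).mean(dim=0)

vec = masked_mention_vector(
    ["a banana is a sweet fruit", "she peeled the banana slowly"], "banana")
print(vec.shape)                            # torch.Size([768])
```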
|
Detecting Insincere Questions from Text: A Transfer Learning Approach
|
The internet today has become an unrivalled source of information where
people converse on content-based websites such as Quora, Reddit, StackOverflow
and Twitter, asking questions and sharing knowledge with the world. A major
problem arising with such websites is the proliferation of toxic comments or
instances of insincerity, wherein users, instead of maintaining a sincere motive,
indulge in spreading toxic and divisive content. The straightforward course of
action in confronting this situation is detecting such content beforehand and
preventing it from subsisting online. In recent times, transfer learning in
Natural Language Processing has seen unprecedented growth. Today, with the
existence of transformers and various state-of-the-art innovations,
tremendous progress has been made in various NLP domains. The introduction of
BERT caused quite a stir in the NLP community. When
published, BERT dominated performance benchmarks and thereby inspired many
other authors to experiment with it and publish similar models. This led to the
development of a whole BERT family, with each member specialized for a
different task. In this paper we solve the Insincere Questions Classification
problem by fine-tuning four cutting-edge models, viz. BERT, RoBERTa, DistilBERT
and ALBERT.
| 2020 |
Computation and Language
|
Incorporating Domain Knowledge To Improve Topic Segmentation Of Long
MOOC Lecture Videos
|
Topical segmentation plays a great role in reducing the search space of the
topics taught in a lecture video, especially when the video metadata lacks
topic-wise segmentation information. This segmentation information eases users'
efforts in searching, locating and browsing a topic inside a lecture video. In this
work we propose an algorithm that combines a state-of-the-art language model and
a domain knowledge graph for automatically detecting the different coherent topics
present inside a long lecture video. We use the language model on the
speech-to-text transcription to capture the implicit meaning of the whole video,
while the knowledge graph provides us with the domain-specific dependencies between
different concepts of that subject. Also, leveraging the domain knowledge, we
can capture the way the instructor binds and connects different concepts while
teaching, which helps us achieve better segmentation accuracy. We tested
our approach on NPTEL lecture videos, and a holistic evaluation shows that it
outperforms the other methods described in the literature.
| 2020 |
Computation and Language
|
Clickbait in Hindi News Media : A Preliminary Study
|
A corpus of Hindi news headlines shared on Twitter was created by collecting
tweets of 5 mainstream Hindi news sources over a period of 4 months. Seven
independent annotators were recruited to mark the 20 most retweeted news posts
from each of the 5 news sources for their clickbait nature. The clickbait score
hence generated was assessed for its correlation with interactions on the
platform (retweets, favorites, reader replies), tweet word count, and
normalized POS (part-of-speech) tag counts in tweets. A positive correlation
was observed between readers' interactions with tweets and tweets' clickbait
score. Significant correlations were also observed for POS tag counts and
clickbait score. The prevalence of clickbait in mainstream Hindi news media was
found to be similar to its prevalence in English news media. We hope that our
observations will provide a platform for discussions on clickbait in
mainstream Hindi news media.
| 2020 |
Computation and Language
|
Time to Transfer: Predicting and Evaluating Machine-Human Chatting
Handoff
|
Is a chatbot able to completely replace the human agent? The short answer could
be "it depends...". For some challenging cases, e.g., when a dialogue's topical
spectrum spreads beyond the training corpus coverage, the chatbot may
malfunction and return unsatisfactory utterances. This problem can be addressed by
introducing the Machine-Human Chatting Handoff (MHCH), which enables
human-algorithm collaboration. To detect the normal/transferable utterances, we
propose a Difficulty-Assisted Matching Inference (DAMI) network, utilizing
difficulty-assisted encoding to enhance the representations of utterances.
Moreover, a matching inference mechanism is introduced to capture the
contextual matching features. A new evaluation metric, Golden Transfer within
Tolerance (GT-T), is proposed to assess the performance by considering the
tolerance property of the MHCH. To provide insights into the task and validate
the proposed model, we collect two new datasets. Extensive experimental results
are presented and contrasted against a series of baseline models to demonstrate
the efficacy of our model on MHCH.
| 2020 |
Computation and Language
|
What Makes a Good and Useful Summary? Incorporating Users in Automatic
Summarization Research
|
Automatic text summarization has enjoyed great progress over the years and is
used in numerous applications, impacting the lives of many. Despite this
development, there is little research that meaningfully investigates how the
current research focus in automatic summarization aligns with users' needs. To
bridge this gap, we propose a survey methodology that can be used to
investigate the needs of users of automatically generated summaries.
Importantly, these needs are dependent on the target group. Hence, we design
our survey in such a way that it can be easily adjusted to investigate
different user groups. In this work we focus on university students, who make
extensive use of summaries during their studies. We find that the current
research directions of the automatic summarization community do not fully align
with students' needs. Motivated by our findings, we present ways to mitigate
this mismatch in future research on automatic summarization: we propose
research directions that impact the design, the development and the evaluation
of automatically generated summaries.
| 2022 |
Computation and Language
|
The Complexity of Comparative Text Analysis -- "The Gardener is always
the Murderer" says the Fourth Machine
|
There is a heated debate about how far computers can map the complexity of
text analysis compared to the abilities of a whole team of human researchers.
A "deep" analysis of a given text is still beyond the possibilities of modern
computers.
At the heart of existing computational text analysis algorithms are
operations on real numbers, such as additions and multiplications according
to the rules of algebraic fields. However, the process of "comparing" has a
very precise mathematical structure, which is different from the structure of
an algebraic field. The mathematical structure of "comparing" can be expressed
using Boolean rings. We build on this structure and define the corresponding
algebraic equations, lifting algorithms of comparative text analysis onto the
"correct" algebraic basis. From this point of view, we can investigate the
question of the computational complexity of comparative text analysis.
| 2020 |
Computation and Language
|
Vartani Spellcheck -- Automatic Context-Sensitive Spelling Correction of
OCR-generated Hindi Text Using BERT and Levenshtein Distance
|
Traditional Optical Character Recognition (OCR) systems that generate text of
highly inflectional Indic languages like Hindi tend to suffer from poor
accuracy due to a wide alphabet set, compound characters and difficulty in
segmenting characters in a word. Automatic spelling error detection and
context-sensitive error correction can be used to improve accuracy by
post-processing the text generated by these OCR systems. A majority of
previously developed language models for error correction of Hindi spelling
have been context-free. In this paper, we present Vartani Spellcheck - a
context-sensitive approach for spelling correction of Hindi text using a
state-of-the-art transformer, BERT, in conjunction with the Levenshtein
distance algorithm, popularly known as Edit Distance. We use a lookup
dictionary and context-based named entity recognition (NER) for detection of
possible spelling errors in the text. Our proposed technique has been tested on
a large corpus of text generated by the widely used Tesseract OCR on the Hindi
epic Ramayana. With an accuracy of 81%, the results show a significant
improvement over some of the previously established context-sensitive error
correction mechanisms for Hindi. We also explain how Vartani Spellcheck may be
used for on-the-fly autocorrect suggestion during continuous typing in a text
editor environment.
| 2020 |
Computation and Language
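
A sketch of the BERT-plus-Levenshtein correction idea described above: mask a token flagged as a possible OCR error, take the masked language model's top candidates, and keep the candidate closest to the OCR output under edit distance. A multilingual BERT checkpoint and the tiny Hindi example stand in for the Hindi-specific setup in the paper.

```python
# Mask a suspected OCR error, take the masked-LM's top candidates, and keep the
# one closest to the OCR token under Levenshtein distance. The multilingual
# checkpoint and the small Hindi example are stand-ins for the paper's setup.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased").eval()

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def correct(masked_sentence: str, ocr_token: str, top_k: int = 20) -> str:
    inputs = tokenizer(masked_sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    candidates = [tokenizer.decode([i]).strip()
                  for i in logits.topk(top_k).indices.tolist()]
    return min(candidates, key=lambda c: levenshtein(c, ocr_token))

sentence = f"राम वन को {tokenizer.mask_token} ।"   # flagged slot replaced by the mask
print(correct(sentence, "गyए"))                    # candidate nearest the noisy OCR token
```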
|
Simple or Complex? Learning to Predict Readability of Bengali Texts
|
Determining the readability of a text is the first step to its
simplification. In this paper, we present a readability analysis tool capable
of analyzing text written in the Bengali language to provide in-depth
information on its readability and complexity. Despite being the 7th most
spoken language in the world with 230 million native speakers, Bengali suffers
from a lack of fundamental resources for natural language processing.
Readability-related research on the Bengali language so far can be considered
narrow and sometimes faulty due to the lack of resources. Therefore, we
properly adapt document-level readability formulas traditionally used for the
U.S.-based education system to the Bengali language, with a proper age-to-age
comparison. Due to the unavailability of large-scale human-annotated corpora,
we further divide the document-level task into sentence-level and experiment
with neural architectures, which will serve as a baseline for future work
on Bengali readability prediction. During the process, we present several
human-annotated corpora and dictionaries such as a document-level dataset
comprising 618 documents with 12 different grade levels, a large-scale
sentence-level dataset comprising more than 96K sentences with simple and
complex labels, a consonant conjunct count algorithm and a corpus of 341 words
to validate the effectiveness of the algorithm, a list of 3,396 easy words, and
an updated pronunciation dictionary with more than 67K words. These resources
can be useful for several other tasks of this low-resource language. We make
our Code & Dataset publicly available at
https://github.com/tafseer-nayeem/BengaliReadability for reproducibility.
| 2020 |
Computation and Language
|
Unsupervised Opinion Summarization with Content Planning
|
The recent success of deep learning techniques for abstractive summarization
is predicated on the availability of large-scale datasets. When summarizing
reviews (e.g., for products or movies), such training data is neither available
nor can be easily sourced, motivating the development of methods which rely on
synthetic datasets for supervised training. We show that explicitly
incorporating content planning in a summarization model not only yields output
of higher quality, but also allows the creation of synthetic datasets which are
more natural, resembling real world document-summary pairs. Our content plans
take the form of aspect and sentiment distributions which we induce from data
without access to expensive annotations. Synthetic datasets are created by
sampling pseudo-reviews from a Dirichlet distribution parametrized by our
content planner, while our model generates summaries based on input reviews and
induced content plans. Experimental results on three domains show that our
approach outperforms competitive models in generating informative, coherent,
and fluent summaries that capture opinion consensus.
| 2020 |
Computation and Language
|
Model Choices Influence Attributive Word Associations: A Semi-supervised
Analysis of Static Word Embeddings
|
Static word embeddings encode word associations, extensively utilized in
downstream NLP tasks. Although prior studies have discussed the nature of such
word associations in terms of biases and lexical regularities captured, the
variation in word associations based on the embedding training procedure
remains in obscurity. This work aims to address this gap by assessing
attributive word associations across five different static word embedding
architectures, analyzing the impact of the choice of the model architecture,
context learning flavor and training corpora. Our approach utilizes a
semi-supervised clustering method to cluster annotated proper nouns and
adjectives, based on their word embedding features, revealing underlying
attributive word associations formed in the embedding space, without
introducing any confirmation bias. Our results reveal that the choice of the
context learning flavor during embedding training (CBOW vs skip-gram) impacts
the word association distinguishability and word embeddings' sensitivity to
deviations in the training corpora. Moreover, it is empirically shown that even
when trained over the same corpora, there is significant inter-model disparity
and intra-model similarity in the encoded word associations across different
word embedding models, portraying specific patterns in the way the embedding
space is created for each embedding architecture.
| 2020 |
Computation and Language
|
Learning to Rationalize for Nonmonotonic Reasoning with Distant
Supervision
|
The black-box nature of neural models has motivated a line of research that
aims to generate natural language rationales to explain why a model made
certain predictions. Such rationale generation models, to date, have been
trained on dataset-specific crowdsourced rationales, but this approach is
costly and is not generalizable to new tasks and domains. In this paper, we
investigate the extent to which neural models can reason about natural language
rationales that explain model predictions, relying only on distant supervision
with no additional annotation cost for human-written rationales. We investigate
multiple ways to automatically generate rationales using pre-trained language
models, neural knowledge models, and distant supervision from related tasks,
and train generative models capable of composing explanatory rationales for
unseen instances. We demonstrate our approach on the defeasible inference task,
a nonmonotonic reasoning task in which an inference may be strengthened or
weakened when new information (an update) is introduced. Our model shows
promise in generating post-hoc rationales explaining why an inference is more
or less likely given the additional information; however, it mostly generates
trivial rationales, reflecting the fundamental limitations of neural language
models. Conversely, the more realistic setup of jointly predicting the update
or its type and generating rationale is more challenging, suggesting an
important future direction.
| 2020 |
Computation and Language
|
Primer AI's Systems for Acronym Identification and Disambiguation
|
The prevalence of ambiguous acronyms makes scientific documents harder to
understand for humans and machines alike, presenting a need for models that can
automatically identify acronyms in text and disambiguate their meaning. We
introduce new methods for acronym identification and disambiguation: our
acronym identification model projects learned token embeddings onto tag
predictions, and our acronym disambiguation model finds training examples with
sentence embeddings similar to those of test examples. Both of our systems achieve
significant performance gains over previously suggested methods, and perform
competitively on the SDU@AAAI-21 shared task leaderboard. Our models were
trained in part on new distantly-supervised datasets for these tasks which we
call AuxAI and AuxAD. We also identified a duplication conflict issue in the
SciAD dataset, and formed a deduplicated version of SciAD that we call
SciAD-dedupe. We publicly released all three of these datasets, and hope that
they help the community make further strides in scientific document
understanding.
| 2021 |
Computation and Language
|
Traditional IR rivals neural models on the MS MARCO Document Ranking
Leaderboard
|
This short document describes a traditional IR system that achieved MRR@100
equal to 0.298 on the MS MARCO Document Ranking leaderboard (on 2020-12-06).
Although inferior to most BERT-based models, it outperformed several neural
runs (as well as all non-neural ones), including two submissions that used a
large pretrained Transformer model for re-ranking. We provide software and data
to reproduce our results.
| 2021 |
Computation and Language
|
Enriched Annotations for Tumor Attribute Classification from Pathology
Reports with Limited Labeled Data
|
Precision medicine has the potential to revolutionize healthcare, but much of
the data for patients is locked away in unstructured free-text, limiting
research and delivery of effective personalized treatments. Generating large
annotated datasets for information extraction from clinical notes is often
challenging and expensive due to the high level of expertise needed for high
quality annotations. To enable natural language processing for small dataset
sizes, we develop a novel enriched hierarchical annotation scheme and
algorithm, Supervised Line Attention (SLA), and apply this algorithm to
predicting categorical tumor attributes from kidney and colon cancer pathology
reports from the University of California, San Francisco (UCSF). Whereas
previous work only annotated document-level labels, we additionally ask the
annotators to enrich the traditional label by also highlighting the
relevant line or lines for the final label, which leads to a 20%
increase in the annotation time required per document. With the enriched
annotations, we develop a simple and interpretable machine learning algorithm
that first predicts the relevant lines in the document and then predicts the
tumor attribute. Our results show that, across small dataset sizes of 32, 64,
128, and 186 labeled documents per cancer, SLA requires only half the number of
labeled documents needed by state-of-the-art methods to achieve similar or better
micro-F1 and macro-F1 scores for the vast majority of comparisons that we made.
Accounting for the increased annotation time, this leads to a 40% reduction in
total annotation time over the state of the art.
| 2020 |
Computation and Language
|
Writing Polishment with Simile: Task, Dataset and A Neural Approach
|
A simile is a figure of speech that directly makes a comparison, showing
similarities between two different things, e.g. "Reading papers can be dull
sometimes, like watching grass grow". Human writers often interpolate
appropriate similes into proper locations of the plain text to vivify their
writings. However, none of the existing work has explored neural simile
interpolation, including both locating and generation. In this paper, we
propose a new task of Writing Polishment with Simile (WPS) to investigate
whether machines are able to polish texts with similes as we humans do.
Accordingly, we design a two-stage Locate&Gen model based on the transformer
architecture. Our model first locates where the simile interpolation should
happen, and then generates a location-specific simile. We also release a
large-scale Chinese Simile (CS) dataset containing 5 million similes with
context. The experimental results demonstrate the feasibility of the WPS task and
shed light on future research directions towards better automatic text
polishment.
| 2020 |
Computation and Language
|
A Response Retrieval Approach for Dialogue Using a Multi-Attentive
Transformer
|
This paper presents our work for the ninth edition of the Dialogue System
Technology Challenge (DSTC9). Our solution addresses track number four:
Simulated Interactive MultiModal Conversations. The task consists of providing
an algorithm able to simulate a shopping assistant that supports the user with
his/her requests. We address the task of response retrieval, that is, the task
of retrieving the most appropriate agent response from a pool of response
candidates. Our approach makes use of a neural architecture based on a
transformer with a multi-attentive structure that conditions the response of
the agent on the request made by the user and on the product the user is
referring to. Final experiments on the SIMMC Fashion Dataset show that our
approach achieves the second-best scores on all the retrieval metrics defined
by the organizers. The source code is available at
https://github.com/D2KLab/dstc9-SIMMC.
| 2020 |
Computation and Language
|
Learning to Check Contract Inconsistencies
|
Contract consistency is important in ensuring the legal validity of the
contract. In many scenarios, a contract is written by filling the blanks in a
precompiled form. Due to carelessness, two blanks that should be filled with
the same (or different) content may be incorrectly filled with different (or
the same) content. This results in contract inconsistencies, which
may severely impair the legal validity of the contract. Traditional methods to
address this issue mainly rely on manual contract review, which is
labor-intensive and costly. In this work, we formulate a novel Contract
Inconsistency Checking (CIC) problem, and design an end-to-end framework,
called Pair-wise Blank Resolution (PBR), to solve the CIC problem with high
accuracy. Our PBR model contains a novel BlankCoder to address the challenge of
modeling meaningless blanks. BlankCoder adopts a two-stage attention mechanism
that adequately associates a meaningless blank with its relevant descriptions
while avoiding the incorporation of irrelevant context words. Experiments
conducted on real-world datasets show the promising performance of our method
with a balanced accuracy of 94.05% and an F1 score of 90.90% in the CIC
problem.
| 2020 |
Computation and Language
|
Efficient Clustering from Distributions over Topics
|
There are many scenarios where we may want to find pairs of textually similar
documents in a large corpus (e.g. a researcher doing literature review, or an
R&D project manager analyzing project proposals). Programmatically discovering
those connections can help experts achieve those goals, but brute-force
pairwise comparisons are not computationally adequate when the size of the
document corpus is too large. Some algorithms in the literature divide the
search space into regions containing potentially similar documents, which are
later processed separately from the rest in order to reduce the number of pairs
compared. However, these kinds of unsupervised methods still incur high
temporal costs. In this paper, we present an approach that relies on the
results of a topic modeling algorithm over the documents in a collection as a
means to identify smaller subsets of documents where the similarity function
can then be computed. This approach has proven to obtain promising results when
identifying similar documents in the domain of scientific publications. We have
compared our approach against state-of-the-art clustering techniques and with
different configurations for the topic modeling algorithm. Results suggest that
our approach outperforms (> 0.5) the other analyzed techniques in terms of
efficiency.
| 2017 |
Computation and Language
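
The topic-based narrowing described above can be sketched with scikit-learn: fit LDA, bucket each document by its dominant topic, and compute pairwise similarities only inside each bucket. The corpus and parameter choices are illustrative.

```python
# Fit LDA, bucket documents by dominant topic, and compare pairs only within a
# bucket; corpus and parameters are illustrative.
from collections import defaultdict

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["neural networks for image recognition",
        "deep learning improves speech recognition",
        "crop yield prediction with satellite data",
        "remote sensing of agricultural fields"]

counts = CountVectorizer().fit_transform(docs)
theta = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

buckets = defaultdict(list)
for doc_id, dist in enumerate(theta):
    buckets[int(dist.argmax())].append(doc_id)       # dominant topic as bucket key

for topic, members in buckets.items():
    if len(members) > 1:                             # similarities only within a bucket
        print(topic, members, cosine_similarity(counts[members]).round(2))
```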
|
Enhance Multimodal Transformer With External Label And In-Domain
Pretrain: Hateful Meme Challenge Winning Solution
|
Hateful meme detection is a recently introduced research area that
requires both visual and linguistic understanding of the meme, as well as some
background knowledge, to perform well on the task. This technical report summarises
the first-place solution of the Hateful Meme Detection Challenge 2020, which
extends state-of-the-art visual-linguistic transformers to tackle this
problem. At the end of the report, we also point out the shortcomings and
possible directions for improving the current methodology.
| 2020 |
Computation and Language
|
CARE: Commonsense-Aware Emotional Response Generation with Latent
Concepts
|
Rationality and emotion are two fundamental elements of humans. Endowing
agents with rationality and emotion has been one of the major milestones in AI.
However, in the field of conversational AI, most existing models only
specialize in one aspect and neglect the other, which often leads to dull or
unrelated responses. In this paper, we hypothesize that combining rationality
and emotion into conversational agents can improve response quality. To test
the hypothesis, we focus on one fundamental aspect of rationality, i.e.,
commonsense, and propose CARE, a novel model for commonsense-aware emotional
response generation. Specifically, we first propose a framework to learn and
construct commonsense-aware emotional latent concepts of the response given an
input message and a desired emotion. We then propose three methods to
collaboratively incorporate the latent concepts into response generation.
Experimental results on two large-scale datasets support our hypothesis and
show that our model can produce more accurate and commonsense-aware emotional
responses and achieve better human ratings than state-of-the-art models that
only specialize in one aspect.
| 2021 |
Computation and Language
|
Keyword-Guided Neural Conversational Model
|
We study the problem of imposing conversational goals/keywords on open-domain
conversational agents, where the agent is required to lead the conversation to
a target keyword smoothly and fast. Solving this problem enables the
application of conversational agents in many real-world scenarios, e.g.,
recommendation and psychotherapy. The dominant paradigm for tackling this
problem is to 1) train a next-turn keyword classifier, and 2) train a
keyword-augmented response retrieval model. However, existing approaches in
this paradigm have two limitations: 1) the training and evaluation datasets for
next-turn keyword classification are directly extracted from conversations
without human annotations, thus, they are noisy and have low correlation with
human judgements, and 2) during keyword transition, the agents solely rely on
the similarities between word embeddings to move closer to the target keyword,
which may not reflect how humans converse. In this paper, we assume that human
conversations are grounded on commonsense and propose a keyword-guided neural
conversational model that can leverage external commonsense knowledge graphs
(CKG) for both keyword transition and response retrieval. Automatic evaluations
suggest that commonsense improves the performance of both next-turn keyword
prediction and keyword-augmented response retrieval. In addition, both
self-play and human evaluations show that our model produces responses with
smoother keyword transition and reaches the target keyword faster than
competitive baselines.
| 2021 |
Computation and Language
|
Modeling Homophone Noise for Robust Neural Machine Translation
|
In this paper, we propose a robust neural machine translation (NMT)
framework. The framework consists of a homophone noise detector and a
syllable-aware NMT model that is robust to homophone errors. The detector identifies potential
homophone errors in a textual sentence and converts them into syllables to form
a mixed sequence that is then fed into the syllable-aware NMT. Extensive
experiments on Chinese->English translation demonstrate that our proposed
method not only significantly outperforms baselines on noisy test sets with
homophone noise, but also achieves a substantial improvement on clean text.
| 2020 |
Computation and Language
|
Multi-Aspect Sentiment Analysis with Latent Sentiment-Aspect Attribution
|
In this paper, we introduce a new framework called the sentiment-aspect
attribution module (SAAM). SAAM works on top of traditional neural networks and
is designed to address the problem of multi-aspect sentiment classification and
sentiment regression. The framework works by exploiting the correlations
between sentence-level embedding features and variations of document-level
aspect rating scores. We demonstrate several variations of our framework on top
of CNN and RNN based models. Experiments on a hotel review dataset and a beer
review dataset have shown SAAM can improve sentiment analysis performance over
corresponding base models. Moreover, because of the way our framework
intuitively combines sentence-level scores into document-level scores, it is
able to provide a deeper insight into data (e.g., semi-supervised sentence
aspect labeling). Hence, we end the paper with a detailed analysis that shows
the potential of our models for other applications such as sentiment snippet
extraction.
| 2020 |
Computation and Language
|
Nested Named Entity Recognition with Partially-Observed TreeCRFs
|
Named entity recognition (NER) is a well-studied task in natural language
processing. However, the widely used sequence labeling framework has difficulty
detecting entities with nested structures. In this work, we view nested NER as
constituency parsing with partially-observed trees and model it with
partially-observed TreeCRFs. Specifically, we view all labeled entity spans as
observed nodes in a constituency tree, and other spans as latent nodes. With
the TreeCRF we achieve a uniform way to jointly model the observed and the
latent nodes. To compute the probability of partial trees with partial
marginalization, we propose a variant of the Inside algorithm, the
Masked Inside algorithm, which supports different inference operations
for different nodes (evaluation for the observed, marginalization for the
latent, and rejection for nodes incompatible with the observed) with efficient
parallelized implementation, thus significantly speeding up training and
inference. Experiments show that our approach achieves the state-of-the-art
(SOTA) F1 scores on the ACE2004 and ACE2005 datasets, and shows comparable
performance to SOTA models on the GENIA dataset. Our approach is implemented
at https://github.com/FranxYao/Partially-Observed-TreeCRFs.
| 2020 |
Computation and Language
|
Exploring Transfer Learning For End-to-End Spoken Language Understanding
|
Voice Assistants such as Alexa, Siri, and Google Assistant typically use a
two-stage Spoken Language Understanding pipeline; first, an Automatic Speech
Recognition (ASR) component to process customer speech and generate text
transcriptions, followed by a Natural Language Understanding (NLU) component to
map transcriptions to an actionable hypothesis. An end-to-end (E2E) system that
goes directly from speech to a hypothesis is a more attractive option. These
systems were shown to be smaller, faster, and better optimized. However, they
require massive amounts of end-to-end training data and in addition, don't take
advantage of the already available ASR and NLU training data.
In this work, we propose an E2E system that is designed to jointly train on
multiple speech-to-text tasks, such as ASR (speech-transcription) and SLU
(speech-hypothesis), and text-to-text tasks, such as NLU (text-hypothesis). We
call this the Audio-Text All-Task (AT-AT) Model and we show that it beats the
performance of E2E models trained on individual tasks, especially ones trained
on limited data. We show this result on an internal music dataset and two
public datasets, FluentSpeech and SNIPS Audio, where we achieve
state-of-the-art results. Since our model can process both speech and text
input sequences and learn to predict a target sequence, it also allows us to do
zero-shot E2E SLU by training on only text-hypothesis data (without any speech)
from a new domain. We evaluate this ability of our model on the Facebook TOP
dataset and set a new benchmark for zero-shot E2E performance. We will soon
release the audio data collected for the TOP dataset for future research.
| 2,020 |
Computation and Language
|
Pre-Training Transformers as Energy-Based Cloze Models
|
We introduce Electric, an energy-based cloze model for representation
learning over text. Like BERT, it is a conditional generative model of tokens
given their contexts. However, Electric does not use masking or output a full
distribution over tokens that could occur in a context. Instead, it assigns a
scalar energy score to each input token indicating how likely it is given its
context. We train Electric using an algorithm based on noise-contrastive
estimation and elucidate how this learning objective is closely related to the
recently proposed ELECTRA pre-training method. Electric performs well when
transferred to downstream tasks and is particularly effective at producing
likelihood scores for text: it re-ranks speech recognition n-best lists better
than language models and much faster than masked language models. Furthermore,
it offers a clearer and more principled view of what ELECTRA learns during
pre-training.
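A hedged toy sketch of the energy-scoring idea (not the released Electric model; the encoder and head below are stand-ins): the unmasked sequence is encoded once, a linear head assigns each token a scalar energy, and the hypothesis with the lowest summed energy wins the n-best re-ranking.
```python
import torch
import torch.nn as nn

class ToyEnergyScorer(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.energy_head = nn.Linear(d_model, 1)   # scalar energy per token

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(token_ids))    # (B, T, d), no masking needed
        return self.energy_head(h).squeeze(-1)     # (B, T) per-token energies

def rerank(scorer, nbest_ids):
    """Pick the hypothesis with the lowest summed energy (one forward pass each)."""
    with torch.no_grad():
        totals = [scorer(ids.unsqueeze(0)).sum().item() for ids in nbest_ids]
    return min(range(len(totals)), key=totals.__getitem__)

scorer = ToyEnergyScorer()
hyps = [torch.randint(0, 1000, (12,)), torch.randint(0, 1000, (12,))]
print("best hypothesis index:", rerank(scorer, hyps))
```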
| 2,020 |
Computation and Language
|
DialogXL: All-in-One XLNet for Multi-Party Conversation Emotion
Recognition
|
This paper presents our pioneering effort for emotion recognition in
conversation (ERC) with pre-trained language models. Unlike regular documents,
conversational utterances appear alternately from different parties and are
usually organized as hierarchical structures in previous work. Such structures
are not conducive to the application of pre-trained language models such as
XLNet. To address this issue, we propose an all-in-one XLNet model, namely
DialogXL, with enhanced memory to store longer historical context and
dialog-aware self-attention to deal with the multi-party structures.
Specifically, we first modify the recurrence mechanism of XLNet from
segment-level to utterance-level in order to better model the conversational
data. Second, we introduce dialog-aware self-attention in replacement of the
vanilla self-attention in XLNet to capture useful intra- and inter-speaker
dependencies. Extensive experiments are conducted on four ERC benchmarks with
mainstream models presented for comparison. The experimental results show that
the proposed model outperforms the baselines on all the datasets. Several other
experiments such as ablation study and error analysis are also conducted and
the results confirm the role of the critical modules of DialogXL.
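As an illustration of dialog-aware self-attention (our own sketch, not the DialogXL implementation), the masks below are built from per-token speaker ids: one group of heads attends only within the same speaker's utterances, another only across speakers; attention logits are set to minus infinity wherever a mask is False.
```python
import numpy as np

def dialog_aware_masks(speaker_ids: np.ndarray):
    """speaker_ids: (T,) int array, one speaker id per token (history + query)."""
    same = speaker_ids[:, None] == speaker_ids[None, :]   # (T, T) boolean
    intra_mask = same          # heads capturing intra-speaker dependencies
    inter_mask = ~same         # heads capturing inter-speaker dependencies
    return intra_mask, inter_mask

speakers = np.array([0, 0, 1, 1, 1, 0])     # toy 3-utterance dialogue, flattened
intra, inter = dialog_aware_masks(speakers)
print(intra.astype(int))
```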
| 2,020 |
Computation and Language
|
Improving Multilingual Neural Machine Translation For Low-Resource
Languages: French,English - Vietnamese
|
Prior works have demonstrated that a low-resource language pair can benefit
from multilingual machine translation (MT) systems, which rely on many language
pairs' joint training. This paper proposes two simple strategies to address the
rare word issue in multilingual MT systems for two low-resource language pairs:
French-Vietnamese and English-Vietnamese. The first strategy is about dynamical
learning word similarity of tokens in the shared space among source languages
while another one attempts to augment the translation ability of rare words
through updating their embeddings during the training. Besides, we leverage
monolingual data for multilingual MT systems to increase the amount of
synthetic parallel corpora while dealing with the data sparsity problem. We
have shown significant improvements of up to +1.62 and +2.54 BLEU points over
the bilingual baseline systems for both language pairs and released our
datasets for the research community.
| 2,021 |
Computation and Language
|
Building domain specific lexicon based on TikTok comment dataset
|
Predicting the sentiment tendency of a sentence is an important branch of
sentiment analysis. Previous research has focused mostly on English, for
example, analyzing the sentiment tendency of sentences based on their Valence,
Arousal, and Dominance. However, emotional tendency differs between languages;
for example, sentence order in Chinese and English may convey different
emotions. This paper explores a method that builds a domain-specific lexicon,
so that the model can classify Chinese words by their emotional tendency. In
this approach, based on [13], an ultra-dense embedding space is trained from
word embeddings of Chinese TikTok reviews and an emotional lexicon source (seed
words). The output of the model is a domain-specific lexicon that captures the
emotional tendency of words. I collected Chinese TikTok comments as training
data. Comparing the training results with a PCA baseline for Chinese sentiment
classification shows that the model performs well on Chinese. The source code
has been released on GitHub: https://github.com/h2222/douyin_comment_dataset
| 2,020 |
Computation and Language
|
Learning from Mistakes: Using Mis-predictions as Harm Alerts in Language
Pre-Training
|
Fitting complex patterns in the training data, such as reasoning and
commonsense, is a key challenge for language pre-training. According to recent
studies and our empirical observations, one possible reason is that some
easy-to-fit patterns in the training data, such as frequently co-occurring word
combinations, dominate and harm pre-training, making it hard for the model to
fit more complex information. We argue that mis-predictions can help locate
such dominating patterns that harm language understanding. When a
mis-prediction occurs, there should be frequently co-occurring patterns with
the mis-predicted word fitted by the model that lead to the mis-prediction. If
we can add regularization to train the model to rely less on such dominating
patterns when a mis-prediction occurs and focus more on the rest more subtle
patterns, more information can be efficiently fitted at pre-training. Following
this motivation, we propose a new language pre-training method, Mis-Predictions
as Harm Alerts (MPA). In MPA, when a mis-prediction occurs during pre-training,
we use its co-occurrence information to guide several heads of the
self-attention modules. Some self-attention heads in the Transformer modules
are optimized to assign lower attention weights to the words in the input
sentence that frequently co-occur with the mis-prediction while assigning
higher weights to the other words. By doing so, the Transformer model is
trained to rely less on the dominating frequently co-occurring patterns with
mis-predictions while focus more on the rest more complex information when
mis-predictions occur. Our experiments show that MPA expedites the pre-training
of BERT and ELECTRA and improves their performances on downstream tasks.
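A hedged sketch of the attention guidance as we read it from this abstract (the function and statistics below are assumptions, not the authors' code): context tokens that frequently co-occur with the mis-predicted word receive low target attention, and selected heads are pulled toward that target with a KL term added to the pre-training loss.
```python
import torch
import torch.nn.functional as F

def mpa_guidance_loss(head_attn: torch.Tensor,
                      cooccur: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """
    head_attn: (n_heads, T) attention the guided heads pay from the
               mis-predicted position to the context tokens.
    cooccur:   (T,) co-occurrence statistics between each context token and
               the mis-predicted word (e.g., PMI or raw counts).
    """
    target = F.softmax(-cooccur / temperature, dim=-1)   # rare co-occurrence -> high weight
    target = target.unsqueeze(0).expand_as(head_attn)
    return F.kl_div(head_attn.clamp_min(1e-9).log(), target, reduction="batchmean")

attn = F.softmax(torch.randn(2, 8), dim=-1)   # 2 guided heads, 8 context tokens
counts = torch.tensor([50., 3., 1., 0., 40., 2., 0., 1.])
print(mpa_guidance_loss(attn, counts))
```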
| 2,021 |
Computation and Language
|
Clinical Temporal Relation Extraction with Probabilistic Soft Logic
Regularization and Global Inference
|
There has been a steady need in the medical community to precisely extract
the temporal relations between clinical events. In particular, temporal
information can facilitate a variety of downstream applications such as case
report retrieval and medical question answering. Existing methods either
require expensive feature engineering or are incapable of modeling the global
relational dependencies among the events. In this paper, we propose a novel
method, Clinical Temporal ReLation Extraction with Probabilistic Soft Logic
Regularization and Global Inference (CTRL-PG) to tackle the problem at the
document level. Extensive experiments on two benchmark datasets, I2B2-2012 and
TB-Dense, demonstrate that CTRL-PG significantly outperforms baseline methods
for temporal relation extraction.
| 2,020 |
Computation and Language
|
A Lightweight Neural Model for Biomedical Entity Linking
|
Biomedical entity linking aims to map biomedical mentions, such as diseases
and drugs, to standard entities in a given knowledge base. The specific
challenge in this context is that the same biomedical entity can have a wide
range of names, including synonyms, morphological variations, and names with
different word orderings. Recently, BERT-based methods have advanced the
state-of-the-art by allowing for rich representations of word sequences.
However, they often have hundreds of millions of parameters and require heavy
computing resources, which limits their applications in resource-limited
scenarios. Here, we propose a lightweight neural method for biomedical entity
linking, which needs just a fraction of the parameters of a BERT model and much
less computing resources. Our method uses a simple alignment layer with
attention mechanisms to capture the variations between mention and entity
names. Yet, we show that our model is competitive with previous work on
standard evaluation benchmarks.
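A minimal sketch of an attention-based alignment layer in the spirit of the description above (layer sizes and pooling are our assumptions, not the released model): each mention token attends over the entity-name tokens, the aligned representations are compared, and a pooled score ranks the candidate entity.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentScorer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.compare = nn.Linear(4 * dim, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, mention: torch.Tensor, entity: torch.Tensor) -> torch.Tensor:
        # mention: (B, Tm, d) token embeddings; entity: (B, Te, d)
        sim = torch.bmm(mention, entity.transpose(1, 2))      # (B, Tm, Te)
        aligned = torch.bmm(F.softmax(sim, dim=-1), entity)   # soft-align entity tokens
        feats = torch.cat([mention, aligned,
                           mention - aligned, mention * aligned], dim=-1)
        pooled = torch.relu(self.compare(feats)).max(dim=1).values   # (B, d)
        return self.score(pooled).squeeze(-1)                 # (B,) candidate score

scorer = AlignmentScorer(dim=64)
print(scorer(torch.randn(2, 5, 64), torch.randn(2, 7, 64)).shape)   # torch.Size([2])
```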
| 2,021 |
Computation and Language
|
Multi-type Disentanglement without Adversarial Training
|
Controlling the style of natural language by disentangling the latent space
is an important step towards interpretable machine learning. After the latent
space is disentangled, the style of a sentence can be transformed by tuning the
style representation without affecting other features of the sentence. Previous
works usually use adversarial training to guarantee that disentangled vectors
do not affect each other. However, adversarial methods are difficult to train.
Especially when there are multiple features (e.g., sentiment, or tense, which
we call style types in this paper), each feature requires a separate
discriminator for extracting a disentangled style vector corresponding to that
feature. In this paper, we propose a unified distribution-controlling method,
which provides each specific style value (the value of style types, e.g.,
positive sentiment, or past tense) with a unique representation. This method
contributes a solid theoretical basis to avoid adversarial training in
multi-type disentanglement. We also propose multiple loss functions to achieve
a style-content disentanglement as well as a disentanglement among multiple
style types. In addition, we observe that if two different style types always
have some specific style values that occur together in the dataset, they will
affect each other when transferring the style values. We call this phenomenon
training bias, and we propose a loss function to alleviate such training bias
while disentangling multiple types. We conduct experiments on two datasets
(Yelp service reviews and Amazon product reviews) to evaluate the
style-disentangling effect and the unsupervised style transfer performance on
two style types: sentiment and tense. The experimental results show the
effectiveness of our model.
| 2,021 |
Computation and Language
|
Learning from the Best: Rationalizing Prediction by Adversarial
Information Calibration
|
Explaining the predictions of AI models is paramount in safety-critical
applications, such as in legal or medical domains. One form of explanation for
a prediction is an extractive rationale, i.e., a subset of features of an
instance that lead the model to give its prediction on the instance. Previous
works on generating extractive rationales usually employ a two-phase model: a
selector that selects the most important features (i.e., the rationale)
followed by a predictor that makes the prediction based exclusively on the
selected features. One disadvantage of these works is that the main signal for
learning to select features comes from the comparison of the answers given by
the predictor and the ground-truth answers. In this work, we propose to squeeze
more information from the predictor via an information calibration method. More
precisely, we train two models jointly: one is a typical neural model that
solves the task at hand in an accurate but black-box manner, and the other is a
selector-predictor model that additionally produces a rationale for its
prediction. The first model is used as a guide to the second model. We use an
adversarial-based technique to calibrate the information extracted by the two
models such that the difference between them is an indicator of the missed or
over-selected features. In addition, for natural language tasks, we propose to
use a language-model-based regularizer to encourage the extraction of fluent
rationales. Experimental results on a sentiment analysis task as well as on
three tasks from the legal domain show the effectiveness of our approach to
rationale extraction.
| 2,021 |
Computation and Language
|
Multilingual Evidence Retrieval and Fact Verification to Combat Global
Disinformation: The Power of Polyglotism
|
This article investigates multilingual evidence retrieval and fact
verification as a step to combat global disinformation, a first effort of this
kind, to the best of our knowledge. The goal is to build multilingual systems
that retrieve evidence in evidence-rich languages to verify claims in
evidence-poor languages, which are more commonly targeted by disinformation. To
this end, our EnmBERT fact verification system shows evidence of transfer
learning ability, and a 400-example mixed English-Romanian dataset is made
available for cross-lingual transfer learning evaluation.
| 2,021 |
Computation and Language
|
R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic
Matching
|
Sentence semantic matching is one of the fundamental tasks in natural
language processing, which requires an agent to determine the semantic relation
among input sentences. Recently, deep neural networks have achieved impressive
performance in this area, especially BERT. Despite the effectiveness of these
models, most of them treat output labels as meaningless one-hot vectors,
underestimating the semantic information and guidance of relations that these
labels reveal, especially for tasks with a small number of labels. To address
this problem, we propose a Relation of Relation Learning Network (R2-Net) for
sentence semantic matching. Specifically, we first employ BERT to encode the
input sentences from a global perspective. Then a CNN-based encoder is designed
to capture keywords and phrase information from a local perspective. To fully
leverage labels for better relation information extraction, we introduce a
self-supervised relation of relation classification task for guiding R2-Net to
consider more about labels. Meanwhile, a triplet loss is employed to
distinguish the intra-class and inter-class relations in a finer granularity.
Empirical experiments on two sentence semantic matching tasks demonstrate the
superiority of our proposed model. As a byproduct, we have released the code
to facilitate further research.
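The triplet objective can be sketched as follows (illustrative; the margin and encoder are placeholders): sentence pairs with the same relation label serve as anchor and positive, pairs with a different label as negative, pulling intra-class representations together and pushing inter-class ones apart.
```python
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0, p=2)

# Representations of three sentence *pairs* from the BERT+CNN encoder:
anchor   = torch.randn(8, 256, requires_grad=True)  # label: entailment
positive = torch.randn(8, 256)                      # label: entailment
negative = torch.randn(8, 256)                      # label: contradiction

loss = triplet(anchor, positive, negative)
loss.backward()        # combined with the classification and
print(loss.item())     # relation-of-relation losses during training
```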
| 2,020 |
Computation and Language
|
Costs to Consider in Adopting NLP for Your Business
|
Recent advances in Natural Language Processing (NLP) have largely pushed deep
transformer-based models as the go-to state-of-the-art technique without much
regard to the production and utilization cost. Companies planning to adopt
these methods into their business face difficulties because of the lack of
machine, data, and human resources to build them. We compare both the
performance and the cost of classical learning algorithms to the latest ones in
common sequence and text labeling tasks. In our industrial datasets, we find
that classical models often perform on par with deep neural ones despite the
lower cost. We show the trade-off between performance gain and the cost across
the models to give more insights for AI-pivoting business. Further, we call for
more research into low-cost models, especially for under-resourced languages.
| 2,021 |
Computation and Language
|
Discovering New Intents with Deep Aligned Clustering
|
Discovering new intents is a crucial task in dialogue systems. Most existing
methods are limited in transferring the prior knowledge from known intents to
new intents. They also have difficulties in providing high-quality supervised
signals to learn clustering-friendly features for grouping unlabeled intents.
In this work, we propose an effective method, Deep Aligned Clustering, to
discover new intents with the aid of the limited known intent data. Firstly, we
leverage a few labeled known intent samples as prior knowledge to pre-train the
model. Then, we perform k-means to produce cluster assignments as
pseudo-labels. Moreover, we propose an alignment strategy to tackle the label
inconsistency problem during clustering assignments. Finally, we learn the
intent representations under the supervision of the aligned pseudo-labels. With
an unknown number of new intents, we predict the number of intent categories by
eliminating low-confidence intent-wise clusters. Extensive experiments on two
benchmark datasets show that our method is more robust and achieves substantial
improvements over the state-of-the-art methods. The codes are released at
https://github.com/thuiar/DeepAligned-Clustering.
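A sketch of the alignment step as we understand it from the abstract (not the released code): cluster ids from the current epoch's k-means are re-indexed with the Hungarian algorithm to best match the previous epoch's pseudo-labels, so the supervision stays consistent across epochs.
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_pseudo_labels(prev: np.ndarray, curr: np.ndarray, k: int) -> np.ndarray:
    """Return curr relabeled so that it agrees with prev as much as possible."""
    overlap = np.zeros((k, k), dtype=np.int64)
    for p, c in zip(prev, curr):
        overlap[c, p] += 1                        # rows: current ids, cols: previous ids
    rows, cols = linear_sum_assignment(-overlap)  # maximize total overlap
    mapping = dict(zip(rows, cols))
    return np.array([mapping[c] for c in curr])

prev = np.array([0, 0, 1, 1, 2, 2])
curr = np.array([2, 2, 0, 0, 1, 1])               # same clustering, permuted ids
print(align_pseudo_labels(prev, curr, k=3))       # -> [0 0 1 1 2 2]
```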
| 2,023 |
Computation and Language
|
Using Meta-Knowledge Mined from Identifiers to Improve Intent
Recognition in Neuro-Symbolic Algorithms
|
In this paper we explore the use of meta-knowledge embedded in intent
identifiers to improve intent recognition in conversational systems. As
evidenced by the analysis of thousands of real-world chatbots and in interviews
with professional chatbot curators, developers and domain experts tend to
organize the set of chatbot intents by identifying them using proto-taxonomies,
i.e., meta-knowledge connecting high-level, symbolic concepts shared across
different intents. By using neuro-symbolic algorithms able to incorporate such
proto-taxonomies to expand intent representation, we show that such mined
meta-knowledge can improve accuracy in intent recognition. In a dataset with
intents and example utterances from hundreds of professional chatbots, we saw
improvements of more than 10% in the equal error rate (EER) in almost a third
of the chatbots when we apply those algorithms in comparison to a baseline of
the same algorithms without the meta-knowledge. The meta-knowledge proved to be
even more relevant in detecting out-of-scope utterances, decreasing the false
acceptance rate (FAR) by more than 20% in about half of the chatbots. The
experiments demonstrate that such symbolic meta-knowledge structures can be
effectively mined and used by neuro-symbolic algorithms, apparently by
incorporating into the learning process higher-level structures of the problem
being solved. Based on these results, we also discuss how the use of mined
meta-knowledge can be an answer for the challenge of knowledge acquisition in
neuro-symbolic algorithms.
| 2,020 |
Computation and Language
|
Show or Tell? Demonstration is More Robust to Changes in Shared
Perception than Explanation
|
Successful teaching entails a complex interaction between a teacher and a
learner. The teacher must select and convey information based on what they
think the learner perceives and believes. Teaching always involves misaligned
beliefs, but studies of pedagogy often focus on situations where teachers and
learners share perceptions. Nonetheless, a teacher and learner may not always
experience or attend to the same aspects of the environment. Here, we study how
misaligned perceptions influence communication. We hypothesize that the
efficacy of different forms of communication depends on the shared perceptual
state between teacher and learner. We develop a cooperative teaching game to
test whether concrete mediums (demonstrations, or "showing") are more robust
than abstract ones (language, or "telling") when the teacher and learner are
not perceptually aligned. We find evidence that (1) language-based teaching is
more affected by perceptual misalignment, but (2) demonstration-based teaching
is less likely to convey nuanced information. We discuss implications for human
pedagogy and machine learning.
| 2,020 |
Computation and Language
|
You Are What You Tweet: Profiling Users by Past Tweets to Improve Hate
Speech Detection
|
Hate speech detection research has predominantly focused on purely
content-based methods, without exploiting any additional context. We briefly
critique pros and cons of this task formulation. We then investigate profiling
users by their past utterances as an informative prior to better predict
whether new utterances constitute hate speech. To evaluate this, we augment
three Twitter hate speech datasets with additional timeline data, then embed
this additional context into a strong baseline model. Promising results suggest
merit for further investigation, though analysis is complicated by differences
in annotation schemes and processes, as well as Twitter API limitations and
data sharing policies.
| 2,022 |
Computation and Language
|
Exploring Thematic Coherence in Fake News
|
The spread of fake news remains a serious global issue; understanding and
curtailing it is paramount. One way of differentiating between deceptive and
truthful stories is by analyzing their coherence. This study explores the use
of topic models to analyze the coherence of cross-domain news shared online.
Experimental results on seven cross-domain datasets demonstrate that fake news
shows a greater thematic deviation between its opening sentences and its
remainder.
| 2,020 |
Computation and Language
|
Building and Using Personal Knowledge Graph to Improve Suicidal Ideation
Detection on Social Media
|
A large number of individuals are suffering from suicidal ideation in the
world. There are a number of causes behind why an individual might suffer from
suicidal ideation. As the most popular platform for self-expression, emotion
release, and personal interaction, individuals may exhibit a number of symptoms
of suicidal ideation on social media. Nevertheless, challenges from both data
and knowledge aspects remain as obstacles, constraining the social media-based
detection performance. Data implicitness and sparsity make it difficult to
discover the inner true intentions of individuals based on their posts.
Inspired by psychological studies, we build and unify a high-level
suicide-oriented knowledge graph with deep neural networks for suicidal
ideation detection on social media. We further design a two-layered attention
mechanism to explicitly reason and establish key risk factors to individual's
suicidal ideation. The performance study on microblog and Reddit shows that: 1)
with the constructed personal knowledge graph, the social media-based suicidal
ideation detection can achieve over 93% accuracy; and 2) among the six
categories of personal factors, post, personality, and experience are the top-3
key indicators. Under these categories, posted text, stress level, stress
duration, posted image, and ruminant thinking contribute to one's suicidal
ideation detection.
| 2,020 |
Computation and Language
|
LIREx: Augmenting Language Inference with Relevant Explanation
|
Natural language explanations (NLEs) are a special form of data annotation in
which annotators identify rationales (most significant text tokens) when
assigning labels to data instances, and write out explanations for the labels
in natural language based on the rationales. NLEs have been shown to capture
human reasoning better, but they have not proven as beneficial for natural language inference
(NLI). In this paper, we analyze two primary flaws in the way NLEs are
currently used to train explanation generators for language inference tasks. We
find that the explanation generators do not take into account the variability
inherent in human explanation of labels, and that the current explanation
generation models generate spurious explanations. To overcome these
limitations, we propose a novel framework, LIREx, that incorporates both a
rationale-enabled explanation generator and an instance selector to select only
relevant, plausible NLEs to augment NLI models. When evaluated on the
standardized SNLI data set, LIREx achieved an accuracy of 91.87%, an
improvement of 0.32 over the baseline and matching the best-reported
performance on the data set. It also achieves significantly better performance
than previous studies when transferred to the out-of-domain MultiNLI data set.
Qualitative analysis shows that LIREx generates flexible, faithful, and
relevant NLEs that allow the model to be more robust to spurious explanations.
The code is available at https://github.com/zhaoxy92/LIREx.
| 2,020 |
Computation and Language
|
MELINDA: A Multimodal Dataset for Biomedical Experiment Method
Classification
|
We introduce a new dataset, MELINDA, for Multimodal biomEdicaL experImeNt
methoD clAssification. The dataset is collected in a fully automated distant
supervision manner, where the labels are obtained from an existing curated
database, and the actual contents are extracted from papers associated with
each of the records in the database. We benchmark various state-of-the-art NLP
and computer vision models, including unimodal models which only take either
caption texts or images as inputs, and multimodal models. Extensive experiments
and analysis show that multimodal models, despite outperforming unimodal ones,
still need improvements especially on a less-supervised way of grounding visual
concepts with languages, and better transferability to low resource domains. We
release our dataset and the benchmarks to facilitate future research in
multimodal learning, especially to motivate targeted improvements for
applications in scientific domains.
| 2,020 |
Computation and Language
|
Do You Do Yoga? Understanding Twitter Users' Types and Motivations using
Social and Textual Information
|
Leveraging social media data to understand people's lifestyle choices is an
exciting domain to explore but requires a multiview formulation of the data. In
this paper, we propose a joint embedding model based on the fusion of neural
networks with attention mechanism by incorporating social and textual
information of users to understand their activities and motivations. We use
well-being related tweets from Twitter, focusing on 'Yoga'. We demonstrate our
model on two downstream tasks: (i) finding the user type, i.e., practitioner,
promotional (promoting a yoga studio/gym), or other; and (ii) finding the user
motivation, i.e., health benefits, spirituality, or a love of tweeting/retweeting
about yoga without practicing it.
| 2,021 |
Computation and Language
|
Literature Retrieval for Precision Medicine with Neural Matching and
Faceted Summarization
|
Information retrieval (IR) for precision medicine (PM) often involves looking
for multiple pieces of evidence that characterize a patient case. This
typically includes at least the name of a condition and a genetic variation
that applies to the patient. Other factors such as demographic attributes,
comorbidities, and social determinants may also be pertinent. As such, the
retrieval problem is often formulated as ad hoc search but with multiple facets
(e.g., disease, mutation) that may need to be incorporated. In this paper, we
present a document reranking approach that combines neural query-document
matching and text summarization toward such retrieval scenarios. Our
architecture builds on the basic BERT model with three specific components for
reranking: (a). document-query matching (b). keyword extraction and (c).
facet-conditioned abstractive summarization. The outcomes of (b) and (c) are
used to essentially transform a candidate document into a concise summary that
can be compared with the query at hand to compute a relevance score. Component
(a) directly generates a matching score of a candidate document for a query.
The full architecture benefits from the complementary potential of
document-query matching and the novel document transformation approach based on
summarization along PM facets. Evaluations using NIST's TREC-PM track datasets
(2017--2019) show that our model achieves state-of-the-art performance. To
foster reproducibility, our code is made available here:
https://github.com/bionlproc/text-summ-for-doc-retrieval.
| 2,020 |
Computation and Language
|
Assessing COVID-19 Impacts on College Students via Automated Processing
of Free-form Text
|
In this paper, we report experimental results on assessing the impact of
COVID-19 on college students by processing free-form texts generated by them.
By free-form texts, we mean textual entries posted by college students
(enrolled in a four-year US college) via an app specifically designed to assess
and improve their mental health. Using a dataset comprising more than 9,000
textual entries from 1,451 students collected over four months (split between
pre and post COVID-19), and established NLP techniques, a) we assess how the
topics of most interest to students change between pre and post COVID-19, and b) we
assess the sentiments that students exhibit in each topic between pre and post
COVID-19. Our analysis reveals that topics like Education became noticeably
less important to students post COVID-19, while Health became much more
trending. We also found that across all topics, negative sentiment among
students post COVID-19 was much higher compared to pre-COVID-19. We expect our
study to have an impact on policy-makers in higher education across several
spectra, including college administrators, teachers, parents, and mental health
counselors.
| 2,020 |
Computation and Language
|
InSRL: A Multi-view Learning Framework Fusing Multiple Information
Sources for Distantly-supervised Relation Extraction
|
Distant supervision makes it possible to automatically label bags of
sentences for relation extraction by leveraging knowledge bases, but suffers
from the sparse and noisy bag issues. Additional information sources are
urgently needed to supplement the training data and overcome these issues. In
this paper, we introduce two widely-existing sources in knowledge bases, namely
entity descriptions, and multi-grained entity types to enrich the distantly
supervised data. We treat these information sources as multiple views and fuse
them to construct an intact space with sufficient information. An end-to-end
multi-view learning framework is proposed for relation extraction via Intact
Space Representation Learning (InSRL), and the representations of single views
are jointly learned simultaneously. Moreover, inner-view and cross-view
attention mechanisms are used to highlight important information on different
levels on an entity-pair basis. The experimental results on a popular benchmark
dataset demonstrate the necessity of additional information sources and the
effectiveness of our framework. We will release the implementation of our model
and dataset with multiple information sources after the anonymized review
phase.
| 2,020 |
Computation and Language
|
Interactive Question Clarification in Dialogue via Reinforcement
Learning
|
Coping with ambiguous questions has been a perennial problem in real-world
dialogue systems. Although clarification by asking questions is a common form
of human interaction, it is hard to define appropriate questions to elicit more
specific intents from a user. In this work, we propose a reinforcement model to
clarify ambiguous questions by suggesting refinements of the original query. We
first formulate a collection partitioning problem to select a set of labels
enabling us to distinguish potential unambiguous intents. We list the chosen
labels as intent phrases to the user for further confirmation. The selected
label along with the original user query then serves as a refined query, for
which a suitable response can more easily be identified. The model is trained
using reinforcement learning with a deep policy network. We evaluate our model
based on real-world user clicks and demonstrate significant improvements across
several different experiments.
| 2,020 |
Computation and Language
|
Unsupervised Learning of Discourse Structures using a Tree Autoencoder
|
Discourse information, as postulated by popular discourse theories, such as
RST and PDTB, has been shown to improve an increasing number of downstream NLP
tasks, showing positive effects and synergies of discourse with important
real-world applications. While methods for incorporating discourse become more
and more sophisticated, the growing need for robust and general discourse
structures has not been sufficiently met by current discourse parsers, usually
trained on small scale datasets in a strictly limited number of domains. This
makes the prediction for arbitrary tasks noisy and unreliable. The overall
resulting lack of high-quality, high-quantity discourse trees poses a severe
limitation to further progress. In order to alleviate this shortcoming, we
propose a new strategy to generate tree structures in a task-agnostic,
unsupervised fashion by extending a latent tree induction framework with an
auto-encoding objective. The proposed approach can be applied to any
tree-structured objective, such as syntactic parsing, discourse parsing and
others. However, due to the especially difficult annotation process to generate
discourse trees, we initially develop a method to generate larger and more
diverse discourse treebanks. In this paper, we infer general tree structures of
natural text in multiple domains, showing promising results on a
diverse set of tasks.
| 2,020 |
Computation and Language
|
CIF-based Collaborative Decoding for End-to-end Contextual Speech
Recognition
|
End-to-end (E2E) models have achieved promising results on multiple speech
recognition benchmarks, and shown the potential to become the mainstream.
However, the unified structure and the E2E training hamper injecting contextual
information into them for contextual biasing. Though contextual LAS (CLAS)
gives an excellent all-neural solution, the degree of biasing to given context
information is not explicitly controllable. In this paper, we focus on
incorporating context information into the continuous integrate-and-fire (CIF)
based model that supports contextual biasing in a more controllable fashion.
Specifically, an extra context processing network is introduced to extract
contextual embeddings, integrate acoustically relevant context information and
decode the contextual output distribution, thus forming a collaborative
decoding with the decoder of the CIF-based model. Evaluated on the named entity
rich evaluation sets of HKUST/AISHELL-2, our method brings relative character
error rate (CER) reduction of 8.83%/21.13% and relative named entity character
error rate (NE-CER) reduction of 40.14%/51.50% when compared with a strong
baseline. Besides, it keeps the performance on original evaluation set without
degradation.
| 2,021 |
Computation and Language
|
ReferentialGym: A Nomenclature and Framework for Language Emergence &
Grounding in (Visual) Referential Games
|
Natural languages are powerful tools wielded by human beings to communicate
information and co-operate towards common goals. Their values lie in some main
properties like compositionality, hierarchy and recurrent syntax, which
computational linguists have been researching the emergence of in artificial
languages induced by language games. Only relatively recently, the AI community
has started to investigate language emergence and grounding working towards
better human-machine interfaces. For instance, interactive/conversational AI
assistants that are able to relate their vision to the ongoing conversation.
This paper provides two contributions to this research field. Firstly, a
nomenclature is proposed to understand the main initiatives in studying
language emergence and grounding, accounting for the variations in assumptions
and constraints. Secondly, a PyTorch based deep learning framework is
introduced, entitled ReferentialGym, which is dedicated to furthering the
exploration of language emergence and grounding. By providing baseline
implementations of major algorithms and metrics, in addition to many different
features and approaches, ReferentialGym attempts to ease the entry barrier to
the field and provide the community with common implementations.
| 2,020 |
Computation and Language
|
Ultra-Fast, Low-Storage, Highly Effective Coarse-grained Selection in
Retrieval-based Chatbot by Using Deep Semantic Hashing
|
We study the coarse-grained selection module in retrieval-based chatbot.
Coarse-grained selection is a basic module in a retrieval-based chatbot, which
constructs a rough candidate set from the whole database to speed up the
interaction with customers. So far, there are two kinds of approaches for
coarse-grained selection module: (1) sparse representation; (2) dense
representation. To the best of our knowledge, there is no systematic comparison
between these two approaches in retrieval-based chatbots, and which kind of
method is better in real scenarios is still an open question. In this paper, we
first systematically compare these two methods from four aspects: (1)
effectiveness; (2) index storage; (3) search time cost; (4) human evaluation.
Extensive experiment results demonstrate that dense representation method
significantly outperforms the sparse representation, but costs more time and
storage occupation. In order to overcome these fatal weaknesses of dense
representation method, we propose an ultra-fast, low-storage, and highly
effective Deep Semantic Hashing Coarse-grained selection method, called DSHC
model. Specifically, in our proposed DSHC model, a hashing optimizing module
that consists of two autoencoder models is stacked on a trained dense
representation model, and three loss functions are designed to optimize it. The
hash codes provided by hashing optimizing module effectively preserve the rich
semantic and similarity information in dense vectors. Extensive experiment
results prove that, our proposed DSHC model can achieve much faster speed and
lower storage than sparse representation, with limited performance loss
compared with dense representation. Besides, our source codes have been
publicly released for future research.
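An illustrative sketch of the hashing optimizing module (our reading of the abstract; the loss terms and dimensions are assumptions): an autoencoder bottleneck on top of the frozen dense vectors is trained with reconstruction, similarity-preservation, and quantization losses, and the sign of the bottleneck yields the binary code used for fast Hamming-distance search.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashHead(nn.Module):
    def __init__(self, dense_dim=768, code_bits=128):
        super().__init__()
        self.enc = nn.Linear(dense_dim, code_bits)
        self.dec = nn.Linear(code_bits, dense_dim)

    def forward(self, dense: torch.Tensor):
        h = torch.tanh(self.enc(dense))       # relaxed code in (-1, 1)
        return h, self.dec(h)

def hashing_loss(dense, h, recon):
    sim_target = F.normalize(dense, dim=-1) @ F.normalize(dense, dim=-1).T
    sim_code = F.normalize(h, dim=-1) @ F.normalize(h, dim=-1).T
    return (F.mse_loss(recon, dense)              # keep the dense information
            + F.mse_loss(sim_code, sim_target)    # preserve pairwise similarity
            + (h.abs() - 1).pow(2).mean())        # push codes toward +/-1

head = HashHead()
dense = torch.randn(16, 768)                      # vectors from the trained dense model
h, recon = head(dense)
print(hashing_loss(dense, h, recon).item())
codes = (h.detach() > 0)                          # binary codes for Hamming search
```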
| 2,020 |
Computation and Language
|
Hate Speech detection in the Bengali language: A dataset and its
baseline evaluation
|
Social media sites such as YouTube and Facebook have become an integral part
of everyone's life and in the last few years, hate speech in the social media
comment section has increased rapidly. Detection of hate speech on social media
websites faces a variety of challenges, including small imbalanced datasets,
finding an appropriate model, and choosing a feature analysis method.
Furthermore, this problem is more severe for the Bengali-speaking community due
to the lack of gold standard labelled datasets. This paper presents a new
dataset of 30,000 user comments tagged by crowdsourcing and verified by
experts. All the comments are collected from YouTube and Facebook comment
sections and classified into seven categories: sports, entertainment, religion,
politics, crime, celebrity and TikTok & meme. A total of 50 annotators
annotated each comment three times and the majority vote was taken as the final
annotation. We have also conducted baseline experiments with several deep
learning models, along with extensive pre-trained Bengali word embeddings such
as Word2Vec, FastText and BengFastText, on this dataset to facilitate future
research opportunities. The experiments illustrate that although all deep
learning models performed well, SVM achieved the best result with 87.5%
accuracy. Our core contribution is to make this benchmark dataset available and
accessible to facilitate further research in the field of Bengali hate speech
detection.
| 2,020 |
Computation and Language
|
Five Psycholinguistic Characteristics for Better Interaction with Users
|
When two people pay attention to each other and are interested in what the
other has to say or write, they almost instantly adapt their writing/speaking
style to match the other. For a successful interaction with a user, chatbots
and dialogue systems should be able to do the same. We propose a framework
consisting of five psycholinguistic textual characteristics for better
human-computer interaction. We describe the annotation processes used for
collecting the data, and benchmark five binary classification tasks,
experimenting with different training sizes and model architectures. The best
architectures noticeably outperform several baselines and achieve
macro-averaged F$_1$-scores between 72\% and 96\% depending on the language and
the task. The proposed framework proved to be fairly easy to model for various
languages even with a small amount of manually annotated data if the right
architectures are used.
| 2,022 |
Computation and Language
|
MIX : a Multi-task Learning Approach to Solve Open-Domain Question
Answering
|
In this paper, we introduce MIX : a multi-task deep learning approach to
solve Open-Domain Question Answering. First, we design our system as a
multi-stage pipeline made of 3 building blocks : a BM25-based Retriever, to
reduce the search space; RoBERTa based Scorer and Extractor, to rank retrieved
paragraphs and extract relevant spans of text, respectively. Finally, we
further improve the computational efficiency of our system to deal with the
scalability challenge: thanks to multi-task learning, we parallelize the
closely related tasks solved by the Scorer and the Extractor. Our system is on
par with state-of-the-art performance on the SQuAD-Open benchmark while being
conceptually simpler.
| 2,021 |
Computation and Language
|
BERT Goes Shopping: Comparing Distributional Models for Product
Representations
|
Word embeddings (e.g., word2vec) have been applied successfully to eCommerce
products through~\textit{prod2vec}. Inspired by the recent performance
improvements on several NLP tasks brought by contextualized embeddings, we
propose to transfer BERT-like architectures to eCommerce: our model --
~\textit{Prod2BERT} -- is trained to generate representations of products
through masked session modeling. Through extensive experiments over multiple
shops, different tasks, and a range of design choices, we systematically
compare the accuracy of~\textit{Prod2BERT} and~\textit{prod2vec} embeddings:
while~\textit{Prod2BERT} is found to be superior in several scenarios, we
highlight the importance of resources and hyperparameters in the best
performing models. Finally, we provide guidelines to practitioners for training
embeddings under a variety of computational and data constraints.
| 2,021 |
Computation and Language
|
Continual Lifelong Learning in Natural Language Processing: A Survey
|
Continual learning (CL) aims to enable information systems to learn from a
continuous data stream across time. However, it is difficult for existing deep
learning architectures to learn a new task without largely forgetting
previously acquired knowledge. Furthermore, CL is particularly challenging for
language learning, as natural language is ambiguous: it is discrete,
compositional, and its meaning is context-dependent. In this work, we look at
the problem of CL through the lens of various NLP tasks. Our survey discusses
major challenges in CL and current methods applied in neural network models. We
also provide a critical review of the existing CL evaluation methods and
datasets in NLP. Finally, we present our outlook on future research directions.
| 2,020 |
Computation and Language
|
Named Entity Recognition in the Legal Domain using a Pointer Generator
Network
|
Named Entity Recognition (NER) is the task of identifying and classifying
named entities in unstructured text. In the legal domain, named entities of
interest may include the case parties, judges, names of courts, case numbers,
references to laws etc. We study the problem of legal NER with noisy text
extracted from PDF files of filed court cases from US courts. The "gold
standard" training data for NER systems provide annotation for each token of
the text with the corresponding entity or non-entity label. We work with only
partially complete training data, which differ from the gold standard NER data
in that the exact location of the entities in the text is unknown and the
entities may contain typos and/or OCR mistakes. To overcome the challenges of
our noisy training data, e.g. text extraction errors and/or typos and unknown
label indices, we formulate the NER task as a text-to-text sequence generation
task and train a pointer generator network to generate the entities in the
document rather than label them. We show that the pointer generator can be
effective for NER in the absence of gold standard data and outperforms the
common NER neural network architectures in long legal documents.
| 2,020 |
Computation and Language
|
Can Transformers Reason About Effects of Actions?
|
A recent work has shown that transformers are able to "reason" with facts and
rules in a limited setting where the rules are natural language expressions of
conjunctions of conditions implying a conclusion. Since this suggests that
transformers may be used for reasoning with knowledge given in natural
language, we do a rigorous evaluation of this with respect to a common form of
knowledge and its corresponding reasoning -- the reasoning about effects of
actions. Reasoning about action and change has been a top focus in the
knowledge representation subfield of AI from the early days of AI and more
recently it has been a highlight aspect in common sense question answering. We
consider four action domains (Blocks World, Logistics, Dock-Worker-Robots and a
Generic Domain) in natural language and create QA datasets that involve
reasoning about the effects of actions in these domains. We investigate the
ability of transformers to (a) learn to reason in these domains and (b)
transfer that learning from the generic domains to the other domains.
| 2,020 |
Computation and Language
|
NeurST: Neural Speech Translation Toolkit
|
NeurST is an open-source toolkit for neural speech translation. The toolkit
mainly focuses on end-to-end speech translation, which is easy to use, modify,
and extend to advanced speech translation research and products. NeurST aims at
facilitating the speech translation research for NLP researchers and building
reliable benchmarks for this field. It provides step-by-step recipes for
feature extraction, data preprocessing, distributed training, and evaluation.
In this paper, we will introduce the framework design of NeurST and show
experimental results for different benchmark datasets, which can be regarded as
reliable baselines for future research. The toolkit is publicly available at
https://github.com/bytedance/neurst/ and we will continuously update the
performance of NeurST with other counterparts and studies at
https://st-benchmark.github.io/.
| 2,021 |
Computation and Language
|
Exploring Fluent Query Reformulations with Text-to-Text Transformers and
Reinforcement Learning
|
Query reformulation aims to alter noisy or ambiguous text sequences into
coherent ones closer to natural language questions. This is to prevent errors
from propagating in a client-facing pipeline and promote better communication
with users. Besides, it is crucial to maintain performance in downstream
environments like question answering when rephrased queries are given as input.
We show that under the previous framework (AQA), attempts to alter RL
algorithms do not bring significant benefits to either reward acquisition or
sequence fluency. Instead, we leverage a query-reformulating text-to-text
transformer (QRT5) and apply policy-based RL algorithms to further nudge this
reformulator and obtain better answers downstream by generating
reward-acquiring query trajectories. QRT5 shows better sample efficiency in RL
to achieve the same level of QA performance as the previous approach. It can
generate reformulations with more readability based on query well-formedness
evaluations and can generalize to out-of-sample data. Our framework is
demonstrated to be flexible, allowing reward signals to be sourced from
different downstream environments such as intent classification.
| 2,021 |
Computation and Language
|
Leveraging Event Specific and Chunk Span features to Extract COVID
Events from tweets
|
Twitter has acted as an important source of information during disasters and
pandemic, especially during the times of COVID-19. In this paper, we describe
our system entry for WNUT 2020 Shared Task-3. The task was aimed at automating
the extraction of a variety of COVID-19 related events from Twitter, such as
individuals who recently contracted the virus, someone with symptoms who were
denied testing and believed remedies against the infection. The system consists
of separate multi-task models for slot-filling subtasks and
sentence-classification subtasks while leveraging the useful sentence-level
information for the corresponding event. The system uses COVID-Twitter-Bert
with attention-weighted pooling of candidate slot-chunk features to capture the
useful information chunks. The system ranks 1st on the leaderboard with an F1
of 0.6598, without using any ensembles or additional datasets. The code and
trained models are available at this https URL.
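A hedged sketch of attention-weighted pooling over a candidate slot chunk (shapes and layer names are our own, not the released system): token states inside the chunk span are scored, softmax-normalized, and summed into a single chunk feature for the slot classifier.
```python
import torch
import torch.nn as nn

class ChunkAttentionPool(nn.Module):
    def __init__(self, hidden: int):
        super().__init__()
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, token_states: torch.Tensor, span: tuple) -> torch.Tensor:
        # token_states: (T, H) encoder outputs for one tweet; span: (start, end) inclusive
        start, end = span
        chunk = token_states[start:end + 1]                 # (L, H)
        weights = torch.softmax(self.scorer(chunk), dim=0)  # (L, 1)
        return (weights * chunk).sum(dim=0)                 # (H,) pooled chunk feature

pool = ChunkAttentionPool(hidden=768)
states = torch.randn(40, 768)        # e.g., COVID-Twitter-BERT outputs for one tweet
print(pool(states, (10, 14)).shape)  # torch.Size([768])
```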
| 2,020 |
Computation and Language
|
Attention-Based LSTM Network for COVID-19 Clinical Trial Parsing
|
COVID-19 clinical trial design is a critical task in developing therapeutics
for the prevention and treatment of COVID-19. In this study, we apply a deep
learning approach to extract eligibility criteria variables from COVID-19
trials to enable quantitative analysis of trial design and optimization.
Specifically, we train attention-based bidirectional Long Short-Term Memory
(Att-BiLSTM) models and use the optimal model to extract entities (i.e.,
variables) from the eligibility criteria of COVID-19 trials. We compare the
performance of Att-BiLSTM with traditional ontology-based method. The result on
a benchmark dataset shows that Att-BiLSTM outperforms the ontology model.
Att-BiLSTM achieves a precision of 0.942, recall of 0.810, and F1 of 0.871,
while the ontology model only achieves a precision of 0.715, recall of 0.659,
and F1 of 0.686. Our analyses demonstrate that Att-BiLSTM is an effective
approach for characterizing patient populations in COVID-19 clinical trials.
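A compact sketch of an attention-based BiLSTM tagger in the spirit of the model above (hyperparameters and the tag set are placeholders, not the authors' configuration): BiLSTM states are refined by self-attention and a linear layer emits BIO tags over the eligibility-criteria tokens.
```python
import torch
import torch.nn as nn

class AttBiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, emb=100, hidden=128, n_tags=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.tagger = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h, _ = self.bilstm(self.embed(token_ids))     # (B, T, 2H)
        a, _ = self.attn(h, h, h)                     # self-attention over the criterion
        return self.tagger(h + a)                     # (B, T, n_tags) BIO logits

model = AttBiLSTMTagger()
logits = model(torch.randint(0, 5000, (2, 30)))       # two tokenized criteria
print(logits.argmax(-1).shape)                        # torch.Size([2, 30])
```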
| 2,020 |
Computation and Language
|
Mention Extraction and Linking for SQL Query Generation
|
On the WikiSQL benchmark, state-of-the-art text-to-SQL systems typically take
a slot-filling approach by building several dedicated models for each type of
slots. Such modularized systems are not only complex butalso of limited
capacity for capturing inter-dependencies among SQL clauses. To solve these
problems, this paper proposes a novel extraction-linking approach, where a
unified extractor recognizes all types of slot mentions appearing in the
question sentence before a linker maps the recognized columns to the table
schema to generate executable SQL queries. Trained with automatically generated
annotations, the proposed method achieves the first place on the WikiSQL
benchmark.
| 2,020 |
Computation and Language
|
Technical Progress Analysis Using a Dynamic Topic Model for Technical
Terms to Revise Patent Classification Codes
|
Japanese patents are assigned a patent classification code, FI (File Index),
that is unique to Japan. FI is a subdivision of the IPC, an international
patent classification code, that is related to Japanese technology. FIs are
revised to keep up with technological developments. These revisions have
already established more than 30,000 new FIs since 2006. However, these
revisions require a lot of time and workload. Moreover, these revisions are not
automated and are thus inefficient. Therefore, using machine learning to assist
in the revision of patent classification codes (FI) will lead to improved
accuracy and efficiency. This study analyzes patent documents from this new
perspective of assisting in the revision of patent classification codes with
machine learning. To analyze time-series changes in patents, we used the
dynamic topic model (DTM), which is an extension of the latent Dirichlet
allocation (LDA). Also, unlike English, the Japanese language requires
morphological analysis. Patents contain many technical words that are not used
in everyday life, so morphological analysis using a common dictionary is not
sufficient. Therefore, we used a technique for extracting technical terms from
text. After extracting technical terms, we applied them to DTM. In this study,
we determined the technological progress of the lighting class F21 for 14 years
and compared it with the actual revision of patent classification codes. In
other words, we extracted technical terms from Japanese patents and applied DTM
to determine the progress of Japanese technology. Then, we analyzed the results
from the new perspective of revising patent classification codes with machine
learning. As a result, we found that patents whose topics were on the rise were
judged to cover new technologies.
| 2,020 |
Computation and Language
|
Regularized Attentive Capsule Network for Overlapped Relation Extraction
|
Distantly supervised relation extraction has been widely applied in knowledge
base construction due to its lower requirement for human effort. However, the
automatically established training datasets in distant supervision contain
low-quality instances with noisy words and overlapped relations, introducing
great challenges to the accurate extraction of relations. To address this
problem, we propose a novel Regularized Attentive Capsule Network (RA-CapNet)
to better identify highly overlapped relations in each informal sentence. To
discover multiple relation features in an instance, we embed multi-head
attention into the capsule network as the low-level capsules, where the
subtraction of two entities acts as a new form of relation query to select
salient features regardless of their positions. To further discriminate
overlapped relation features, we devise disagreement regularization to
explicitly encourage the diversity among both multiple attention heads and
low-level capsules. Extensive experiments conducted on widely used datasets
show that our model achieves significant improvements in relation extraction.
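A hedged sketch of a disagreement regularizer consistent with the description above (our formulation, not the released code): the average pairwise cosine similarity among the representations produced by different attention heads or low-level capsules is penalized, pushing them to capture distinct relation features.
```python
import torch
import torch.nn.functional as F

def disagreement_penalty(head_outputs: torch.Tensor) -> torch.Tensor:
    """head_outputs: (B, n_heads, d) per-head relation representations."""
    normed = F.normalize(head_outputs, dim=-1)
    sim = torch.bmm(normed, normed.transpose(1, 2))        # (B, H, H) cosine similarities
    n = sim.size(1)
    off_diag = sim - torch.eye(n, device=sim.device)        # ignore self-similarity
    return off_diag.abs().sum(dim=(1, 2)).mean() / (n * (n - 1))

heads = torch.randn(4, 8, 64, requires_grad=True)           # 8 heads per instance
penalty = disagreement_penalty(heads)
penalty.backward()         # added (with a small coefficient) to the relation loss
print(penalty.item())
```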
| 2,020 |
Computation and Language
|
Deep Open Intent Classification with Adaptive Decision Boundary
|
Open intent classification is a challenging task in dialogue systems. On the
one hand, it should ensure the quality of known intent identification. On the
other hand, it needs to detect the open (unknown) intent without prior
knowledge. Current models are limited in finding the appropriate decision
boundary to balance the performances of both known intents and the open intent.
In this paper, we propose a post-processing method to learn the adaptive
decision boundary (ADB) for open intent classification. We first utilize the
labeled known intent samples to pre-train the model. Then, we automatically
learn the adaptive spherical decision boundary for each known class with the
aid of well-trained features. Specifically, we propose a new loss function to
balance both the empirical risk and the open space risk. Our method does not
need open intent samples and is free from modifying the model architecture.
Moreover, our approach remains surprisingly robust with less labeled data and
fewer known intents. Extensive experiments on three benchmark datasets show
that our method yields significant improvements compared with the
state-of-the-art methods. The codes are released at
https://github.com/thuiar/Adaptive-Decision-Boundary.
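A simplified sketch of an adaptive-boundary loss in the spirit of ADB (our own formulation, not the released implementation): each known class keeps a centroid and a learnable radius; samples falling outside their class ball pull the radius outward (empirical risk), samples inside push it inward (open space risk), and at test time anything outside every ball is treated as the open intent.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveBoundary(nn.Module):
    def __init__(self, centroids: torch.Tensor):
        super().__init__()
        self.centroids = centroids                       # (K, d) from pre-trained features
        self.raw_radius = nn.Parameter(torch.zeros(centroids.size(0)))

    def radius(self):
        return F.softplus(self.raw_radius)               # keep radii positive

    def boundary_loss(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        d = (feats - self.centroids[labels]).norm(dim=-1)    # distance to own centroid
        r = self.radius()[labels]
        outside = (d > r).float()
        return (outside * (d - r) + (1 - outside) * (r - d)).mean()

    def predict(self, feats: torch.Tensor) -> torch.Tensor:
        d = torch.cdist(feats, self.centroids)           # (N, K)
        nearest = d.argmin(dim=-1)
        open_mask = d.min(dim=-1).values > self.radius()[nearest]
        return torch.where(open_mask, torch.full_like(nearest, -1), nearest)  # -1 = open

adb = AdaptiveBoundary(centroids=torch.randn(5, 64))
loss = adb.boundary_loss(torch.randn(32, 64), torch.randint(0, 5, (32,)))
loss.backward()
print(loss.item(), adb.predict(torch.randn(3, 64)))
```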
| 2,023 |
Computation and Language
|
AdvExpander: Generating Natural Language Adversarial Examples by
Expanding Text
|
Adversarial examples are vital for exposing the vulnerability of machine
learning models. Despite the success of the most popular substitution-based
methods, which substitute characters or words in the original examples,
substitution alone is insufficient to uncover all robustness issues of models.
In this paper, we present AdvExpander, a method that crafts new adversarial
examples by expanding text, which is complementary to previous
substitution-based methods. We first utilize linguistic rules to determine
which constituents to expand and what types of modifiers to expand with. We
then expand each constituent by inserting an adversarial modifier searched from
a CVAE-based generative model that is pre-trained on a large-scale corpus. To
search adversarial modifiers, we directly search adversarial latent codes in
the latent space without tuning the pre-trained parameters. To ensure that our
adversarial examples are label-preserving for text matching, we also constrain
the modifications with a heuristic rule. Experiments on three classification
tasks verify the effectiveness of AdvExpander and the validity of our
adversarial examples. AdvExpander crafts a new type of adversarial examples by
text expansion, thereby promising to reveal new robustness issues.
| 2,020 |
Computation and Language
|
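To illustrate the latent-code search described in the AdvExpander entry above, here is a rough sketch under heavy assumptions: `decoder` (a pre-trained CVAE decoder producing a differentiable representation of a modifier) and `victim` (the frozen target model) are hypothetical callables, and the optimization simply maximizes the victim's loss with respect to the latent code while leaving all pre-trained parameters untouched.

```python
import torch
import torch.nn.functional as F

def search_adversarial_code(decoder, victim, example, true_label,
                            latent_dim=64, steps=50, lr=0.1):
    """Optimize only the latent code z; decoder and victim stay frozen."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        modifier = decoder(z)                         # assumed differentiable
        logits = victim(example, modifier)            # victim's prediction
        loss = -F.cross_entropy(logits, torch.tensor([true_label]))
        opt.zero_grad()
        loss.backward()                               # maximize the victim's error
        opt.step()
    return z.detach()
```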
A Benchmark Arabic Dataset for Commonsense Explanation
|
Language comprehension and commonsense knowledge validation by machines are
challenging tasks that are still under-researched and under-evaluated for
Arabic text. In this paper, we present a benchmark Arabic dataset for
commonsense explanation. The dataset consists of Arabic sentences that do not
make sense, each paired with three choices from which to select the one that
explains why the sentence is false. Furthermore, this paper presents baseline
results to assist and encourage future research in this field. The dataset is
distributed under the Creative Commons CC-BY-SA 4.0 license and can be found on
GitHub
| 2,020 |
Computation and Language
|
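As a purely illustrative baseline for the task format described in the Arabic commonsense-explanation entry above (not one of the paper's reported baselines), the sketch below picks, among the three candidate explanations, the one most lexically similar to the nonsensical sentence; a realistic baseline would instead score the pairs with a pre-trained Arabic language model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pick_explanation(sentence: str, choices: list) -> int:
    """Return the index (0-2) of the choice most similar to the sentence."""
    # Character n-grams sidestep the need for Arabic-specific tokenization.
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    mat = vec.fit_transform([sentence] + list(choices))
    sims = cosine_similarity(mat[0], mat[1:])[0]
    return int(sims.argmax())
```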
ReINTEL Challenge 2020: A Multimodal Ensemble Model for Detecting
Unreliable Information on Vietnamese SNS
|
In this paper, we present our methods for the unreliable information
identification task at the VLSP 2020 ReINTEL Challenge. The task is to classify
a piece of information into the reliable or unreliable category. We propose a
novel multimodal ensemble model which combines two multimodal models to solve
the task. In each multimodal model, we combined feature representations
acquired from three different data types: texts, images, and metadata. The
multimodal features are derived from three neural networks and fused for
classification. Experimental results showed that our proposed multimodal
ensemble model improved over single models in terms of ROC AUC score. We
obtained a 0.9445 AUC score on the private test set of the challenge.
| 2,020 |
Computation and Language
|
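The ReINTEL entry above fuses text, image, and metadata features and then ensembles two such models. A minimal sketch of that late-fusion idea follows; the feature dimensions, the fusion head, and probability averaging are assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, meta_dim=16):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim + meta_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 2),                       # reliable vs. unreliable
        )

    def forward(self, text_feat, image_feat, meta_feat):
        # Concatenate the three modality representations and classify.
        return self.head(torch.cat([text_feat, image_feat, meta_feat], dim=-1))

def ensemble_probs(models, text_feat, image_feat, meta_feat):
    # Average the class probabilities of the individual multimodal models.
    probs = [m(text_feat, image_feat, meta_feat).softmax(-1) for m in models]
    return torch.stack(probs).mean(dim=0)
```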
Understood in Translation, Transformers for Domain Understanding
|
Knowledge acquisition is the essential first step of any Knowledge Graph (KG)
application. This knowledge can be extracted from a given corpus (KG generation
process) or specified from an existing KG (KG specification process). Focusing
on domain-specific solutions, knowledge acquisition is a labor-intensive task
usually orchestrated and supervised by subject matter experts. Specifically,
the domain of interest is usually manually defined and then the needed
generation or extraction tools are utilized to produce the KG. Herein, we
propose a supervised machine learning method, based on Transformers, for domain
definition of a corpus. We argue why such automated definition of the domain's
structure is beneficial both in terms of construction time and quality of the
generated graph. The proposed method is extensively validated on three public
datasets (WebNLG, NYT and DocRED) by comparing it with two reference methods
based on CNNs and RNNs models. The evaluation shows the efficiency of our model
in this task. Focusing on scientific document understanding, we present a new
health-domain dataset based on publications extracted from PubMed and
successfully apply our method to it. Lastly, we demonstrate how this work
lays the foundation for fully automated and unsupervised KG generation.
| 2,020 |
Computation and Language
|
An Empirical Study of Using Pre-trained BERT Models for Vietnamese
Relation Extraction Task at VLSP 2020
|
In this paper, we present an empirical study of using pre-trained BERT models
for the relation extraction task at the VLSP 2020 Evaluation Campaign. We
applied two state-of-the-art BERT-based models: R-BERT and BERT model with
entity starts. For each model, we compared two pre-trained BERT models:
FPTAI/vibert and NlpHUST/vibert4news. We found that NlpHUST/vibert4news model
significantly outperforms FPTAI/vibert for the Vietnamese relation extraction
task. Finally, we proposed an ensemble model that combines R-BERT and BERT with
entity starts. Our proposed ensemble model slightly improved over the two
single models on the development data and the test data provided by the task
organizers.
| 2,021 |
Computation and Language
|
HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection
|
Hate speech is a challenging issue plaguing online social media. While better
models for hate speech detection are continuously being developed, there is
little research on the bias and interpretability aspects of hate speech detection. In
this paper, we introduce HateXplain, the first benchmark hate speech dataset
covering multiple aspects of the issue. Each post in our dataset is annotated
from three different perspectives: the basic, commonly used 3-class
classification (i.e., hate, offensive or normal), the target community (i.e.,
the community that has been the victim of hate speech/offensive speech in the
post), and the rationales, i.e., the portions of the post on which the
annotators' labelling decision (as hate, offensive or normal) is based. We utilize existing
state-of-the-art models and observe that even models that perform very well in
classification do not score high on explainability metrics like model
plausibility and faithfulness. We also observe that models that utilize the
human rationales for training perform better in reducing unintended bias
towards target communities. We have made our code and dataset public at
https://github.com/punyajoy/HateXplain
| 2,022 |
Computation and Language
|
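The observation above, that models trained with human rationales reduce unintended bias, can be illustrated with a small hedged sketch: alongside the usual 3-class classification loss, a token-level term pushes the model's token importance scores toward the annotated rationale mask. The score extraction, masking, and weighting are assumptions, not the HateXplain authors' code.

```python
import torch
import torch.nn.functional as F

def rationale_loss(token_scores: torch.Tensor,
                   rationale_mask: torch.Tensor,
                   pad_mask: torch.Tensor) -> torch.Tensor:
    """All tensors are (batch, seq_len). token_scores lie in (0, 1);
    rationale_mask is 1 for tokens the annotators highlighted;
    pad_mask is 1 for real (non-padding) tokens."""
    scores = token_scores.clamp(1e-6, 1 - 1e-6)
    per_token = F.binary_cross_entropy(scores, rationale_mask.float(),
                                       reduction="none")
    return (per_token * pad_mask).sum() / pad_mask.sum().clamp(min=1)

# Assumed combination: total_loss = classification_loss + lambda_r * rationale_loss(...)
```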
Learning Contextual Representations for Semantic Parsing with
Generation-Augmented Pre-Training
|
Most recently, there has been significant interest in learning contextual
representations for various NLP tasks, by leveraging large scale text corpora
to train large neural language models with self-supervised learning objectives,
such as Masked Language Model (MLM). However, based on a pilot study, we
observe three issues of existing general-purpose language models when they are
applied to text-to-SQL semantic parsers: they fail to detect column mentions in
the utterances, fail to infer column mentions from cell values, and fail to compose
complex SQL queries. To mitigate these issues, we present a model pre-training
framework, Generation-Augmented Pre-training (GAP), that jointly learns
representations of natural language utterances and table schemas by leveraging
generation models to generate pre-train data. GAP MODEL is trained on 2M
utterance-schema pairs and 30K utterance-schema-SQL triples, whose utterances
are produced by generative models. Based on experimental results, neural
semantic parsers that leverage GAP MODEL as a representation encoder obtain new
state-of-the-art results on both SPIDER and CRITERIA-TO-SQL benchmarks.
| 2,020 |
Computation and Language
|
Finding Sparse Structures for Domain Specific Neural Machine Translation
|
Neural machine translation often adopts the fine-tuning approach to adapt to
specific domains. However, unrestricted fine-tuning can easily degrade
performance on the general domain and over-fit to the target domain. To mitigate this issue, we
propose Prune-Tune, a novel domain adaptation method via gradual pruning. It
learns tiny domain-specific sub-networks during fine-tuning on new domains.
Prune-Tune alleviates the over-fitting and the degradation problem without
model modification. Furthermore, Prune-Tune is able to sequentially learn a
single network with multiple disjoint domain-specific sub-networks for multiple
domains. Empirical results show that Prune-Tune outperforms several strong
competitors on the target-domain test set without sacrificing quality on the
general domain in both single- and multi-domain settings. The
source code and data are available at https://github.com/ohlionel/Prune-Tune.
| 2,021 |
Computation and Language
|
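The sub-network idea in the Prune-Tune entry above can be illustrated with a small hedged sketch (not the released implementation linked in the abstract): a binary mask marks the few parameters reserved for the new domain, e.g. the weights pruned away after general-domain training, and during fine-tuning the gradients of all other parameters are zeroed so the general-domain model is left intact.

```python
import torch

def masked_finetune_step(model, masks, loss, optimizer):
    """masks: dict of parameter name -> binary tensor (1 = slot reserved for
    the new domain); all other weights stay frozen."""
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.grad is None:
                continue
            # Zero the gradient of general-domain weights so only the tiny
            # domain-specific sub-network is updated.
            param.grad.mul_(masks.get(name, torch.zeros_like(param)))
    optimizer.step()
```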
Uncertainty-Aware Label Refinement for Sequence Labeling
|
Conditional random fields (CRFs) for label decoding have become ubiquitous in
sequence labeling tasks. However, their local label dependencies and
inefficient Viterbi decoding remain problems to be solved. In this work, we
introduce a novel two-stage label decoding framework to model long-term label
dependencies, while being much more computationally efficient. A base model
first predicts draft labels, and then a novel two-stream self-attention model
makes refinements on these draft predictions based on long-range label
dependencies, which can achieve parallel decoding for a faster prediction. In
addition, in order to mitigate the side effects of incorrect draft labels,
Bayesian neural networks are used to indicate the labels with a high
probability of being wrong, which can greatly assist in preventing error
propagation. The experimental results on three sequence labeling benchmarks
demonstrated that the proposed method not only outperformed the CRF-based
methods but also greatly accelerated the inference process.
| 2,020 |
Computation and Language
|
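A minimal sketch of the two-stage idea in the entry above: a base tagger produces draft labels, Monte Carlo dropout (one common approximation of a Bayesian neural network) flags tokens whose predictions are uncertain, and only those positions take the refinement model's output. The interfaces of `base_model` and `refiner` and the threshold are hypothetical.

```python
import torch

@torch.no_grad()
def mc_dropout_uncertainty(base_model, inputs, n_samples=8):
    """Per-token predictive entropy via Monte Carlo dropout."""
    base_model.train()                               # keep dropout active
    probs = torch.stack([base_model(inputs).softmax(-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)                   # (batch, seq_len, labels)
    entropy = -(mean_probs * mean_probs.clamp(min=1e-8).log()).sum(-1)
    return mean_probs.argmax(-1), entropy            # draft labels, uncertainty

def refine(base_model, refiner, inputs, threshold=0.5):
    draft, uncertainty = mc_dropout_uncertainty(base_model, inputs)
    refined = refiner(inputs, draft).argmax(-1)      # parallel, long-range refinement
    # Only replace draft labels the base model is unsure about.
    return torch.where(uncertainty > threshold, refined, draft)
```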
FraCaS: Temporal Analysis
|
In this paper, we propose an implementation of temporal semantics which is
suitable for inference problems. This implementation translates syntax trees to
logical formulas, suitable for consumption by the Coq proof assistant. We
support several phenomena including: temporal references, temporal adverbs,
aspectual classes and progressives. We apply these semantics to the complete
FraCaS testsuite. We obtain an accuracy of 81 percent overall and 73 percent
for problems explicitly marked as related to temporal reference.
| 2,020 |
Computation and Language
|
On (Emergent) Systematic Generalisation and Compositionality in Visual
Referential Games with Straight-Through Gumbel-Softmax Estimator
|
The drivers of compositionality in artificial languages that emerge when two
(or more) agents play a non-visual referential game have been previously
investigated using approaches based on the REINFORCE algorithm and the (Neural)
Iterated Learning Model. Following the more recent introduction of the
\textit{Straight-Through Gumbel-Softmax} (ST-GS) approach, this paper
investigates to what extent the drivers of compositionality identified so far
in the field apply in the ST-GS context and to what extent they translate
into (emergent) systematic generalisation abilities when playing a visual
referential game. Compositionality and the generalisation abilities of the
emergent languages are assessed using topographic similarity and zero-shot
compositional tests. Firstly, we provide evidence that the test-train split
strategy significantly impacts the zero-shot compositional tests when dealing
with visual stimuli, whilst it does not when dealing with symbolic ones.
Secondly, empirical evidence shows that using the ST-GS approach with small
batch sizes and an overcomplete communication channel improves compositionality
in the emerging languages. Nevertheless, while shown to be robust with symbolic
stimuli, the effect of batch size is not so clear-cut when dealing with
visual stimuli. Our results also show that not all overcomplete communication
channels are created equal. Indeed, while increasing the maximum sentence
length is found to be beneficial to further both compositionality and
generalisation abilities, increasing the vocabulary size is found detrimental.
Finally, a lack of correlation between the language compositionality at
training-time and the agents' generalisation abilities is observed in the
context of discriminative referential games with visual stimuli. This is
similar to previous observations in the field using the generative variant with
symbolic stimuli.
| 2,020 |
Computation and Language
|
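A minimal sketch of the Straight-Through Gumbel-Softmax message channel used in referential games of the kind discussed above: the speaker emits one-hot symbols in the forward pass while gradients flow through the soft relaxation. The vocabulary size, maximum sentence length, and feature dimension are placeholders, and the agent architecture is a deliberate simplification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Speaker(nn.Module):
    def __init__(self, feat_dim=128, vocab_size=10, max_len=5, tau=1.0):
        super().__init__()
        self.tau = tau
        self.vocab_size, self.max_len = vocab_size, max_len
        self.to_logits = nn.Linear(feat_dim, max_len * vocab_size)

    def forward(self, stimulus_feat):
        logits = self.to_logits(stimulus_feat).view(-1, self.max_len, self.vocab_size)
        # hard=True: discrete one-hot symbols forward, soft gradients backward
        # (the straight-through estimator).
        return F.gumbel_softmax(logits, tau=self.tau, hard=True, dim=-1)

# message = Speaker()(torch.randn(4, 128))   # (4, max_len, vocab_size) one-hot symbols
```

With `hard=True` the listener sees genuinely discrete symbols, while the gradient is passed straight through the soft sample, which is what makes end-to-end training of the game possible without REINFORCE.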