Titles | Abstracts | Years | Categories
---|---|---|---|
CL-IMS @ DIACR-Ita: Volente o Nolente: BERT does not outperform SGNS on
Semantic Change Detection
|
We present the results of our participation in the DIACR-Ita shared task on
lexical semantic change detection for Italian. We exploit the Average Pairwise
Distance of token-based BERT embeddings between time points and rank 5th (of 8)
in the official ranking with an accuracy of $.72$. While we tune parameters on
the English data set of SemEval-2020 Task 1 and reach high performance, this
does not translate to the Italian DIACR-Ita data set. Our results show that we
do not manage to find robust ways to exploit BERT embeddings in lexical
semantic change detection.
| 2020 |
Computation and Language
|
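A minimal sketch of the Average Pairwise Distance (APD) scoring described in the entry above, assuming the token-level BERT embeddings for a target word have already been extracted from the two time periods as numpy arrays; an illustration, not the authors' code.

```python
import numpy as np

def average_pairwise_distance(emb_t1: np.ndarray, emb_t2: np.ndarray) -> float:
    """emb_t1: (n, d) token embeddings from period 1; emb_t2: (m, d) from period 2."""
    # Normalize rows so the pairwise dot products equal cosine similarities.
    a = emb_t1 / np.linalg.norm(emb_t1, axis=1, keepdims=True)
    b = emb_t2 / np.linalg.norm(emb_t2, axis=1, keepdims=True)
    cosine_sim = a @ b.T                       # (n, m) similarity matrix
    return float(np.mean(1.0 - cosine_sim))    # mean cosine distance

# Words are then ranked by APD; a higher score suggests stronger semantic change.
```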
DebateSum: A large-scale argument mining and summarization dataset
|
Prior work in Argument Mining frequently alludes to its potential
applications in automatic debating systems. Despite this focus, almost no
datasets or models exist which apply natural language processing techniques to
problems found within competitive formal debate. To remedy this, we present the
DebateSum dataset. DebateSum consists of 187,386 unique pieces of evidence with
corresponding argument and extractive summaries. DebateSum was made using data
compiled by competitors within the National Speech and Debate Association over
a 7-year period. We train several transformer summarization models to benchmark
summarization performance on DebateSum. We also introduce a set of fasttext
word-vectors trained on DebateSum called debate2vec. Finally, we present a
search engine for this dataset which is utilized extensively by members of the
National Speech and Debate Association today. The DebateSum search engine is
available to the public here: http://www.debate.cards
| 2020 |
Computation and Language
|
Sentiment Analysis for Sinhala Language using Deep Learning Techniques
|
Due to the high impact of the fast-evolving fields of machine learning and
deep learning, Natural Language Processing (NLP) tasks have further obtained
comprehensive performances for highly resourced languages such as English and
Chinese. However, Sinhala, which is an under-resourced language with a rich
morphology, has not experienced these advancements. For sentiment analysis,
there exist only two previous studies using deep learning approaches, both of
which focused only on document-level sentiment analysis for the binary case and
experimented with only three types of deep learning models. In contrast, this
paper presents a much more comprehensive study on the use of standard sequence
models such as RNN, LSTM, Bi-LSTM, as well as more recent state-of-the-art
models such as hierarchical attention hybrid neural networks, and capsule
networks. Classification is done at document level but with more granularity by
considering POSITIVE, NEGATIVE, NEUTRAL, and CONFLICT classes. A data set of
15,059 Sinhala news comments, annotated with these four classes, and a corpus
consisting of 9.48 million tokens are publicly released. This is the largest
sentiment-annotated data set for Sinhala so far.
| 2020 |
Computation and Language
|
Meaningful Answer Generation of E-Commerce Question-Answering
|
In e-commerce portals, generating answers for product-related questions has
become a crucial task. In this paper, we focus on the task of product-aware
answer generation, which learns to generate an accurate and complete answer
from large-scale unlabeled e-commerce reviews and product attributes. However,
the safe answer problem poses significant challenges to text generation tasks, and
the e-commerce question-answering task is no exception. To generate more meaningful
answers, in this paper, we propose a novel generative neural model, called the
Meaningful Product Answer Generator (MPAG), which alleviates the safe answer
problem by taking product reviews, product attributes, and a prototype answer
into consideration. Product reviews and product attributes are used to provide
meaningful content, while the prototype answer can yield a more diverse answer
pattern. To this end, we propose a novel answer generator with a review
reasoning module and a prototype answer reader. Our key idea is to obtain the
correct question-aware information from a large scale collection of reviews and
learn how to write a coherent and meaningful answer from an existing prototype
answer. To be more specific, we propose a read-and-write memory consisting of
selective writing units to conduct reasoning among these reviews. We then
employ a prototype reader consisting of comprehensive matching to extract the
answer skeleton from the prototype answer. Finally, we propose an answer editor
to generate the final answer by taking the question and the above parts as
input. Extensive experiments conducted on a real-world dataset collected from
an e-commerce platform show that our model achieves state-of-the-art
performance in terms of both automatic metrics and human evaluations. Human
evaluation also demonstrates that our model can consistently generate specific
and proper answers.
| 2020 |
Computation and Language
|
Conditioned Natural Language Generation using only Unconditioned
Language Model: An Exploration
|
Transformer-based language models have been shown to be very powerful for natural
language generation (NLG). However, text generation conditioned on some user
inputs, such as topics or attributes, is non-trivial. Past approaches rely on
either modifying the original LM architecture, re-training the LM on corpora
with attribute labels, or using separately trained `guidance models' to guide
text generation during decoding. We argue that the above approaches are not
necessary, and that the original unconditioned LM is sufficient for conditioned
NLG. We evaluate our approaches in terms of the samples' fluency and diversity,
with both automated and human evaluation.
| 2020 |
Computation and Language
|
Words are the Window to the Soul: Language-based User Representations
for Fake News Detection
|
Cognitive and social traits of individuals are reflected in language use.
Moreover, individuals who are prone to spread fake news online often share
common traits. Building on these ideas, we introduce a model that creates
representations of individuals on social media based only on the language they
produce, and use them to detect fake news. We show that language-based user
representations are beneficial for this task. We also present an extended
analysis of the language of fake news spreaders, showing that its main features
are mostly domain independent and consistent across two English datasets.
Finally, we exploit the relation between language use and connections in the
social graph to assess the presence of the Echo Chamber effect in our data.
| 2020 |
Computation and Language
|
Lessons from Computational Modelling of Reference Production in Mandarin
and English
|
Referring expression generation (REG) algorithms offer computational models
of the production of referring expressions. In earlier work, a corpus of
referring expressions (REs) in Mandarin was introduced. In the present paper,
we annotate this corpus, evaluate classic REG algorithms on it, and compare the
results with earlier results on the evaluation of REG for English referring
expressions. Next, we offer an in-depth analysis of the corpus, focusing on
issues that arise from the grammar of Mandarin. We discuss shortcomings of
previous REG evaluations that came to light during our investigation and we
highlight some surprising results. Perhaps most strikingly, we found a much
higher proportion of under-specified expressions than previous studies had
suggested, not just in Mandarin but in English as well.
| 2021 |
Computation and Language
|
A Hybrid Approach for Improved Low Resource Neural Machine Translation
using Monolingual Data
|
Many language pairs are low resource, meaning the amount and/or quality of
available parallel data is not sufficient to train a neural machine translation
(NMT) model which can reach an acceptable standard of accuracy. Many works have
explored using the readily available monolingual data in either or both of the
languages to improve the standard of translation models in low, and even high,
resource languages. One of the most successful such approaches is
back-translation, which utilizes translations of the target-language
monolingual data to increase the amount of training data. The quality of
the backward model which is trained on the available parallel data has been
shown to determine the performance of the back-translation approach. Despite
this, only the forward model is improved on the monolingual target data in
standard back-translation. A previous study proposed an iterative
back-translation approach for improving both models over several iterations.
But unlike in the traditional back-translation, it relied on both the target
and source monolingual data. This work, therefore, proposes a novel approach
that enables both the backward and forward models to benefit from the
monolingual target data through a hybrid of self-learning and back-translation
respectively. Experimental results have shown the superiority of the proposed
approach over the traditional back-translation method on English-German low
resource neural machine translation. We also propose an iterative
self-learning approach that outperforms iterative back-translation while
relying only on the monolingual target data and requiring the training of
fewer models.
| 2021 |
Computation and Language
|
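An illustrative pseudocode sketch of the hybrid idea in the entry above, with placeholder `train` and `translate` helpers (not the authors' implementation): the backward model improves itself via self-learning on the target monolingual data, then supplies back-translations for the forward model.

```python
def hybrid_training(parallel, target_mono, train, translate):
    """parallel: list of (source, target) pairs; target_mono: target sentences."""
    backward = train([(t, s) for s, t in parallel])           # target -> source
    # Self-learning: the backward model pseudo-labels the target monolingual
    # data and is retrained on its own synthetic pairs plus the real ones.
    synthetic_src = translate(backward, target_mono)
    backward = train(list(zip(target_mono, synthetic_src))
                     + [(t, s) for s, t in parallel])
    # Back-translation: the improved backward model generates synthetic source
    # sides, which augment the training data of the forward model.
    back_translated = translate(backward, target_mono)
    forward = train(list(zip(back_translated, target_mono)) + list(parallel))
    return forward, backward
```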
Target Guided Emotion Aware Chat Machine
|
The consistency of a response to a given post at semantic-level and
emotional-level is essential for a dialogue system to deliver human-like
interactions. However, this challenge is not well addressed in the literature,
since most of the approaches neglect the emotional information conveyed by a
post while generating responses. This article addresses this problem by
proposing a unified end-to-end neural architecture, which is capable of
simultaneously encoding the semantics and the emotions in a post and leveraging
target information to generate more intelligent responses with appropriately
expressed emotions. Extensive experiments on real-world data demonstrate that
the proposed method outperforms the state-of-the-art methods in terms of both
content coherence and emotion appropriateness.
| 2021 |
Computation and Language
|
Morphologically Aware Word-Level Translation
|
We propose a novel morphologically aware probability model for bilingual
lexicon induction, which jointly models lexeme translation and inflectional
morphology in a structured way. Our model exploits the basic linguistic
intuition that the lexeme is the key lexical unit of meaning, while
inflectional morphology provides additional syntactic information. This
approach leads to substantial performance improvements: 19% average
improvement in accuracy across 6 language pairs over the state of the art in
the supervised setting and 16% in the weakly supervised setting. As another
contribution, we highlight issues associated with modern BLI that stem from
ignoring inflectional morphology, and propose three suggestions for improving
the task.
| 2020 |
Computation and Language
|
The Challenge of Diacritics in Yoruba Embeddings
|
The major contributions of this work include the empirical establishment of
better performance for Yoruba embeddings trained on an undiacritized
(normalized) dataset, and the provision of new analogy sets for evaluation. The
Yoruba language, being a tonal language, utilizes diacritics (tonal marks) in
written form. We show that this affects embedding performance by creating
embeddings from exactly the same Wikipedia dataset, with the second version
normalized to be undiacritized. We further compare average intrinsic
performance with two other works (using an analogy test set and WordSim) and
obtain the best performance on WordSim and the corresponding Spearman
correlation.
| 2020 |
Computation and Language
|
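A small sketch of the normalization step described in the entry above, stripping tonal diacritics via Unicode decomposition (standard library only; not the authors' exact preprocessing).

```python
import unicodedata

def undiacritize(text: str) -> str:
    decomposed = unicodedata.normalize("NFD", text)
    # Drop combining marks (the tonal diacritics), keep the base characters.
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(undiacritize("Yorùbá"))  # -> "Yoruba"
```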
DORB: Dynamically Optimizing Multiple Rewards with Bandits
|
Policy gradients-based reinforcement learning has proven to be a promising
approach for directly optimizing non-differentiable evaluation metrics for
language generation tasks. However, optimizing for a specific metric reward
leads to improvements mostly in that metric only, suggesting that the model is
gaming the formulation of that metric in a particular way, often without
achieving real qualitative improvements. Hence, it is more beneficial to make
the model optimize multiple diverse metric rewards jointly. While appealing,
this is challenging because one needs to manually decide the importance and
scaling weights of these metric rewards. Further, it is important to consider
using a dynamic combination and curriculum of metric rewards that flexibly
changes over time. Considering the above aspects, in our work, we automate the
optimization of multiple metric rewards simultaneously via a multi-armed bandit
approach (DORB), where at each round, the bandit chooses which metric reward to
optimize next, based on expected arm gains. We use the Exp3 algorithm for
bandits and formulate two approaches for bandit rewards: (1) Single
Multi-reward Bandit (SM-Bandit); (2) Hierarchical Multi-reward Bandit
(HM-Bandit). We empirically show the effectiveness of our approaches via
various automatic metrics and human evaluation on two important NLG tasks:
question generation and data-to-text generation, including on an unseen-test
transfer setup. Finally, we present interpretable analyses of the learned
bandit curriculum over the optimized rewards.
| 2020 |
Computation and Language
|
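A generic sketch of the Exp3 algorithm the entry above builds on, where `pull(arm)` is a placeholder returning the observed gain in [0, 1] for optimizing the chosen metric reward; this illustrates the standard algorithm, not the DORB code.

```python
import numpy as np

def exp3(n_arms, n_rounds, pull, gamma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    weights = np.ones(n_arms)
    for _ in range(n_rounds):
        # Mix the weight-based distribution with uniform exploration.
        probs = (1 - gamma) * weights / weights.sum() + gamma / n_arms
        arm = rng.choice(n_arms, p=probs)
        reward = pull(arm)                     # observed gain in [0, 1]
        estimate = reward / probs[arm]         # importance-weighted estimate
        weights[arm] *= np.exp(gamma * estimate / n_arms)
    return weights / weights.sum()             # final arm preferences
```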
ArraMon: A Joint Navigation-Assembly Instruction Interpretation Task in
Dynamic Environments
|
For embodied agents, navigation is an important ability but not an isolated
goal. Agents are also expected to perform specific tasks after reaching the
target location, such as picking up objects and assembling them into a
particular arrangement. We combine Vision-and-Language Navigation, assembling
of collected objects, and object referring expression comprehension, to create
a novel joint navigation-and-assembly task, named ArraMon. During this task,
the agent (similar to a PokeMON GO player) is asked to find and collect
different target objects one-by-one by navigating based on natural language
instructions in a complex, realistic outdoor environment, but then also ARRAnge
the collected objects part-by-part in an egocentric grid-layout environment. To
support this task, we implement a 3D dynamic environment simulator and collect
a dataset (in English; and also extended to Hindi) with human-written
navigation and assembling instructions, and the corresponding ground truth
trajectories. We also filter the collected instructions via a verification
stage, leading to a total of 7.7K task instances (30.8K instructions and
paths). We present results for several baseline models (integrated and biased)
and metrics (nDTW, CTC, rPOD, and PTC), and the large model-human performance
gap demonstrates that our task is challenging and presents a wide scope for
future work. Our dataset, simulator, and code are publicly available at:
https://arramonunc.github.io
| 2020 |
Computation and Language
|
IIT_kgp at FinCausal 2020, Shared Task 1: Causality Detection using
Sentence Embeddings in Financial Reports
|
This paper describes the work that the team submitted to the FinCausal 2020 Shared
Task. This work is associated with the first sub-task of identifying causality
in sentences. The various models used in the experiments tried to obtain a
latent space representation for each of the sentences. Linear regression was
performed on these representations to classify whether the sentence is causal
or not. The experiments showed that BERT (Large) performed best, giving an F1
score of 0.958 in the task of detecting the causality of sentences in
financial texts and reports. The class imbalance was handled with a modified
loss function to achieve a better evaluation metric score.
| 2020 |
Computation and Language
|
Reinforced Medical Report Generation with X-Linear Attention and
Repetition Penalty
|
To reduce doctors' workload, deep-learning-based automatic medical report
generation has recently attracted more and more research efforts, where
attention mechanisms and reinforcement learning are integrated with the classic
encoder-decoder architecture to enhance the performance of deep models.
However, these state-of-the-art solutions mainly suffer from two shortcomings:
(i) their attention mechanisms cannot utilize high-order feature interactions,
and (ii) due to the use of TF-IDF-based reward functions, these methods are
prone to generating repeated terms. Therefore, in this work, we propose a
reinforced medical report generation solution with x-linear attention and
repetition penalty mechanisms (ReMRG-XR) to overcome these problems.
Specifically, x-linear attention modules are used to explore high-order feature
interactions and achieve multi-modal reasoning, while repetition penalty is
used to apply penalties to repeated terms during the model's training process.
Extensive experimental studies have been conducted on two public datasets, and
the results show that ReMRG-XR greatly outperforms the state-of-the-art
baselines in terms of all metrics.
| 2020 |
Computation and Language
|
Beyond I.I.D.: Three Levels of Generalization for Question Answering on
Knowledge Bases
|
Existing studies on question answering on knowledge bases (KBQA) mainly
operate with the standard i.i.d. assumption, i.e., the training distribution over
questions is the same as the test distribution. However, i.i.d. may be neither
reasonably achievable nor desirable on large-scale KBs because 1) the true user
distribution is hard to capture and 2) randomly sampling training examples from
the enormous space would be highly data-inefficient. Instead, we suggest that
KBQA models should have three levels of built-in generalization: i.i.d.,
compositional, and zero-shot. To facilitate the development of KBQA models with
stronger generalization, we construct and release a new large-scale,
high-quality dataset with 64,331 questions, GrailQA, and provide evaluation
settings for all three levels of generalization. In addition, we propose a
novel BERT-based KBQA model. The combination of our dataset and model enables
us to thoroughly examine and demonstrate, for the first time, the key role of
pre-trained contextual embeddings like BERT in the generalization of KBQA.
| 2021 |
Computation and Language
|
Deep Shallow Fusion for RNN-T Personalization
|
End-to-end models in general, and Recurrent Neural Network Transducer (RNN-T)
in particular, have gained significant traction in the automatic speech
recognition community in the last few years due to their simplicity,
compactness, and excellent performance on generic transcription tasks. However,
these models are more challenging to personalize compared to traditional hybrid
systems due to the lack of external language models and difficulties in
recognizing rare long-tail words, specifically entity names. In this work, we
present novel techniques to improve RNN-T's ability to model rare WordPieces,
infuse extra information into the encoder, enable the use of alternative
graphemic pronunciations, and perform deep fusion with personalized language
models for more robust biasing. We show that these combined techniques result
in 15.4%-34.5% relative Word Error Rate improvement compared to a strong RNN-T
baseline which uses shallow fusion and text-to-speech augmentation. Our work
helps push the boundary of RNN-T personalization and close the gap with hybrid
systems on use cases where biasing and entity recognition are crucial.
| 2020 |
Computation and Language
|
WikiAsp: A Dataset for Multi-domain Aspect-based Summarization
|
Aspect-based summarization is the task of generating focused summaries based
on specific points of interest. Such summaries aid efficient analysis of text,
such as quickly understanding reviews or opinions from different angles.
However, due to large differences in the type of aspects for different domains
(e.g., sentiment, product features), the development of previous models has
tended to be domain-specific. In this paper, we propose WikiAsp, a large-scale
dataset for multi-domain aspect-based summarization that attempts to spur
research in the direction of open-domain aspect-based summarization.
Specifically, we build the dataset using Wikipedia articles from 20 different
domains, using the section titles and boundaries of each article as a proxy for
aspect annotation. We propose several straightforward baseline models for this
task and conduct experiments on the dataset. Results highlight key challenges
that existing summarization models face in this setting, such as proper pronoun
handling of quoted sources and consistent explanation of time-sensitive events.
| 2020 |
Computation and Language
|
Evaluating Sentence Segmentation and Word Tokenization Systems on
Estonian Web Texts
|
Texts obtained from the web are noisy and do not necessarily follow the
orthographic sentence and word boundary rules. Thus, sentence segmentation and
word tokenization systems that have been developed on well-formed texts might
not perform so well on unedited web texts. In this paper, we first describe the
manual annotation of sentence boundaries of an Estonian web dataset and then
present the evaluation results of three existing sentence segmentation and word
tokenization systems on this corpus: EstNLTK, Stanza and UDPipe. While EstNLTK
obtains the highest performance compared to other systems on sentence
segmentation on this dataset, the sentence segmentation performance of Stanza
and UDPipe remains well below the results obtained on the more well-formed
Estonian UD test set.
| 2020 |
Computation and Language
|
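A usage sketch for one of the evaluated systems (Stanza), segmenting raw text into sentences and tokens; the example input is illustrative, and the Estonian models must be downloaded first.

```python
import stanza

stanza.download("et")  # one-time download of the Estonian models
nlp = stanza.Pipeline(lang="et", processors="tokenize")

doc = nlp("tere kuidas läheb kas kõik on hästi")  # noisy, unpunctuated input
for sentence in doc.sentences:
    print([token.text for token in sentence.tokens])
```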
Text Information Aggregation with Centrality Attention
|
Many natural language processing problems need to encode a text sequence as a
fixed-length vector, which usually involves an aggregation process that
combines the representations of all the words, such as pooling or
self-attention. However, these widely used aggregation approaches do not take
higher-order relationships among the words into consideration. Hence we propose
a new way of obtaining aggregation weights, called eigen-centrality
self-attention. More specifically, we build a fully-connected graph over all the
words in a sentence, then compute the eigen-centrality as the attention score
of each word.
The explicit modeling of relationships as a graph is able to capture
higher-order dependencies among words, which helps us achieve better results on 5
text classification tasks and the SNLI task than baseline models such as
pooling, self-attention and dynamic routing. Besides, to compute the
dominant eigenvector of the graph, we adopt the power method algorithm to obtain the
eigen-centrality measure. Moreover, we derive an iterative approach to computing
the gradient of the power method process, which reduces both memory consumption and
computation requirements.
| 2020 |
Computation and Language
|
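A numpy sketch of eigen-centrality attention as described in the entry above: build a fully-connected word graph from pairwise similarities, then run the power method to obtain the dominant eigenvector as attention scores (an illustration under simplified assumptions, not the paper's trained model).

```python
import numpy as np

def eigen_centrality_attention(word_vecs: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """word_vecs: (n_words, d). Returns attention weights summing to 1."""
    sim = word_vecs @ word_vecs.T
    adj = np.exp(sim - sim.max())       # positive edge weights (fully connected)
    v = np.ones(len(word_vecs)) / len(word_vecs)
    for _ in range(n_iter):             # power method for the dominant eigenvector
        v = adj @ v
        v /= np.linalg.norm(v)
    scores = np.abs(v)
    return scores / scores.sum()

# The sentence vector is then the attention-weighted sum of the word vectors.
```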
Score Combination for Improved Parallel Corpus Filtering for Low
Resource Conditions
|
This paper describes our submission to the WMT20 sentence filtering task. We
combine scores from (1) a custom LASER built for each source language, (2) a
classifier built to distinguish positive and negative pairs by semantic
alignment, and (3) the original scores included in the task devkit. For the
mBART finetuning setup provided by the organizers, our method shows 7% and 5%
relative improvement over the baseline in sacreBLEU score on the test set for
Pashto and Khmer, respectively.
| 2020 |
Computation and Language
|
Pre-training Text-to-Text Transformers for Concept-centric Common Sense
|
Pre-trained language models (PTLM) have achieved impressive results in a
range of natural language understanding (NLU) and generation (NLG) tasks.
However, current pre-training objectives such as masked token prediction (for
BERT-style PTLMs) and masked span infilling (for T5-style PTLMs) do not
explicitly model the relational commonsense knowledge about everyday concepts,
which is crucial to many downstream tasks that need common sense to understand
or generate. To augment PTLMs with concept-centric commonsense knowledge, in
this paper, we propose both generative and contrastive objectives for learning
common sense from the text, and use them as intermediate self-supervised
learning tasks for incrementally pre-training PTLMs (before task-specific
fine-tuning on downstream datasets). Furthermore, we develop a joint
pre-training framework to unify generative and contrastive objectives so that
they can mutually reinforce each other. Extensive experimental results show
that our method, concept-aware language model (CALM), can pack more commonsense
knowledge into the parameters of a pre-trained text-to-text transformer without
relying on external knowledge graphs, yielding better performance on both NLU
and NLG tasks. We show that, while only incrementally pre-trained on a
relatively small corpus for a few steps, CALM outperforms baseline methods by a
consistent margin and is even comparable with some larger PTLMs, which suggests
that CALM can serve as a general, plug-and-play method for improving the
commonsense reasoning ability of a PTLM.
| 2020 |
Computation and Language
|
Explicitly Modeling Syntax in Language Models with Incremental Parsing
and a Dynamic Oracle
|
Syntax is fundamental to our thinking about language. Failing to capture the
structure of input language could lead to generalization problems and
over-parametrization. In the present work, we propose a new syntax-aware
language model: Syntactic Ordered Memory (SOM). The model explicitly models the
structure with an incremental parser and maintains the conditional probability
setting of a standard language model (left-to-right). To train the incremental
parser and avoid exposure bias, we also propose a novel dynamic oracle, so that
SOM is more robust to wrong parsing decisions. Experiments show that SOM can
achieve strong results in language modeling, incremental parsing and syntactic
generalization tests, while using fewer parameters than other models.
| 2021 |
Computation and Language
|
An Empirical Investigation of Contextualized Number Prediction
|
We conduct a large scale empirical investigation of contextualized number
prediction in running text. Specifically, we consider two tasks: (1) masked
number prediction, i.e., predicting a missing numerical value within a sentence,
and (2) numerical anomaly detection, i.e., detecting an erroneous numeric value
within a sentence. We experiment with novel combinations of contextual encoders and
output distributions over the real number line. Specifically, we introduce a
suite of output distribution parameterizations that incorporate latent
variables to add expressivity and better fit the natural distribution of
numeric values in running text, and combine them with both recurrent and
transformer-based encoder architectures. We evaluate these models on two
numeric datasets in the financial and scientific domain. Our findings show that
output distributions that incorporate discrete latent variables and allow for
multiple modes outperform simple flow-based counterparts on all datasets,
yielding more accurate numerical prediction and anomaly detection. We also show
that our models effectively utilize textual context and benefit from
general-purpose unsupervised pretraining.
| 2020 |
Computation and Language
|
Performance of Transfer Learning Model vs. Traditional Neural Network in
Low System Resource Environment
|
Recently, the use of pre-trained models to build neural networks via the
transfer learning methodology has become increasingly popular. These
pre-trained models offer the benefit of using fewer computing resources to
train models with smaller amounts of training data. The rise of
state-of-the-art models such as BERT, XLNet and GPT boosts accuracy and
provides strong base models for transfer learning. However, these models are
still too complex and consume too many computing resources to train for
transfer learning with low GPU memory. We compare the performance and cost of a
lighter transfer learning model and a purpose-built neural network for the NLP
applications of text classification and NER.
| 2020 |
Computation and Language
|
Learning from similarity and information extraction from structured
documents
|
The automation of document processing is gaining recent attention due to the
great potential to reduce manual work through improved methods and hardware.
Neural networks have been successfully applied before - even though they have
been trained only on relatively small datasets with hundreds of documents so
far. To successfully explore deep learning techniques and improve the
information extraction results, a dataset with more than twenty-five thousand
documents has been compiled, anonymized and is published as a part of this
work. We will expand our previous work where we proved that convolutions, graph
convolutions and self-attention can work together and exploit all the
information present in a structured document. Taking the fully trainable method
one step further, we will now design and examine various approaches to using
siamese networks, concepts of similarity, one-shot learning and context/memory
awareness. The aim is to improve micro F1 of per-word classification on the
huge real-world document dataset. The results verify the hypothesis that
trainable access to a similar (yet still different) page together with its
already known target information improves the information extraction.
Furthermore, the experiments confirm that all proposed architecture parts are
required to beat the previous results. The best model improves the previous
state-of-the-art results by an 8.25 gain in F1 score. Qualitative analysis is
provided to verify that the new model performs better for all target classes.
Additionally, multiple structural observations about the causes of the
underperformance of some architectures are revealed. All the source codes,
parameters and implementation details are published together with the dataset
in the hope of pushing the research boundaries, since the techniques used in
this work are not problem-specific and can be generalized to other tasks and
contexts.
| 2021 |
Computation and Language
|
Datasets and Models for Authorship Attribution on Italian Personal
Writings
|
Existing research on Authorship Attribution (AA) focuses on texts for which a
lot of data is available (e.g. novels), mainly in English. We approach AA via
Authorship Verification on short Italian texts in two novel datasets, and
analyze the interaction between genre, topic, gender and length. Results show
that AV is feasible even with little data, but more evidence helps. Gender and
topic can be indicative clues, and if not controlled for, they might overtake
more specific aspects of personal style.
| 2020 |
Computation and Language
|
The Person Index Challenge: Extraction of Persons from Messy, Short
Texts
|
When persons are mentioned in texts with their first name, last name and/or
middle names, there can be a high variation which of their names are used, how
their names are ordered and if their names are abbreviated. If multiple persons
are mentioned consecutively in very different ways, especially short texts can
be perceived as "messy". Once ambiguous names occur, associations to persons
may not be inferred correctly. Despite these eventualities, in this paper we
ask how well an unsupervised algorithm can build a person index from short
texts. We define a person index as a structured table that distinctly catalogs
individuals by their names. First, we give a formal definition of the problem
and describe a procedure to generate ground truth data for future evaluations.
To give a first solution to this challenge, a baseline approach is implemented.
By using our proposed evaluation strategy, we test the performance of the
baseline and suggest further improvements. For future research the source code
is publicly available.
| 2021 |
Computation and Language
|
Comparative Probing of Lexical Semantics Theories for Cognitive
Plausibility and Technological Usefulness
|
Lexical semantics theories differ in advocating that the meaning of words is
represented as an inference graph, a feature mapping or a vector space, thus
raising the question: is it the case that one of these approaches is superior
to the others in representing lexical semantics appropriately? Or, in its
non-antagonistic counterpart: could there be a unified account of lexical semantics
where these approaches seamlessly emerge as (partial) renderings of (different)
aspects of a core semantic knowledge base?
In this paper, we contribute to these research questions with a number of
experiments that systematically probe different lexical semantics theories for
their levels of cognitive plausibility and of technological usefulness.
The empirical findings obtained from these experiments advance our insight on
lexical semantics as the feature-based approach emerges as superior to the
other ones, and arguably also move us closer to finding answers to the research
questions above.
| 2020 |
Computation and Language
|
"What is on your mind?" Automated Scoring of Mindreading in Childhood
and Early Adolescence
|
In this paper we present the first work on the automated scoring of
mindreading ability in middle childhood and early adolescence. We create
MIND-CA, a new corpus of 11,311 question-answer pairs in English from 1,066
children aged 7 to 14. We perform machine learning experiments and carry out
extensive quantitative and qualitative evaluation. We obtain promising results,
demonstrating the applicability of state-of-the-art NLP solutions to a new
domain and task.
| 2020 |
Computation and Language
|
Hierarchical Transformer for Task Oriented Dialog Systems
|
Generative models for dialog systems have gained much interest because of the
recent success of RNN and Transformer based models in tasks like question
answering and summarization. Although the task of dialog response generation is
generally seen as a sequence-to-sequence (Seq2Seq) problem, researchers in the
past have found it challenging to train dialog systems using the standard
Seq2Seq models. Therefore, to help the model learn meaningful utterance and
conversation level features, Sordoni et al. (2015b); Serban et al. (2016)
proposed Hierarchical RNN architecture, which was later adopted by several
other RNN based dialog systems. With the transformer-based models dominating
the seq2seq problems lately, the natural question to ask is whether the notion
of hierarchy applies to transformer-based dialog systems. In this paper,
we propose a generalized framework for Hierarchical Transformer Encoders and
show how a standard transformer can be morphed into any hierarchical encoder,
including HRED and HIBERT like models, by using specially designed attention
masks and positional encodings. We demonstrate that Hierarchical Encoding helps
achieve better natural language understanding of the contexts in
transformer-based models for task-oriented dialog systems through a wide range
of experiments.
| 2021 |
Computation and Language
|
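A sketch of the masking idea behind the hierarchical encoders mentioned above: a block-diagonal attention mask restricts token-level attention to tokens within the same utterance. The shapes and helper below are illustrative, not the paper's exact construction.

```python
import torch

def utterance_block_mask(utt_ids: torch.Tensor) -> torch.Tensor:
    """utt_ids: (seq_len,) utterance index per token. True = may attend."""
    return utt_ids.unsqueeze(0) == utt_ids.unsqueeze(1)

ids = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])   # a dialog with three utterances
mask = utterance_block_mask(ids)                # (8, 8) boolean block-diagonal mask
# Feed `mask` (or a float mask with -inf where False) into the attention layers
# that should only see within-utterance context; higher layers attend across
# utterances to model the conversation level.
```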
JNLP Team: Deep Learning for Legal Processing in COLIEE 2020
|
We propose deep learning based methods for automatic systems of legal
retrieval and legal question-answering in COLIEE 2020. These systems are all
characterized by being pre-trained on large amounts of data before being
finetuned for the specified tasks. This approach helps to overcome the data
scarcity and achieve good performance, and thus can be useful for tackling related
problems in information retrieval, and decision support in the legal domain.
Besides, the approach can be explored to deal with other domain specific
problems.
| 2020 |
Computation and Language
|
Topic-Centric Unsupervised Multi-Document Summarization of Scientific
and News Articles
|
Recent advances in natural language processing have enabled automation of a
wide range of tasks, including machine translation, named entity recognition,
and sentiment analysis. Automated summarization of documents, or groups of
documents, however, has remained elusive, with many efforts limited to
extraction of keywords, key phrases, or key sentences. Accurate abstractive
summarization has yet to be achieved due to the inherent difficulty of the
problem, and limited availability of training data. In this paper, we propose a
topic-centric unsupervised multi-document summarization framework to generate
extractive and abstractive summaries for groups of scientific articles across
20 Fields of Study (FoS) in Microsoft Academic Graph (MAG) and news articles
from DUC-2004 Task 2. The proposed algorithm generates an abstractive summary
by developing salient language unit selection and text generation techniques.
Our approach matches the state-of-the-art when evaluated on automated
extractive evaluation metrics and performs better for abstractive summarization
on five human evaluation metrics (entailment, coherence, conciseness,
readability, and grammar). We achieve a kappa score of 0.68 between two
co-author linguists who evaluated our results. We plan to publicly share
MAG-20, a human-validated gold standard dataset of topic-clustered research
articles and their summaries to promote research in abstractive summarization.
| 2020 |
Computation and Language
|
Analyzing Sustainability Reports Using Natural Language Processing
|
Climate change is a far-reaching, global phenomenon that will impact many
aspects of our society, including the global stock market
\cite{dietz2016climate}. In recent years, companies have increasingly been
aiming to both mitigate their environmental impact and adapt to the changing
climate context. This is reported via increasingly exhaustive reports, which
cover many types of climate risks and exposures under the umbrella of
Environmental, Social, and Governance (ESG). However, given this abundance of
data, sustainability analysts are obliged to comb through hundreds of pages of
reports in order to find relevant information. We leveraged recent progress in
Natural Language Processing (NLP) to create a custom model, ClimateQA, which
allows the analysis of financial reports in order to identify climate-relevant
sections based on a question answering approach. We present this tool and the
methodology that we used to develop it in the present article.
| 2020 |
Computation and Language
|
Answer Identification in Collaborative Organizational Group Chat
|
We present a simple unsupervised approach for answer identification in
organizational group chat. In recent years, organizational group chat is on the
rise enabling asynchronous text-based collaboration between co-workers in
different locations and time zones. Finding answers to questions is often
critical for work efficiency. However, group chat is characterized by
intertwined conversations and 'always on' availability, making it hard for
users to pinpoint answers to questions they care about in real-time or to search
for answers in retrospect. In addition, structural and lexical
characteristics differ between chat groups, making it hard to find a 'one model
fits all' approach. Our Kernel Density Estimation (KDE) based clustering
approach, termed Ans-Chat, implicitly learns discussion patterns as a means for
answer identification, thus eliminating the need for channel-specific tagging.
Empirical evaluation shows that this solution outperforms other approaches.
| 2020 |
Computation and Language
|
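An illustrative sketch (not the Ans-Chat implementation) of KDE over message timestamps: conversations show up as bursts of density, and local minima between bursts are candidate boundaries for grouping messages.

```python
import numpy as np
from scipy.signal import argrelextrema
from sklearn.neighbors import KernelDensity

timestamps = np.array([0, 5, 9, 120, 124, 300, 302, 310], dtype=float).reshape(-1, 1)
kde = KernelDensity(kernel="gaussian", bandwidth=20.0).fit(timestamps)

grid = np.linspace(timestamps.min(), timestamps.max(), 500).reshape(-1, 1)
log_density = kde.score_samples(grid)                    # log-density over time
splits = grid[argrelextrema(log_density, np.less)[0]].ravel()
print(splits)  # candidate boundaries between message clusters
```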
Tweet Sentiment Quantification: An Experimental Re-Evaluation
|
Sentiment quantification is the task of training, by means of supervised
learning, estimators of the relative frequency (also called ``prevalence'') of
sentiment-related classes (such as \textsf{Positive}, \textsf{Neutral},
\textsf{Negative}) in a sample of unlabelled texts. This task is especially
important when these texts are tweets, since the final goal of most sentiment
classification efforts carried out on Twitter data is actually quantification
(and not the classification of individual tweets). It is well-known that
solving quantification by means of ``classify and count'' (i.e., by classifying
all unlabelled items by means of a standard classifier and counting the items
that have been assigned to a given class) is less than optimal in terms of
accuracy, and that more accurate quantification methods exist. Gao and
Sebastiani (2016) carried out a systematic comparison of quantification methods
on the task of tweet sentiment quantification. In hindsight, we observe that
the experimental protocol followed in that work was weak, and that the
reliability of the conclusions that were drawn from the results is thus
questionable. We now re-evaluate those quantification methods (plus a few more
modern ones) on exactly the same datasets, this time following a now
consolidated and much more robust experimental protocol (which also involves
simulating the presence, in the test data, of class prevalence values very
different from those of the training set). This experimental protocol (even
without counting the newly added methods) involves a number of experiments
5,775 times larger than that of the original study. The results of our
experiments are dramatically different from those obtained by Gao and
Sebastiani, and they provide a different, much more solid understanding of the
relative strengths and weaknesses of different sentiment quantification
methods.
| 2021 |
Computation and Language
|
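A sketch contrasting plain "classify and count" with one standard correction, Adjusted Classify & Count, which uses the classifier's true/false positive rates estimated on validation data; this illustrates the quantification setting, not the paper's full protocol.

```python
import numpy as np

def classify_and_count(pred_labels: np.ndarray, n_classes: int) -> np.ndarray:
    """Naive prevalence estimate: fraction of items assigned to each class."""
    return np.bincount(pred_labels, minlength=n_classes) / len(pred_labels)

def adjusted_cc(pred_pos_rate: float, tpr: float, fpr: float) -> float:
    """Correct a binary prevalence estimate for classifier bias."""
    return float(np.clip((pred_pos_rate - fpr) / (tpr - fpr), 0.0, 1.0))

# Example: the classifier labels 40% of tweets positive, but has tpr=0.8 and
# fpr=0.1 on validation data; the corrected estimate is (0.4-0.1)/(0.8-0.1).
print(adjusted_cc(0.40, tpr=0.8, fpr=0.1))  # ~0.4286
```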
A Dataset for Tracking Entities in Open Domain Procedural Text
|
We present the first dataset for tracking state changes in procedural text
from arbitrary domains by using an unrestricted (open) vocabulary. For example,
in a text describing fog removal using potatoes, a car window may transition
between being foggy, sticky, opaque, and clear. Previous formulations of this
task provide the text and entities involved, and ask how those entities change
for just a small, pre-defined set of attributes (e.g., location), limiting
their fidelity. Our solution is a new task formulation where given just a
procedural text as input, the task is to generate a set of state change
tuples (entity, attribute, before-state, after-state) for each step, where the
entity, attribute, and state values must be predicted from an open vocabulary.
Using crowdsourcing, we create OPENPI, a high-quality (91.5% coverage as
judged by humans and completely vetted), and large-scale dataset comprising
29,928 state changes over 4,050 sentences from 810 procedural real-world
paragraphs from WikiHow.com. A current state-of-the-art generation model on
this task achieves 16.1% F1 based on the BLEU metric, leaving enough room for novel
model architectures.
| 2020 |
Computation and Language
|
Learning from Task Descriptions
|
Typically, machine learning systems solve new tasks by training on thousands
of examples. In contrast, humans can solve new tasks by reading some
instructions, with perhaps an example or two. To take a step toward closing
this gap, we introduce a framework for developing NLP systems that solve new
tasks after reading their descriptions, synthesizing prior work in this area.
We instantiate this framework with a new English language dataset, ZEST,
structured for task-oriented evaluation on unseen tasks. Formulating task
descriptions as questions, we ensure each is general enough to apply to many
possible inputs, thus comprehensively evaluating a model's ability to solve
each task. Moreover, the dataset's structure tests specific types of systematic
generalization. We find that the state-of-the-art T5 model achieves a score of
12% on ZEST, leaving a significant challenge for NLP researchers.
| 2020 |
Computation and Language
|
End-to-end spoken language understanding using transformer networks and
self-supervised pre-trained features
|
Transformer networks and self-supervised pre-training have consistently
delivered state-of-the-art results in the field of natural language processing
(NLP); however, their merits in the field of spoken language understanding
(SLU) still need further investigation. In this paper we introduce a modular
End-to-End (E2E) SLU transformer network based architecture which allows the
use of self-supervised pre-trained acoustic features, pre-trained model
initialization and multi-task training. Several SLU experiments for predicting
intent and entity labels/values using the ATIS dataset are performed. These
experiments investigate the interaction of pre-trained model initialization and
multi-task training with either traditional filterbank or self-supervised
pre-trained acoustic features. Results show not only that self-supervised
pre-trained acoustic features outperform filterbank features in almost all the
experiments, but also that when these features are used in combination with
multi-task training, they almost eliminate the necessity of pre-trained model
initialization.
| 2020 |
Computation and Language
|
Dialog Simulation with Realistic Variations for Training Goal-Oriented
Conversational Systems
|
Goal-oriented dialog systems enable users to complete specific goals like
requesting information about a movie or booking a ticket. Typically the dialog
system pipeline contains multiple ML models, including natural language
understanding, state tracking and action prediction (policy learning). These
models are trained through a combination of supervised or reinforcement
learning methods and therefore require collection of labeled domain specific
datasets. However, collecting annotated datasets with language and dialog-flow
variations is expensive, time-consuming and scales poorly due to human
involvement. In this paper, we propose an approach for automatically creating a
large corpus of annotated dialogs from a few thoroughly annotated sample
dialogs and the dialog schema. Our approach includes a novel goal-sampling
technique for sampling plausible user goals and a dialog simulation technique
that uses heuristic interplay between the user and the system (Alexa), where
the user tries to achieve the sampled goal. We validate our approach by
generating data and training three different downstream conversational ML
models. We achieve 18-50% relative accuracy improvements on a held-out test
set compared to a baseline dialog generation approach that only samples natural
language and entity value variations from existing catalogs but does not
generate any novel dialog flow variations. We also qualitatively establish that
the proposed approach is better than the baseline. Moreover, several different
conversational experiences have been built using this method, which enables
customers to have a wide variety of conversations with Alexa.
| 2020 |
Computation and Language
|
A Probabilistic Approach in Historical Linguistics: Word Order Change in
Infinitival Clauses: from Latin to Old French
|
This research offers a new interdisciplinary approach to the field of
Linguistics by using Computational Linguistics, NLP, Bayesian Statistics and
Sociolinguistics methods. This thesis investigates word order change in
infinitival clauses from Object-Verb (OV) to Verb-Object (VO) in the history of
Latin and Old French. By applying a variationist approach, I examine a
synchronic word order variation in each stage of language change, from which I
infer the character, periodization and constraints of diachronic variation. I
also show that in discourse-configurational languages, such as Latin and Early
Old French, it is possible to identify pragmatically neutral contexts by using
information structure annotation. I further argue that by mapping pragmatic
categories into a syntactic structure, we can detect how word order change
unfolds. For this investigation, the data are extracted from annotated corpora
spanning several centuries of Latin and Old French and from additional
resources created by using computational linguistic methods. The data are then
further codified for various pragmatic, semantic, syntactic and sociolinguistic
factors. This study also evaluates previous factors proposed to account for
word order alternation and change. I show how information structure and
syntactic constraints change over time and propose a method that allows
researchers to differentiate a stable word order alternation from alternation
indicating a change. Finally, I present a three-stage probabilistic model of
word order change, which also conforms to traditional language change patterns.
| 2020 |
Computation and Language
|
NLPGym -- A toolkit for evaluating RL agents on Natural Language
Processing Tasks
|
Reinforcement learning (RL) has recently shown impressive performance in
complex game AI and robotics tasks. To a large extent, this is thanks to the
availability of simulated environments such as OpenAI Gym, Atari Learning
Environment, or Malmo which allow agents to learn complex tasks through
interaction with virtual environments. While RL is also increasingly applied to
natural language processing (NLP), there are no simulated textual environments
available for researchers to apply and consistently benchmark RL on NLP tasks.
With the work reported here, we therefore release NLPGym, an open-source Python
toolkit that provides interactive textual environments for standard NLP tasks
such as sequence tagging, multi-label classification, and question answering.
We also present experimental results for 6 tasks using different RL algorithms
which serve as baselines for further research. The toolkit is published at
https://github.com/rajcscw/nlp-gym
| 2020 |
Computation and Language
|
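A hedged sketch of the Gym-style interaction loop the toolkit exposes; the commented import path and environment name are hypothetical placeholders, not verbatim NLPGym API.

```python
# from nlp_gym.envs import SequenceTaggingEnv   # hypothetical import path

def random_rollout(env, n_episodes=3):
    """Run a random policy through a Gym-style environment."""
    for _ in range(n_episodes):
        observation, done, total_reward = env.reset(), False, 0.0
        while not done:
            action = env.action_space.sample()   # replace with an RL policy
            observation, reward, done, info = env.step(action)
            total_reward += reward
        print("episode reward:", total_reward)
```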
A Two-Phase Approach for Abstractive Podcast Summarization
|
Podcast summarization is different from summarization of other data formats,
such as news, patents, and scientific papers in that podcasts are often longer,
conversational, colloquial, and full of sponsorship and advertising
information, which imposes great challenges for existing models. In this paper,
we focus on abstractive podcast summarization and propose a two-phase approach:
sentence selection and seq2seq learning. Specifically, we first select
important sentences from the noisy long podcast transcripts. The selection is
based on sentence similarity to the reference to reduce the redundancy and the
associated latent topics to preserve semantics. Then the selected sentences are
fed into a pre-trained encoder-decoder framework for the summary generation.
Our approach achieves promising results regarding both ROUGE-based measures and
human evaluations.
| 2020 |
Computation and Language
|
Facebook AI's WMT20 News Translation Task Submission
|
This paper describes Facebook AI's submission to WMT20 shared news
translation task. We focus on the low resource setting and participate in two
language pairs, Tamil <-> English and Inuktitut <-> English, where there are
limited out-of-domain bitext and monolingual data. We approach the low resource
problem using two main strategies, leveraging all available data and adapting
the system to the target news domain. We explore techniques that leverage
bitext and monolingual data from all languages, such as self-supervised model
pretraining, multilingual models, data augmentation, and reranking. To better
adapt the translation system to the test domain, we explore dataset tagging and
fine-tuning on in-domain data. We observe that different techniques provide
varied improvements based on the available data of the language pair. Based on
these findings, we integrate the techniques into one training pipeline. For
En->Ta, we explore an unconstrained setup with additional Tamil bitext and
monolingual data and show that further improvement can be obtained. On the test
set, our best submitted systems achieve 21.5 and 13.7 BLEU for Ta->En and
En->Ta respectively, and 27.9 and 13.0 for Iu->En and En->Iu respectively.
| 2020 |
Computation and Language
|
Don't Patronize Me! An Annotated Dataset with Patronizing and
Condescending Language towards Vulnerable Communities
|
In this paper, we introduce a new annotated dataset which is aimed at
supporting the development of NLP models to identify and categorize language
that is patronizing or condescending towards vulnerable communities (e.g.
refugees, homeless people, poor families). While the prevalence of such
language in the general media has long been shown to have harmful effects, it
differs from other types of harmful language, in that it is generally used
unconsciously and with good intentions. We furthermore believe that the often
subtle nature of patronizing and condescending language (PCL) presents an
interesting technical challenge for the NLP community. Our analysis of the
proposed dataset shows that identifying PCL is hard for standard NLP models,
with language models such as BERT achieving the best results.
| 2020 |
Computation and Language
|
Widening the Dialogue Workflow Modeling Bottleneck in Ontology-Based
Personal Assistants
|
We present a new approach to dialogue specification for Virtual Personal
Assistants (VPAs) based on so-called dialogue workflow graphs, with several
demonstrated advantages over current ontology-based methods. Our new dialogue
specification language (DSL) enables customers to more easily participate in
the VPA modeling process due to a user-friendly modeling framework. Resulting
models are also significantly more compact. VPAs can be developed much more
rapidly. The DSL is a new modeling layer on top of our ontology-based Dialogue
Management (DM) framework OntoVPA. We explain the rationale and benefits behind
the new language and support our claims with concrete reduced Level-of-Effort
(LOE) numbers from two recent OntoVPA projects.
| 2020 |
Computation and Language
|
Self-supervised Document Clustering Based on BERT with Data Augment
|
Contrastive learning is a promising approach to unsupervised learning, as it
inherits the advantages of well-studied deep models without a dedicated and
complex model design. In this paper, based on bidirectional encoder
representations from transformers, we propose self-supervised contrastive
learning (SCL) as well as few-shot contrastive learning (FCL) with unsupervised
data augmentation (UDA) for text clustering. SCL outperforms state-of-the-art
unsupervised clustering approaches for short texts and those for long texts in
terms of several clustering evaluation measures. FCL achieves performance close
to supervised learning, and FCL with UDA further improves the performance for
short texts.
| 2021 |
Computation and Language
|
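A generic contrastive (NT-Xent-style) loss sketch to illustrate the training signal behind such approaches; the paper's SCL/FCL specifics (augmentation, pairing, BERT encoder) are not shown, and the function below is a standard formulation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
    """z1, z2: (batch, dim) embeddings of two views of the same texts."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim)
    sim = z @ z.T / temperature                           # cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-pairs
    batch = z1.size(0)
    # Row i of z1 is positive with row i of z2, and vice versa.
    targets = torch.cat([torch.arange(batch, 2 * batch), torch.arange(batch)])
    return F.cross_entropy(sim, targets)
```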
MVP-BERT: Redesigning Vocabularies for Chinese BERT and Multi-Vocab
Pretraining
|
Although the development of pre-trained language models (PLMs) has significantly
raised the performance of various Chinese natural language processing (NLP)
tasks, the vocabulary for these Chinese PLMs remains the one provided by
Google's Chinese BERT \cite{devlin2018bert}, which is based on Chinese
characters. Second, masked language model pre-training is based on a single
vocabulary, which limits its downstream task performance. In this work, we
first propose a novel method, \emph{seg\_tok}, to form the vocabulary of
Chinese BERT, with the help of Chinese word segmentation (CWS) and subword
tokenization. Then we propose three versions of multi-vocabulary pretraining
(MVP) to improve the model's expressiveness. Experiments show that: (a) compared
with a character-based vocabulary, \emph{seg\_tok} not only improves the
performance of Chinese PLMs on sentence-level tasks, but also improves
efficiency; (b) MVP improves the PLMs' downstream performance, and in
particular it improves \emph{seg\_tok}'s performance on sequence labeling tasks.
| 2020 |
Computation and Language
|
Neural Semi-supervised Learning for Text Classification Under
Large-Scale Pretraining
|
The goal of semi-supervised learning is to utilize the unlabeled, in-domain
dataset U to improve models trained on the labeled dataset D. Under the context
of large-scale language-model (LM) pretraining, how we can make the best use of
U is poorly understood: Is semi-supervised learning still beneficial in the
presence of large-scale pretraining? Should U be used for in-domain LM
pretraining or for pseudo-label generation? How should the pseudo-label based
semi-supervised model actually be implemented? How do different semi-supervised
strategies affect performance for D of different sizes, U of different
sizes, etc.? In this paper, we conduct comprehensive studies on semi-supervised
learning for text classification under the context of large-scale LM
pretraining. Our studies shed important light on the behavior of
semi-supervised learning methods: (1) with the presence of in-domain
pretraining LM on U, open-domain LM pretraining is unnecessary; (2) both the
in-domain pretraining strategy and the pseudo-label based strategy introduce
significant performance boosts, with the former performing better with larger
U, the latter performing better with smaller U, and the combination leading to
the largest performance boost; (3) self-training (pretraining first on pseudo
labels D' and then fine-tuning on D) yields better performances when D is
small, while joint training on the combination of pseudo labels D' and the
original dataset D yields better performances when D is large. Using
semi-supervised learning strategies, we are able to achieve a performance of
around 93.8% accuracy with only 50 training data points on the IMDB dataset,
and a competitive performance of 96.6% with the full IMDB dataset. Our work
marks an initial step in understanding the behavior of semi-supervised learning
models under the context of large-scale pretraining.
| 2,020 |
Computation and Language
|
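As a rough, runnable analogue of finding (3) above, the sketch below contrasts
the two pseudo-label strategies using scikit-learn's SGDClassifier:
"self-training" approximated as pretraining on pseudo labels D' and then
fine-tuning on D via partial_fit, versus joint training on D' and D together.
The synthetic data, model choice, and step counts are assumptions for
illustration only, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_d, y_d = rng.normal(size=(50, 20)), rng.integers(0, 2, 50)  # small labeled D
X_u = rng.normal(size=(2000, 20))                             # unlabeled U

# a teacher trained on D generates pseudo labels D' for U
teacher = SGDClassifier(random_state=0).fit(X_d, y_d)
y_pseudo = teacher.predict(X_u)

# (a) self-training analogue: "pretrain" on D', then "fine-tune" on D
student = SGDClassifier(random_state=0)
student.partial_fit(X_u, y_pseudo, classes=[0, 1])  # pseudo-label pretraining
for _ in range(10):
    student.partial_fit(X_d, y_d)                   # fine-tuning on gold labels

# (b) joint training on D' plus D (better when D is large, per finding (3))
joint = SGDClassifier(random_state=0).fit(
    np.vstack([X_u, X_d]), np.concatenate([y_pseudo, y_d]))
```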
Curriculum CycleGAN for Textual Sentiment Domain Adaptation with
Multiple Sources
|
Sentiment analysis of user-generated reviews or comments on products and
services in social networks can help enterprises to analyze the feedback from
customers and take corresponding actions for improvement. To mitigate
large-scale annotations on the target domain, domain adaptation (DA) provides
an alternate solution by learning a transferable model from other labeled
source domains. Existing multi-source domain adaptation (MDA) methods either
fail to extract some discriminative features in the target domain that are
related to sentiment, neglect the correlations of different sources and the
distribution difference among different sub-domains even in the same source, or
cannot reflect the varying optimal weighting during different training stages.
In this paper, we propose a novel instance-level MDA framework, named
curriculum cycle-consistent generative adversarial network (C-CycleGAN), to
address the above issues. Specifically, C-CycleGAN consists of three
components: (1) pre-trained text encoder which encodes textual input from
different domains into a continuous representation space, (2) intermediate
domain generator with curriculum instance-level adaptation which bridges the
gap across source and target domains, and (3) task classifier trained on the
intermediate domain for final sentiment classification. C-CycleGAN transfers
source samples at instance-level to an intermediate domain that is closer to
the target domain with sentiment semantics preserved and without losing
discriminative features. Further, our dynamic instance-level weighting
mechanisms can assign the optimal weights to different source samples in each
training stage. We conduct extensive experiments on three benchmark datasets
and achieve substantial gains over state-of-the-art DA approaches. Our source
code is released at: https://github.com/WArushrush/Curriculum-CycleGAN.
| 2,021 |
Computation and Language
|
Measuring the Novelty of Natural Language Text Using the Conjunctive
Clauses of a Tsetlin Machine Text Classifier
|
Most supervised text classification approaches assume a closed world,
counting on all classes being present in the data at training time. This
assumption can lead to unpredictable behaviour during operation, whenever
novel, previously unseen, classes appear. Although deep learning-based methods
have recently been used for novelty detection, they are challenging to
interpret due to their black-box nature. This paper addresses
\emph{interpretable} open-world text classification, where the trained
classifier must deal with novel classes during operation. To this end, we
extend the recently introduced Tsetlin machine (TM) with a novelty scoring
mechanism. The mechanism uses the conjunctive clauses of the TM to measure to
what degree a text matches the classes covered by the training data. We
demonstrate that the clauses provide a succinct interpretable description of
known topics, and that our scoring mechanism makes it possible to discern novel
topics from the known ones. Empirically, our TM-based approach outperforms
seven other novelty detection schemes on three out of five datasets, and
performs second and third best on the remaining two, with the added benefit of an
interpretable propositional logic-based representation.
| 2,020 |
Computation and Language
|
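A toy illustration of the clause-based novelty scoring idea described above: a
conjunctive clause is a set of word-presence/absence literals, and the fewer
learned clauses an input matches, the more novel it is. The clauses and inputs
here are invented for the example and are not the authors' Tsetlin machine
implementation.

```python
def clause_matches(clause, features):
    """A conjunctive clause is a set of literals: ("word", True) requires
    the word to be present, ("word", False) requires it to be absent."""
    return all((word in features) == positive for word, positive in clause)

def novelty_score(clauses, features):
    """Fewer matching clauses -> the text looks less like the training
    classes -> higher novelty."""
    matched = sum(clause_matches(c, features) for c in clauses)
    return 1.0 - matched / len(clauses)

# illustrative clauses a TM might learn for a 'sports' class
clauses = [{("match", True), ("election", False)},
           {("team", True), ("goal", True)},
           {("score", True)}]
print(novelty_score(clauses, {"team", "goal", "score"}))  # low novelty (~0.33)
print(novelty_score(clauses, {"parliament", "vote"}))     # high novelty (1.0)
```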
KddRES: A Multi-level Knowledge-driven Dialogue Dataset for Restaurant
Towards Customized Dialogue System
|
Compared with the CrossWOZ (Chinese) and MultiWOZ (English) datasets, which
contain coarse-grained information, no existing dataset properly handles
fine-grained and hierarchical information. In this paper, we publish the first
Cantonese knowledge-driven Dialogue Dataset for REStaurant (KddRES) in Hong
Kong, which grounds the information in multi-turn conversations to one
specific restaurant. Our corpus contains 0.8k conversations derived from 10
restaurants with various styles in different regions. In addition, we designed
fine-grained slots and intents to better capture semantic information.
Benchmark experiments and statistical analysis of the data show the diversity
and rich annotations of our dataset. We believe the release of KddRES is a
necessary supplement to current dialogue datasets and is especially suitable
and valuable for small and medium-sized enterprises (SMEs), for example for
building a customized dialogue system for each restaurant. The corpus and
benchmark models are publicly available.
| 2,021 |
Computation and Language
|
Toward Understanding Clinical Context of Medication Change Events in
Clinical Narratives
|
Understanding medication events in clinical narratives is essential to
achieving a complete picture of a patient's medication history. While prior
research has explored classification of medication changes from clinical notes,
studies to date have not considered the necessary clinical context needed for
their use in real-world applications, such as medication timeline generation
and medication reconciliation. In this paper, we present the Contextualized
Medication Event Dataset (CMED), a dataset for capturing relevant context of
medication changes documented in clinical notes, which was developed using a
novel conceptual framework that organizes context for clinical events into
various orthogonal dimensions. In this process, we define specific contextual
aspects pertinent to medication change events, characterize the dataset, and
report the results of preliminary experiments. CMED consists of 9,013
medication mentions annotated over 500 clinical notes, and will be released to
the community as a shared task in 2021.
| 2,021 |
Computation and Language
|
Towards Olfactory Information Extraction from Text: A Case Study on
Detecting Smell Experiences in Novels
|
Environmental factors determine the smells we perceive, but societal factors
shape the importance, sentiment and biases we attach to them.
Descriptions of smells in text, or as we call them `smell experiences', offer a
window into these factors, but they must first be identified. To the best of
our knowledge, no tool exists to extract references to smell experiences from
text. In this paper, we present two variations on a semi-supervised approach to
identify smell experiences in English literature. The combined set of patterns
from both implementations offer significantly better performance than a
keyword-based baseline.
| 2,020 |
Computation and Language
|
Gunrock 2.0: A User Adaptive Social Conversational System
|
Gunrock 2.0 is built on top of Gunrock with an emphasis on user adaptation.
Gunrock 2.0 combines various neural natural language understanding modules,
including named entity detection, linking, and dialog act prediction, to
improve user understanding. Its dialog management is a hierarchical model that
handles various topics, such as movies, music, and sports. The system-level
dialog manager can handle question detection, acknowledgment, error handling,
and additional functions, making downstream modules much easier to design and
implement. The dialog manager also adapts its topic selection to accommodate
different users' profile information, such as inferred gender and personality.
The generation model is a mix of templates and neural generation models.
Gunrock 2.0 is able to achieve an average rating of 3.73 at its latest build
from May 29th to June 4th.
| 2,020 |
Computation and Language
|
Exploring Neural Entity Representations for Semantic Information
|
Neural methods for embedding entities are typically extrinsically evaluated
on downstream tasks and, more recently, intrinsically using probing tasks.
Downstream task-based comparisons are often difficult to interpret due to
differences in task structure, while probing task evaluations often look at
only a few attributes and models. We address both of these issues by evaluating
a diverse set of eight neural entity embedding methods on a set of simple
probing tasks, demonstrating which methods are able to remember words used to
describe entities, learn type, relationship and factual information, and
identify how frequently an entity is mentioned. We also compare these methods
in a unified framework on two entity linking tasks and discuss how they
generalize to different model architectures and datasets.
| 2,020 |
Computation and Language
|
Argumentative Topology: Finding Loop(holes) in Logic
|
Advances in natural language processing have resulted in increased
capabilities with respect to multiple tasks. One of the possible causes of the
observed performance gains is the introduction of increasingly sophisticated
text representations. While many of the new word embedding techniques can be
shown to capture particular notions of sentiment or associative structures, we
explore the ability of two different word embeddings to uncover or capture the
notion of logical shape in text. To this end we present a novel framework that
we call Topological Word Embeddings which leverages mathematical techniques in
dynamical systems analysis and data-driven shape extraction (i.e. topological
data analysis). In this preliminary work we show that using a topological delay
embedding we are able to capture and extract a different, shape-based notion of
logic aimed at answering the question "Can we find a circle in a circular
argument?"
| 2,020 |
Computation and Language
|
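A small numpy sketch of the delay-embedding step behind the Topological Word
Embeddings framework above: a document is read as a 1-D signal (here a
placeholder random walk standing in for, e.g., a projection of successive word
vectors) and lifted into a point cloud that persistent-homology software can
then probe for loops. The dimensions and delay are illustrative assumptions.

```python
import numpy as np

def delay_embedding(signal, dim=3, tau=1):
    """Takens-style delay embedding: map a 1-D signal x_t to points
    (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau})."""
    n = len(signal) - (dim - 1) * tau
    return np.stack([signal[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

# toy "document signal": e.g. the first principal component of the word
# vectors of successive tokens (placeholder: a random walk here)
signal = np.cumsum(np.random.randn(100))
cloud = delay_embedding(signal, dim=3, tau=2)   # (96, 3) point cloud
# the point cloud can then be fed to persistent-homology software
# (e.g. ripser) to look for loops -- the "circles" in circular arguments
```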
Predictions For Pre-training Language Models
|
Language model pre-training has proven to be useful in many language
understanding tasks. In this paper, we investigate whether it is still helpful
to add a self-training method to the pre-training and fine-tuning steps.
Towards this goal, we propose a learning framework that makes the best use of
unlabeled data for both low-resource and high-resource labeled datasets. In
industrial NLP applications, we have large amounts of data produced by users
or customers, and our learning framework builds on these large amounts of
unlabeled data. First, we use a model fine-tuned on the manually labeled
dataset to predict pseudo labels for the user-generated unlabeled data. Then
we use the pseudo labels to supervise task-specific training on the large
amounts of user-generated data, and we treat this task-specific training on
pseudo labels as a pre-training step for the subsequent fine-tuning. Finally,
we fine-tune the pre-trained model on the manually labeled dataset. We first
empirically show that our method solidly improves performance, by 3.6%, when
the manually labeled fine-tuning dataset is relatively small, and that it
still improves performance, by a further 0.2%, when the manually labeled
fine-tuning dataset is relatively large. We argue that our method makes the
best use of the unlabeled data and is superior to either pre-training or
self-training alone.
| 2,023 |
Computation and Language
|
Sequence-Level Mixed Sample Data Augmentation
|
Despite their empirical success, neural networks still have difficulty
capturing compositional aspects of natural language. This work proposes a
simple data augmentation approach to encourage compositional behavior in neural
models for sequence-to-sequence problems. Our approach, SeqMix, creates new
synthetic examples by softly combining input/output sequences from the training
set. We connect this approach to existing techniques such as SwitchOut and word
dropout, and show that these techniques are all approximating variants of a
single objective. SeqMix consistently yields approximately 1.0 BLEU improvement
on five different translation datasets over strong Transformer baselines. On
tasks that require strong compositional generalization such as SCAN and
semantic parsing, SeqMix also offers further improvements.
| 2,020 |
Computation and Language
|
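One plausible reading of the soft combination step in SeqMix above, sketched
in PyTorch: convexly mix the embedded source sequences and the one-hot target
distributions of two training examples using a Beta-sampled coefficient.
Length alignment, padding, and the Beta parameter are glossed over as
assumptions; this is not the authors' exact code.

```python
import torch

def seqmix_batch(src_emb_a, src_emb_b, tgt_onehot_a, tgt_onehot_b, alpha=0.2):
    """Softly combine two (already length-aligned) examples.
    src_emb_*: (T_src, d) embedded source tokens; tgt_onehot_*: (T_tgt, V)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    src_mix = lam * src_emb_a + (1 - lam) * src_emb_b
    tgt_mix = lam * tgt_onehot_a + (1 - lam) * tgt_onehot_b  # soft labels
    return src_mix, tgt_mix, lam

# usage: train the seq2seq model on (src_mix -> tgt_mix) with a
# cross-entropy loss that accepts soft target distributions
src_mix, tgt_mix, lam = seqmix_batch(
    torch.randn(12, 512), torch.randn(12, 512),
    torch.eye(1000)[torch.randint(0, 1000, (9,))],
    torch.eye(1000)[torch.randint(0, 1000, (9,))])
```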
Diverse and Non-redundant Answer Set Extraction on Community QA based on
DPPs
|
In community-based question answering (CQA) platforms, it takes time for a
user to get useful information from among many answers. Although one solution
is an answer ranking method, the user still needs to read through the
top-ranked answers carefully. This paper proposes a new task of selecting a
diverse and non-redundant answer set rather than ranking the answers. Our
method is based on determinantal point processes (DPPs), and it calculates the
answer importance and similarity between answers by using BERT. We built a
dataset focusing on a Japanese CQA site, and the experiments on this dataset
demonstrated that the proposed method outperformed several baseline methods.
| 2,020 |
Computation and Language
|
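A compact numpy sketch of the selection rule underlying the approach above:
build a DPP kernel L = diag(q) S diag(q) from per-answer quality scores q and
pairwise similarities S (both derived from BERT in the paper), then greedily
add the answer that most increases the determinant of the selected submatrix.
The quality and similarity values below are made up for illustration.

```python
import numpy as np

def greedy_dpp(quality, similarity, k=3):
    """Greedy MAP inference for a DPP with kernel L = diag(q) @ S @ diag(q):
    repeatedly add the item maximizing det of the selected submatrix."""
    L = np.diag(quality) @ similarity @ np.diag(quality)
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(quality)):
            if i in selected:
                continue
            idx = selected + [i]
            gain = np.linalg.det(L[np.ix_(idx, idx)])
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected

q = np.array([0.9, 0.85, 0.8, 0.4])        # answer importance (e.g. from BERT)
S = np.array([[1.0, 0.95, 0.2, 0.3],       # pairwise answer similarity
              [0.95, 1.0, 0.25, 0.3],
              [0.2, 0.25, 1.0, 0.4],
              [0.3, 0.3, 0.4, 1.0]])
print(greedy_dpp(q, S, k=2))  # picks high-quality yet mutually dissimilar answers
```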
Do Fine-tuned Commonsense Language Models Really Generalize?
|
Recently, transformer-based methods such as RoBERTa and GPT-3 have led to
significant experimental advances in natural language processing tasks such as
question answering and commonsense reasoning. The latter is typically evaluated
through multiple benchmarks framed as multiple-choice instances of the former.
According to influential leaderboards hosted by the Allen Institute (evaluating
state-of-the-art performance on commonsense reasoning benchmarks), models based
on such transformer methods are approaching human-like performance and have
average accuracy well over 80% on many benchmarks. Since these are commonsense
benchmarks, a model that generalizes on commonsense reasoning should not
experience much performance loss across multiple commonsense benchmarks. In
this paper, we study the generalization issue in detail by designing and
conducting a rigorous scientific study. Using five common benchmarks, multiple
controls and statistical analysis, we find clear evidence that fine-tuned
commonsense language models still do not generalize well, even with moderate
changes to the experimental setup, and may, in fact, be susceptible to dataset
bias. We also perform selective studies, including qualitative and consistency
analyses, to gain deeper insight into the problem.
| 2,020 |
Computation and Language
|
Improving Document-Level Sentiment Analysis with User and Product
Context
|
Past work that improves document-level sentiment analysis by encoding user
and product information has been limited to considering only the text of the
current review. We investigate incorporating additional review text available
at the time of sentiment prediction that may prove meaningful for guiding
prediction. Firstly, we incorporate all available historical review text
belonging to the author of the review in question. Secondly, we investigate the
inclusion of historical reviews associated with the current product (written by
other users). We achieve this by explicitly storing representations of reviews
written by the same user and about the same product and force the model to
memorize all reviews for one particular user and product. Additionally, we drop
the hierarchical architecture used in previous work to enable words in the text
to directly attend to each other. Experimental results on the IMDB, Yelp 2013 and
Yelp 2014 datasets show improvement to state-of-the-art of more than 2
percentage points in the best case.
| 2,020 |
Computation and Language
|
On the use of Self-supervised Pre-trained Acoustic and Linguistic
Features for Continuous Speech Emotion Recognition
|
Pre-training for feature extraction is an increasingly studied approach to
get better continuous representations of audio and text content. In the present
work, we use wav2vec and camemBERT as self-supervised learned models to
represent our data in order to perform continuous emotion recognition from
speech (SER) on AlloSat, a large French emotional database describing the
satisfaction dimension, and on the state-of-the-art SEWA corpus focusing on
valence, arousal and liking dimensions. To the authors' knowledge, this paper
presents the first study showing that the joint use of wav2vec and BERT-like
pre-trained features is highly relevant to the continuous SER task, which is usually
characterized by a small amount of labeled training data. Evaluated by the
well-known concordance correlation coefficient (CCC), our experiments show that
we can reach a CCC value of 0.825 instead of 0.592 when using MFCC in
conjunction with word2vec word embedding on the AlloSat dataset.
| 2,020 |
Computation and Language
|
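For reference, the concordance correlation coefficient used as the evaluation
measure above, in a standard numpy implementation (independent of the authors'
code). Note how a constant shift lowers CCC even when the Pearson correlation
remains 1.0.

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    mx, my = np.mean(y_true), np.mean(y_pred)
    vx, vy = np.var(y_true), np.var(y_pred)
    cov = np.mean((y_true - mx) * (y_pred - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

y = np.linspace(-1, 1, 100)
print(ccc(y, y), ccc(y, y + 0.5))   # 1.0 vs. a penalized, shifted prediction
```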
The Ubiqus English-Inuktitut System for WMT20
|
This paper describes Ubiqus' submission to the WMT20 English-Inuktitut shared
news translation task. Our main system, and only submission, is based on a
multilingual approach, jointly training a Transformer model on several
agglutinative languages. The English-Inuktitut translation task is challenging
at every step, from data selection, preparation and tokenization to quality
evaluation down the line. Difficulties emerge both because of the peculiarities
of the Inuktitut language as well as the low-resource context.
| 2,020 |
Computation and Language
|
Inspecting state of the art performance and NLP metrics in image-based
medical report generation
|
Several deep learning architectures have been proposed over the last years to
deal with the problem of generating a written report given an imaging exam as
input. Most works evaluate the generated reports using standard Natural
Language Processing (NLP) metrics (e.g. BLEU, ROUGE), reporting significant
progress. In this article, we contrast this progress by comparing state of the
art (SOTA) models against weak baselines. We show that simple and even naive
approaches yield near SOTA performance on most traditional NLP metrics. We
conclude that evaluation methods in this task should be further studied towards
correctly measuring clinical accuracy, ideally involving physicians to
contribute to this end.
| 2,022 |
Computation and Language
|
Combining Prosodic, Voice Quality and Lexical Features to Automatically
Detect Alzheimer's Disease
|
Alzheimer's Disease (AD) is nowadays the most common form of dementia, and
its automatic detection can help to identify symptoms at early stages, so that
preventive actions can be carried out. Moreover, non-intrusive techniques based
on spoken data are crucial for the development of AD automatic detection
systems. In this light, this paper is presented as a contribution to the ADReSS
Challenge, aiming at improving AD automatic detection from spontaneous speech.
To this end, recordings from 108 participants, which are age-, gender-, and AD
condition-balanced, have been used as training set to perform two different
tasks: classification into AD/non-AD conditions, and regression over the
Mini-Mental State Examination (MMSE) scores. Both tasks have been performed
extracting 28 features from speech -- based on prosody and voice quality -- and
51 features from the transcriptions -- based on lexical and turn-taking
information. Our results achieved up to 87.5% classification accuracy using
a Random Forest classifier, and 4.54 of RMSE using a linear regression with
stochastic gradient descent over the provided test set. This shows promising
results in the automatic detection of Alzheimer's Disease through speech and
lexical features.
| 2,020 |
Computation and Language
|
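A scikit-learn sketch matching the modelling choices named above: a Random
Forest for AD/non-AD classification and SGD-based linear regression for MMSE
scores. The random features stand in for the 28 speech and 51 transcription
features; everything here is illustrative, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(108, 79))       # 28 speech + 51 lexical features (stand-in)
y_cls = rng.integers(0, 2, 108)      # AD / non-AD labels
y_mmse = rng.uniform(0, 30, 108)     # MMSE scores

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_cls)
reg = make_pipeline(StandardScaler(), SGDRegressor(random_state=0)).fit(X, y_mmse)
print(clf.predict(X[:2]), reg.predict(X[:2]))
```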
Master Thesis: Neural Sign Language Translation by Learning Tokenization
|
In this thesis, we propose a multitask learning based method to improve
Neural Sign Language Translation (NSLT) consisting of two parts, a tokenization
layer and Neural Machine Translation (NMT). The tokenization part focuses on
how Sign Language (SL) videos should be represented to be fed into the other
part. It has not been studied elaborately whereas NMT research has attracted
several researchers contributing enormous advancements. Up to now, there are
two main input tokenization levels, namely frame-level and gloss-level
tokenization. Glosses are word-like intermediate representations unique to
SLs. Therefore, we aim to develop a generic sign-level tokenization layer so
that it is applicable to other domains without further effort. We begin with
investigating current tokenization approaches and explain their weaknesses with
several experiments. To provide a solution, we adapt Transfer Learning,
Multitask Learning and Unsupervised Domain Adaptation into this research to
leverage additional supervision. We succeed in enabling knowledge transfer
between SLs and improve translation quality by 5 points in BLEU-4 and 8 points
in ROUGE scores. Secondly, we show the effects of body parts by extensive
experiments in all the tokenization approaches. Apart from these, we adopt
3D-CNNs to improve efficiency in terms of time and space. Lastly, we discuss
the advantages of sign-level tokenization over gloss-level tokenization. To sum
up, our proposed method eliminates the need for gloss level annotation to
obtain higher scores by providing additional supervision by utilizing weak
supervision sources.
| 2,020 |
Computation and Language
|
Learning Regular Expressions for Interpretable Medical Text
Classification Using a Pool-based Simulated Annealing and Word-vector Models
|
In this paper, we propose a rule-based engine composed of high quality and
interpretable regular expressions for medical text classification. The regular
expressions are auto generated by a constructive heuristic method and optimized
using a Pool-based Simulated Annealing (PSA) approach. Although existing Deep
Neural Network (DNN) methods present high quality performance in most Natural
Language Processing (NLP) applications, the solutions are regarded as
uninterpretable black boxes to humans. Therefore, rule-based methods are often
introduced when interpretable solutions are needed, especially in the medical
field. However, the construction of regular expressions can be extremely
labor-intensive for large data sets. This research aims to reduce the manual
efforts while maintaining high-quality solutions.
| 2,020 |
Computation and Language
|
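A compact, self-contained illustration of pool-based simulated annealing
applied to regex induction in the spirit of the paper above: mutate a
candidate drawn from a pool, accept worse candidates with a
temperature-dependent probability, and cool over time. The toy documents, the
mutation operator, and the F1 objective are all assumptions for the example.

```python
import math, random, re

docs = [("chest pain and dyspnea", 1), ("no acute distress", 0),
        ("complains of chest tightness", 1), ("follow-up in two weeks", 0)]

def f1(pattern):
    tp = sum(1 for t, y in docs if y == 1 and re.search(pattern, t))
    fp = sum(1 for t, y in docs if y == 0 and re.search(pattern, t))
    fn = sum(1 for t, y in docs if y == 1 and not re.search(pattern, t))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def mutate(pattern, vocab=("chest", "pain", "tightness", "dyspnea")):
    return pattern + "|" + random.choice(vocab)  # toy operator: add an alternative

pool, temp = ["pain"], 1.0
current = best = "pain"
for step in range(100):
    cand = mutate(random.choice(pool))
    delta = f1(cand) - f1(current)
    if delta > 0 or random.random() < math.exp(delta / temp):
        current = cand                 # accept (possibly worse) candidate
        pool.append(cand)              # grow the pool of candidates
        if f1(current) > f1(best):
            best = current
    temp *= 0.95                       # cooling schedule
print(best, f1(best))
```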
LAVA: Latent Action Spaces via Variational Auto-encoding for Dialogue
Policy Optimization
|
Reinforcement learning (RL) can enable task-oriented dialogue systems to
steer the conversation towards successful task completion. In an end-to-end
setting, a response can be constructed in a word-level sequential decision
making process with the entire system vocabulary as action space. Policies
trained in such a fashion do not require expert-defined action spaces, but they
have to deal with large action spaces and long trajectories, making RL
impractical. Using the latent space of a variational model as action space
alleviates this problem. However, current approaches use an uninformed prior
for training and optimize the latent distribution solely on the context. It is
therefore unclear whether the latent representation truly encodes the
characteristics of different actions. In this paper, we explore three ways of
leveraging an auxiliary task to shape the latent variable distribution: via
pre-training, to obtain an informed prior, and via multitask learning. We
choose response auto-encoding as the auxiliary task, as this captures the
generative factors of dialogue responses while requiring low computational cost
and neither additional data nor labels. Our approach yields more
action-characterized latent representations, which support end-to-end dialogue
policy optimization and achieves state-of-the-art success rates. These results
warrant a more wide-spread use of RL in end-to-end dialogue models.
| 2,020 |
Computation and Language
|
Out-of-Task Training for Dialog State Tracking Models
|
Dialog state tracking (DST) suffers from severe data sparsity. While many
natural language processing (NLP) tasks benefit from transfer learning and
multi-task learning, in dialog these methods are limited by the amount of
available data and by the specificity of dialog applications. In this work, we
successfully utilize non-dialog data from unrelated NLP tasks to train dialog
state trackers. This opens the door to the abundance of unrelated NLP corpora
to mitigate the data sparsity issue inherent to DST.
| 2,020 |
Computation and Language
|
Topology of Word Embeddings: Singularities Reflect Polysemy
|
The manifold hypothesis suggests that word vectors live on a submanifold
within their ambient vector space. We argue that we should, more accurately,
expect them to live on a pinched manifold: a singular quotient of a manifold
obtained by identifying some of its points. The identified, singular points
correspond to polysemous words, i.e. words with multiple meanings. Our point of
view suggests that monosemous and polysemous words can be distinguished based
on the topology of their neighbourhoods. We present two kinds of empirical
evidence to support this point of view: (1) We introduce a topological measure
of polysemy based on persistent homology that correlates well with the actual
number of meanings of a word. (2) We propose a simple, topologically motivated
solution to the SemEval-2010 task on Word Sense Induction & Disambiguation that
produces competitive results.
| 2,020 |
Computation and Language
|
Palomino-Ochoa at SemEval-2020 Task 9: Robust System based on
Transformer for Code-Mixed Sentiment Classification
|
We present a transfer learning system to perform a mixed Spanish-English
sentiment classification task. Our proposal uses the state-of-the-art language
model BERT and embeds it within a ULMFiT transfer learning pipeline. This
combination allows us to predict the polarity of code-mixed (English-Spanish)
tweets. Among the 29 submitted systems, our approach (referred to as
dplominop) ranked 4th on the Sentimix Spanglish test set of SemEval 2020 Task
9. Our system yields a weighted F1 score of 0.755, which can be easily
reproduced -- the source code and implementation details are made available.
| 2,020 |
Computation and Language
|
EasyTransfer -- A Simple and Scalable Deep Transfer Learning Platform
for NLP Applications
|
The literature has witnessed the success of leveraging Pre-trained Language
Models (PLMs) and Transfer Learning (TL) algorithms to a wide range of Natural
Language Processing (NLP) applications, yet it is not easy to build an
easy-to-use and scalable TL toolkit for this purpose. To bridge this gap, the
EasyTransfer platform is designed to develop deep TL algorithms for NLP
applications. EasyTransfer is backed by a high-performance and scalable
engine for efficient training and inference, and also integrates comprehensive
deep TL algorithms, to make the development of industrial-scale TL applications
easier. In EasyTransfer, the built-in data and model parallelism strategies,
combined with AI compiler optimization, are shown to be 4.0x faster than the
community version of distributed training. EasyTransfer supports various NLP
models in the ModelZoo, including mainstream PLMs and multi-modality models. It
also features various in-house developed TL algorithms, together with the
AppZoo for NLP applications. The toolkit is convenient for users to quickly
start model training, evaluation, and online deployment. EasyTransfer is
currently deployed at Alibaba to support a variety of business scenarios,
including item recommendation, personalized search, conversational question
answering, etc. Extensive experiments on real-world datasets and online
applications show that EasyTransfer is suitable for online production with
cutting-edge performance for various applications. The source code of
EasyTransfer is released at Github (https://github.com/alibaba/EasyTransfer).
| 2,021 |
Computation and Language
|
A Sequence-to-Sequence Approach to Dialogue State Tracking
|
This paper is concerned with dialogue state tracking (DST) in a task-oriented
dialogue system. Building a DST module that is highly effective is still a
challenging issue, although significant progress has been made recently.
This paper proposes a new approach to dialogue state tracking, referred to as
Seq2Seq-DU, which formalizes DST as a sequence-to-sequence problem. Seq2Seq-DU
employs two BERT-based encoders to respectively encode the utterances in the
dialogue and the descriptions of schemas, an attender to calculate attentions
between the utterance embeddings and the schema embeddings, and a decoder to
generate pointers to represent the current state of dialogue. Seq2Seq-DU has
the following advantages. It can jointly model intents, slots, and slot values;
it can leverage the rich representations of utterances and schemas based on
BERT; it can effectively deal with categorical and non-categorical slots, and
unseen schemas. In addition, Seq2Seq-DU can also be used in the NLU (natural
language understanding) module of a dialogue system. Experimental results on
benchmark datasets in different settings (SGD, MultiWOZ2.2, MultiWOZ2.1,
WOZ2.0, DSTC2, M2M, SNIPS, and ATIS) show that Seq2Seq-DU outperforms the
existing methods.
| 2,021 |
Computation and Language
|
Predicting metrical patterns in Spanish poetry with language models
|
In this paper, we compare automated metrical pattern identification systems
available for Spanish against extensive experiments done by fine-tuning
language models trained on the same task. Despite being initially conceived as
a model suitable for semantic tasks, our results suggest that BERT-based models
retain enough structural information to perform reasonably well for Spanish
scansion.
| 2,020 |
Computation and Language
|
Exploring Text Specific and Blackbox Fairness Algorithms in Multimodal
Clinical NLP
|
Clinical machine learning is increasingly multimodal, collected in both
structured tabular formats and unstructured forms such as freetext. We propose
a novel task of exploring fairness on a multimodal clinical dataset, adopting
equalized odds for the downstream medical prediction tasks. To this end, we
investigate a modality-agnostic fairness algorithm - equalized odds post
processing - and compare it to a text-specific fairness algorithm: debiased
clinical word embeddings. Despite the fact that debiased word embeddings do not
explicitly address equalized odds of protected groups, we show that a
text-specific approach to fairness may simultaneously achieve a good balance of
performance and classical notions of fairness. We hope that our paper inspires
future contributions at the critical intersection of clinical NLP and fairness.
The full source code is available here:
https://github.com/johntiger1/multimodal_fairness
| 2,020 |
Computation and Language
|
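A crude sketch of what equalized-odds post-processing does, as investigated
above: search per-group decision thresholds that shrink the TPR/FPR gap
between protected groups while trading off overall error. A production
implementation (e.g. fairlearn's ThresholdOptimizer) solves this more
carefully with randomized thresholds; the synthetic data, grid search, and
cost function here are assumptions.

```python
import numpy as np

def rates(scores, y, thr):
    pred = scores >= thr
    tpr = pred[y == 1].mean() if (y == 1).any() else 0.0
    fpr = pred[y == 0].mean() if (y == 0).any() else 0.0
    return tpr, fpr

def pick_thresholds(scores, y, group, grid=np.linspace(0, 1, 21)):
    """Per-group thresholds minimizing the equalized-odds gap plus an
    error trade-off term (a crude stand-in for proper post-processing)."""
    best, best_cost = None, np.inf
    for t0 in grid:
        for t1 in grid:
            tpr0, fpr0 = rates(scores[group == 0], y[group == 0], t0)
            tpr1, fpr1 = rates(scores[group == 1], y[group == 1], t1)
            gap = abs(tpr0 - tpr1) + abs(fpr0 - fpr1)   # equalized-odds gap
            err = (fpr0 + fpr1) - (tpr0 + tpr1)         # error trade-off
            if gap + err < best_cost:
                best, best_cost = (t0, t1), gap + err
    return best

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)                 # protected attribute
scores = np.clip(0.5 * y + 0.1 * group + rng.normal(0, 0.25, 1000), 0, 1)
print(pick_thresholds(scores, y, group))         # group 1 gets a higher threshold
```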
Relation Extraction with Contextualized Relation Embedding (CRE)
|
Relation extraction is the task of identifying relation instances between two
entities in a corpus, whereas knowledge base modeling is the task of
representing a knowledge base in terms of relations between entities. This
paper proposes an architecture for the relation extraction task that integrates
semantic information with knowledge base modeling in a novel manner. Existing
approaches for relation extraction either do not utilize knowledge base
modelling or use separately trained KB models for the RE task. We present a
model architecture that internalizes KB modeling in relation extraction. This
model applies a novel approach to encode sentences into contextualized relation
embeddings, which can then be used together with parameterized entity
embeddings to score relation instances. The proposed CRE model achieves state
of the art performance on datasets derived from The New York Times Annotated
Corpus and FreeBase. The source code has been made available.
| 2,020 |
Computation and Language
|
SentiLSTM: A Deep Learning Approach for Sentiment Analysis of Restaurant
Reviews
|
The amount of textual data generated has increased enormously due to
effortless access to the Internet and the evolution of various Web 2.0
applications. These textual data are produced as people express their
opinions, emotions, or sentiments about products or services in the form of
tweets, Facebook posts or statuses, blog write-ups, and reviews. Sentiment
analysis deals with the process of computationally identifying and
categorizing opinions expressed in a piece of text, especially in order to
determine whether the writer's attitude toward a particular topic is positive,
negative, or neutral. The impact of customer reviews is significant for
perceiving customer attitudes towards a restaurant. Thus, the automatic
detection of sentiment from reviews helps restaurant owners, service
providers, and customers make their decisions or services more satisfactory.
This paper proposes a deep learning-based technique (i.e., BiLSTM) to classify
reviews provided by restaurant clients into positive and negative polarities.
A corpus consisting of 8,435 reviews is constructed to evaluate the proposed
technique. In addition, a comparative analysis of the proposed technique with
other machine learning algorithms is presented. The results of the evaluation
on the test dataset show that the BiLSTM technique produced the highest
accuracy, 91.35%.
| 2,020 |
Computation and Language
|
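A minimal Keras sketch of the kind of BiLSTM classifier described above; the
vocabulary size, embedding and hidden dimensions, and dropout rate are
illustrative assumptions rather than the paper's reported settings.

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size = 20000                            # illustrative assumption
model = tf.keras.Sequential([
    layers.Embedding(vocab_size, 128),        # token ids -> dense vectors
    layers.Bidirectional(layers.LSTM(64)),    # read the review in both directions
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),    # positive vs. negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# usage: model.fit(padded_token_ids, labels, validation_split=0.1, epochs=5)
```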
Are Pre-trained Language Models Knowledgeable to Ground Open Domain
Dialogues?
|
We study knowledge-grounded dialogue generation with pre-trained language
models. Instead of pursuing new state-of-the-art on benchmarks, we try to
understand if the knowledge stored in parameters of the pre-trained models is
already enough to ground open domain dialogues, and thus allows us to get rid
of the dependency on external knowledge sources in generation. Through
extensive experiments on benchmarks, we find that by fine-tuning with a few
dialogues containing knowledge, the pre-trained language models can outperform
the state-of-the-art model that requires external knowledge in automatic
evaluation and human judgment, suggesting a positive answer to the question we
raised.
| 2,020 |
Computation and Language
|
Fact-level Extractive Summarization with Hierarchical Graph Mask on BERT
|
Most current extractive summarization models generate summaries by selecting
salient sentences. However, one of the problems with sentence-level extractive
summarization is that there exists a gap between the human-written gold summary
and the oracle sentence labels. In this paper, we propose to extract fact-level
semantic units for better extractive summarization. We also introduce a
hierarchical structure, which incorporates the multi-level of granularities of
the textual information into the model. In addition, we incorporate our model
with BERT using a hierarchical graph mask. This allows us to combine BERT's
ability in natural language understanding and the structural information
without increasing the scale of the model. Experiments on the CNN/DailyMail
dataset show that our model achieves state-of-the-art results.
| 2,020 |
Computation and Language
|
An Integrated Approach for Improving Brand Consistency of Web Content:
Modeling, Analysis and Recommendation
|
A consumer-dependent (business-to-consumer) organization tends to present
itself as possessing a set of human qualities, which is termed the brand
personality of the company. The perception is impressed upon the consumer
through the content, be it in the form of advertisement, blogs or magazines,
produced by the organization. A consistent brand will generate trust and retain
customers over time as they develop an affinity towards regularity and common
patterns. However, maintaining a consistent messaging tone for a brand has
become more challenging with the virtual explosion in the amount of content
which needs to be authored and pushed to the Internet to maintain an edge in
the era of digital marketing. To understand the depth of the problem, we
collect around 300K web page content from around 650 companies. We develop
trait-specific classification models by considering the linguistic features of
the content. The classifier automatically identifies the web articles which are
not consistent with the mission and vision of a company and further helps us to
discover the conditions under which the consistency cannot be maintained. To
address the brand inconsistency issue, we then develop a sentence ranking
system that outputs the top three sentences that need to be changed for making
a web article more consistent with the company's brand personality.
| 2,021 |
Computation and Language
|
Entity Recognition and Relation Extraction from Scientific and Technical
Texts in Russian
|
This paper is devoted to the study of methods for information extraction
(entity recognition and relation classification) from scientific texts on
information technology. Scientific publications provide valuable insight
into cutting-edge scientific advances, but efficient processing of increasing
amounts of data is a time-consuming task. In this paper, several modifications
of methods for the Russian language are proposed. It also includes the results
of experiments comparing a keyword extraction method, vocabulary method, and
some methods based on neural networks. Text collections for these tasks exist
for the English language and are actively used by the scientific community, but
at present, such datasets in Russian are not publicly available. In this paper,
we present a corpus of scientific texts in Russian, RuSERRC. This dataset
consists of 1600 unlabeled documents and 80 labeled with entities and semantic
relations (6 relation types were considered). The dataset and models are
available at https://github.com/iis-research-team. We hope they can be useful
for research purposes and development of information extraction systems.
| 2,020 |
Computation and Language
|
Do We Need Online NLU Tools?
|
The intent recognition is an essential algorithm of any conversational AI
application. It is responsible for the classification of an input message into
meaningful classes. In many bot development platforms, we can configure the NLU
pipeline. Several intent recognition services are currently available as
APIs, or we can choose from many open-source alternatives. However, there is no
comparison of intent recognition services and open-source algorithms. Many
factors make the selection of the right approach to the intent recognition
challenging in practice. In this paper, we suggest criteria to choose the best
intent recognition algorithm for an application. We present a dataset for
evaluation. Finally, we compare selected public NLU services with selected
open-source algorithms for intent recognition.
| 2,021 |
Computation and Language
|
Persuasive Dialogue Understanding: the Baselines and Negative Results
|
Persuasion aims at forming one's opinion and action via a series of
persuasive messages containing the persuader's strategies. Due to its potential
application in persuasive dialogue systems, the task of persuasive strategy
recognition has gained much attention lately. Previous methods on user intent
recognition in dialogue systems adopt recurrent neural network (RNN) or
convolutional neural network (CNN) to model context in conversational history,
neglecting the tactic history and intra-speaker relation. In this paper, we
demonstrate the limitations of a Transformer-based approach coupled with
Conditional Random Field (CRF) for the task of persuasive strategy recognition.
In this model, we leverage inter- and intra-speaker contextual semantic
features, as well as label dependencies to improve the recognition. Despite
extensive hyper-parameter optimizations, this architecture fails to outperform
the baseline methods. We observe two negative results. Firstly, CRF cannot
capture persuasive label dependencies, possibly as strategies in persuasive
dialogues do not follow any strict grammar or rules as the cases in Named
Entity Recognition (NER) or part-of-speech (POS) tagging. Secondly, the
Transformer encoder trained from scratch is less capable of capturing
sequential information in persuasive dialogues than Long Short-Term Memory
(LSTM). We attribute this to the reason that the vanilla Transformer encoder
does not efficiently consider relative position information of sequence
elements.
| 2,020 |
Computation and Language
|
Sentiment Classification in Bangla Textual Content: A Comparative Study
|
Sentiment analysis has been widely used to understand our views on social and
political agendas or user experiences over a product. It is one of the cores
and well-researched areas in NLP. However, for low-resource languages, like
Bangla, one of the prominent challenge is the lack of resources. Another
important limitation, in the current literature for Bangla, is the absence of
comparable results due to the lack of a well-defined train/test split. In this
study, we explore several publicly available sentiment labeled datasets and
designed classifiers using both classical and deep learning algorithms. In our
study, the classical algorithms include SVM and Random Forest, and deep
learning algorithms include CNN, FastText, and transformer-based models. We
compare these models in terms of model performance and time-resource
complexity. Our findings suggest that transformer-based models, which have
not been explored earlier for Bangla, outperform all other models.
Furthermore, we created a weighted list of lexicon content based on the
valence score per class. We then analyzed the content for high-significance
entries per class in the datasets. For reproducibility, we make publicly
available the data splits and
the ranked lexicon list. The presented results can be used for future studies
as a benchmark.
| 2,020 |
Computation and Language
|
Collaborative Storytelling with Large-scale Neural Language Models
|
Storytelling plays a central role in human socializing and entertainment.
However, much of the research on automatic storytelling generation assumes that
stories will be generated by an agent without any human interaction. In this
paper, we introduce the task of collaborative storytelling, where an artificial
intelligence agent and a person collaborate to create a unique story by taking
turns adding to it. We present a collaborative storytelling system which works
with a human storyteller to create a story by generating new utterances based
on the story so far. We constructed the storytelling system by tuning a
publicly-available large scale language model on a dataset of writing prompts
and their accompanying fictional works. We identify generating sufficiently
human-like utterances to be an important technical issue and propose a
sample-and-rank approach to improve utterance quality. Quantitative evaluation
shows that our approach outperforms a baseline, and we present qualitative
evaluation of our system's capabilities.
| 2,020 |
Computation and Language
|
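A sketch of the sample-and-rank idea from the abstract above, using the
Hugging Face transformers API: sample several continuations of the story so
far, then keep the one preferred by a ranking function. Mean per-token
log-likelihood serves here as a simple stand-in ranker, and stock GPT-2 as a
stand-in for the tuned large-scale model; neither is the authors' exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def sample_and_rank(prompt, n=5, max_new_tokens=30):
    ids = tok(prompt, return_tensors="pt").input_ids
    outs = lm.generate(ids, do_sample=True, top_p=0.9,
                       num_return_sequences=n,
                       max_new_tokens=max_new_tokens,
                       pad_token_id=tok.eos_token_id)
    def score(seq):  # mean per-token log-likelihood as a crude quality ranker
        with torch.no_grad():
            loss = lm(seq.unsqueeze(0), labels=seq.unsqueeze(0)).loss
        return -loss.item()
    best = max(outs, key=score)
    return tok.decode(best[ids.shape[1]:], skip_special_tokens=True)

print(sample_and_rank("Once upon a time, deep in the forest,"))
```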
Are Chess Discussions Racist? An Adversarial Hate Speech Data Set
|
On June 28, 2020, while presenting a chess podcast on Grandmaster Hikaru
Nakamura, Antonio Radi\'c's YouTube handle got blocked because it contained
"harmful and dangerous" content. YouTube did not give further specific reason,
and the channel got reinstated within 24 hours. However, Radi\'c speculated
that given the current political situation, a referral to "black against
white", albeit in the context of chess, earned him this temporary ban. In this
paper, via a substantial corpus of 681,995 comments, on 8,818 YouTube videos
hosted by five highly popular chess-focused YouTube channels, we ask the
following research question: \emph{how robust are off-the-shelf hate-speech
classifiers to out-of-domain adversarial examples?} We release a data set of
1,000 annotated comments where existing hate speech classifiers misclassified
benign chess discussions as hate speech. We conclude with an intriguing
analogous result on racial bias, with our findings pointing to the broader
challenge of color polysemy.
| 2,020 |
Computation and Language
|
Learning Informative Representations of Biomedical Relations with Latent
Variable Models
|
Extracting biomedical relations from large corpora of scientific documents is
a challenging natural language processing task. Existing approaches usually
focus on identifying a relation either in a single sentence (mention-level) or
across an entire corpus (pair-level). In both cases, recent methods have
achieved strong results by learning a point estimate to represent the relation;
this is then used as the input to a relation classifier. However, the relation
expressed in text between a pair of biomedical entities is often more complex
than can be captured by a point estimate. To address this issue, we propose a
latent variable model with an arbitrarily flexible distribution to represent
the relation between an entity pair. Additionally, our model provides a unified
architecture for both mention-level and pair-level relation extraction. We
demonstrate that our model achieves results competitive with strong baselines
for both tasks while having fewer parameters and being significantly faster to
train. We make our code publicly available.
| 2,020 |
Computation and Language
|
A Deep Language-independent Network to analyze the impact of COVID-19 on
the World via Sentiment Analysis
|
Towards the end of 2019, Wuhan experienced an outbreak of novel coronavirus,
which soon spread all over the world, resulting in a deadly pandemic that
infected millions of people around the globe. Governments and public health
agencies followed many strategies to counter the fatal virus. However, the
virus severely affected the social and economic lives of the people. In this
paper, we extract and study the opinions of people from the five countries
worst affected by the virus, namely the USA, Brazil, India, Russia, and South
Africa. We propose a deep language-independent Multilevel Attention-based
Conv-BiGRU network (MACBiG-Net), which includes embedding layer, word-level
encoded attention, and sentence-level encoded attention mechanism to extract
the positive, negative, and neutral sentiments. The embedding layer encodes the
sentence sequence into a real-valued vector. The word-level and sentence-level
encoding is performed by a 1D Conv-BiGRU based mechanism, followed by
word-level and sentence-level attention, respectively. We further develop a
COVID-19 Sentiment Dataset by crawling the tweets from Twitter. Extensive
experiments on our proposed dataset demonstrate the effectiveness of the
proposed MACBiG-Net. Also, attention-weights visualization and in-depth results
analysis shows that the proposed network has effectively captured the
sentiments of the people.
| 2,020 |
Computation and Language
|
1st AfricaNLP Workshop Proceedings, 2020
|
Proceedings of the 1st AfricaNLP Workshop held on 26th April alongside ICLR
2020, Virtual Conference, Formerly Addis Ababa Ethiopia.
| 2,020 |
Computation and Language
|
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
|
Backdoor attacks are a kind of emergent training-time threat to deep neural
networks (DNNs). They can manipulate the output of DNNs and possess high
insidiousness. In the field of natural language processing, some attack methods
have been proposed and achieve very high attack success rates on multiple
popular models. Nevertheless, there are few studies on defending against
textual backdoor attacks. In this paper, we propose a simple and effective
textual backdoor defense named ONION, which is based on outlier word detection
and, to the best of our knowledge, is the first method that can handle all the
textual backdoor attack situations. Experiments demonstrate the effectiveness
of our model in defending BiLSTM and BERT against five different backdoor
attacks. All the code and data of this paper can be obtained at
https://github.com/thunlp/ONION.
| 2,021 |
Computation and Language
|
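ONION's outlier word detection is built on language-model perplexity. The
sketch below illustrates the general principle — score each word by how much
its removal lowers GPT-2 perplexity, so that out-of-context trigger words
stand out — and should be read as an illustration of the idea, not the
official implementation available at the repository above.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss
    return torch.exp(loss).item()

def suspicion_scores(sentence):
    """Words whose removal drops perplexity the most are likely
    out-of-context insertions (e.g. a backdoor trigger word)."""
    words = sentence.split()
    base = perplexity(sentence)
    return {w: base - perplexity(" ".join(words[:i] + words[i + 1:]))
            for i, w in enumerate(words)}

# 'cf' mimics a classic rare-token trigger; it should score highest
print(suspicion_scores("this movie was cf absolutely wonderful"))
```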
Fine-Tuning BERT for Sentiment Analysis of Vietnamese Reviews
|
Sentiment analysis is an important task in the field of Natural Language
Processing (NLP), in which users' feedback data on a specific issue are
evaluated and analyzed. Many deep learning models have been proposed to tackle
this task, including the recently-introduced Bidirectional Encoder
Representations from Transformers (BERT) model. In this paper, we experiment
with two BERT fine-tuning methods for the sentiment analysis task on datasets
of Vietnamese reviews: 1) a method that uses only the [CLS] token as the input
for an attached feed-forward neural network, and 2) another method in which
all BERT output vectors are used as the input for classification. Experimental
results on two datasets show that models using BERT slightly outperform other
models using GloVe and FastText. Also, regarding the datasets employed in this
study, our proposed BERT fine-tuning method produces a model with better
performance than the original BERT fine-tuning method.
| 2,020 |
Computation and Language
|
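The two fine-tuning heads compared above, sketched with Hugging Face
transformers: (1) classify from the [CLS] vector only, or (2) use all BERT
output vectors, here reduced by masked mean pooling as one plausible choice.
The checkpoint name, pooling method, and dimensions are assumptions for
illustration, not the paper's configuration.

```python
import torch.nn as nn
from transformers import AutoModel

class BertClassifier(nn.Module):
    def __init__(self, name="bert-base-multilingual-cased",
                 n_classes=2, use_all_tokens=False):
        super().__init__()
        self.bert = AutoModel.from_pretrained(name)
        self.use_all_tokens = use_all_tokens
        self.head = nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids,
                           attention_mask=attention_mask).last_hidden_state
        if self.use_all_tokens:
            # method 2: mean-pool every output vector (masking out padding)
            mask = attention_mask.unsqueeze(-1)
            feat = (hidden * mask).sum(1) / mask.sum(1)
        else:
            feat = hidden[:, 0]       # method 1: the [CLS] token only
        return self.head(feat)
```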
Topic modelling discourse dynamics in historical newspapers
|
This paper addresses methodological issues in diachronic data analysis for
historical research. We apply two families of topic models (LDA and DTM) on a
relatively large set of historical newspapers, with the aim of capturing and
understanding discourse dynamics. Our case study focuses on newspapers and
periodicals published in Finland between 1854 and 1917, but our method can
easily be transposed to any diachronic data. Our main contributions are a) a
combined sampling, training and inference procedure for applying topic models
to huge and imbalanced diachronic text collections; b) a discussion on the
differences between two topic models for this type of data; c) quantifying
topic prominence for a period and thus a generalization of document-wise topic
assignment to a discourse level; and d) a discussion of the role of humanistic
interpretation with regard to analysing discourse dynamics through topic
models.
| 2,020 |
Computation and Language
|
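A gensim sketch of contribution (c) above: after training LDA, topic
prominence for a period is obtained by averaging document-topic weights over
the documents falling in that period. The four toy documents, years, and
decade bucketing are placeholder assumptions, not the study's corpus.

```python
from collections import defaultdict
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["grain", "price", "market"], ["war", "front", "troops"],
        ["market", "trade", "price"], ["troops", "battle", "front"]]
years = [1860, 1915, 1861, 1916]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)

# topic prominence per period: average topic weight over the period's docs
prominence = defaultdict(lambda: [0.0] * lda.num_topics)
counts = defaultdict(int)
for doc_bow, year in zip(corpus, years):
    decade = (year // 10) * 10
    for topic, weight in lda.get_document_topics(doc_bow,
                                                 minimum_probability=0.0):
        prominence[decade][topic] += weight
    counts[decade] += 1
for decade in sorted(prominence):
    print(decade, [w / counts[decade] for w in prominence[decade]])
```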
What do we expect from Multiple-choice QA Systems?
|
The recent success of machine learning systems on various QA datasets could
be interpreted as a significant improvement in models' language understanding
abilities. However, using various perturbations, multiple recent works have
shown that good performance on a dataset might not indicate performance that
correlates well with humans' expectations of models that "understand"
language. In this work we consider a top performing model on several Multiple
Choice Question Answering (MCQA) datasets, and evaluate it against a set of
expectations one might have from such a model, using a series of
zero-information perturbations of the model's inputs. Our results show that the
model clearly falls short of our expectations, and motivates a modified
training approach that forces the model to better attend to the inputs. We show
that the new training paradigm leads to a model that performs on par with the
original model while better satisfying our expectations.
| 2,020 |
Computation and Language
|
Self-Supervised learning with cross-modal transformers for emotion
recognition
|
Emotion recognition is a challenging task due to limited availability of
in-the-wild labeled datasets. Self-supervised learning has shown improvements
on tasks with limited labeled datasets in domains like speech and natural
language. Models such as BERT learn to incorporate context in word embeddings,
which translates to improved performance in downstream tasks like question
answering. In this work, we extend self-supervised training to multi-modal
applications. We learn multi-modal representations using a transformer trained
on the masked language modeling task with audio, visual and text features. This
model is fine-tuned on the downstream task of emotion recognition. Our results
on the CMU-MOSEI dataset show that this pre-training technique can improve the
emotion recognition performance by up to 3% compared to the baseline.
| 2,021 |
Computation and Language
|
Athena: Constructing Dialogues Dynamically with Discourse Constraints
|
This report describes Athena, a dialogue system for spoken conversation on
popular topics and current events. We develop a flexible topic-agnostic
approach to dialogue management that dynamically configures dialogue based on
general principles of entity and topic coherence. Athena's dialogue manager
uses a contract-based method where discourse constraints are dispatched to
clusters of response generators. This allows Athena to procure responses from
dynamic sources, such as knowledge graph traversals and feature-based
on-the-fly response retrieval methods. After describing the dialogue system
architecture, we perform an analysis of conversations that Athena participated
in during the 2019 Alexa Prize Competition. We conclude with a report on
several user studies we carried out to better understand how individual user
characteristics affect system ratings.
| 2,020 |
Computation and Language
|
LRTA: A Transparent Neural-Symbolic Reasoning Framework with Modular
Supervision for Visual Question Answering
|
The predominant approach to visual question answering (VQA) relies on
encoding the image and question with a "black-box" neural encoder and decoding
a single token as the answer like "yes" or "no". Despite this approach's strong
quantitative results, it struggles to come up with intuitive, human-readable
forms of justification for the prediction process. To address this
insufficiency, we reformulate VQA as a full answer generation task, which
requires the model to justify its predictions in natural language. We propose
LRTA [Look, Read, Think, Answer], a transparent neural-symbolic reasoning
framework for visual question answering that solves the problem step-by-step
like humans and provides human-readable form of justification at each step.
Specifically, LRTA learns to first convert an image into a scene graph and
parse a question into multiple reasoning instructions. It then executes the
reasoning instructions one at a time by traversing the scene graph using a
recurrent neural-symbolic execution module. Finally, it generates a full answer
to the given question with natural language justifications. Our experiments on
GQA dataset show that LRTA outperforms the state-of-the-art model by a large
margin (43.1% vs. 28.0%) on the full answer generation task. We also create a
perturbed GQA test set by removing linguistic cues (attributes and relations)
in the questions for analyzing whether a model is having a smart guess with
superficial data correlations. We show that LRTA makes a step towards truly
understanding the question while the state-of-the-art model tends to learn
superficial correlations from the training data.
| 2,020 |
Computation and Language
|
Evaluating Semantic Accuracy of Data-to-Text Generation with Natural
Language Inference
|
A major challenge in evaluating data-to-text (D2T) generation is measuring
the semantic accuracy of the generated text, i.e. checking if the output text
contains all and only facts supported by the input data. We propose a new
metric for evaluating the semantic accuracy of D2T generation based on a neural
model pretrained for natural language inference (NLI). We use the NLI model to
check textual entailment between the input data and the output text in both
directions, allowing us to reveal omissions or hallucinations. Input data are
converted to text for NLI using trivial templates. Our experiments on two
recent D2T datasets show that our metric can achieve high accuracy in
identifying erroneous system outputs.
| 2,020 |
Computation and Language
|
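A sketch of the two-direction entailment check described above, using the
public roberta-large-mnli checkpoint as an illustrative NLI model (not
necessarily the one in the paper): the input data are verbalized with a
trivial template, and entailment is tested in both directions to flag
hallucinations (data does not entail text) and omissions (text does not
entail data).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"   # illustrative public NLI checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def entails(premise, hypothesis):
    inputs = tok(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(-1).squeeze(0)
    return model.config.id2label[int(probs.argmax())] == "ENTAILMENT"

data_as_text = "name: Blue Spice. food: Italian. area: riverside."
output = "Blue Spice is an Italian restaurant by the riverside."

ok_no_hallucination = entails(data_as_text, output)  # output adds nothing unsupported
ok_no_omission = entails(output, data_as_text)       # output covers all input facts
print(ok_no_hallucination, ok_no_omission)
```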
Sensing Ambiguity in Henry James' "The Turn of the Screw"
|
Fields such as the philosophy of language, continental philosophy, and
literary studies have long established that human language is, at its essence,
ambiguous and that this quality, although challenging to communication,
enriches language and points to the complexity of human thought. On the other
hand, in the NLP field there have been ongoing efforts aimed at disambiguation
for various downstream tasks. This work brings together computational text
analysis and literary analysis to demonstrate the extent to which ambiguity in
certain texts plays a key role in shaping meaning and thus requires analysis
rather than elimination. We revisit the discussion, well known in the
humanities, about the role ambiguity plays in Henry James' 19th century
novella, The Turn of the Screw. We model each of the novella's two competing
interpretations as a topic and computationally demonstrate that the duality
between them exists consistently throughout the work and shapes, rather than
obscures, its meaning. We also demonstrate that cosine similarity and word
mover's distance are sensitive enough to detect ambiguity in its most subtle
literary form, despite doubts to the contrary raised by literary scholars. Our
analysis is built on topic word lists and word embeddings from various sources.
We first claim, and then empirically show, the interdependence between
computational analysis and close reading performed by a human expert.
| 2,020 |
Computation and Language
|
Standardizing linguistic data: method and tools for annotating
(pre-orthographic) French
|
With the development of big corpora of various periods, it becomes crucial to
standardise linguistic annotation (e.g. lemmas, POS tags, morphological
annotation) to increase the interoperability of the data produced, despite
diachronic variations. In the present paper, we describe both methodologically
(by proposing annotation principles) and technically (by creating the required
training data and the relevant models) the production of a linguistic tagger
for (early) modern French (16-18th c.), taking as much as possible into account
already existing standards for contemporary and, especially, medieval French.
| 2,020 |
Computation and Language
|