Titles | Abstracts | Years | Categories |
---|---|---|---|
Aspect-based Sentiment Analysis of Scientific Reviews | Scientific papers are complex and understanding the usefulness of these
papers requires prior knowledge. Peer reviews are comments on a paper provided
by designated experts in that field and hold a substantial amount of
information, not only for the editors and chairs to make the final decision,
but also to judge the potential impact of the paper. In this paper, we propose
to use aspect-based sentiment analysis of scientific reviews to extract
useful information that correlates well with the accept/reject decision.
While working on a dataset of close to 8k reviews from ICLR, one of the top
conferences in the field of machine learning, we use an active learning
framework to build a training dataset for aspect prediction, which is further
used to obtain the aspects and sentiments for the entire dataset. We show that
the distribution of aspect-based sentiments obtained from a review is
significantly different for accepted and rejected papers. We use the aspect
sentiments from these reviews to make an intriguing observation: certain
aspects present in a paper and discussed in the review strongly determine the
final recommendation. As a second objective, we quantify the extent of
disagreement among the reviewers refereeing a paper. We also investigate the
extent of disagreement between the reviewers and the chair and find that the
inter-reviewer disagreement may have a link to the disagreement with the chair.
One of the most interesting observations from this study is that reviews in which
the reviewer's score is consistent with the aspect sentiments extracted from the
review text are also more likely to concur with the chair's decision.
| 2020 | Computation and Language |
Understanding Self-Attention of Self-Supervised Audio Transformers | Self-supervised Audio Transformers (SAT) have enabled great success in many
downstream speech applications like ASR, but how they work has not been widely
explored yet. In this work, we present multiple strategies for the analysis of
attention mechanisms in SAT. We categorize attentions into explainable
categories and discover that each category possesses its own unique
functionality. We provide a visualization tool for understanding multi-head
self-attention, importance ranking strategies for identifying critical
attention, and attention refinement techniques to improve model performance.
| 2020 | Computation and Language |
ELITR Non-Native Speech Translation at IWSLT 2020 | This paper is an ELITR system submission for the non-native speech
translation task at IWSLT 2020. We describe systems for offline ASR, real-time
ASR, and our cascaded approach to offline SLT and real-time SLT. We select our
primary candidates from a pool of pre-existing systems, and develop both a new
end-to-end general ASR system and a hybrid ASR system trained on non-native speech.
The provided small validation set prevents us from carrying out a complex
validation, but we submit all the unselected candidates for contrastive
evaluation on the test set.
| 2020 | Computation and Language |
Unsupervised Translation of Programming Languages | A transcompiler, also known as a source-to-source translator, is a system that
converts source code from a high-level programming language (such as C++ or
Python) to another. Transcompilers are primarily used for interoperability, and
to port codebases written in an obsolete or deprecated language (e.g. COBOL,
Python 2) to a modern one. They typically rely on handcrafted rewrite rules,
applied to the source code abstract syntax tree. Unfortunately, the resulting
translations often lack readability, fail to respect the target language
conventions, and require manual modifications in order to work properly. The
overall translation process is time-consuming and requires expertise in both the
source and target languages, making code-translation projects expensive.
Although neural models significantly outperform their rule-based counterparts
in the context of natural language translation, their applications to
transcompilation have been limited due to the scarcity of parallel data in this
domain. In this paper, we propose to leverage recent approaches in unsupervised
machine translation to train a fully unsupervised neural transcompiler. We
train our model on source code from open source GitHub projects, and show that
it can translate functions between C++, Java, and Python with high accuracy.
Our method relies exclusively on monolingual source code, requires no expertise
in the source or target languages, and can easily be generalized to other
programming languages. We also build and release a test set composed of 852
parallel functions, along with unit tests to check the correctness of
translations. We show that our model outperforms rule-based commercial
baselines by a significant margin.
| 2020 | Computation and Language |
Beyond Domain APIs: Task-oriented Conversational Modeling with
Unstructured Knowledge Access | Most prior work on task-oriented dialogue systems is restricted to a limited
coverage of domain APIs, while users oftentimes have domain related requests
that are not covered by the APIs. In this paper, we propose to expand coverage
of task-oriented dialogue systems by incorporating external unstructured
knowledge sources. We define three sub-tasks: knowledge-seeking turn detection,
knowledge selection, and knowledge-grounded response generation, which can be
modeled individually or jointly. We introduce an augmented version of MultiWOZ
2.1, which includes new out-of-API-coverage turns and responses grounded on
external knowledge sources. We present baselines for each sub-task using both
conventional and neural approaches. Our experimental results demonstrate the
need for further research in this direction to enable more informative
conversational systems.
| 2020 | Computation and Language |
CoCon: A Self-Supervised Approach for Controlled Text Generation | Pretrained Transformer-based language models (LMs) display remarkable natural
language generation capabilities. With their immense potential, controlling
text generation of such LMs is getting attention. While there are studies that
seek to control high-level attributes (such as sentiment and topic) of
generated text, there is still a lack of more precise control over its content
at the word- and phrase-level. Here, we propose Content-Conditioner (CoCon) to
control an LM's output text with a content input, at a fine-grained level. In
our self-supervised approach, the CoCon block learns to help the LM complete a
partially-observed text sequence by conditioning with content inputs that are
withheld from the LM. Through experiments, we show that CoCon can naturally
incorporate target content into generated texts and control high-level text
attributes in a zero-shot manner.
| 2022 | Computation and Language |
Sentiment Analysis Based on Deep Learning: A Comparative Study | The study of public opinion can provide us with valuable information. The
analysis of sentiment on social networks, such as Twitter or Facebook, has
become a powerful means of learning about the users' opinions and has a wide
range of applications. However, the efficiency and accuracy of sentiment
analysis are being hindered by the challenges encountered in natural language
processing (NLP). In recent years, it has been demonstrated that deep learning
models are a promising solution to the challenges of NLP. This paper reviews
the latest studies that have employed deep learning to solve sentiment analysis
problems, such as sentiment polarity. Models using term frequency-inverse
document frequency (TF-IDF) and word embedding have been applied to a series of
datasets. Finally, a comparative study has been conducted on the experimental
results obtained for the different models and input features.
| 2020 | Computation and Language |
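As a companion to the survey entry above, here is a minimal sketch of the TF-IDF feature pipeline it mentions, using scikit-learn; the posts, labels, and hyperparameters are illustrative placeholders, not the setup of any surveyed paper.

```python
# Minimal TF-IDF + logistic regression sentiment baseline (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled posts; the surveyed papers use real social-media corpora.
texts = ["I love this phone", "worst service ever",
         "pretty decent experience", "absolutely terrible support"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram + bigram TF-IDF features
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["the service was terrible"]))  # likely prediction: [0]
```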
Spoken dialect identification in Twitter using a multi-filter
architecture | This paper presents our approach for SwissText & KONVENS 2020 shared task 2,
which is a multi-stage neural model for Swiss German (GSW) identification on
Twitter. Our model outputs either GSW or non-GSW and is not meant to be used as
a generic language identifier. Our architecture consists of two independent
filters where the first one favors recall, and the second one favors
precision (both towards GSW). Moreover, we do not use binary models (GSW vs.
not-GSW) in our filters but rather a multi-class classifier with GSW being one
of the possible labels. Our model reaches an F1-score of 0.982 on the test set of
the shared task.
| 2020 | Computation and Language |
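The Swiss German entry above describes two chained filters with different precision/recall trade-offs; a hedged sketch of that cascade logic follows, with placeholder classifiers and thresholds (the authors' actual models and operating points are not reproduced here).

```python
# Two-stage filter cascade for GSW detection (illustrative thresholds only).
# Each stage is assumed to be a multi-class classifier returning P(label | tweet),
# with GSW as one label among several, as the abstract describes.

def cascade_predict(tweet, stage1_proba, stage2_proba,
                    recall_threshold=0.2, precision_threshold=0.8):
    """Return True if the tweet is predicted to be Swiss German (GSW)."""
    # Stage 1 favours recall: keep anything with even a modest GSW probability.
    if stage1_proba(tweet).get("gsw", 0.0) < recall_threshold:
        return False
    # Stage 2 favours precision: require a high GSW probability to confirm.
    return stage2_proba(tweet).get("gsw", 0.0) >= precision_threshold

# Dummy probability functions standing in for the trained multi-class models.
stage1 = lambda t: {"gsw": 0.35, "de": 0.40, "en": 0.25}
stage2 = lambda t: {"gsw": 0.90, "de": 0.06, "en": 0.04}
print(cascade_predict("grüezi mitenand", stage1, stage2))  # True
```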
Filtered Inner Product Projection for Crosslingual Embedding Alignment | Due to widespread interest in machine translation and transfer learning,
there are numerous algorithms for mapping multiple embeddings to a shared
representation space. Recently, these algorithms have been studied in the
setting of bilingual dictionary induction where one seeks to align the
embeddings of a source and a target language such that translated word pairs
lie close to one another in a common representation space. In this paper, we
propose a method, Filtered Inner Product Projection (FIPP), for mapping
embeddings to a common representation space and evaluate FIPP in the context of
bilingual dictionary induction. As semantic shifts are pervasive across
languages and domains, FIPP first identifies the common geometric structure in
both embeddings and then, only on the common structure, aligns the Gram
matrices of these embeddings. Unlike previous approaches, FIPP is applicable
even when the source and target embeddings are of differing dimensionalities.
We show that our approach outperforms existing methods on the MUSE dataset for
various language pairs. Furthermore, FIPP provides computational benefits both
in ease of implementation and scalability.
| 2021 | Computation and Language |
DeBERTa: Decoding-enhanced BERT with Disentangled Attention | Recent progress in pre-trained neural language models has significantly
improved the performance of many natural language processing (NLP) tasks. In
this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT
with disentangled attention) that improves the BERT and RoBERTa models using
two novel techniques. The first is the disentangled attention mechanism, where
each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed
using disentangled matrices on their contents and relative positions,
respectively. Second, an enhanced mask decoder is used to incorporate absolute
positions in the decoding layer to predict the masked tokens in model
pre-training. In addition, a new virtual adversarial training method is used
for fine-tuning to improve models' generalization. We show that these
techniques significantly improve the efficiency of model pre-training and the
performance of both natural language understanding (NLU) and natural language
generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model
trained on half of the training data performs consistently better on a wide
range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%),
on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%).
Notably, we scale up DeBERTa by training a larger version that consists of 48
Transformer layers with 1.5 billion parameters. The significant performance boost
makes the single DeBERTa model surpass the human performance on the SuperGLUE
benchmark (Wang et al., 2019a) for the first time in terms of macro-average
score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the
SuperGLUE leaderboard as of January 6, 2021, outperforming the human baseline
by a decent margin (90.3 versus 89.8).
| 2021 | Computation and Language |
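To make the disentangled-attention idea in the DeBERTa entry above concrete, here is a heavily simplified numpy sketch: attention scores are the sum of content-to-content, content-to-position, and position-to-content terms. It is an illustration of the idea only; the actual model uses relative-position buckets, multiple heads, and other details omitted here.

```python
# Simplified sketch of disentangled attention: each token has a content vector
# and a position vector, and the attention score sums three interaction terms.
# Not DeBERTa's exact implementation; shapes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L, d = 4, 8                       # sequence length, hidden size
H = rng.normal(size=(L, d))       # content representations
P = rng.normal(size=(L, d))       # (relative) position embeddings, simplified

Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))      # content projections
Wq_r, Wk_r = rng.normal(size=(d, d)), rng.normal(size=(d, d))  # position projections

Qc, Kc = H @ Wq, H @ Wk           # content queries and keys
Qr, Kr = P @ Wq_r, P @ Wk_r       # position queries and keys

scores = (Qc @ Kc.T               # content-to-content
          + Qc @ Kr.T             # content-to-position
          + Qr @ Kc.T)            # position-to-content
scores /= np.sqrt(3 * d)          # scale over the three terms

attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)   # row-wise softmax
print(attn.shape)                 # (4, 4) attention weights
```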
DeCLUTR: Deep Contrastive Learning for Unsupervised Textual
Representations | Sentence embeddings are an important component of many natural language
processing (NLP) systems. Like word embeddings, sentence embeddings are
typically learned on large text corpora and then transferred to various
downstream tasks, such as clustering and retrieval. Unlike word embeddings, the
highest performing solutions for learning sentence embeddings require labelled
data, limiting their usefulness to languages and domains where labelled data is
abundant. In this paper, we present DeCLUTR: Deep Contrastive Learning for
Unsupervised Textual Representations. Inspired by recent advances in deep
metric learning (DML), we carefully design a self-supervised objective for
learning universal sentence embeddings that does not require labelled training
data. When used to extend the pretraining of transformer-based language models,
our approach closes the performance gap between unsupervised and supervised
pretraining for universal sentence encoders. Importantly, our experiments
suggest that the quality of the learned embeddings scales with both the number
of trainable parameters and the amount of unlabelled training data. Our code
and pretrained models are publicly available and can be easily adapted to new
domains or used to embed unseen text.
| 2021 | Computation and Language |
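The DeCLUTR entry above relies on a contrastive, self-supervised objective over spans from the same document; below is a minimal InfoNCE-style loss in numpy as a hedged illustration of that family of objectives (the span-sampling strategy, encoder, and exact loss used in the paper are omitted).

```python
# Toy InfoNCE-style contrastive loss between anchor and positive span embeddings.
# Other in-batch embeddings act as negatives. Illustrative only; not the paper's
# exact objective, and the encoder producing the embeddings is omitted.
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # scaled cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))                      # matching pairs on the diagonal
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))               # 8 anchor spans, 16-dim embeddings
positives = anchors + 0.05 * rng.normal(size=(8, 16))   # nearby positive spans
print(round(float(info_nce(anchors, positives)), 4))     # low loss for aligned pairs
```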
Prague Dependency Treebank -- Consolidated 1.0 | We present a richly annotated and genre-diversified language resource, the
Prague Dependency Treebank-Consolidated 1.0 (PDT-C 1.0), the purpose of which
is - as it has always been the case for the family of the Prague Dependency
Treebanks - to serve both as training data for various types of NLP tasks and
as a resource for linguistically-oriented research. PDT-C 1.0 contains four different
datasets of Czech, uniformly annotated using the standard PDT scheme (albeit
not everything is annotated manually, as we describe in detail here). The texts
come from different sources: daily newspaper articles, Czech translation of the
Wall Street Journal, transcribed dialogs and a small amount of user-generated,
short, often non-standard language segments typed into a web translator.
Altogether, the treebank contains around 180,000 sentences with their
morphological, surface and deep syntactic annotation. The diversity of the
texts and annotations should serve NLP applications well; the treebank is also
an invaluable resource for linguistic research, including comparative studies
regarding texts of different genres. The corpus is publicly and freely
available.
| 2020 | Computation and Language |
UDPipe at EvaLatin 2020: Contextualized Embeddings and Treebank
Embeddings | We present our contribution to the EvaLatin shared task, which is the first
evaluation campaign devoted to the evaluation of NLP tools for Latin. We
submitted a system based on UDPipe 2.0, one of the winners of the CoNLL 2018
Shared Task, the 2018 Shared Task on Extrinsic Parser Evaluation, and the SIGMORPHON
2019 Shared Task. Our system places first by a wide margin both in
lemmatization and POS tagging in the open modality, where additional supervised
data is allowed, in which case we utilize all Universal Dependency Latin
treebanks. In the closed modality, where only the EvaLatin training data is
allowed, our system achieves the best performance in lemmatization and in
the classical subtask of POS tagging, while reaching second place in the cross-genre
and cross-time settings. In the ablation experiments, we also evaluate the
influence of BERT and XLM-RoBERTa contextualized embeddings, and the treebank
encodings of the different flavors of Latin treebanks.
| 2020 | Computation and Language |
Accelerating Natural Language Understanding in Task-Oriented Dialog | Task-oriented dialog models typically leverage complex neural architectures
and large-scale, pre-trained Transformers to achieve state-of-the-art
performance on popular natural language understanding benchmarks. However,
these models frequently have in excess of tens of millions of parameters,
making them impossible to deploy on-device where resource-efficiency is a major
concern. In this work, we show that a simple convolutional model compressed
with structured pruning achieves largely comparable results to BERT on ATIS and
Snips, with under 100K parameters. Moreover, we perform acceleration
experiments on CPUs, where we observe our multi-task model predicts intents and
slots nearly 63x faster than even DistilBERT.
| 2020 | Computation and Language |
Relation of the Relations: A New Paradigm of the Relation Extraction
Problem | In natural language, often multiple entities appear in the same text.
However, most previous works in Relation Extraction (RE) limit the scope to
identifying the relation between two entities at a time. Such an approach
induces a quadratic computation time, and also overlooks the interdependency
between multiple relations, namely the relation of relations (RoR). Due to the
significance of RoR in existing datasets, we propose a new paradigm of RE that
considers as a whole the predictions of all relations in the same context.
Accordingly, we develop a data-driven approach that does not require
hand-crafted rules but learns by itself the RoR, using Graph Neural Networks
and a relation matrix transformer. Experiments show that our model outperforms
the state-of-the-art approaches by +1.12\% on the ACE05 dataset and +2.55\% on
SemEval 2018 Task 7.2, which is a substantial improvement on the two
competitive benchmarks.
| 2020 | Computation and Language |
Challenges and Thrills of Legal Arguments | State-of-the-art attention-based models, mostly centered around the
transformer architecture, solve the problem of sequence-to-sequence translation
using the so-called scaled dot-product attention. While this technique is
highly effective for estimating inter-token attention, it does not answer the
question of inter-sequence attention when we deal with conversation-like
scenarios. We propose an extension, HumBERT, that attempts to perform
continuous contextual argument generation using locally trained transformers.
| 2020 | Computation and Language |
A Cross-Task Analysis of Text Span Representations | Many natural language processing (NLP) tasks involve reasoning with textual
spans, including question answering, entity recognition, and coreference
resolution. While extensive research has focused on functional architectures
for representing words and sentences, there is less work on representing
arbitrary spans of text within sentences. In this paper, we conduct a
comprehensive empirical evaluation of six span representation methods using
eight pretrained language representation models across six tasks, including two
tasks that we introduce. We find that, although some simple span
representations are fairly reliable across tasks, in general the optimal span
representation varies by task, and can also vary within different facets of
individual tasks. We also find that the choice of span representation has a
bigger impact with a fixed pretrained encoder than with a fine-tuned encoder.
| 2020 | Computation and Language |
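Since the entry above compares ways of building span representations from token vectors, here is a small sketch of three common choices (mean pooling, max pooling, endpoint concatenation); the specific methods and encoders evaluated in the paper may differ, and the token vectors here are random placeholders.

```python
# Three simple ways to turn contextual token vectors into a span representation.
# Token vectors would normally come from a pretrained encoder; here they are random.
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.normal(size=(10, 32))   # 10 token vectors, hidden size 32
start, end = 3, 7                    # span covers tokens 3..6 (end exclusive)
span = tokens[start:end]

mean_pool = span.mean(axis=0)                                  # average of span tokens
max_pool = span.max(axis=0)                                    # element-wise maximum
endpoints = np.concatenate([tokens[start], tokens[end - 1]])   # first and last token

print(mean_pool.shape, max_pool.shape, endpoints.shape)        # (32,) (32,) (64,)
```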
Generative Adversarial Phonology: Modeling unsupervised phonetic and
phonological learning with neural networks | Training deep neural networks on well-understood dependencies in speech data
can provide new insights into how they learn internal representations. This
paper argues that acquisition of speech can be modeled as a dependency between
random space and generated speech data in the Generative Adversarial Network
architecture and proposes a methodology to uncover the network's internal
representations that correspond to phonetic and phonological properties. The
Generative Adversarial architecture is uniquely appropriate for modeling
phonetic and phonological learning because the network is trained on
unannotated raw acoustic data and learning is unsupervised without any
language-specific assumptions or pre-assumed levels of abstraction. A
Generative Adversarial Network was trained on an allophonic distribution in
English. The network successfully learns the allophonic alternation: the
network's generated speech signal contains the conditional distribution of
aspiration duration. The paper proposes a technique for establishing the
network's internal representations that identifies latent variables that
correspond to, for example, presence of [s] and its spectral properties. By
manipulating these variables, we actively control the presence of [s] and its
frication amplitude in the generated outputs. This suggests that the network
learns to use latent variables as an approximation of phonetic and phonological
representations. Crucially, we observe that the dependencies learned in
training extend beyond the training interval, which allows for additional
exploration of learning representations. The paper also discusses how the
network's architecture and innovative outputs resemble and differ from
linguistic behavior in language acquisition, speech disorders, and speech
errors, and how well-understood dependencies in speech data can help us
interpret how neural networks learn their representations.
| 2020 | Computation and Language |
Medical Concept Normalization in User Generated Texts by Learning Target
Concept Embeddings | Medical concept normalization helps in discovering standard concepts in
free-form text, i.e., it maps health-related mentions to standard concepts in a
vocabulary. It goes well beyond simple string matching and requires a deep
semantic understanding of concept mentions. Recent research approaches concept
normalization as either text classification or text matching. The main drawback
of existing a) text classification approaches is that they ignore valuable target
concept information when learning the input concept mention representation, and of
b) text matching approaches that target concept embeddings must be generated
separately, which is time- and resource-consuming. Our proposed model overcomes these
drawbacks by jointly learning the representations of input concept mention and
target concepts. First, it learns the input concept mention representation
using RoBERTa. Second, it finds cosine similarity between embeddings of input
concept mention and all the target concepts. Here, embeddings of target
concepts are randomly initialized and then updated during training. Finally,
the target concept with maximum cosine similarity is assigned to the input
concept mention. Our model surpasses all the existing methods across three
standard datasets, improving accuracy by up to 2.31%.
| 2020 | Computation and Language |
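A minimal sketch of the scoring step described in the entry above: compare a mention embedding against a matrix of target-concept embeddings by cosine similarity and pick the argmax. The RoBERTa encoder and the training loop that updates the concept embeddings are omitted; all vectors here are random placeholders.

```python
# Assign a concept mention to the target concept with the highest cosine similarity.
# In the described model the mention vector comes from RoBERTa and the concept
# matrix is learned during training; both are random stand-ins here.
import numpy as np

rng = np.random.default_rng(0)
num_concepts, dim = 5, 64
concept_embeddings = rng.normal(size=(num_concepts, dim))  # learned in practice
mention_embedding = rng.normal(size=(dim,))                # encoder output in practice

def normalize_mention(mention, concepts):
    m = mention / np.linalg.norm(mention)
    c = concepts / np.linalg.norm(concepts, axis=1, keepdims=True)
    similarities = c @ m                      # cosine similarity to every concept
    return int(np.argmax(similarities)), similarities

concept_id, sims = normalize_mention(mention_embedding, concept_embeddings)
print(concept_id, np.round(sims, 3))
```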
A Multitask Learning Approach for Diacritic Restoration | In many languages like Arabic, diacritics are used to specify pronunciations
as well as meanings. Such diacritics are often omitted in written text,
increasing the number of possible pronunciations and meanings for a word. This
results in a more ambiguous text making computational processing on such text
more difficult. Diacritic restoration is the task of restoring missing
diacritics in the written text. Most state-of-the-art diacritic restoration
models are built on character level information which helps generalize the
model to unseen data, but presumably lose useful information at the word level.
Thus, to compensate for this loss, we investigate the use of multi-task
learning to jointly optimize diacritic restoration with related NLP problems
namely word segmentation, part-of-speech tagging, and syntactic diacritization.
We use Arabic as a case study since it has sufficient data resources for tasks
that we consider in our joint modeling. Our joint models significantly
outperform the baselines and are comparable to the state-of-the-art models that
are more complex relying on morphological analyzers and/or a lot more data
(e.g. dialectal data).
| 2020 | Computation and Language |
Semantic Loss Application to Entity Relation Recognition | Usually, entity relation recognition systems either use a pipe-lined model
that treats the entity tagging and relation identification as separate tasks or
a joint model that simultaneously identifies the relation and entities. This
paper compares these two general approaches for the entity relation
recognition. State-of-the-art entity relation recognition systems are built
using deep recurrent neural networks, which often do not capture the symbolic
knowledge or the logical constraints in the problem. The main contribution of
this paper is an end-to-end neural model for joint entity relation extraction
which incorporates a novel loss function. This novel loss function encodes the
constraint information in the problem to guide the model training effectively.
We show that addition of this loss function to the existing typical loss
functions has a positive impact over the performance of the models. This model
is truly end-to-end, requires no feature engineering, and is easily extensible.
Extensive experimentation has been conducted to evaluate the significance of
capturing symbolic knowledge for natural language understanding. Models using
this loss function are observed to outperform their counterparts and to
converge faster. Experimental results in this work suggest the use of this
methodology for other language understanding applications.
| 2020 | Computation and Language |
Growing Together: Modeling Human Language Learning With n-Best
Multi-Checkpoint Machine Translation | We describe our submission to the 2020 Duolingo Shared Task on Simultaneous
Translation And Paraphrase for Language Education (STAPLE) (Mayhew et al.,
2020). We view MT models at various training stages (i.e., checkpoints) as
human learners at different levels. Hence, we employ an ensemble of
multi-checkpoints from the same model to generate translation sequences with
various levels of fluency. From each checkpoint, for our best model, we sample
n-Best sequences (n=10) with a beam width of 100. We achieve 37.57 macro F1 with
a 6-checkpoint model ensemble on the official English to Portuguese shared task
test data, outperforming a baseline Amazon translation system of 21.30 macro F1
and ultimately demonstrating the utility of our intuitive method.
| 2020 | Computation and Language |
Language Models as Fact Checkers? | Recent work has suggested that language models (LMs) store both common-sense
and factual knowledge learned from pre-training data. In this paper, we
leverage this implicit knowledge to create an effective end-to-end fact checker
using solely a language model, without any external knowledge or explicit
retrieval components. While previous work on extracting knowledge from LMs has
focused on the task of open-domain question answering, to the best of our
knowledge, this is the first work to examine the use of language models as fact
checkers. In a closed-book setting, we show that our zero-shot LM approach
outperforms a random baseline on the standard FEVER task, and that our
fine-tuned LM compares favorably with standard baselines. Though we do not
ultimately outperform methods which use explicit knowledge bases, we believe
our exploration shows that this method is viable and has much room for
exploration.
| 2020 | Computation and Language |
Interactive Extractive Search over Biomedical Corpora | We present a system that allows life-science researchers to search a
linguistically annotated corpus of scientific texts using patterns over
dependency graphs, as well as using patterns over token sequences and a
powerful variant of boolean keyword queries. In contrast to previous attempts
at dependency-based search, we introduce a light-weight query language that
does not require the user to know the details of the underlying linguistic
representations, and instead lets the user query the corpus by providing an example
sentence coupled with simple markup. Search is performed at an interactive
speed due to an efficient linguistic graph-indexing and retrieval engine. This
allows for rapid exploration, development and refinement of user queries. We
demonstrate the system using example workflows over two corpora: the PubMed
corpus including 14,446,243 PubMed abstracts and the CORD-19 dataset, a
collection of over 45,000 research papers focused on COVID-19 research. The
system is publicly available at https://allenai.github.io/spike
| 2020 | Computation and Language |
BERT Loses Patience: Fast and Robust Inference with Early Exit | In this paper, we propose Patience-based Early Exit, a straightforward yet
effective inference method that can be used as a plug-and-play technique to
simultaneously improve the efficiency and robustness of a pretrained language
model (PLM). To achieve this, our approach couples an internal classifier with
each layer of a PLM and dynamically stops inference when the intermediate
predictions of the internal classifiers remain unchanged for a pre-defined
number of steps. Our approach improves inference efficiency as it allows the
model to make a prediction with fewer layers. Meanwhile, experimental results
with an ALBERT model show that our method can improve the accuracy and
robustness of the model by preventing it from overthinking and exploiting
multiple classifiers for prediction, yielding a better accuracy-speed trade-off
compared to existing early exit methods.
| 2020 | Computation and Language |
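The control flow behind the patience-based early exit described above can be sketched in a few lines: run the model layer by layer, ask an internal classifier for a prediction after each layer, and stop once the prediction has been stable for a fixed number of consecutive layers. The layers and classifiers below are stubs, not the ALBERT model from the paper.

```python
# Patience-based early exit: stop inference once intermediate predictions stop
# changing for `patience` consecutive layers. Layers and classifiers are stubs.

def early_exit_predict(hidden, layers, classifiers, patience=2):
    prev_pred, streak = None, 0
    for layer, clf in zip(layers, classifiers):
        hidden = layer(hidden)                # run one transformer layer
        pred = clf(hidden)                    # internal classifier's prediction
        streak = streak + 1 if pred == prev_pred else 1
        prev_pred = pred
        if streak >= patience:                # prediction stable long enough
            return pred                       # exit without running later layers
    return prev_pred                          # fell through: use the final layer

# Dummy 6-layer "model" whose internal classifiers agree from the third layer on.
layers = [lambda h: h] * 6
classifiers = [lambda h, p=p: p for p in [0, 1, 2, 2, 2, 2]]
print(early_exit_predict("h0", layers, classifiers, patience=2))  # exits early with 2
```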
Pre-training Polish Transformer-based Language Models at Scale | Transformer-based language models are now widely used in Natural Language
Processing (NLP). This statement is especially true for the English language, for
which many pre-trained models utilizing transformer-based architecture have
been published in recent years. This has driven forward the state of the art
for a variety of standard NLP tasks such as classification, regression, and
sequence labeling, as well as text-to-text tasks, such as machine translation,
question answering, or summarization. The situation has been different for
low-resource languages, such as Polish, however. Although some
transformer-based language models for Polish are available, none of them have
come close to the scale, in terms of corpus size and the number of parameters,
of the largest English-language models. In this study, we present two language
models for Polish based on the popular BERT architecture. The larger model was
trained on a dataset consisting of over 1 billion Polish sentences, or 135GB of
raw text. We describe our methodology for collecting the data, preparing the
corpus, and pre-training the model. We then evaluate our models on thirteen
Polish linguistic tasks, and demonstrate improvements over previous approaches
in eleven of them.
| 2020 | Computation and Language |
Tensors over Semirings for Latent-Variable Weighted Logic Programs | Semiring parsing is an elegant framework for describing parsers by using
semiring weighted logic programs. In this paper we present a generalization of
this concept: latent-variable semiring parsing. With our framework, any
semiring weighted logic program can be latentified by transforming weights from
scalar values of a semiring to rank-n arrays, or tensors, of semiring values,
allowing the modelling of latent variables within the semiring parsing
framework. A semiring is too strong a notion when dealing with tensors, and we
have to resort to a weaker structure: a partial semiring. We prove that this
generalization preserves all the desired properties of the original semiring
framework while strictly increasing its expressiveness.
| 2020 | Computation and Language |
Combining word embeddings and convolutional neural networks to detect
duplicated questions | Detecting semantic similarities between sentences is still a challenge today
due to the ambiguity of natural languages. In this work, we propose a simple
approach to identifying semantically similar questions by combining the
strengths of word embeddings and Convolutional Neural Networks (CNNs). In
addition, we demonstrate how the cosine similarity metric can be used to
effectively compare feature vectors. Our network is trained on the Quora
dataset, which contains over 400k question pairs. We experiment with different
embedding approaches such as Word2Vec, Fasttext, and Doc2Vec and investigate
the effects these approaches have on model performance. Our model achieves
competitive results on the Quora dataset and complements the well-established
evidence that CNNs can be utilized for paraphrase detection tasks.
| 2020 | Computation and Language |
Towards an Argument Mining Pipeline Transforming Texts to Argument
Graphs | This paper targets the automated extraction of components of argumentative
information and their relations from natural language text. Moreover, we
address a current lack of systems to provide complete argumentative structure
from arbitrary natural language text for general usage. We present an argument
mining pipeline as a universally applicable approach for transforming German
and English language texts to graph-based argument representations. We also
introduce new methods for evaluating the results based on existing benchmark
argument structures. Our results show that the generated argument graphs can be
beneficial to detect new connections between different statements of an
argumentative text. Our pipeline implementation is publicly available on
GitHub.
| 2020 | Computation and Language |
CS-Embed at SemEval-2020 Task 9: The effectiveness of code-switched word
embeddings for sentiment analysis | The growing popularity and applications of sentiment analysis of social media
posts have naturally led to sentiment analysis of posts written in multiple
languages, a practice known as code-switching. While recent research into
code-switched posts has focused on the use of multilingual word embeddings,
these embeddings were not trained on code-switched data. In this work, we
present word-embeddings trained on code-switched tweets, specifically those
that make use of Spanish and English, known as Spanglish. We explore the
embedding space to discover how they capture the meanings of words in both
languages. We test the effectiveness of these embeddings by participating in
SemEval 2020 Task 9: \emph{Sentiment Analysis on Code-Mixed Social Media
Text}. We utilised them to train a sentiment classifier that achieves an F-1
score of 0.722. This is higher than the baseline for the competition of 0.656,
with our team (codalab username \emph{francesita}) ranking 14th out of 29
participating teams, beating the baseline.
| 2020 | Computation and Language |
A Comprehensive Survey on Aspect Based Sentiment Analysis | Aspect Based Sentiment Analysis (ABSA) is the sub-field of Natural Language
Processing that deals with essentially splitting the data into aspects and
finally extracting the sentiment information. ABSA is known to provide more
information about the context than general sentiment analysis. In this study,
our aim is to explore the various methodologies practiced while performing
ABSA and to provide a comparative study. This survey paper discusses various
solutions in depth, gives a comparison between them, and is conveniently
divided into sections to give a holistic view of the process.
| 2020 | Computation and Language |
ColdGANs: Taming Language GANs with Cautious Sampling Strategies | Training regimes based on Maximum Likelihood Estimation (MLE) suffer from
known limitations, often leading to poorly generated text sequences. At the
root of these limitations is the mismatch between training and inference, i.e.
the so-called exposure bias, exacerbated by considering only the reference
texts as correct, while in practice several alternative formulations could be
as good. Generative Adversarial Networks (GANs) can mitigate those limitations
but the discrete nature of text has hindered their application to language
generation: the approaches proposed so far, based on Reinforcement Learning,
have been shown to underperform MLE. Departing from previous works, we analyze
the exploration step in GANs applied to text generation, and show how classical
sampling results in unstable training. We propose to consider alternative
exploration strategies in a GAN framework that we name ColdGANs, where we force
the sampling to be close to the distribution modes to get smoother learning
dynamics. For the first time, to the best of our knowledge, the proposed
language GANs compare favorably to MLE, and obtain improvements over the
state-of-the-art on three generative tasks, namely unconditional text
generation, question generation, and abstractive summarization.
| 2020 | Computation and Language |
Misinformation Has High Perplexity | Debunking misinformation is an important and time-critical task as there
could be adverse consequences when misinformation is not quashed promptly.
However, the usual supervised approach to debunking via misinformation
classification requires human-annotated data and is not suited to the fast
time-frame of newly emerging events such as the COVID-19 outbreak. In this
paper, we postulate that misinformation itself has higher perplexity compared
to truthful statements, and propose to leverage the perplexity to debunk false
claims in an unsupervised manner. First, we extract reliable evidence from
scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally
evaluate the correctness of given claims based on the perplexity scores at
debunking time. We construct two new COVID-19-related test sets, one
scientific and the other political in content, and empirically verify that
our system performs favorably compared to existing systems. We are releasing
these datasets publicly to encourage more research in debunking misinformation
on COVID-19 and other topics.
| 2020 | Computation and Language |
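A minimal sketch of the perplexity scoring idea from the entry above, using a pretrained GPT-2 from the Hugging Face `transformers` library; the evidence retrieval, priming format, and decision thresholds are the paper's own and are only crudely imitated here by prepending an evidence string.

```python
# Score a claim by language-model perplexity; higher perplexity is taken as a
# signal of possible misinformation. Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss     # mean cross-entropy per token
    return torch.exp(loss).item()

# Hypothetical evidence and claim; real evidence would come from retrieval.
evidence = "Washing hands with soap reduces the transmission of respiratory viruses."
claim = "Drinking bleach cures viral infections."
print(perplexity(evidence + " " + claim))       # compare against a tuned threshold
```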
CycleGT: Unsupervised Graph-to-Text and Text-to-Graph Generation via
Cycle Training | Two important tasks at the intersection of knowledge graphs and natural
language processing are graph-to-text (G2T) and text-to-graph (T2G) conversion.
Due to the difficulty and high cost of data collection, the supervised data
available in the two fields is usually on the order of tens of thousands,
for example, 18K in the WebNLG~2017 dataset after preprocessing, which is far
fewer than the millions of examples available for other tasks such as machine translation.
Consequently, deep learning models for G2T and T2G suffer largely from scarce
training data. We present CycleGT, an unsupervised training method that can
bootstrap from fully non-parallel graph and text data, and iteratively back
translate between the two forms. Experiments on WebNLG datasets show that our
unsupervised model trained on the same amount of data achieves performance on
par with several fully supervised models. Further experiments on the
non-parallel GenWiki dataset verify that our method performs the best among
unsupervised baselines. This validates our framework as an effective approach
to overcome the data scarcity problem in the fields of G2T and T2G. Our code is
available at https://github.com/QipengGuo/CycleGT.
| 2020 | Computation and Language |
Modeling Discourse Structure for Document-level Neural Machine
Translation | Recently, document-level neural machine translation (NMT) has become a hot
topic in the community of machine translation. Despite its success, most
existing studies have ignored the discourse structure information of the input
document to be translated, which has been shown to be effective in other tasks. In this
paper, we propose to improve document-level NMT with the aid of discourse
structure information. Our encoder is based on a hierarchical attention network
(HAN). Specifically, we first parse the input document to obtain its discourse
structure. Then, we introduce a Transformer-based path encoder to embed the
discourse structure information of each word. Finally, we combine the discourse
structure information with the word embedding before it is fed into the
encoder. Experimental results on the English-to-German dataset show that our
model can significantly outperform both Transformer and Transformer+HAN.
| 2020 | Computation and Language |
What's the Difference Between Professional Human and Machine
Translation? A Blind Multi-language Study on Domain-specific MT | Machine translation (MT) has been shown to produce a number of errors that
require human post-editing, but the extent to which professional human
translation (HT) contains such errors has not yet been compared to MT. We
compile pre-translated documents in which MT and HT are interleaved, and ask
professional translators to flag errors and post-edit these documents in a
blind evaluation. We find that the post-editing effort for MT segments is only
higher in two out of three language pairs, and that the number of segments with
wrong terminology, omissions, and typographical problems is similar in HT.
| 2020 | Computation and Language |
Universal Vector Neural Machine Translation With Effective Attention | Neural Machine Translation (NMT) leverages one or more trained neural
networks for the translation of phrases. Sutskever et al. introduced a
sequence-to-sequence encoder-decoder model which became the standard for NMT-based
systems. Attention mechanisms were later introduced to address the issues with
the translation of long sentences and to improve overall accuracy. In this
paper, we propose a singular model for Neural Machine Translation based on
encoder-decoder models. Most translation models are trained as one model for
one translation. We introduce a neutral/universal model representation that can
be used to predict more than one language depending on the source and a
provided target. Secondly, we introduce an attention model by adding an overall
learning vector to the multiplicative model. With these two changes, by using
the novel universal model, the number of models needed for multiple-language
translation applications is reduced.
| 2020 | Computation and Language |
HausaMT v1.0: Towards English-Hausa Neural Machine Translation | Neural Machine Translation (NMT) for low-resource languages suffers from low
performance because of the lack of large amounts of parallel data and language
diversity. To contribute to ameliorating this problem, we built a baseline
model for English-Hausa machine translation, which is considered a task for
low-resource language. The Hausa language is the second largest Afro-Asiatic
language in the world after Arabic and it is the third largest language for
trading across a large swath of West African countries, after English and
French. In this paper, we curated different datasets containing Hausa-English
parallel corpora for our translation. We trained baseline models and evaluated
the performance of our models using the Recurrent and Transformer
encoder-decoder architecture with two tokenization approaches: standard
word-level tokenization and Byte Pair Encoding (BPE) subword tokenization.
| 2020 | Computation and Language |
Human brain activity for machine attention | Cognitively inspired NLP leverages human-derived data to teach machines about
language processing mechanisms. Recently, neural networks have been augmented
with behavioral data to solve a range of NLP tasks spanning syntax and
semantics. We are the first to exploit neuroscientific data, namely
electroencephalography (EEG), to inform a neural attention model about language
processing of the human brain. The challenge in working with EEG data is that
features are exceptionally rich and need extensive pre-processing to isolate
signals specific to text processing. We devise a method for finding such EEG
features to supervise machine attention through combining theoretically
motivated cropping with random forest tree splits. After this dimensionality
reduction, the pre-processed EEG features are capable of distinguishing two
reading tasks retrieved from a publicly available EEG corpus. We apply these
features to regularise attention on relation classification and show that EEG
is more informative than strong baselines. This improvement depends on both the
cognitive load of the task and the EEG frequency domain. Hence, informing
neural attention models with EEG signals is beneficial but requires further
investigation to understand which dimensions are the most useful across NLP
tasks.
| 2020 | Computation and Language |
ConfNet2Seq: Full Length Answer Generation from Spoken Questions | Conversational and task-oriented dialogue systems aim to interact with the
user using natural responses through multi-modal interfaces, such as text or
speech. These desired responses are in the form of full-length natural answers
generated over facts retrieved from a knowledge source. While the task of
generating natural answers to questions from an answer span has been widely
studied, there has been little research on natural sentence generation over
spoken content. We propose a novel system to generate full length natural
language answers from spoken questions and factoid answers. The spoken sequence
is compactly represented as a confusion network extracted from a pre-trained
Automatic Speech Recognizer. This is the first attempt towards generating
full-length natural answers from a graph input (confusion network), to the best
of our knowledge. We release a large-scale dataset of 259,788 samples of spoken
questions, their factoid answers and corresponding full-length textual answers.
Following our proposed approach, we achieve performance comparable with the best
ASR hypothesis.
| 2020 | Computation and Language |
Learning to Recover from Multi-Modality Errors for Non-Autoregressive
Neural Machine Translation | Non-autoregressive neural machine translation (NAT) predicts the entire
target sequence simultaneously and significantly accelerates the inference process.
However, NAT discards the dependency information in a sentence, and thus
inevitably suffers from the multi-modality problem: the target tokens may be
provided by different possible translations, often causing repeated or
missing tokens. To alleviate this problem, we propose a novel semi-autoregressive
model RecoverSAT in this work, which generates a translation as a sequence of
segments. The segments are generated simultaneously while each segment is
predicted token-by-token. By dynamically determining segment length and
deleting repetitive segments, RecoverSAT is capable of recovering from
repetitive and missing token errors. Experimental results on three widely-used
benchmark datasets show that our proposed model achieves more than 4$\times$
speedup while maintaining comparable performance compared with the
corresponding autoregressive model.
| 2020 | Computation and Language |
Re-evaluating phoneme frequencies | Causal processes can give rise to distinctive distributions in the linguistic
variables that they affect. Consequently, a secure understanding of a
variable's distribution can hold a key to understanding the forces that have
causally shaped it. A storied distribution in linguistics has been Zipf's law,
a kind of power law. In the wake of a major debate in the sciences around
power-law hypotheses and the unreliability of earlier methods of evaluating
them, here we re-evaluate the distributions claimed to characterize phoneme
frequencies. We infer the fit of power laws and three alternative distributions
to 166 Australian languages, using a maximum likelihood framework. We find
evidence supporting earlier results, but also nuancing them and increasing our
understanding of them. Most notably, phonemic inventories appear to have a
Zipfian-like frequency structure among their most-frequent members (though
perhaps also a lognormal structure) but a geometric (or exponential) structure
among the least-frequent. We compare these new insights with the kinds of causal
processes that affect the evolution of phonemic inventories over time, and
identify a potential account for why, despite there being an important role for
phonetic substance in phonemic change, we could still expect inventories with
highly diverse phonetic content to share similar distributions of phoneme
frequencies. We conclude with priorities for future work in this promising
program of research.
| 2021 | Computation and Language |
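To make the distribution fitting in the entry above concrete, here is a hedged sketch of the standard continuous-case maximum-likelihood estimator for a power-law exponent; the actual study compares several candidate distributions in a discrete setting, which this toy deliberately skips, and the counts below are invented.

```python
# Continuous-case maximum-likelihood estimate of a power-law exponent:
#   alpha_hat = 1 + n / sum(ln(x_i / x_min))   for observations x_i >= x_min.
# A toy stand-in for the kind of fit discussed above; real analyses also compare
# lognormal, exponential, and geometric alternatives.
import numpy as np

def power_law_alpha(frequencies, x_min):
    x = np.asarray([f for f in frequencies if f >= x_min], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# Hypothetical phoneme counts from some corpus (illustrative numbers only).
counts = [950, 610, 400, 260, 180, 120, 80, 55, 35, 22, 14, 9, 6, 4, 3]
print(round(power_law_alpha(counts, x_min=3), 3))
```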
Knowledge-Aided Open-Domain Question Answering | Open-domain question answering (QA) aims to find the answer to a question
from a large collection of documents. Though many models for single-document
machine comprehension have achieved strong performance, there is still much
room for improving open-domain QA systems since document retrieval and answer
reranking are still unsatisfactory. Golden documents that contain the correct
answers may not be correctly scored by the retrieval component, and the correct
answers that have been extracted may be wrongly ranked after other candidate
answers by the reranking component. One of the reasons is derived from the
independent principle in which each candidate document (or answer) is scored
independently without considering its relationship to other documents (or
answers). In this work, we propose a knowledge-aided open-domain QA (KAQA)
method which aims at improving relevant document retrieval and candidate
answer reranking by considering the relationship between a question and the
documents (termed as question-document graph), and the relationship between
candidate documents (termed as document-document graph). The graphs are built
using knowledge triples from external knowledge resources. During document
retrieval, a candidate document is scored by considering its relationship to
the question and other documents. During answer reranking, a candidate answer
is reranked using not only its own context but also the clues from other
documents. The experimental results show that our proposed method improves
document retrieval and answer reranking, and thereby enhances the overall
performance of open-domain question answering.
| 2020 | Computation and Language |
Extensive Error Analysis and a Learning-Based Evaluation of Medical
Entity Recognition Systems to Approximate User Experience | When comparing entities extracted by a medical entity recognition system with
gold standard annotations over a test set, two types of mismatches might occur:
label mismatch or span mismatch. Here we focus on span mismatch and show that
its severity can vary from a serious error to a fully acceptable entity
extraction due to the subjectivity of span annotations. For a domain-specific
BERT-based NER system, we showed that 25% of the errors have the same labels
and overlapping spans with gold standard entities. We collected expert judgement,
which shows that more than 90% of these mismatches are accepted or partially
accepted by the user. Using the training set of the NER system, we built a fast
and lightweight entity classifier to approximate the user experience of such
mismatches through accepting or rejecting them. The decisions made by this
classifier are used to calculate a learning-based F-score which is shown to be
a better approximation of a forgiving user's experience than the relaxed
F-score. We demonstrated the results of applying the proposed evaluation metric
for a variety of deep learning medical entity recognition models trained with
two datasets.
| 2020 | Computation and Language |
Combination of abstractive and extractive approaches for summarization
of long scientific texts | In this research work, we present a method to generate summaries of long
scientific documents that uses the advantages of both extractive and
abstractive approaches. Before producing a summary in an abstractive manner, we
perform the extractive step, which is then used for conditioning the abstractor
module. We used pre-trained transformer-based language models for both the
extractor and the abstractor. Our experiments showed that using extractive and
abstractive models jointly significantly improves summarization results and
ROUGE scores.
| 2020 | Computation and Language |
Examination and Extension of Strategies for Improving Personalized
Language Modeling via Interpolation | In this paper, we detail novel strategies for interpolating personalized
language models and methods to handle out-of-vocabulary (OOV) tokens to improve
personalized language models. Using publicly available data from Reddit, we
demonstrate improvements in offline metrics at the user level by interpolating
a global LSTM-based authoring model with a user-personalized n-gram model. By
optimizing this approach with a back-off to a uniform OOV penalty and the
interpolation coefficient, we observe that over 80% of users receive a lift in
perplexity, with an average of 5.2% in perplexity lift per user. In doing this
research we extend previous work in building NLIs and improve the robustness of
metrics for downstream tasks.
| 2020 | Computation and Language |
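The interpolation described in the entry above reduces to a weighted mix of two next-token probabilities with a uniform back-off for tokens the user model has never seen; a minimal sketch follows, with placeholder probabilities and an untuned coefficient and penalty.

```python
# Interpolate a global LM with a per-user n-gram model, backing off to a uniform
# OOV penalty when the user model has no estimate for the token. All numbers are
# illustrative placeholders, not the values tuned in the study.

def interpolated_prob(token, context, p_global, p_user, lam=0.3, oov_penalty=1e-6):
    user_prob = p_user.get((context, token))       # None if OOV for this user
    if user_prob is None:
        user_prob = oov_penalty                     # uniform back-off penalty
    return lam * user_prob + (1.0 - lam) * p_global[(context, token)]

p_global = {(("i", "love"), "pizza"): 0.02, (("i", "love"), "regex"): 0.001}
p_user = {(("i", "love"), "regex"): 0.05}           # this user often writes "regex"
print(interpolated_prob("regex", ("i", "love"), p_global, p_user))
print(interpolated_prob("pizza", ("i", "love"), p_global, p_user))
```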
Unsupervised Paraphrase Generation using Pre-trained Language Models | Large-scale pre-trained language models have proven to be a very powerful
approach for various natural language tasks. OpenAI's GPT-2
\cite{radford2019language} is notable for its capability to generate fluent,
well-formulated, grammatically consistent text and for phrase completions. In
this paper we leverage this generation capability of GPT-2 to generate
paraphrases without any supervision from labelled data. We examine how the
results compare with other supervised and unsupervised approaches and the
effect of using paraphrases for data augmentation on downstream tasks such as
classification. Our experiments show that paraphrases generated with our model
are of good quality, are diverse, and improve the downstream task performance
when used for data augmentation.
| 2020 | Computation and Language |
Modeling Label Semantics for Predicting Emotional Reactions | Predicting how events induce emotions in the characters of a story is
typically seen as a standard multi-label classification task, which usually
treats labels as anonymous classes to predict, ignoring information that may
be conveyed by the emotion labels themselves. We propose that the semantics of
emotion labels can guide a model's attention when representing the input story.
Further, we observe that the emotions evoked by an event are often related: an
event that evokes joy is unlikely to also evoke sadness. In this work, we
explicitly model label classes via label embeddings, and add mechanisms that
track label-label correlations both during training and inference. We also
introduce a new semi-supervision strategy that regularizes for the correlations
on unlabeled data. Our empirical evaluations show that modeling label semantics
yields consistent benefits, and we advance the state-of-the-art on an emotion
inference task.
| 2020 | Computation and Language |
Predicting and Analyzing Law-Making in Kenya | Modelling and analyzing parliamentary legislation, roll-call votes and order
of proceedings in developed countries has received significant attention in
recent years. In this paper, we focused on understanding the bills introduced
in a developing democracy, the Kenyan bicameral parliament. We developed and
trained machine learning models on a combination of features extracted from the
bills to predict the outcome - whether a bill will be enacted or not. We observed
that the text of a bill is not as relevant as the year and month the bill was
introduced and the category the bill belongs to.
| 2020 | Computation and Language |
Adversarial Training Based Multi-Source Unsupervised Domain Adaptation
for Sentiment Analysis | Multi-source unsupervised domain adaptation (MS-UDA) for sentiment analysis
(SA) aims to leverage useful information in multiple source domains to help do
SA in an unlabeled target domain that has no supervised information. Existing
algorithms of MS-UDA either only exploit the shared features, i.e., the
domain-invariant information, or are based on some weak assumption in NLP, e.g.,
the smoothness assumption. To avoid these problems, we propose two transfer
learning frameworks based on the multi-source domain adaptation methodology for
SA by combining the source hypotheses to derive a good target hypothesis. The
key feature of the first framework is a novel Weighting Scheme based
Unsupervised Domain Adaptation framework (WS-UDA), which combines the source
classifiers to acquire pseudo labels for target instances directly. While the
second framework is a Two-Stage Training based Unsupervised Domain Adaptation
framework (2ST-UDA), which further exploits these pseudo labels to train a
target private extractor. Importantly, the weights assigned to each source
classifier are based on the relations between target instances and source
domains, which are measured by a discriminator through adversarial training.
Furthermore, through the same discriminator, we also fulfill the separation of
shared features and private features. Experimental results on two SA datasets
demonstrate the promising performance of our frameworks, which outperforms
unsupervised state-of-the-art competitors.
| 2,020 | Computation and Language |
Understanding Points of Correspondence between Sentences for Abstractive
Summarization | Fusing sentences containing disparate content is a remarkable human ability
that helps create informative and succinct summaries. Such a simple task for
humans has remained challenging for modern abstractive summarizers,
substantially restricting their applicability in real-world scenarios. In this
paper, we present an investigation into fusing sentences drawn from a document
by introducing the notion of points of correspondence, which are cohesive
devices that tie any two sentences together into a coherent text. The types of
points of correspondence are delineated by text cohesion theory, covering
pronominal and nominal referencing, repetition and beyond. We create a dataset
containing the documents, source and fusion sentences, and human annotations of
points of correspondence between sentences. Our dataset bridges the gap between
coreference resolution and summarization. It is publicly shared to serve as a
basis for future work to measure the success of sentence fusion systems.
(https://github.com/ucfnlp/points-of-correspondence)
| 2,020 | Computation and Language |
Data Augmentation for Training Dialog Models Robust to Speech
Recognition Errors | Speech-based virtual assistants, such as Amazon Alexa, Google Assistant, and
Apple Siri, typically convert users' audio signals to text data through
automatic speech recognition (ASR) and feed the text to downstream dialog
models for natural language understanding and response generation. The ASR
output is error-prone; however, the downstream dialog models are often trained
on error-free text data, making them sensitive to ASR errors during inference
time. To bridge the gap and make dialog models more robust to ASR errors, we
leverage an ASR error simulator to inject noise into the error-free text data,
and subsequently train the dialog models with the augmented data. Compared to
other approaches for handling ASR errors, such as using ASR lattice or
end-to-end methods, our data augmentation approach does not require any
modification to the ASR or downstream dialog models; our approach also does not
introduce any additional latency during inference time. We perform extensive
experiments on benchmark data and show that our approach improves the
performance of downstream dialog models in the presence of ASR errors, and it
is particularly effective in low-resource situations where there are
constraints on model size or the training data is scarce.
| 2,020 | Computation and Language |
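The row above describes injecting simulated ASR errors into clean training text. As a rough illustration only, the sketch below substitutes or drops words using a hypothetical confusion table; the simulator in the paper is learned, whereas the table, rates, and function names here are assumptions.

```python
import random

# Hypothetical confusion table: words and plausible ASR misrecognitions.
CONFUSIONS = {
    "weather": ["whether", "wetter"],
    "book": ["look", "brook"],
    "flight": ["fright", "light"],
}

def inject_asr_noise(utterance, sub_rate=0.15, drop_rate=0.05, seed=None):
    """Corrupt an error-free utterance to mimic noisy ASR output.

    Each word is independently dropped (probability drop_rate) or
    substituted with a confusable word (probability sub_rate).
    """
    rng = random.Random(seed)
    noisy = []
    for word in utterance.split():
        r = rng.random()
        if r < drop_rate:
            continue  # simulate a deletion error
        if r < drop_rate + sub_rate and word in CONFUSIONS:
            noisy.append(rng.choice(CONFUSIONS[word]))  # substitution error
        else:
            noisy.append(word)
    return " ".join(noisy)

# Augment clean training data with simulated ASR errors.
clean = "book a flight and check the weather"
augmented = [clean] + [inject_asr_noise(clean, seed=i) for i in range(3)]
print(augmented)
```

In practice the corrupted copies would be mixed with the original utterances when training the downstream dialog model.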
Position Masking for Language Models | Masked language modeling (MLM) pre-training models such as BERT corrupt the
input by replacing some tokens with [MASK] and then train a model to
reconstruct the original tokens. This is an effective technique which has led
to good results on all NLP benchmarks. We propose to expand upon this idea by
masking the positions of some tokens along with the masked input token ids. We
follow the same standard approach as BERT, masking a percentage of the token
positions and then predicting their original values using an additional fully
connected classifier stage. This approach has shown good performance gains (a
0.3% improvement) for SQuAD, along with an additional improvement in
convergence times. For the Graphcore IPU, the convergence of BERT Base with
position masking requires only 50% of the tokens from the original BERT paper.
| 2,020 | Computation and Language |
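To make the position-masking idea above concrete, here is a minimal sketch that hides a fraction of position ids alongside the usual token masking; the 15% rate, the [MASK] id, and the sentinel position id are illustrative assumptions, not details taken from the paper.

```python
import random

MASK_TOKEN_ID = 103      # assumed [MASK] id, as in BERT-base vocabularies
MASK_POSITION_ID = 0     # assumed sentinel id for a hidden position

def mask_tokens_and_positions(token_ids, mask_rate=0.15, seed=0):
    """Return corrupted (token_ids, position_ids) plus prediction targets.

    In addition to replacing some tokens with [MASK], the same fraction of
    position ids is hidden so the model must also reconstruct positions.
    """
    rng = random.Random(seed)
    positions = list(range(1, len(token_ids) + 1))
    inp_tokens, inp_positions = list(token_ids), list(positions)
    token_targets, position_targets = {}, {}
    for i in range(len(token_ids)):
        if rng.random() < mask_rate:
            token_targets[i] = token_ids[i]
            inp_tokens[i] = MASK_TOKEN_ID
        if rng.random() < mask_rate:
            position_targets[i] = positions[i]
            inp_positions[i] = MASK_POSITION_ID
    return inp_tokens, inp_positions, token_targets, position_targets

tokens, positions, t_tgt, p_tgt = mask_tokens_and_positions([7, 42, 7, 99, 13])
print(tokens, positions, t_tgt, p_tgt)
```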
Few-shot Slot Tagging with Collapsed Dependency Transfer and
Label-enhanced Task-adaptive Projection Network | In this paper, we explore slot tagging with only a few labeled support
sentences (a.k.a. few-shot). Few-shot slot tagging faces a unique challenge
compared to the other few-shot classification problems as it calls for modeling
the dependencies between labels. But it is hard to apply previously learned
label dependencies to an unseen domain, due to the discrepancy of label sets.
To tackle this, we introduce a collapsed dependency transfer mechanism into the
conditional random field (CRF) to transfer abstract label dependency patterns
as transition scores. In the few-shot setting, the emission score of CRF can be
calculated as a word's similarity to the representation of each label. To
calculate such similarity, we propose a Label-enhanced Task-Adaptive Projection
Network (L-TapNet) based on the state-of-the-art few-shot classification model
-- TapNet, by leveraging label name semantics in representing labels.
Experimental results show that our model significantly outperforms the
strongest few-shot learning baseline by 14.64 F1 points in the one-shot
setting.
| 2,020 | Computation and Language |
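The CRF scoring described in the abstract above can be written out explicitly. The notation below is a generic reconstruction rather than the paper's own: E(x_t) is a word representation, c_{y_t} a label representation built by L-TapNet, sim an unspecified similarity, and T the collapsed transition scores transferred across domains.

```latex
% Score of a label sequence y for the word sequence x_1..x_n:
% emission = similarity of the word representation to the label representation,
% transition = collapsed label-dependency score transferred across domains.
\[
  s(\mathbf{x}, \mathbf{y}) = \sum_{t=1}^{n} \mathrm{sim}\big(E(x_t),\, \mathbf{c}_{y_t}\big)
  + \sum_{t=2}^{n} T\big(y_{t-1}, y_t\big),
  \qquad
  p(\mathbf{y} \mid \mathbf{x}) = \frac{\exp s(\mathbf{x}, \mathbf{y})}{\sum_{\mathbf{y}'} \exp s(\mathbf{x}, \mathbf{y}')}.
\]
```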
MC-BERT: Efficient Language Pre-Training via a Meta Controller | Pre-trained contextual representations (e.g., BERT) have become the
foundation to achieve state-of-the-art results on many NLP tasks. However,
large-scale pre-training is computationally expensive. ELECTRA, an early
attempt to accelerate pre-training, trains a discriminative model that predicts
whether each input token was replaced by a generator. Our studies reveal that
ELECTRA's success is mainly due to its reduced complexity of the pre-training
task: the binary classification (replaced token detection) is more efficient to
learn than the generation task (masked language modeling). However, such a
simplified task is less semantically informative. To achieve better efficiency
and effectiveness, we propose a novel meta-learning framework, MC-BERT. The
pre-training task is a multi-choice cloze test with a reject option, where a
meta controller network provides training input and candidates. Results on the
GLUE natural language understanding benchmark demonstrate that our proposed
method is both efficient and effective: it outperforms baselines on GLUE
semantic tasks given the same computational budget.
| 2,020 | Computation and Language |
Gender in Danger? Evaluating Speech Translation Technology on the
MuST-SHE Corpus | Translating from languages without productive grammatical gender like English
into gender-marked languages is a well-known difficulty for machines. This
difficulty is also due to the fact that the training data on which models are
built typically reflect the asymmetries of natural languages, gender bias
included. Exclusively fed with textual data, machine translation is
intrinsically constrained by the fact that the input sentence does not always
contain clues about the gender identity of the referred human entities. But
what happens with speech translation, where the input is an audio signal? Can
audio provide additional information to reduce gender bias? We present the
first thorough investigation of gender bias in speech translation, contributing
with: i) the release of a benchmark useful for future studies, and ii) the
comparison of different technologies (cascade and end-to-end) on two language
directions (English-Italian/French).
| 2,020 | Computation and Language |
ClarQ: A large-scale and diverse dataset for Clarification Question
Generation | Question answering and conversational systems are often baffled and need help
clarifying certain ambiguities. However, limitations of existing datasets
hinder the development of large-scale models capable of generating and
utilising clarification questions. In order to overcome these limitations, we
devise a novel bootstrapping framework (based on self-supervision) that assists
in the creation of a diverse, large-scale dataset of clarification questions
based on post-comment tuples extracted from stackexchange. The framework
utilises a neural network based architecture for classifying clarification
questions. It is a two-step method where the first step aims to increase the
precision of the classifier and the second aims to increase its recall. We
quantitatively demonstrate the utility of the newly created dataset by applying
it to the downstream task of question-answering. The final dataset, ClarQ,
consists of ~2M examples distributed across 173 domains of stackexchange. We
release this dataset in order to foster research into the field of
clarification question generation with the larger goal of enhancing dialog and
question answering systems.
| 2,020 | Computation and Language |
Revisiting Few-sample BERT Fine-tuning | This paper is a study of fine-tuning of BERT contextual representations, with
a focus on commonly observed instabilities in few-sample scenarios. We identify
several factors that cause this instability: the common use of a non-standard
optimization method with biased gradient estimation; the limited applicability
of significant parts of the BERT network for downstream tasks; and the
prevalent practice of using a pre-determined and small number of training
iterations. We empirically test the impact of these factors, and identify
alternative practices that resolve the commonly observed instability of the
process. In light of these observations, we re-visit recently proposed methods
to improve few-sample fine-tuning with BERT and re-evaluate their
effectiveness. Generally, we observe the impact of these methods diminishes
significantly with our modified process.
| 2,021 | Computation and Language |
Report from the NSF Future Directions Workshop, Toward User-Oriented
Agents: Research Directions and Challenges | This USER Workshop was convened with the goal of defining future research
directions for the burgeoning intelligent agent research community and to
communicate them to the National Science Foundation. It took place in
Pittsburgh, Pennsylvania, on October 24 and 25, 2019, and was sponsored by
National Science Foundation Grant Number IIS-1934222. Any opinions, findings
and conclusions or future directions expressed in this document are those of
the authors and do not necessarily reflect the views of the National Science
Foundation. The 27 participants presented their individual research interests
and their personal research goals. In the breakout sessions that followed, the
participants defined the main research areas within the domain of intelligent
agents and they discussed the major future directions that the research in each
area of this domain should take.
| 2,020 | Computation and Language |
Towards Unified Dialogue System Evaluation: A Comprehensive Analysis of
Current Evaluation Protocols | As conversational AI-based dialogue management has increasingly become a
trending topic, the need for a standardized and reliable evaluation procedure
grows even more pressing. The current state of affairs suggests various
evaluation protocols to assess chat-oriented dialogue management systems,
rendering it difficult to conduct fair comparative studies across different
approaches and gain an insightful understanding of their values. To foster this
research, a more robust evaluation protocol must be set in place. This paper
presents a comprehensive synthesis of both automated and human evaluation
methods on dialogue systems, identifying their shortcomings while accumulating
evidence towards the most effective evaluation dimensions. A total of 20 papers
from the last two years are surveyed to analyze three types of evaluation
protocols: automated, static, and interactive. Finally, the evaluation
dimensions used in these papers are compared against our expert evaluation on
the system-user dialogue data collected from the Alexa Prize 2020.
| 2,020 | Computation and Language |
Emora STDM: A Versatile Framework for Innovative Dialogue System
Development | This demo paper presents Emora STDM (State Transition Dialogue Manager), a
dialogue system development framework that provides novel workflows for rapid
prototyping of chat-based dialogue managers as well as collaborative
development of complex interactions. Our framework caters to a wide range of
expertise levels by supporting interoperability between two popular approaches,
state machine and information state, to dialogue management. Our Natural
Language Expression package allows seamless integration of pattern matching,
custom NLP modules, and database querying, which makes the workflows much more
efficient. As a user study, we apply this framework in an interdisciplinary
undergraduate course where students with both technical and non-technical
backgrounds are able to develop creative dialogue managers in a short period of
time.
| 2,020 | Computation and Language |
A Monolingual Approach to Contextualized Word Embeddings for
Mid-Resource Languages | We use the multilingual OSCAR corpus, extracted from Common Crawl via
language classification, filtering and cleaning, to train monolingual
contextualized word embeddings (ELMo) for five mid-resource languages. We then
compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for
these languages on the part-of-speech tagging and parsing tasks. We show that,
despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on
OSCAR perform much better than monolingual embeddings trained on Wikipedia.
They actually equal or improve the current state of the art in tagging and
parsing for all five languages. In particular, they also improve over
multilingual Wikipedia-based contextual embeddings (multilingual BERT), which
almost always constitutes the previous state of the art, thereby showing that
the benefit of a larger, more diverse corpus surpasses the cross-lingual
benefit of multilingual embedding architectures.
| 2,020 | Computation and Language |
Discrete Latent Variable Representations for Low-Resource Text
Classification | While much work on deep latent variable models of text uses continuous latent
variables, discrete latent variables are interesting because they are more
interpretable and typically more space efficient. We consider several
approaches to learning discrete latent variable models for text in the case
where exact marginalization over these variables is intractable. We compare the
performance of the learned representations as features for low-resource
document and sentence classification. Our best models outperform the previous
best reported results with continuous representations in these low-resource
settings, while learning significantly more compressed representations.
Interestingly, we find that an amortized variant of Hard EM performs
particularly well in the lowest-resource regimes.
| 2,020 | Computation and Language |
Performance in the Courtroom: Automated Processing and Visualization of
Appeal Court Decisions in France | Artificial Intelligence techniques are already popular and important in the
legal domain. We extract legal indicators from judicial judgments to decrease
the asymmetry of information of the legal system and the access-to-justice gap.
We use NLP methods to extract interesting entities/data from judgments to
construct networks of lawyers and judgments. We propose metrics to rank lawyers
based on their experience, win/loss ratio and their importance in the network
of lawyers. We also perform community detection in the network of judgments and
propose metrics to represent the difficulty of cases, capitalising on
community features.
| 2,020 | Computation and Language |
Augmenting Data for Sarcasm Detection with Unlabeled Conversation
Context | We present a novel data augmentation technique, CRA (Contextual Response
Augmentation), which utilizes conversational context to generate meaningful
samples for training. We also mitigate the issues regarding unbalanced context
lengths by changing the input-output format of the model such that it can deal
with varying context lengths effectively. Specifically, our proposed model,
trained with the proposed data augmentation technique, participated in the
sarcasm detection task of FigLang2020 and won it, achieving the best
performance on both the Reddit and Twitter datasets.
| 2,020 | Computation and Language |
Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine
Translation Evaluation Metrics | Automatic metrics are fundamental for the development and evaluation of
machine translation systems. Judging whether, and to what extent, automatic
metrics concur with the gold standard of human evaluation is not a
straightforward problem. We show that current methods for judging metrics are
highly sensitive to the translations used for assessment, particularly the
presence of outliers, which often leads to falsely confident conclusions about
a metric's efficacy. Finally, we turn to pairwise system ranking, developing a
method for thresholding performance improvement under an automatic metric
against human judgements, which allows quantification of type I versus type II
errors incurred, i.e., insignificant human differences in system quality that
are accepted, and significant human differences that are rejected. Together,
these findings suggest improvements to the protocols for metric evaluation and
system performance evaluation in machine translation.
| 2,020 | Computation and Language |
Provenance for Linguistic Corpora Through Nanopublications | Research in Computational Linguistics is dependent on text corpora for
training and testing new tools and methodologies. While there exists a plethora
of annotated linguistic information, these corpora are often not interoperable
without significant manual work. Moreover, these annotations might have evolved
into different versions, making it challenging for researchers to know the
data's provenance. This paper addresses this issue with a case study on event
annotated corpora and by creating a new, more interoperable representation of
this data in the form of nanopublications. We demonstrate how linguistic
annotations from separate corpora can be reliably linked from the start, and
thereby be accessed and queried as if they were a single dataset. We describe
how such nanopublications can be created and demonstrate how SPARQL queries can
be performed to extract interesting content from the new representations. The
queries show that information of multiple corpora can be retrieved more easily
and effectively because the information of different corpora is represented in
a uniform data format.
| 2,020 | Computation and Language |
CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot
Cross-Lingual NLP | Multi-lingual contextualized embeddings, such as multilingual-BERT (mBERT),
have shown success in a variety of zero-shot cross-lingual tasks. However,
these models are limited by having inconsistent contextualized representations
of subwords across different languages. Existing work addresses this issue by
bilingual projection and fine-tuning techniques. We propose a data augmentation
framework to generate multi-lingual code-switching data to fine-tune mBERT,
which encourages the model to align representations from the source and
multiple target languages at once by mixing their context information. Compared with the existing
work, our method does not rely on bilingual sentences for training, and
requires only one training process for multiple target languages. Experimental
results on five tasks with 19 languages show that our method leads to
significantly improved performances for all the tasks compared with mBERT.
| 2,020 | Computation and Language |
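A toy sketch of code-switching augmentation in the spirit of the abstract above: words in a source-language sentence are randomly replaced with dictionary translations drawn from several target languages. The lexicons, replacement rate, and sampling scheme below are placeholders; the actual CoSDA-ML procedure may differ.

```python
import random

# Hypothetical bilingual lexicons: English word -> translation per language.
LEXICONS = {
    "es": {"like": "gustar", "music": "música", "play": "tocar"},
    "de": {"like": "mögen", "music": "Musik", "play": "spielen"},
}

def code_switch(sentence, lexicons, replace_rate=0.3, seed=None):
    """Randomly replace source words with translations from random target
    languages, producing mixed-language fine-tuning data."""
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        langs = [l for l, lex in lexicons.items() if word in lex]
        if langs and rng.random() < replace_rate:
            out.append(lexicons[rng.choice(langs)][word])
        else:
            out.append(word)
    return " ".join(out)

print(code_switch("i like to play music", LEXICONS, seed=3))
```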
A Probabilistic Model with Commonsense Constraints for Pattern-based
Temporal Fact Extraction | Textual patterns (e.g., Country's president Person) are specified and/or
generated for extracting factual information from unstructured data.
Pattern-based information extraction methods have been recognized for their
efficiency and transferability. However, not every pattern is reliable: A major
challenge is to derive the most complete and accurate facts from diverse and
sometimes conflicting extractions. In this work, we propose a probabilistic
graphical model which formulates fact extraction in a generative process. It
automatically infers true facts and pattern reliability without any
supervision. It has two novel designs specially for temporal facts: (1) it
models pattern reliability on two types of time signals, including temporal tag
in text and text generation time; (2) it models commonsense constraints as
observable variables. Experimental results demonstrate that our model
significantly outperforms existing methods on extracting true temporal facts
from news data.
| 2,020 | Computation and Language |
Multi-hop Reading Comprehension across Documents with Path-based Graph
Convolutional Network | Multi-hop reading comprehension across multiple documents has attracted much
attention recently. In this paper, we propose a novel approach to tackle this
multi-hop reading comprehension problem. Inspired by human reasoning
processing, we construct a path-based reasoning graph from supporting
documents. This graph combines the ideas of both graph-based and path-based
approaches, making it better suited to multi-hop reasoning. Meanwhile, we
propose Gated-RGCN to accumulate evidence on the path-based reasoning graph,
which contains a new question-aware gating mechanism to regulate the usefulness
of information propagating across documents and add question information during
reasoning. We evaluate our approach on WikiHop dataset, and our approach
achieves state-of-the-art accuracy against previously published approaches.
Especially, our ensemble model surpasses human performance by 4.2%.
| 2,020 | Computation and Language |
Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason
Over Implicit Knowledge | To what extent can a neural network systematically reason over symbolic
facts? Evidence suggests that large pre-trained language models (LMs) acquire
some reasoning capacity, but this ability is difficult to control. Recently, it
has been shown that Transformer-based models succeed in consistent reasoning
over explicit symbolic facts, under a "closed-world" assumption. However, in an
open-domain setup, it is desirable to tap into the vast reservoir of implicit
knowledge already encoded in the parameters of pre-trained LMs. In this work,
we provide a first demonstration that LMs can be trained to reliably perform
systematic reasoning combining both implicit, pre-trained knowledge and
explicit natural language statements. To do this, we describe a procedure for
automatically generating datasets that teach a model new reasoning skills, and
demonstrate that models learn to effectively perform inference which involves
implicit taxonomic and world knowledge, chaining and counting. Finally, we show
that "teaching" models to reason generalizes beyond the training distribution:
they successfully compose the usage of multiple reasoning skills in single
examples. Our work paves a path towards open-domain systems that constantly
improve by interacting with users who can instantly correct a model by adding
simple natural language statements.
| 2,020 | Computation and Language |
Modelling Hierarchical Structure between Dialogue Policy and Natural
Language Generator with Option Framework for Task-oriented Dialogue System | Designing task-oriented dialogue systems is a challenging research topic,
since it needs not only to generate utterances fulfilling user requests but
also to guarantee their comprehensibility. Many previous works trained end-to-end
(E2E) models with supervised learning (SL); however, the bias in annotated
system utterances remains a bottleneck. Reinforcement learning (RL) deals
with the problem through using non-differentiable evaluation metrics (e.g., the
success rate) as rewards. Nonetheless, existing works with RL showed that the
comprehensibility of generated system utterances could be corrupted when
improving the performance on fulfilling user requests. In our work, we (1)
propose modelling the hierarchical structure between dialogue policy and
natural language generator (NLG) with the option framework, called HDNO, where
the latent dialogue act is applied to avoid designing specific dialogue act
representations; (2) train HDNO via hierarchical reinforcement learning (HRL),
as well as suggest the asynchronous updates between dialogue policy and NLG
during training to theoretically guarantee their convergence to a local
maximizer; and (3) propose using a discriminator modelled with language models
as an additional reward to further improve the comprehensibility. We test HDNO
on MultiWoz 2.0 and MultiWoz 2.1, the datasets on multi-domain dialogues, in
comparison with a word-level E2E model trained with RL, LaRL and HDSA, showing
improvements on the performance evaluated by automatic evaluation metrics and
human evaluation. Finally, we demonstrate the semantic meanings of latent
dialogue acts to show the explainability of HDNO.
| 2,021 | Computation and Language |
Speaker Sensitive Response Evaluation Model | Automatic evaluation of open-domain dialogue response generation is very
challenging because there are many appropriate responses for a given context.
Existing evaluation models merely compare the generated response with the
ground truth response and rate many of the appropriate responses as
inappropriate if they deviate from the ground truth. One approach to resolve
this problem is to consider the similarity of the generated response with the
conversational context. In this paper, we propose an automatic evaluation model
based on that idea and learn the model parameters from an unlabeled
conversation corpus. Our approach considers the speakers in defining the
different levels of similar context. We use a Twitter conversation corpus that
contains many speakers and conversations to test our evaluation model.
Experiments show that our model outperforms the other existing evaluation
metrics in terms of high correlation with human annotation scores. We also show
that our model trained on Twitter can be applied to movie dialogues without any
additional training. We provide our code and the learned parameters so that
they can be used for automatic evaluation of dialogue response generation
models.
| 2,020 | Computation and Language |
SemEval-2020 Task 12: Multilingual Offensive Language Identification in
Social Media (OffensEval 2020) | We present the results and main findings of SemEval-2020 Task 12 on
Multilingual Offensive Language Identification in Social Media (OffensEval
2020). The task involves three subtasks corresponding to the hierarchical
taxonomy of the OLID schema (Zampieri et al., 2019a) from OffensEval 2019. The
task featured five languages: English, Arabic, Danish, Greek, and Turkish for
Subtask A. In addition, English also featured Subtasks B and C. OffensEval 2020
was one of the most popular tasks at SemEval-2020 attracting a large number of
participants across all subtasks and also across all languages. A total of 528
teams signed up to participate in the task, 145 teams submitted systems during
the evaluation period, and 70 submitted system description papers.
| 2,020 | Computation and Language |
Low-resource Languages: A Review of Past Work and Future Challenges | A current problem in NLP is massaging and processing low-resource languages
which lack useful training attributes such as supervised data, number of native
speakers or experts, etc. This review paper concisely summarizes previous
groundbreaking achievements made towards resolving this problem, and analyzes
potential improvements in the context of the overall future research direction.
| 2,020 | Computation and Language |
Information Extraction of Clinical Trial Eligibility Criteria | Clinical trials predicate subject eligibility on a diversity of criteria
ranging from patient demographics to food allergies. Trials post their
requirements as semantically complex, unstructured free-text. Formalizing trial
criteria to a computer-interpretable syntax would facilitate eligibility
determination. In this paper, we investigate an information extraction (IE)
approach for grounding criteria from trials in ClinicalTrials(dot)gov to a
shared knowledge base. We frame the problem as a novel knowledge base
population task, and implement a solution combining machine learning and
context-free grammar. To our knowledge, this work is the first criteria
extraction system to apply attention-based conditional random field
architecture for named entity recognition (NER), and word2vec embedding
clustering for named entity linking (NEL). We release the resources and core
components of our system on GitHub at
https://github.com/facebookresearch/Clinical-Trial-Parser. Finally, we report
our per-module and end-to-end performance; we conclude that our system is
competitive with Criteria2Query, which we view as the current state-of-the-art
in criteria extraction.
| 2,020 | Computation and Language |
Evaluating a Multi-sense Definition Generation Model for Multiple
Languages | Most prior work on definition modeling has not accounted for polysemy, or has
done so by considering definition modeling for a target word in a given
context. In contrast, in this study, we propose a context-agnostic approach to
definition modeling, based on multi-sense word embeddings, that is capable of
generating multiple definitions for a target word. In further contrast to most
prior work, which has primarily focused on English, we evaluate our proposed
approach on fifteen different datasets covering nine languages from several
language families. To evaluate our approach we consider several variations of
BLEU. Our results demonstrate that our proposed multi-sense model outperforms a
single-sense model on all fifteen datasets.
| 2,020 | Computation and Language |
Measuring Forecasting Skill from Text | People vary in their ability to make accurate predictions about the future.
Prior studies have shown that some individuals can predict the outcome of
future events with consistently better accuracy. This leads to a natural
question: what makes some forecasters better than others? In this paper we
explore connections between the language people use to describe their
predictions and their forecasting skill. Datasets from two different
forecasting domains are explored: (1) geopolitical forecasts from Good Judgment
Open, an online prediction forum and (2) a corpus of company earnings forecasts
made by financial analysts. We present a number of linguistic metrics which are
computed over text associated with people's predictions about the future
including: uncertainty, readability, and emotion. By studying linguistic
factors associated with predictions, we are able to shed some light on the
approach taken by skilled forecasters. Furthermore, we demonstrate that it is
possible to accurately predict forecasting skill using a model that is based
solely on language. This could be useful for identifying accurate predictions
or skilled forecasters earlier.
| 2,020 | Computation and Language |
A Generative Model for Joint Natural Language Understanding and
Generation | Natural language understanding (NLU) and natural language generation (NLG)
are two fundamental and related tasks in building task-oriented dialogue
systems with opposite objectives: NLU tackles the transformation from natural
language to formal representations, whereas NLG does the reverse. A key to
success in either task is parallel training data which is expensive to obtain
at a large scale. In this work, we propose a generative model which couples NLU
and NLG through a shared latent variable. This approach allows us to explore
both spaces of natural language and formal representations, and facilitates
information sharing through the latent space to eventually benefit NLU and NLG.
Our model achieves state-of-the-art performance on two dialogue datasets with
both flat and tree-structured formal representations. We also show that the
model can be trained in a semi-supervised fashion by utilising unlabelled data
to boost its performance.
| 2,020 | Computation and Language |
Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations | We present a new knowledge-base of hasPart relationships, extracted from a
large corpus of generic statements. Complementary to other resources available,
it is the first which is all three of: accurate (90% precision), salient
(covers relationships a person may mention), and has high coverage of common
terms (approximated as within a 10 year old's vocabulary), as well as having
several times more hasPart entries than in the popular ontologies ConceptNet
and WordNet. In addition, it contains information about quantifiers, argument
modifiers, and links the entities to appropriate concepts in Wikipedia and
WordNet. The knowledge base is available at https://allenai.org/data/haspartkb
| 2,020 | Computation and Language |
GIPFA: Generating IPA Pronunciation from Audio | Transcribing spoken audio samples into the International Phonetic Alphabet
(IPA) has long been reserved for experts. In this study, we examine the use of
an Artificial Neural Network (ANN) model to automatically extract the IPA
phonemic pronunciation of a word based on its audio pronunciation, hence its
name Generating IPA Pronunciation From Audio (GIPFA). Based on the French
Wikimedia dictionary, we trained our model which then correctly predicted 75%
of the IPA pronunciations tested. Interestingly, by studying inference errors,
the model made it possible to highlight possible errors in the dataset as well
as to identify the closest phonemes in French.
| 2,021 | Computation and Language |
Words ranking and Hirsch index for identifying the core of the hapaxes
in political texts | This paper deals with a quantitative analysis of the content of official
political speeches. We study a set of about one thousand talks pronounced by
the US Presidents, ranging from Washington to Trump. In particular, we search
for the relevance of the rare words, i.e. those said only once in each speech
-- the so-called hapaxes. We implement a rank-size procedure of Zipf-Mandelbrot
type for discussing the hapaxes' frequencies regularity over the overall set of
speeches. Starting from the obtained rank-size law, we define and detect the
core of the hapaxes set by means of a procedure based on a Hirsch index
variant. We discuss the resulting list of words in the light of the overall US
Presidents' speeches. We further show that this core of hapaxes itself can be
well fitted through a Zipf-Mandelbrot law and that it contains elements producing
deviations at the low ranks between scatter plots and fitted curve -- the
so-called king and vice-roy effect. Some socio-political insights are derived
from the obtained findings about the US Presidents messages.
| 2,020 | Computation and Language |
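For readers unfamiliar with the tools named in the abstract above, the rank-size law and an h-index style cutoff for the hapax core can be stated roughly as follows; the exact parametrisation and the precise core definition used in the paper may differ.

```latex
% Zipf-Mandelbrot rank-size law: frequency f(r) of the item of rank r.
\[
  f(r) = \frac{C}{(r + q)^{\alpha}}, \qquad C, q, \alpha > 0 .
\]
% Hirsch-type core: the largest k such that each of the k top-ranked hapaxes
% occurs, across the full set of speeches, at least k times.
\[
  h = \max \{\, k : f(k) \ge k \,\}.
\]
```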
Transferring Monolingual Model to Low-Resource Language: The Case of
Tigrinya | In recent years, transformer models have achieved great success in natural
language processing (NLP) tasks. Most of the current state-of-the-art NLP
results are achieved by using monolingual transformer models, where the model
is pre-trained using a single language unlabelled text corpus. Then, the model
is fine-tuned to the specific downstream task. However, the cost of
pre-training a new transformer model is high for most languages. In this work,
we propose a cost-effective transfer learning method to adapt a strong source
language model, trained on a large monolingual corpus, to a low-resource
language. Using the XLNet language model, we demonstrate competitive
performance with mBERT and a pre-trained target language model on the
cross-lingual sentiment (CLS) dataset and on a new sentiment analysis dataset
for the low-resource language Tigrinya. With only 10k examples of the given
Tigrinya sentiment analysis dataset, English XLNet achieved a 78.88% F1-score,
outperforming BERT and mBERT by 10% and 7%, respectively. More interestingly,
fine-tuning the (English) XLNet model on the CLS dataset yields promising
results compared to mBERT, and it even outperformed mBERT on one dataset of
the Japanese language.
| 2,020 | Computation and Language |
Through the Twitter Glass: Detecting Questions in Micro-Text | In a separate study, we were interested in understanding people's Q&A habits
on Twitter. Finding questions within Twitter turned out to be a difficult
challenge, so we considered applying some traditional NLP approaches to the
problem. On the one hand, Twitter is full of idiosyncrasies, which make
processing it difficult. On the other, it is very restricted in length and
tends to employ simple syntactic constructions, which could help the
performance of NLP processing. In order to find out the viability of NLP and
Twitter, we built a pipeline of tools to work specifically with Twitter input
for the task of finding questions in tweets. This work is still preliminary,
but in this paper we discuss the techniques we used and the lessons we learned.
| 2,020 | Computation and Language |
Vietnamese Word Segmentation with SVM: Ambiguity Reduction and Suffix
Capture | In this paper, we approach Vietnamese word segmentation as a binary
classification task using a Support Vector Machine classifier. We inherit
features from prior works such as n-gram of syllables, n-gram of syllable
types, and checking conjunction of adjacent syllables in the dictionary. We
propose two novel approaches to feature extraction, one to reduce the overlap
ambiguity and the other to increase the ability to predict unknown words
containing suffixes. Different from UETsegmenter and RDRsegmenter, two
state-of-the-art Vietnamese word segmentation methods, we do not employ the
longest matching algorithm as an initial processing step or any post-processing
technique. According to experimental results on benchmark Vietnamese datasets,
our proposed method obtained a better F1-score than the prior state-of-the-art
methods UETsegmenter and RDRsegmenter.
| 2,020 | Computation and Language |
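A bare-bones illustration of casting word segmentation as binary classification over syllable boundaries, as the abstract above describes. Only the n-gram feature idea is shown; the toy sentence, labels, and feature names are invented for the example, and the paper's full feature set is richer.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def boundary_features(syllables, i):
    """N-gram features around the gap between syllable i and i+1."""
    return {
        "left": syllables[i],
        "right": syllables[i + 1],
        "bigram": syllables[i] + "_" + syllables[i + 1],
    }

# Toy training data: label 1 = the two adjacent syllables form one word.
sentences = [(["học", "sinh", "đi", "học"], [1, 0, 0])]
X, y = [], []
for syls, labels in sentences:
    for i, lab in enumerate(labels):
        X.append(boundary_features(syls, i))
        y.append(lab)

vec = DictVectorizer()
clf = LinearSVC()
clf.fit(vec.fit_transform(X), y)
```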
FinEst BERT and CroSloEngual BERT: less is more in multilingual models | Large pretrained masked language models have become state-of-the-art
solutions for many NLP problems. The research has been mostly focused on
the English language, though. While massively multilingual models exist, studies
have shown that monolingual models produce much better results. We train two
trilingual BERT-like models, one for Finnish, Estonian, and English, the other
for Croatian, Slovenian, and English. We evaluate their performance on several
downstream tasks, NER, POS-tagging, and dependency parsing, using the
multilingual BERT and XLM-R as baselines. The newly created FinEst BERT and
CroSloEngual BERT improve the results on all tasks in most monolingual and
cross-lingual situations.
| 2,020 | Computation and Language |
FinBERT: A Pretrained Language Model for Financial Communications | Contextual pretrained language models, such as BERT (Devlin et al., 2019),
have made significant breakthroughs in various NLP tasks by training on
large-scale unlabeled text resources. The financial sector also accumulates a
large amount of financial communication text. However, no pretrained
finance-specific language models have been available. In this work, we address
this need by pretraining a financial domain-specific BERT model, FinBERT, on a
large corpus of financial communications. Experiments on three financial
sentiment classification tasks confirm the advantage of FinBERT over the
generic-domain BERT model. The code and pretrained models are available at
https://github.com/yya518/FinBERT. We hope this will be useful for
practitioners and researchers working on financial NLP tasks.
| 2,020 | Computation and Language |
Evidence-Aware Inferential Text Generation with Vector Quantised
Variational AutoEncoder | Generating inferential texts about an event from different perspectives
requires reasoning over the different contexts in which the event occurs. Existing
works usually ignore the context that is not explicitly provided, resulting in
a context-independent semantic representation that struggles to support the
generation. To address this, we propose an approach that automatically finds
evidence for an event from a large text corpus, and leverages the evidence to
guide the generation of inferential texts. Our approach works in an
encoder-decoder manner and is equipped with a Vector Quantised-Variational
Autoencoder, where the encoder outputs representations from a distribution over
discrete variables. Such discrete representations enable automatically
selecting relevant evidence, which not only facilitates evidence-aware
generation, but also provides a natural way to uncover rationales behind the
generation. Our approach provides state-of-the-art performance on both
Event2Mind and ATOMIC datasets. More importantly, we find that with discrete
representations, our model selectively uses evidence to generate different
inferential texts.
| 2,020 | Computation and Language |
Extracting N-ary Cross-sentence Relations using Constrained Subsequence
Kernel | Most of the past work in relation extraction deals with relations occurring
within a sentence and having only two entity arguments. We propose a new
formulation of the relation extraction task where the relations are more
general than intra-sentence relations in the sense that they may span multiple
sentences and may have more than two arguments. Moreover, the relations are
more specific than corpus-level relations in the sense that their scope is
limited only within a document and not valid globally throughout the corpus. We
propose a novel sequence representation to characterize instances of such
relations. We then explore various classifiers whose features are derived from
this sequence representation. For the SVM classifier, we design a Constrained
Subsequence Kernel which is a variant of Generalized Subsequence Kernel. We
evaluate our approach on three datasets across two domains: biomedical and
general domain.
| 2,020 | Computation and Language |
On the Multi-Property Extraction and Beyond | In this paper, we investigate the Dual-source Transformer architecture on the
WikiReading information extraction and machine reading comprehension dataset.
The proposed model outperforms the current state-of-the-art by a large margin.
Next, we introduce WikiReading Recycled - a newly developed public dataset,
supporting the task of multiple property extraction. It keeps the spirit of the
original WikiReading but does not inherit the identified disadvantages of its
predecessor.
| 2,020 | Computation and Language |
Fine-grained Human Evaluation of Transformer and Recurrent Approaches to
Neural Machine Translation for English-to-Chinese | This research presents a fine-grained human evaluation to compare the
Transformer and recurrent approaches to neural machine translation (MT), on the
translation direction English-to-Chinese. To this end, we develop an error
taxonomy compliant with the Multidimensional Quality Metrics (MQM) framework
that is customised to the relevant phenomena of this translation direction. We
then conduct an error annotation using this customised error taxonomy on the
output of state-of-the-art recurrent- and Transformer-based MT systems on a
subset of WMT2019's news test set. The resulting annotation shows that,
compared to the best recurrent system, the best Transformer system results in a
31% reduction in the total number of errors and produces significantly fewer
errors in 10 out of 22 error categories. We also note that two of the systems
evaluated do not produce any error for a category that was relevant for this
translation direction prior to the advent of NMT systems: Chinese classifiers.
| 2,020 | Computation and Language |
ETHOS: an Online Hate Speech Detection Dataset | Online hate speech is a recent problem in our society that is rising at a
steady pace by leveraging the vulnerabilities of the corresponding regimes that
characterise most social media platforms. This phenomenon is primarily fostered
by offensive comments, either during user interaction or in the form of a
posted multimedia context. Nowadays, giant corporations own platforms where
millions of users log in every day, and protection from exposure to similar
phenomena appears to be necessary in order to comply with the corresponding
legislation and maintain a high level of service quality. A robust and reliable
system for detecting and preventing the uploading of relevant content will have
a significant impact on our digitally interconnected society. Several aspects
of our daily lives are undeniably linked to our social profiles, making us
vulnerable to abusive behaviours. As a result, the lack of accurate hate speech
detection mechanisms would severely degrade the overall user experience,
although its erroneous operation would pose many ethical concerns. In this
paper, we present 'ETHOS', a textual dataset with two variants: binary and
multi-label, based on YouTube and Reddit comments validated using the
Figure-Eight crowdsourcing platform. Furthermore, we present the annotation
protocol used to create this dataset: an active sampling procedure for
balancing our data in relation to the various aspects defined. Our key
assumption is that, even when gaining only a small amount of labelled data from such a
time-consuming process, we can guarantee hate speech occurrences in the
examined material.
| 2,022 | Computation and Language |
Probing Neural Dialog Models for Conversational Understanding | The predominant approach to open-domain dialog generation relies on
end-to-end training of neural models on chat datasets. However, this approach
provides little insight as to what these models learn (or do not learn) about
engaging in dialog. In this study, we analyze the internal representations
learned by neural open-domain dialog systems and evaluate the quality of these
representations for learning basic conversational skills. Our results suggest
that standard open-domain dialog systems struggle with answering questions,
inferring contradiction, and determining the topic of conversation, among other
tasks. We also find that the dyadic, turn-taking nature of dialog is not fully
leveraged by these models. By exploring these limitations, we highlight the
need for additional research into architectures and training methods that can
better capture high-level information about dialog.
| 2,020 | Computation and Language |
An Augmented Translation Technique for low Resource language pair:
Sanskrit to Hindi translation | Neural Machine Translation (NMT) is an ongoing technique for Machine
Translation (MT) that uses large artificial neural networks. It has exhibited
promising outcomes and has shown great potential in solving challenging
machine translation tasks. One such challenge is how to provide good MT for
language pairs with little training data. In this work, Zero Shot Translation
(ZST) is examined for a low-resource language pair. By working on high-resource
language pairs for which benchmarks are available, namely Spanish to
Portuguese, and training on the Spanish-English and English-Portuguese data
sets, we prepare a proof of concept for a ZST system that gives appropriate
results on the available data. Subsequently, the same architecture is tested
for Sanskrit to Hindi translation, for which data is sparse, by training the
model on the English-Hindi and Sanskrit-English language pairs. In order to
train and decode with the ZST system, we extend the training and inference
pipelines of the NMT seq2seq model in TensorFlow, incorporating ZST features.
Dimensionality reduction of the word embeddings is performed to reduce memory
usage for data storage and to achieve faster training and translation cycles.
In this work, existing technology has been utilized in a novel manner to
address the NLP problem of Sanskrit to Hindi translation. A Sanskrit-Hindi
parallel corpus of 300 is constructed for testing. The data required for the
construction of the parallel corpus has been taken from the telecast news
published on the website of the Department of Public Information, state
government of Madhya Pradesh, India.
| 2,019 | Computation and Language |
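As a side note on the dimensionality-reduction step mentioned in the abstract above, one common way to shrink pretrained word embeddings is PCA. The 300-to-100 setting and the random embedding matrix below are purely illustrative, not the configuration used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Pretend embedding matrix: 5000 vocabulary entries of dimension 300.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, 300))

# Project onto the top 100 principal components to cut memory roughly 3x.
pca = PCA(n_components=100)
reduced = pca.fit_transform(embeddings)
print(reduced.shape, round(pca.explained_variance_ratio_.sum(), 3))
```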
StackOverflow vs Kaggle: A Study of Developer Discussions About Data
Science | Software developers are increasingly required to understand fundamental Data
science (DS) concepts. Recently, the presence of machine learning (ML) and deep
learning (DL) has dramatically increased in the development of user
applications, whether they are leveraged through frameworks or implemented from
scratch. These topics attract much discussion on online platforms. This paper
conducts large-scale qualitative and quantitative experiments to study the
characteristics of 197836 posts from StackOverflow and Kaggle. Latent Dirichlet
Allocation topic modelling is used to extract twenty-four DS discussion topics.
The main findings include that TensorFlow-related topics were most prevalent in
StackOverflow, while meta discussion topics were the prevalent ones on Kaggle.
StackOverflow tends to include lower-level troubleshooting, while Kaggle
focuses on practicality and optimising leaderboard performance. In addition,
across both communities, DS discussion is increasing at a dramatic rate. While
TensorFlow discussion on StackOverflow is slowing, interest in Keras is rising.
Finally, ensemble algorithms are the most mentioned ML/DL algorithms in Kaggle
but are rarely discussed on StackOverflow. These findings can help educators
and researchers to more effectively tailor and prioritise efforts in
researching and communicating DS concepts towards different developer
communities.
| 2,020 | Computation and Language |
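A compact example of the LDA topic-modelling pipeline used for this kind of discussion analysis; the post texts, two-topic setting, and preprocessing below are stand-ins rather than the study's actual 24-topic configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "tensorflow shape mismatch error when training cnn",
    "how to improve leaderboard score with xgboost ensemble",
    "keras model overfits on small image dataset",
]

# Bag-of-words representation, then LDA with a fixed number of topics.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top terms per discovered topic.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```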
A Dataset and Benchmarks for Multimedia Social Analysis | We present a new publicly available dataset with the goal of advancing
multi-modality learning by offering vision and language data within the same
context. This is achieved by obtaining data from a social media website with
posts containing multiple paired images/videos and text, along with comment
trees containing images/videos and/or text. With a total of 677k posts, 2.9
million post images, 488k post videos, 1.4 million comment images, 4.6 million
comment videos, and 96.9 million comments, data from different modalities can
be jointly used to improve performances for a variety of tasks such as image
captioning, image classification, next frame prediction, sentiment analysis,
and language modeling. We present a wide range of statistics for our dataset.
Finally, we provide baseline performance analysis for one of the regression
tasks using pre-trained models and several fully connected networks.
| 2,020 | Computation and Language |
Affective Conditioning on Hierarchical Networks applied to Depression
Detection from Transcribed Clinical Interviews | In this work we propose a machine learning model for depression detection
from transcribed clinical interviews. Depression is a mental disorder that
impacts not only the subject's mood but also the use of language. To this end
we use a Hierarchical Attention Network to classify interviews of depressed
subjects. We augment the attention layer of our model with a conditioning
mechanism on linguistic features, extracted from affective lexica. Our analysis
shows that individuals diagnosed with depression use affective language to a
greater extent than those not depressed. Our experiments show that external affective
information improves the performance of the proposed architecture in the
General Psychotherapy Corpus and the DAIC-WoZ 2017 depression datasets,
achieving state-of-the-art 71.6 and 68.6 F1 scores respectively.
| 2,020 | Computation and Language |
Open-Domain Question Answering with Pre-Constructed Question Spaces | Open-domain question answering aims at solving the task of locating the
answers to user-generated questions in massive collections of documents. There
are two families of solutions available: retriever-readers, and
knowledge-graph-based approaches. A retriever-reader usually first uses
information retrieval methods like TF-IDF to locate some documents or
paragraphs that are likely to be relevant to the question, and then feeds the
retrieved text to a neural network reader to extract the answer. Alternatively,
knowledge graphs can be constructed from the corpus and be queried against to
answer user questions. We propose a novel algorithm with a reader-retriever
structure that differs from both families. Our reader-retriever first uses an
offline reader to read the corpus and generate collections of all answerable
questions associated with their answers, and then uses an online retriever to
respond to user queries by searching the pre-constructed question spaces for
answers that are most likely to be asked in the given way. We further combine
retriever-reader and reader-retriever results into one single answer by
examining the consistency between the two components. We claim that our
algorithm solves some bottlenecks in existing work, and demonstrate that it
achieves superior accuracy on real-world datasets.
| 2,020 | Computation and Language |
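To make the retriever-reader contrast in the abstract above concrete, here is a minimal TF-IDF first-stage retriever; the documents are placeholders and the neural reader component is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
    "Python is a programming language created by Guido van Rossum.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(question, k=2):
    """Return the k documents most similar to the question under TF-IDF."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    ranked = scores.argsort()[::-1][:k]
    return [(documents[i], float(scores[i])) for i in ranked]

print(retrieve("Who created the Python language?"))
```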
DeepVar: An End-to-End Deep Learning Approach for Genomic Variant
Recognition in Biomedical Literature | We consider the problem of Named Entity Recognition (NER) on biomedical
scientific literature, and more specifically the genomic variants recognition
in this work. Significant success has been achieved for NER on canonical tasks
in recent years where large data sets are generally available. However, it
remains a challenging problem on many domain-specific areas, especially the
domains where only small gold annotations can be obtained. In addition, genomic
variant entities exhibit diverse linguistic heterogeneity, differing much from
those that have been characterized in existing canonical NER tasks. The
state-of-the-art machine learning approaches in such tasks heavily rely on
arduous feature engineering to characterize those unique patterns. In this
work, we present the first successful end-to-end deep learning approach to
bridge the gap between generic NER algorithms and low-resource applications
through genomic variants recognition. Our proposed model can result in
promising performance without any hand-crafted features or post-processing
rules. Our extensive experiments and results may shed light on other similar
low-resource NER applications.
| 2,020 | Computation and Language |
Graph-Stega: Semantic Controllable Steganographic Text Generation Guided
by Knowledge Graph | Most of the existing text generative steganographic methods are based on
coding the conditional probability distribution of each word during the
generation process, and then selecting specific words according to the secret
information, so as to achieve information hiding. Such methods have their
limitations which may bring potential security risks. Firstly, with the
increase of embedding rate, these models will choose words with lower
conditional probability, which will reduce the quality of the generated
steganographic texts; secondly, they cannot control the semantic expression of
the final generated steganographic text. This paper proposes a new text
generative steganography method which is quite different from the existing
models. We use a Knowledge Graph (KG) to guide the generation of steganographic
sentences. On the one hand, we hide the secret information by coding the path
in the knowledge graph, but not the conditional probability of each generated
word; on the other hand, we can control the semantic expression of the
generated steganographic text to a certain extent. The experimental results
show that the proposed model can guarantee both the quality of the generated
text and its semantic expression, which is a supplement and improvement to the
current text generation steganography.
| 2,020 | Computation and Language |