Titles (stringlengths 6–220) | Abstracts (stringlengths 37–3.26k) | Years (int64 1.99k–2.02k) | Categories (stringclasses 1 value)
---|---|---|---|
Annotation Uncertainty in the Context of Grammatical Change
|
This paper elaborates on the notion of uncertainty in the context of
annotation in large text corpora, specifically focusing on (but not limited to)
historical languages. Such uncertainty might be due to inherent properties of
the language, for example, linguistic ambiguity and overlapping categories of
linguistic description, but could also be caused by lacking annotation
expertise. By examining annotation uncertainty in more detail, we identify the
sources and deepen our understanding of the nature and different types of
uncertainty encountered in daily annotation practice. Moreover, some practical
implications of our theoretical findings are also discussed. Last but not
least, this article can be seen as an attempt to reconcile the perspectives of
the main scientific disciplines involved in corpus projects, linguistics and
computer science, to develop a unified view and to highlight the potential
synergies between these disciplines.
| 2,021 |
Computation and Language
|
STAGE: Tool for Automated Extraction of Semantic Time Cues to Enrich
Neural Temporal Ordering Models
|
Despite achieving state-of-the-art accuracy on temporal ordering of events,
neural models showcase significant gaps in performance. Our work seeks to fill
one of these gaps by leveraging an under-explored dimension of textual
semantics: rich semantic information provided by explicit textual time cues. We
develop STAGE, a system that consists of a novel temporal framework and a
parser that can automatically extract time cues and convert them into
representations suitable for integration with neural models. We demonstrate the
utility of extracted cues by integrating them with an event ordering model
using a joint BiLSTM and ILP constraint architecture. We outline the
functionality of the 3-part STAGE processing approach, and show two methods of
integrating its representations with the BiLSTM-ILP model: (i) incorporating
semantic cues as additional features, and (ii) generating new constraints from
semantic cues to be enforced in the ILP. We demonstrate promising results on
two event ordering datasets, and highlight important issues in semantic cue
representation and integration for future research.
| 2,021 |
Computation and Language
|
From Masked Language Modeling to Translation: Non-English Auxiliary
Tasks Improve Zero-shot Spoken Language Understanding
|
The lack of publicly available evaluation data for low-resource languages
limits progress in Spoken Language Understanding (SLU). As key tasks like
intent classification and slot filling require abundant training data, it is
desirable to reuse existing data in high-resource languages to develop models
for low-resource scenarios. We introduce xSID, a new benchmark for
cross-lingual Slot and Intent Detection in 13 languages from 6 language
families, including a very low-resource dialect. To tackle the challenge, we
propose a joint learning approach, with English SLU training data and
non-English auxiliary tasks from raw text, syntax and translation for transfer.
We study two setups which differ by type and language coverage of the
pre-trained embeddings. Our results show that jointly learning the main tasks
with masked language modeling is effective for slots, while machine translation
transfer works best for intent classification.
| 2,021 |
Computation and Language
|
The Volctrans Neural Speech Translation System for IWSLT 2021
|
This paper describes the systems submitted to IWSLT 2021 by the Volctrans
team. We participate in the offline speech translation and text-to-text
simultaneous translation tracks. For offline speech translation, our best
end-to-end model achieves 7.9 BLEU improvements over the benchmark on the
MuST-C test set and is even approaching the results of a strong cascade
solution. For text-to-text simultaneous translation, we explore the best
practice to optimize the wait-k model. As a result, our final submitted systems
exceed the benchmark at around 7 BLEU on the same latency regime. We release
our code and model at
\url{https://github.com/bytedance/neurst/tree/master/examples/iwslt21} to
facilitate both future research works and industrial applications.
| 2,021 |
Computation and Language
|
The interplay between language similarity and script on a novel
multi-layer Algerian dialect corpus
|
Recent years have seen a rise in interest for cross-lingual transfer between
languages with similar typology, and between languages of various scripts.
However, the interplay between language similarity and difference in script on
cross-lingual transfer is a less studied problem. We explore this interplay on
cross-lingual transfer for two supervised tasks, namely part-of-speech tagging
and sentiment analysis. We introduce a newly annotated corpus of Algerian
user-generated comments comprising parallel annotations of Algerian written in
Latin, Arabic, and code-switched scripts, as well as annotations for sentiment
and topic categories. We perform baseline experiments by fine-tuning
multi-lingual language models. We further explore the effect of script vs.
language similarity in cross-lingual transfer by fine-tuning multi-lingual
models on languages which are a) typologically distinct, but use the same
script, b) typologically similar, but use a distinct script, or c) are
typologically similar and use the same script. We find there is a delicate
relationship between script and typology for part-of-speech, while sentiment
analysis is less sensitive.
| 2,021 |
Computation and Language
|
How is BERT surprised? Layerwise detection of linguistic anomalies
|
Transformer language models have shown remarkable ability in detecting when a
word is anomalous in context, but likelihood scores offer no information about
the cause of the anomaly. In this work, we use Gaussian models for density
estimation at intermediate layers of three language models (BERT, RoBERTa, and
XLNet), and evaluate our method on BLiMP, a grammaticality judgement benchmark.
In lower layers, surprisal is highly correlated to low token frequency, but
this correlation diminishes in upper layers. Next, we gather datasets of
morphosyntactic, semantic, and commonsense anomalies from psycholinguistic
studies; we find that the best performing model RoBERTa exhibits surprisal in
earlier layers when the anomaly is morphosyntactic than when it is semantic,
while commonsense anomalies do not exhibit surprisal at any intermediate layer.
These results suggest that language models employ separate mechanisms to detect
different types of linguistic anomalies.
| 2,021 |
Computation and Language
|
Few-NERD: A Few-Shot Named Entity Recognition Dataset
|
Recently, considerable literature has grown up around the theme of few-shot
named entity recognition (NER), but little published benchmark data
specifically focuses on this practical and challenging task. Current approaches
collect existing supervised NER datasets and re-organize them to the few-shot
setting for empirical study. These strategies conventionally aim to recognize
coarse-grained entity types with few examples, while in practice, most unseen
entity types are fine-grained. In this paper, we present Few-NERD, a
large-scale human-annotated few-shot NER dataset with a hierarchy of 8
coarse-grained and 66 fine-grained entity types. Few-NERD consists of 188,238
sentences from Wikipedia containing 4,601,160 words, each of which is annotated
as context or as part of a two-level entity type. To the best of our knowledge,
this is the first few-shot NER dataset and the largest human-crafted NER
dataset. We construct benchmark tasks with different emphases to
comprehensively assess the generalization capability of models. Extensive
empirical results and analysis show that Few-NERD is challenging and the
problem requires further research. We make Few-NERD public at
https://ningding97.github.io/fewnerd/.
| 2,021 |
Computation and Language
|
Data Augmentation for Sign Language Gloss Translation
|
Sign language translation (SLT) is often decomposed into video-to-gloss
recognition and gloss-to-text translation, where a gloss is a sequence of
transcribed spoken-language words in the order in which they are signed. We
focus here on gloss-to-text translation, which we treat as a low-resource
neural machine translation (NMT) problem. However, gloss-to-text translation
differs from traditional low-resource NMT in that gloss-text pairs often have
higher lexical overlap and lower syntactic overlap than pairs of
spoken languages. We exploit this lexical overlap and handle syntactic
divergence by proposing two rule-based heuristics that generate pseudo-parallel
gloss-text pairs from monolingual spoken language text. By pre-training on the
thus obtained synthetic data, we improve translation from American Sign
Language (ASL) to English and German Sign Language (DGS) to German by up to
3.14 and 2.20 BLEU, respectively.
| 2,021 |
Computation and Language
|
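The gloss-translation entry above relies on rule-based heuristics that turn monolingual text into pseudo-parallel gloss-text pairs. A minimal sketch of that idea follows; the specific rules (dropping a small set of function words and uppercasing the remaining tokens) and the stopword list are illustrative assumptions, not the authors' exact heuristics.

```python
# Hedged sketch: turn monolingual sentences into pseudo-gloss/text pairs by
# dropping function words and uppercasing the rest. The rule set below is an
# illustrative assumption, not the exact heuristics used in the paper.
FUNCTION_WORDS = {"a", "an", "the", "is", "are", "was", "were", "to", "of", "and"}

def make_pseudo_gloss(sentence: str) -> str:
    """Approximate a gloss: keep content words, uppercase them, preserve order."""
    tokens = sentence.lower().strip(".!?").split()
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return " ".join(t.upper() for t in content)

def make_pseudo_parallel(corpus):
    """Yield (pseudo-gloss, original text) pairs for NMT pre-training."""
    for sentence in corpus:
        yield make_pseudo_gloss(sentence), sentence

if __name__ == "__main__":
    corpus = ["The cat is sleeping on the sofa.", "I want to buy a book."]
    for gloss, text in make_pseudo_parallel(corpus):
        print(gloss, "->", text)
```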
Doc2Dict: Information Extraction as Text Generation
|
Typically, information extraction (IE) requires a pipeline approach: first, a
sequence labeling model is trained on manually annotated documents to extract
relevant spans; then, when a new document arrives, a model predicts spans which
are then post-processed and standardized to convert the information into a
database entry. We replace this labor-intensive workflow with a transformer
language model trained on existing database records to directly generate
structured JSON. Our solution removes the workload associated with producing
token-level annotations and takes advantage of a data source which is generally
quite plentiful (e.g. database records). As long documents are common in
information extraction tasks, we use gradient checkpointing and chunked
encoding to apply our method to sequences of up to 32,000 tokens on a single
GPU. Our Doc2Dict approach is competitive with more complex, hand-engineered
pipelines and offers a simple but effective baseline for document-level
information extraction. We release our Doc2Dict model and code to reproduce our
experiments and facilitate future work.
| 2,021 |
Computation and Language
|
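The Doc2Dict entry above mentions chunked encoding to handle inputs of up to 32,000 tokens. A minimal sketch of one way such chunking could look is given below; the window size, stride, and overlap policy are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch of chunked encoding for very long inputs: split a token-id
# sequence into overlapping windows that each fit the encoder's maximum length.
# Window size and stride are illustrative assumptions, not the Doc2Dict setup.
from typing import List

def chunk_token_ids(token_ids: List[int], window: int = 512, stride: int = 384) -> List[List[int]]:
    """Split token_ids into overlapping chunks of at most `window` tokens."""
    chunks = []
    start = 0
    while start < len(token_ids):
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            break
        start += stride
    return chunks

if __name__ == "__main__":
    fake_document = list(range(32_000))       # stand-in for a 32k-token document
    chunks = chunk_token_ids(fake_document)
    print(len(chunks), "chunks; first chunk length:", len(chunks[0]))
```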
Classifying Argumentative Relations Using Logical Mechanisms and
Argumentation Schemes
|
While argument mining has achieved significant success in classifying
argumentative relations between statements (support, attack, and neutral), we
have a limited computational understanding of logical mechanisms that
constitute those relations. Most recent studies rely on black-box models, which
are not as linguistically insightful as desired. On the other hand, earlier
studies use rather simple lexical features, missing logical relations between
statements. To overcome these limitations, our work classifies argumentative
relations based on four logical and theory-informed mechanisms between two
statements, namely (i) factual consistency, (ii) sentiment coherence, (iii)
causal relation, and (iv) normative relation. We demonstrate that our
operationalization of these logical mechanisms classifies argumentative
relations without directly training on data labeled with the relations,
significantly better than several unsupervised baselines. We further
demonstrate that these mechanisms also improve supervised classifiers through
representation learning.
| 2,021 |
Computation and Language
|
Ensemble-based Transfer Learning for Low-resource Machine Translation
Quality Estimation
|
Quality Estimation (QE) of Machine Translation (MT) is a task to estimate the
quality scores for given translation outputs from an unknown MT system.
However, QE scores for low-resource languages are usually intractable and hard
to collect. In this paper, we focus on the Sentence-Level QE Shared Task of the
Fifth Conference on Machine Translation (WMT20), but in a more challenging
setting. We aim to predict QE scores for given translation outputs when almost
no QE scores for that language pair are available during training. We propose
an ensemble-based predictor-estimator QE model with transfer learning to
overcome this QE data scarcity challenge by leveraging QE scores from other
miscellaneous languages and translation results of the targeted languages.
Based on the evaluation results, we provide a detailed analysis of how each of
our extensions affects QE models in terms of reliability and the generalization ability
to perform transfer learning under multilingual tasks. Finally, we achieve the
best performance on the ensemble model combining the models pretrained by
individual languages as well as different levels of parallel trained corpus
with a Pearson's correlation of 0.298, which is 2.54 times higher than
baselines.
| 2,021 |
Computation and Language
|
Sentence Similarity Based on Contexts
|
Existing methods to measure sentence similarity are faced with two
challenges: (1) labeled datasets are usually limited in size, making them
insufficient to train supervised neural models; (2) there is a training-test
gap for unsupervised language modeling (LM) based models to compute semantic
scores between sentences, since sentence-level semantics are not explicitly
modeled at training. This results in inferior performance on this task. In this
this work, we propose a new framework to address these two issues. The proposed
framework is based on the core idea that the meaning of a sentence should be
defined by its contexts, and that sentence similarity can be measured by
comparing the probabilities of generating two sentences given the same context.
framework is able to generate a high-quality, large-scale dataset with semantic
similarity scores between sentence pairs in an unsupervised
manner, with which the train-test gap can be largely bridged. Extensive
experiments show that the proposed framework achieves significant performance
boosts over existing baselines under both the supervised and unsupervised
settings across different datasets.
| 2,022 |
Computation and Language
|
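The sentence-similarity entry above rests on comparing the probabilities of generating two sentences from the same context. A minimal sketch of that comparison is shown here with a pluggable `log_prob(sentence, context)` scorer; the scorer and the aggregation into a single score are assumptions standing in for whatever language model and formula the paper uses.

```python
# Hedged sketch of the core idea: two sentences are similar if a language model
# assigns them similar probabilities of being generated from the same context.
# `log_prob` is a placeholder for any LM scorer; the aggregation below is an
# illustrative assumption, not the paper's exact similarity score.
import math
from typing import Callable, List

def context_similarity(s1: str, s2: str, contexts: List[str],
                       log_prob: Callable[[str, str], float]) -> float:
    """Average closeness of log P(s1 | c) and log P(s2 | c) over shared contexts."""
    gaps = [abs(log_prob(s1, c) - log_prob(s2, c)) for c in contexts]
    return math.exp(-sum(gaps) / len(gaps))   # 1.0 means identical behaviour

if __name__ == "__main__":
    # Toy scorer: length-based stand-in so the example runs without a real LM.
    toy_log_prob = lambda s, c: -abs(len(s) - len(c)) / 10.0
    ctxs = ["She opened the door and smiled.", "The meeting started late."]
    print(context_similarity("He waved hello.", "He greeted them.", ctxs, toy_log_prob))
```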
TAT-QA: A Question Answering Benchmark on a Hybrid of Tabular and
Textual Content in Finance
|
Hybrid data combining both tabular and textual content (e.g., financial
reports) are quite pervasive in the real world. However, Question Answering
(QA) over such hybrid data is largely neglected in existing research. In this
work, we extract samples from real financial reports to build a new large-scale
QA dataset containing both Tabular And Textual data, named TAT-QA, where
numerical reasoning is usually required to infer the answer, such as addition,
subtraction, multiplication, division, counting, comparison/sorting, and their
compositions. We further propose a novel QA model termed TAGOP, which is
capable of reasoning over both tables and text. It adopts sequence tagging to
extract relevant cells from the table along with relevant spans from the text
to infer their semantics, and then applies symbolic reasoning over them with a
set of aggregation operators to arrive at the final answer. TAGOP achieves 58.0%
in F1, which is an 11.1% absolute increase over the previous best baseline
model, according to our experiments on TAT-QA. But this result still lags far
behind the performance of human experts, i.e. 90.8% in F1. It is demonstrated that
our TAT-QA is very challenging and can serve as a benchmark for training and
testing powerful QA models that address hybrid form data.
| 2,021 |
Computation and Language
|
Dependency Parsing as MRC-based Span-Span Prediction
|
Higher-order methods for dependency parsing can partially but not fully
address the issue that edges in dependency trees should be constructed at the
text span/subtree level rather than word level. In this paper, we propose a new
method for dependency parsing to address this issue. The proposed method
constructs dependency trees by directly modeling span-span (in other words,
subtree-subtree) relations. It consists of two modules: the {\it text span
proposal module} which proposes candidate text spans, each of which represents
a subtree in the dependency tree denoted by (root, start, end); and the {\it
span linking module}, which constructs links between proposed spans. We use the
machine reading comprehension (MRC) framework as the backbone to formalize the
span linking module, where one span is used as a query to extract the text
span/subtree it should be linked to. The proposed method has the following
merits: (1) it addresses the fundamental problem that edges in a dependency
tree should be constructed between subtrees; (2) the MRC framework allows the
method to retrieve missing spans in the span proposal stage, which leads to
higher recall for eligible spans. Extensive experiments on the PTB, CTB and
Universal Dependencies (UD) benchmarks demonstrate the effectiveness of the
proposed method. The code is available at
\url{https://github.com/ShannonAI/mrc-for-dependency-parsing}
| 2,022 |
Computation and Language
|
OntoEA: Ontology-guided Entity Alignment via Joint Knowledge Graph
Embedding
|
Semantic embedding has been widely investigated for aligning knowledge graph
(KG) entities. Current methods have explored and utilized the graph structure,
the entity names and attributes, but ignore the ontology (or ontological
schema) which contains critical meta information such as classes and their
membership relationships with entities. In this paper, we propose an
ontology-guided entity alignment method named OntoEA, where both KGs and their
ontologies are jointly embedded, and the class hierarchy and the class
disjointness are utilized to avoid false mappings. Extensive experiments on
seven public and industrial benchmarks have demonstrated the state-of-the-art
performance of OntoEA and the effectiveness of the ontologies.
| 2,021 |
Computation and Language
|
Automatic Fake News Detection: Are Models Learning to Reason?
|
Most fact checking models for automatic fake news detection are based on
reasoning: given a claim with associated evidence, the models aim to estimate
the claim veracity based on the supporting or refuting content within the
evidence. When these models perform well, it is generally assumed to be due to
the models having learned to reason over the evidence with regards to the
claim. In this paper, we investigate this assumption of reasoning, by exploring
the relationship and importance of both claim and evidence. Surprisingly, we
find on political fact checking datasets that most often the highest
effectiveness is obtained by utilizing only the evidence, as the impact of
including the claim is either negligible or harmful to the effectiveness. This
highlights an important problem in what constitutes evidence in existing
approaches for automatic fake news detection.
| 2,021 |
Computation and Language
|
A CCG-Based Version of the DisCoCat Framework
|
While the DisCoCat model (Coecke et al., 2010) has been proved a valuable
tool for studying compositional aspects of language at the level of semantics,
its strong dependency on pregroup grammars poses important restrictions: first,
it prevents large-scale experimentation due to the absence of a pregroup
parser; and second, it limits the expressibility of the model to context-free
grammars. In this paper we solve these problems by reformulating DisCoCat as a
passage from Combinatory Categorial Grammar (CCG) to a category of semantics.
We start by showing that standard categorial grammars can be expressed as a
biclosed category, where all rules emerge as currying/uncurrying the identity;
we then proceed to model permutation-inducing rules by exploiting the symmetry
of the compact closed category encoding the word meaning. We provide a proof of
concept for our method, converting "Alice in Wonderland" into DisCoCat form, a
corpus that we make available to the community.
| 2,021 |
Computation and Language
|
Studying the association of online brand importance with museum
visitors: An application of the semantic brand score
|
This paper explores the association between brand importance and growth in
museum visitors. We analyzed 10 years of online forum discussions and applied
the Semantic Brand Score (SBS) to assess the brand importance of five European
Museums. Our Naive Bayes and regression models indicate that variations in the
combined dimensions of the SBS (prevalence, diversity and connectivity) are
aligned with changes in museum visitors. Results suggest that, in order to
attract more visitors, museum brand managers should focus on increasing the
volume of online posting and the richness of information generated by users
around the brand, rather than controlling for the posts' overall positivity or
negativity.
| 2,020 |
Computation and Language
|
Factoring Statutory Reasoning as Language Understanding Challenges
|
Statutory reasoning is the task of determining whether a legal statute,
stated in natural language, applies to the text description of a case. Prior
work introduced a resource that approached statutory reasoning as a monolithic
textual entailment problem, with neural baselines performing nearly at-chance.
To address this challenge, we decompose statutory reasoning into four types of
language-understanding challenge problems, through the introduction of concepts
and structure found in Prolog programs. Augmenting an existing benchmark, we
provide annotations for the four tasks, and baselines for three of them. Models
for statutory reasoning are shown to benefit from the additional structure,
improving on prior baselines. Further, the decomposition into subtasks
facilitates finer-grained model diagnostics and clearer incremental progress.
| 2,021 |
Computation and Language
|
SeaD: End-to-end Text-to-SQL Generation with Schema-aware Denoising
|
In the text-to-SQL task, seq-to-seq models often lead to sub-optimal performance
due to limitations in their architecture. In this paper, we present a simple
yet effective approach that adapts transformer-based seq-to-seq models to robust
text-to-SQL generation. Instead of imposing constraints on the decoder or
reformulating the task as slot filling, we propose to train the seq-to-seq model
with Schema-aware Denoising (SeaD), which consists of two denoising objectives
that train the model to either recover the input or predict the output from two
novel erosion and shuffle noises. These denoising objectives act as auxiliary
tasks for better modeling of the structured data in seq-to-seq generation. In
addition, we improve and propose a clause-sensitive execution-guided (EG)
decoding strategy to overcome the limitations of EG decoding for generative
models. The experiments show that the proposed method improves the performance
of the seq-to-seq model in both schema linking and grammar correctness and
establishes a new state of the art on the WikiSQL benchmark. The results
indicate that the capacity of the vanilla seq-to-seq architecture for
text-to-SQL may have been underestimated.
| 2,023 |
Computation and Language
|
Supporting Context Monotonicity Abstractions in Neural NLI Models
|
Natural language contexts display logical regularities with respect to
substitutions of related concepts: these are captured in a functional
order-theoretic property called monotonicity. For a certain class of NLI
problems where the resulting entailment label depends only on the context
monotonicity and the relation between the substituted concepts, we build on
previous techniques that aim to improve the performance of NLI models for these
problems, as consistent performance across both upward and downward monotone
contexts still seems difficult to attain even for state-of-the-art models. To
this end, we reframe the problem of context monotonicity classification to make
it compatible with transformer-based pre-trained NLI models and add this task
to the training pipeline. Furthermore, we introduce a sound and complete
simplified monotonicity logic formalism which describes our treatment of
contexts as abstract units. Using the notions in our formalism, we adapt
targeted challenge sets to investigate whether an intermediate context
monotonicity classification task can aid NLI models' performance on examples
exhibiting monotonicity reasoning.
| 2,021 |
Computation and Language
|
Stage-wise Fine-tuning for Graph-to-Text Generation
|
Graph-to-text generation has benefited from pre-trained language models
(PLMs) in achieving better performance than structured graph encoders. However,
they fail to fully utilize the structure information of the input graph. In
this paper, we aim to further improve the performance of the pre-trained
language model by proposing a structured graph-to-text model with a two-step
fine-tuning mechanism which first fine-tunes the model on Wikipedia before
adapting to the graph-to-text generation. In addition to using the traditional
token and position embeddings to encode the knowledge graph (KG), we propose a
novel tree-level embedding method to capture the inter-dependency structures of
the input graph. This new approach has significantly improved the performance
of all text generation metrics for the English WebNLG 2017 dataset.
| 2,021 |
Computation and Language
|
Room to Grow: Understanding Personal Characteristics Behind Self
Improvement Using Social Media
|
Many people aim for change, but not everyone succeeds. While there are a
number of social psychology theories that propose motivation-related
characteristics of those who persist with change, few computational studies
have explored the motivational stage of personal change. In this paper, we
investigate a new dataset consisting of the writings of people who manifest
intention to change, some of whom persist while others do not. Using a variety
of linguistic analysis techniques, we first examine the writing patterns that
distinguish the two groups of people. Persistent people tend to reference more
topics related to long-term self-improvement and use a more complicated writing
style. Drawing on these consistent differences, we build a classifier that can
reliably identify the people more likely to persist, based on their language.
Our experiments provide new insights into the motivation-related behavior of
people who persist with their intention to change.
| 2,021 |
Computation and Language
|
Fine-grained Interpretation and Causation Analysis in Deep NLP Models
|
This paper is a write-up for the tutorial on "Fine-grained Interpretation and
Causation Analysis in Deep NLP Models" that we are presenting at NAACL 2021. We
present and discuss the research work on interpreting fine-grained components
of a model from two perspectives, i) fine-grained interpretation, ii) causation
analysis. The former introduces methods to analyze individual neurons and a
group of neurons with respect to a language property or a task. The latter
studies the role of neurons and input features in explaining decisions made by
the model. We also discuss applications of neuron analysis, such as network
manipulation and domain adaptation. Moreover, we present two toolkits namely
NeuroX and Captum, that support functionalities discussed in this tutorial.
| 2,021 |
Computation and Language
|
SGD-QA: Fast Schema-Guided Dialogue State Tracking for Unseen Services
|
Dialogue state tracking is an essential part of goal-oriented dialogue
systems, yet most state tracking models fail to handle unseen
services. In this paper, we propose SGD-QA, a simple and extensible model for
schema-guided dialogue state tracking based on a question answering approach.
The proposed multi-pass model shares a single encoder between the domain
information and dialogue utterance. The domain's description represents the
query and the dialogue utterance serves as the context. The model improves
performance on unseen services by at least 1.6x compared to single-pass
baseline models on the SGD dataset. SGD-QA shows competitive performance
compared to state-of-the-art multi-pass models while being significantly more
efficient in terms of memory consumption and training performance. We provide a
thorough discussion on the model with ablation study and error analysis.
| 2,021 |
Computation and Language
|
Multi-Modal Image Captioning for the Visually Impaired
|
One of the ways blind people understand their surroundings is by clicking
images and relying on descriptions generated by image captioning systems.
Current work on captioning images for the visually impaired does not use the
textual data present in the image when generating captions. This problem is
critical as many visual scenes contain text. Moreover, up to 21% of the
questions asked by blind people about the images they click pertain to the text
present in them. In this work, we propose altering AoANet, a state-of-the-art
image captioning model, to leverage the text detected in the image as an input
feature. In addition, we use a pointer-generator mechanism to copy the detected
text to the caption when tokens need to be reproduced accurately. Our model
outperforms AoANet on the benchmark dataset VizWiz, giving a 35% and 16.2%
performance improvement on CIDEr and SPICE scores, respectively.
| 2,021 |
Computation and Language
|
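The captioning entry above uses a pointer-generator mechanism to copy detected text into the caption. A minimal sketch of a single pointer-generator decoding step follows; the shapes, the toy attention values, and the way `p_gen` is supplied are illustrative assumptions rather than the AoANet-based model's exact computation.

```python
# Hedged sketch of a pointer-generator step: blend the decoder's vocabulary
# distribution with a copy distribution over detected OCR tokens, weighted by
# p_gen. All values below are toy inputs for illustration.
import numpy as np

def pointer_generator_step(vocab_logits, attn_over_ocr, ocr_token_ids, p_gen):
    """Return the final next-token distribution over the output vocabulary."""
    vocab_dist = np.exp(vocab_logits - vocab_logits.max())
    vocab_dist /= vocab_dist.sum()

    final = p_gen * vocab_dist                      # generation part
    for att, tok in zip(attn_over_ocr, ocr_token_ids):
        final[tok] += (1.0 - p_gen) * att           # copy part: add attention mass
    return final

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dist = pointer_generator_step(
        vocab_logits=rng.normal(size=10),
        attn_over_ocr=np.array([0.7, 0.3]),         # attention over 2 detected OCR tokens
        ocr_token_ids=[3, 8],                       # their ids in the output vocabulary
        p_gen=0.6,
    )
    print(dist.round(3), dist.sum())                # the distribution still sums to 1
```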
MUSER: MUltimodal Stress Detection using Emotion Recognition as an
Auxiliary Task
|
The capability to automatically detect human stress can benefit artificially
intelligent agents involved in affective computing and human-computer
interaction. Stress and emotion are both human affective states, and stress has
proven to have important implications on the regulation and expression of
emotion. Although a series of methods have been established for multimodal
stress detection, limited steps have been taken to explore the underlying
inter-dependence between stress and emotion. In this work, we investigate the
value of emotion recognition as an auxiliary task to improve stress detection.
We propose MUSER -- a transformer-based model architecture and a novel
multi-task learning algorithm with speed-based dynamic sampling strategy.
Evaluations on the Multimodal Stressed Emotion (MuSE) dataset show that our
model is effective for stress detection with both internal and external
auxiliary tasks, and achieves state-of-the-art results.
| 2,021 |
Computation and Language
|
SHARE: a System for Hierarchical Assistive Recipe Editing
|
The large population of home cooks with dietary restrictions is under-served
by existing cooking resources and recipe generation models. To help them, we
propose the task of controllable recipe editing: adapt a base recipe to satisfy
a user-specified dietary constraint. This task is challenging, and cannot be
adequately solved with human-written ingredient substitution rules or existing
end-to-end recipe generation models. We tackle this problem with SHARE: a
System for Hierarchical Assistive Recipe Editing, which performs simultaneous
ingredient substitution before generating natural-language steps using the
edited ingredients. By decoupling ingredient and step editing, our step
generator can explicitly integrate the available ingredients. Experiments on
the novel RecipePairs dataset -- 83K pairs of similar recipes where each recipe
satisfies one of seven dietary constraints -- demonstrate that SHARE produces
convincing, coherent recipes that are appropriate for a target dietary
constraint. We further show through human evaluations and real-world cooking
trials that recipes edited by SHARE can be easily followed by home cooks to
create appealing dishes.
| 2,022 |
Computation and Language
|
LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer
|
Many types of text style transfer can be achieved with only small, precise
edits (e.g. sentiment transfer from I had a terrible time... to I had a great
time...). We propose a coarse-to-fine editor for style transfer that transforms
text using Levenshtein edit operations (e.g. insert, replace, delete). Unlike
prior single-span edit methods, our method concurrently edits multiple spans in
the source text. To train without parallel style text pairs (e.g. pairs of +/-
sentiment statements), we propose an unsupervised data synthesis procedure. We
first convert text to style-agnostic templates using style classifier attention
(e.g. I had a SLOT time...), then fill in slots in these templates using
fine-tuned pretrained language models. Our method outperforms existing
generation and editing style transfer methods on sentiment (Yelp, Amazon) and
politeness (Polite) transfer. In particular, multi-span editing achieves higher
performance and more diverse output than single-span editing. Moreover,
compared to previous methods on unsupervised data synthesis, our method results
in higher quality parallel style pairs and improves model performance.
| 2,021 |
Computation and Language
|
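The LEWIS entry above builds style-agnostic templates by masking the tokens a style classifier attends to. A minimal sketch of that templating step is shown below; passing in precomputed attribution scores and using a fixed threshold are assumptions about one reasonable instantiation, not the paper's exact procedure.

```python
# Hedged sketch of template construction: replace the tokens a style classifier
# attends to most with a SLOT marker. The attribution scores are assumed to be
# precomputed (e.g. from classifier attention); the threshold is illustrative.
from typing import List

def to_style_agnostic_template(tokens: List[str], attributions: List[float],
                               threshold: float = 0.5) -> List[str]:
    """Mask style-bearing tokens (high attribution) with a SLOT placeholder."""
    return ["SLOT" if a >= threshold else t for t, a in zip(tokens, attributions)]

if __name__ == "__main__":
    tokens = ["I", "had", "a", "terrible", "time", "..."]
    attributions = [0.01, 0.02, 0.03, 0.92, 0.10, 0.0]   # toy classifier attention
    print(" ".join(to_style_agnostic_template(tokens, attributions)))
    # -> "I had a SLOT time ..."  (a fine-tuned LM would then fill the slot)
```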
BookSum: A Collection of Datasets for Long-form Narrative Summarization
|
The majority of available text summarization datasets include short-form
source documents that lack long-range causal and temporal dependencies, and
often contain strong layout and stylistic biases. While relevant, such datasets
will offer limited challenges for future generations of text summarization
systems. We address these issues by introducing BookSum, a collection of
datasets for long-form narrative summarization. Our dataset covers source
documents from the literature domain, such as novels, plays and stories, and
includes highly abstractive, human written summaries on three levels of
granularity of increasing difficulty: paragraph-, chapter-, and book-level. The
domain and structure of our dataset poses a unique set of challenges for
summarization systems, which include: processing very long documents,
non-trivial causal and temporal dependencies, and rich discourse structures. To
facilitate future work, we trained and evaluated multiple extractive and
abstractive summarization models as baselines for our dataset.
| 2,022 |
Computation and Language
|
Distantly Supervised Relation Extraction via Recursive
Hierarchy-Interactive Attention and Entity-Order Perception
|
The wrong-labeling problem and long-tail relations severely affect the
performance of distantly supervised relation extraction task. Many studies
mitigate the effect of wrong-labeling through selective attention mechanism and
handle long-tail relations by introducing relation hierarchies to share
knowledge. However, almost all existing studies ignore the fact that, in a
sentence, the appearance order of two entities contributes to the understanding
of its semantics. Furthermore, they only utilize each relation level of
relation hierarchies separately, but do not exploit the heuristic effect
between relation levels, i.e., higher-level relations can give useful
information to the lower ones. Based on the above, in this paper, we design a
novel Recursive Hierarchy-Interactive Attention network (RHIA) to further
handle long-tail relations, which models the heuristic effect between relation
levels. From the top down, it passes relation-related information layer by
layer, which is the most significant difference from existing models, and
generates relation-augmented sentence representations for each relation level
in a recursive structure. Besides, we introduce a newfangled training
objective, called Entity-Order Perception (EOP), to make the sentence encoder
retain more entity appearance information. Substantial experiments on the
popular (NYT) dataset are conducted. Compared to prior baselines, our RHIA-EOP
achieves state-of-the-art performance in terms of precision-recall (P-R)
curves, AUC, Top-N precision and other evaluation metrics. Insightful analysis
also demonstrates the necessity and effectiveness of each component of
RHIA-EOP.
| 2,022 |
Computation and Language
|
An Annotated Commodity News Corpus for Event Extraction
|
Commodity news contains a wealth of information, such as summaries of recent
commodity price movements and the notable events that led to those movements.
Information extracted from commodity news through event extraction is extremely
useful for mining causal relations between events and commodity price
movements, which can be used for commodity price prediction. To facilitate
future research, we introduce a new dataset with the following information
identified and annotated: (i) entities (both nominal and named), (ii) events
(trigger words and argument roles), (iii) event metadata (modality, polarity
and intensity), and (iv) event-event relations.
| 2,021 |
Computation and Language
|
Emotion Eliciting Machine: Emotion Eliciting Conversation Generation
based on Dual Generator
|
Recent years have witnessed great progress on building emotional chatbots.
Numerous methods have been proposed for chatbots to generate responses with
given emotions. However, the emotional changes of the user during the
conversation have not been fully explored. In this work, we study the problem of
positive emotion elicitation, which aims to generate responses that can elicit
positive emotions from the user in human-machine conversation. We propose a
weakly supervised Emotion Eliciting Machine (EEM) to address this problem.
Specifically, we first collect weak labels of changes in user emotion status in
a conversation based on a pre-trained emotion classifier. Then we propose a dual
encoder-decoder structure to model the generation of responses on both the
positive and negative sides based on the changes of the user's emotion status in the
conversation. An emotion eliciting factor is introduced on top of the dual
structure to balance the positive and negative emotional impacts on the
generated response during emotion elicitation. The factor also provides a
fine-grained controlling manner for emotion elicitation. Experimental results
on a large real-world dataset show that EEM outperforms the existing models in
generating responses with positive emotion elicitation.
| 2,021 |
Computation and Language
|
KECRS: Towards Knowledge-Enriched Conversational Recommendation System
|
The chit-chat-based conversational recommendation systems (CRS) provide item
recommendations to users through natural language interactions. To better
understand user's intentions, external knowledge graphs (KG) have been
introduced into chit-chat-based CRS. However, existing chit-chat-based CRS
usually generate repetitive item recommendations, and they cannot properly
infuse knowledge from KG into CRS to generate informative responses. To remedy
these issues, we first reformulate the conversational recommendation task to
highlight that the recommended items should be new and of potential interest to
users. Then, we propose the Knowledge-Enriched Conversational Recommendation
System (KECRS). Specifically, we develop the Bag-of-Entity (BOE) loss and the
infusion loss to better integrate KG with CRS for generating more diverse and
informative responses. BOE loss provides an additional supervision signal to
guide CRS to learn from both human-written utterances and KG. Infusion loss
bridges the gap between the word embeddings and entity embeddings by minimizing
distances of the same words in these two embeddings. Moreover, we facilitate
our study by constructing a high-quality KG, i.e., The Movie Domain Knowledge
Graph (TMDKG). Experimental results on a large-scale dataset demonstrate that
KECRS outperforms state-of-the-art chit-chat-based CRS, in terms of both
recommendation accuracy and response generation quality.
| 2,021 |
Computation and Language
|
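The KECRS entry above describes an infusion loss that pulls word embeddings and KG entity embeddings of the same surface form together. The sketch below reads that as a mean-squared distance over shared names; this reading, and all toy values, are assumptions rather than KECRS's exact formulation.

```python
# Hedged sketch of an "infusion"-style loss: penalize the squared distance
# between the word embedding and the KG entity embedding of the same name.
# Treating it as an MSE over shared names is an assumption about one natural
# reading of the abstract, not the paper's exact loss.
import numpy as np

def infusion_loss(word_emb: dict, entity_emb: dict) -> float:
    """Mean squared distance between embeddings that share a name."""
    shared = sorted(set(word_emb) & set(entity_emb))
    if not shared:
        return 0.0
    diffs = np.stack([word_emb[n] - entity_emb[n] for n in shared])
    return float((diffs ** 2).sum(axis=1).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    words = {"inception": rng.normal(size=4), "avatar": rng.normal(size=4)}
    entities = {"inception": rng.normal(size=4), "titanic": rng.normal(size=4)}
    print(infusion_loss(words, entities))   # only the shared name contributes
```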
CoMAE: A Multi-factor Hierarchical Framework for Empathetic Response
Generation
|
The capacity of empathy is crucial to the success of open-domain dialog
systems. Due to its nature of multi-dimensionality, there are various factors
that relate to empathy expression, such as communication mechanism, dialog act
and emotion. However, existing methods for empathetic response generation
usually either consider only one empathy factor or ignore the hierarchical
relationships between different factors, leading to a weak ability of empathy
modeling. In this paper, we propose a multi-factor hierarchical framework,
CoMAE, for empathetic response generation, which models the above three key
factors of empathy expression in a hierarchical way. We show experimentally
that our CoMAE-based model can generate more empathetic responses than previous
methods. We also highlight the importance of hierarchical modeling of different
factors through both the empirical analysis on a real-life corpus and the
extensive experiments. Our codes and used data are available at
https://github.com/chujiezheng/CoMAE.
| 2,021 |
Computation and Language
|
Relation Classification with Entity Type Restriction
|
Relation classification aims to predict a relation between two entities in a
sentence. The existing methods regard all relations as the candidate relations
for the two entities in a sentence. These methods neglect the restrictions on
candidate relations by entity types, which leads to some inappropriate
relations being candidate relations. In this paper, we propose a novel
paradigm, RElation Classification with ENtity Type restriction (RECENT), which
exploits entity types to restrict candidate relations. Specifically, the mutual
restrictions of relations and entity types are formalized and introduced into
relation classification. Besides, the proposed paradigm, RECENT, is
model-agnostic. Based on two representative models GCN and SpanBERT
respectively, RECENT_GCN and RECENT_SpanBERT are trained in RECENT.
Experimental results on a standard dataset indicate that RECENT improves the
performance of GCN and SpanBERT by 6.9 and 4.4 F1 points, respectively.
Especially, RECENT_SpanBERT achieves a new state-of-the-art on TACRED.
| 2,021 |
Computation and Language
|
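The RECENT entry above restricts the candidate relations for a pair of entities by their types. A minimal sketch of that restriction as a masking step over relation scores follows; the toy type-to-relation table is illustrative and is not the TACRED schema or the paper's formalization.

```python
# Hedged sketch of entity-type restriction: mask out relation scores that are
# incompatible with the (head type, tail type) pair before taking the argmax.
import numpy as np

RELATIONS = ["no_relation", "per:employee_of", "org:founded_by", "per:spouse"]
ALLOWED = {
    ("PERSON", "ORG"): {"no_relation", "per:employee_of"},
    ("ORG", "PERSON"): {"no_relation", "org:founded_by"},
    ("PERSON", "PERSON"): {"no_relation", "per:spouse"},
}

def restricted_prediction(scores: np.ndarray, head_type: str, tail_type: str) -> str:
    """Pick the best-scoring relation among those allowed for the type pair."""
    allowed = ALLOWED.get((head_type, tail_type), {"no_relation"})
    masked = np.where([r in allowed for r in RELATIONS], scores, -np.inf)
    return RELATIONS[int(masked.argmax())]

if __name__ == "__main__":
    scores = np.array([0.1, 0.3, 0.9, 0.2])      # raw model scores per relation
    # Unrestricted argmax would pick "org:founded_by"; the type pair forbids it.
    print(restricted_prediction(scores, "PERSON", "ORG"))   # -> per:employee_of
```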
DRILL: Dynamic Representations for Imbalanced Lifelong Learning
|
Continual or lifelong learning has been a long-standing challenge in machine
learning to date, especially in natural language processing (NLP). Although
state-of-the-art language models such as BERT have ushered in a new era in this
field due to their outstanding performance in multitask learning scenarios,
they suffer from forgetting when being exposed to a continuous stream of data
with shifting data distributions. In this paper, we introduce DRILL, a novel
continual learning architecture for open-domain text classification. DRILL
leverages a biologically inspired self-organizing neural architecture to
selectively gate latent language representations from BERT in a
task-incremental manner. We demonstrate in our experiments that DRILL
outperforms current methods in a realistic scenario of imbalanced,
non-stationary data without prior knowledge about task boundaries. To the best
of our knowledge, DRILL is the first of its kind to use a self-organizing
neural architecture for open-domain lifelong learning in NLP.
| 2,021 |
Computation and Language
|
Parallel Attention Network with Sequence Matching for Video Grounding
|
Given a video, video grounding aims to retrieve a temporal moment that
semantically corresponds to a language query. In this work, we propose a
Parallel Attention Network with Sequence matching (SeqPAN) to address the
challenges in this task: multi-modal representation learning, and target moment
boundary prediction. We design a self-guided parallel attention module to
effectively capture self-modal contexts and cross-modal attentive information
between video and text. Inspired by sequence labeling tasks in natural language
processing, we split the ground truth moment into begin, inside, and end
regions. We then propose a sequence matching strategy to guide start/end
boundary predictions using region labels. Experimental results on three
datasets show that SeqPAN is superior to state-of-the-art methods. Furthermore,
the effectiveness of the self-guided parallel attention module and the sequence
matching module is verified.
| 2,023 |
Computation and Language
|
Understanding the Properties of Minimum Bayes Risk Decoding in Neural
Machine Translation
|
Neural Machine Translation (NMT) currently exhibits biases such as producing
translations that are too short and overgenerating frequent words, and shows
poor robustness to copy noise in training data or domain shift. Recent work has
tied these shortcomings to beam search -- the de facto standard inference
algorithm in NMT -- and Eikema & Aziz (2020) propose to use Minimum Bayes Risk
(MBR) decoding on unbiased samples instead.
In this paper, we empirically investigate the properties of MBR decoding on a
number of previously reported biases and failure cases of beam search. We find
that MBR still exhibits a length and token frequency bias, owing to the MT
metrics used as utility functions, but that MBR also increases robustness
against copy noise in the training data and domain shift.
| 2,021 |
Computation and Language
|
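The MBR entry above evaluates decoding that picks, among model samples, the candidate with the highest expected utility against the other samples used as pseudo-references. A minimal sketch of that selection rule follows; the unigram-F1 utility is a stand-in for an MT metric such as BLEU or ChrF and is not the utility studied in the paper.

```python
# Hedged sketch of Minimum Bayes Risk decoding: among sampled candidates, pick
# the one with the highest average utility against the other samples.
from collections import Counter
from typing import Callable, List

def unigram_f1(hyp: str, ref: str) -> float:
    """Toy utility: unigram F1 overlap between hypothesis and reference."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def mbr_decode(samples: List[str], utility: Callable[[str, str], float] = unigram_f1) -> str:
    """Return the sample maximizing expected utility over the other samples."""
    def expected_utility(candidate: str) -> float:
        others = [s for s in samples if s is not candidate]
        return sum(utility(candidate, ref) for ref in others) / max(len(others), 1)
    return max(samples, key=expected_utility)

if __name__ == "__main__":
    samples = ["the cat sat on the mat", "the cat sat on mat", "a dog ran away"]
    print(mbr_decode(samples))   # the consensus-like translation wins
```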
Revisiting Additive Compositionality: AND, OR and NOT Operations with
Word Embeddings
|
It is well-known that typical word embedding methods such as Word2Vec and
GloVe have the property that the meaning can be composed by adding up the
embeddings (additive compositionality). Several theories have been proposed to
explain additive compositionality, but the following questions remain
unanswered: (Q1) The assumptions of those theories do not hold for practical
word embeddings. (Q2) Ordinary additive compositionality can be seen
as an AND operation of word meanings, but it is not well understood how other
operations, such as OR and NOT, can be computed by the embeddings. We address
these issues by the idea of frequency-weighted centering at its core. This
paper proposes a post-processing method for bridging the gap between practical
word embeddings and the theoretical assumptions about additive compositionality as
an answer to (Q1). It also gives a method for taking OR or NOT of the meaning
by linear operation of word embedding as an answer to (Q2). Moreover, we
confirm experimentally that the accuracy of AND operation, i.e., the ordinary
additive compositionality, can be improved by our post-processing method (3.5x
improvement in top-100 accuracy) and that OR and NOT operations can be
performed correctly.
| 2,022 |
Computation and Language
|
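The additive-compositionality entry above centers on frequency-weighted centering as a post-processing step, after which AND is realized by vector addition. The sketch below illustrates only that centering step and additive composition; how the paper realizes OR and NOT as linear operations is not reproduced, and all vectors and counts are toy values.

```python
# Hedged sketch of frequency-weighted centering: subtract the corpus-frequency-
# weighted mean vector from every embedding, then compose meanings by addition
# (the AND operation). OR and NOT are not shown here.
import numpy as np

def frequency_weighted_center(embeddings: np.ndarray, counts: np.ndarray) -> np.ndarray:
    """Subtract the frequency-weighted mean from all word vectors."""
    weights = counts / counts.sum()
    mean = (weights[:, None] * embeddings).sum(axis=0)
    return embeddings - mean

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = ["king", "queen", "royal", "banana"]
    vectors = rng.normal(size=(4, 8))
    counts = np.array([500.0, 300.0, 200.0, 900.0])   # toy corpus frequencies
    centered = frequency_weighted_center(vectors, counts)
    and_meaning = centered[0] + centered[2]           # "king" AND "royal" by addition
    print(and_meaning.shape)
```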
Self-interpretable Convolutional Neural Networks for Text Classification
|
Deep learning models for natural language processing (NLP) are inherently
complex and often viewed as black boxes. This paper develops an
approach for interpreting convolutional neural networks for text classification
problems by exploiting the local-linear models inherent in ReLU-DNNs. The CNN
model combines the word embedding through convolutional layers, filters them
using max-pooling, and optimizes using a ReLU-DNN for classification. To get an
overall self-interpretable model, the system of local linear models from the
ReLU DNN are mapped back through the max-pool filter to the appropriate
n-grams. Our results on experimental datasets demonstrate that our proposed
technique produces parsimonious models that are self-interpretable and have
comparable performance with respect to a more complex CNN model. We also study
the impact of the complexity of the convolutional layers and the classification
layers on the model performance.
| 2,021 |
Computation and Language
|
WOVe: Incorporating Word Order in GloVe Word Embeddings
|
Word vector representations open up new opportunities to extract useful
information from unstructured text. Defining a word as a vector has made it easy
for machine learning algorithms to understand a text and extract information
from it. Word vector representations have been used in many applications such
as word synonyms, word analogy, syntactic parsing, and many others. GloVe,
based on word contexts and matrix vectorization, is an effective
vector-learning algorithm. It improves on previous vector-learning
algorithms. However, the GloVe model fails to explicitly consider the order in
which words appear within their contexts. In this paper, multiple methods of
incorporating word order in GloVe word embeddings are proposed. Experimental
results show that our Word Order Vector (WOVe) word embeddings approach
outperforms unmodified GloVe on the natural language tasks of analogy
completion and word similarity. WOVe with direct concatenation slightly
outperformed GloVe on the word similarity task, increasing average rank by 2%.
However, it greatly improved on the GloVe baseline on a word analogy task,
achieving an average 36.34% improvement in accuracy.
| 2,021 |
Computation and Language
|
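The WOVe entry above mentions a "direct concatenation" variant for making GloVe order-aware. The sketch below shows one plausible reading: keep a separate context vector per relative position and concatenate them into an order-aware representation. This interpretation, and the toy vectors, are assumptions rather than the paper's exact training procedure.

```python
# Hedged sketch of one reading of "direct concatenation": concatenate
# position-specific context vectors (one per window offset) into a single
# order-aware word representation.
import numpy as np

def order_aware_vector(word: str, position_vectors: dict, window: int = 2) -> np.ndarray:
    """Concatenate the word's context vectors for offsets -window..+window (skip 0)."""
    offsets = [o for o in range(-window, window + 1) if o != 0]
    return np.concatenate([position_vectors[(word, o)] for o in offsets])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, window = 5, 2
    offsets = [o for o in range(-window, window + 1) if o != 0]
    # Toy position-specific vectors for one word, as a GloVe-like model might learn.
    position_vectors = {("bank", o): rng.normal(size=dim) for o in offsets}
    vec = order_aware_vector("bank", position_vectors, window)
    print(vec.shape)   # (20,) = 4 offsets * 5 dimensions
```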
Stylized Story Generation with Style-Guided Planning
|
Current storytelling systems focus more on generating stories with coherent
plots regardless of the narration style, which is important for controllable
text generation. Therefore, we propose a new task, stylized story generation,
namely generating stories with a specified style given a leading context. To
tackle the problem, we propose a novel generation model that first plans the
stylized keywords and then generates the whole story with the guidance of the
keywords. Besides, we propose two automatic metrics to evaluate the consistency
between the generated story and the specified style. Experiments demonstrate
that our model can controllably generate emotion-driven or event-driven stories
based on the ROCStories dataset (Mostafazadeh et al., 2016). Our study presents
insights for stylized story generation in further research.
| 2,021 |
Computation and Language
|
LCP-RIT at SemEval-2021 Task 1: Exploring Linguistic Features for
Lexical Complexity Prediction
|
This paper describes team LCP-RIT's submission to the SemEval-2021 Task 1:
Lexical Complexity Prediction (LCP). The task organizers provided participants
with an augmented version of CompLex (Shardlow et al., 2020), an English
multi-domain dataset in which words in context were annotated with respect to
their complexity using a five point Likert scale. Our system uses logistic
regression and a wide range of linguistic features (e.g. psycholinguistic
features, n-grams, word frequency, POS tags) to predict the complexity of
single words in this dataset. We analyze the impact of different linguistic
features in the classification performance and we evaluate the results in terms
of mean absolute error, mean squared error, Pearson correlation, and Spearman
correlation.
| 2,021 |
Computation and Language
|
Exploring Text-to-Text Transformers for English to Hinglish Machine
Translation with Synthetic Code-Mixing
|
We describe models focused on the understudied problem of translating between
monolingual and code-mixed language pairs. More specifically, we offer a wide
range of models that convert monolingual English text into Hinglish (code-mixed
Hindi and English). Given the recent success of pretrained language models, we
also test the utility of two recent Transformer-based encoder-decoder models
(i.e., mT5 and mBART) on the task finding both to work well. Given the paucity
of training data for code-mixing, we also propose a dependency-free method for
generating code-mixed texts from bilingual distributed representations that we
exploit for improving language model performance. In particular, armed with
this additional data, we adopt a curriculum learning approach where we first
finetune the language models on synthetic data then on gold code-mixed data. We
find that, although simple, our synthetic code-mixing method is competitive
with (and in some cases is even superior to) several standard methods
(backtranslation, method based on equivalence constraint theory) under a
diverse set of conditions. Our work shows that the mT5 model, finetuned
following the curriculum learning procedure, achieves best translation
performance (12.67 BLEU). Our models place first in the overall ranking of the
English-Hinglish official shared task.
| 2,021 |
Computation and Language
|
An Automated Method to Enrich Consumer Health Vocabularies Using GloVe
Word Embeddings and An Auxiliary Lexical Resource
|
Background: Clear language makes communication easier between any two
parties. A layman may have difficulty communicating with a professional due to
not understanding the specialized terms common to the domain. In healthcare, it
is rare to find a layman knowledgeable in medical terminology which can lead to
poor understanding of their condition and/or treatment. To bridge this gap,
several professional vocabularies and ontologies have been created to map
laymen medical terms to professional medical terms and vice versa.
Objective: Many of the presented vocabularies are built manually or
semi-automatically, requiring large investments of time and human effort, which
consequently slows the growth of these vocabularies. In this paper, we present
an automatic method to enrich laymen's vocabularies that has the benefit of
being able to be applied to vocabularies in any domain.
Methods: Our entirely automatic approach uses machine learning, specifically
Global Vectors for Word Embeddings (GloVe), on a corpus collected from a social
media healthcare platform to extend and enhance consumer health vocabularies
(CHV). Our approach further improves the CHV by incorporating synonyms and
hyponyms from the WordNet ontology. The basic GloVe and our novel algorithms
incorporating WordNet were evaluated using two laymen datasets from the
National Library of Medicine (NLM), Open-Access Consumer Health Vocabulary (OAC
CHV) and MedlinePlus Healthcare Vocabulary.
Results: The results show that GloVe was able to find new laymen terms with
an F-score of 48.44%. Furthermore, our enhanced GloVe approach outperformed
basic GloVe with an average F-score of 61%, a relative improvement of 25%.
Furthermore, the enhanced GloVe showed a statistical significance over the two
ground truth datasets with P<.001.
| 2,021 |
Computation and Language
|
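The vocabulary-enrichment entry above combines embedding neighbours with WordNet synonyms and hyponyms. A minimal sketch of that combination follows; the cosine-neighbour scoring, the union-based combination, and the toy vectors are assumptions for illustration, not the paper's exact algorithm (WordNet data must be downloaded for the expansion step).

```python
# Hedged sketch of the enrichment idea: propose laymen candidates for a
# professional term from embedding nearest neighbours, then expand them with
# WordNet synonyms and hyponyms.
# Requires: pip install nltk && python -m nltk.downloader wordnet
import numpy as np
from nltk.corpus import wordnet as wn

def nearest_neighbours(term: str, vectors: dict, topn: int = 5):
    """Cosine nearest neighbours of `term` among the embedding vocabulary."""
    q = vectors[term]
    scores = {
        w: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for w, v in vectors.items() if w != term
    }
    return sorted(scores, key=scores.get, reverse=True)[:topn]

def wordnet_expansion(term: str):
    """Synonyms and hyponym lemmas of `term` from WordNet."""
    related = set()
    for synset in wn.synsets(term):
        related.update(synset.lemma_names())
        for hypo in synset.hyponyms():
            related.update(hypo.lemma_names())
    related.discard(term)
    return related

def enrich(term: str, vectors: dict, topn: int = 5):
    """Union of embedding neighbours and WordNet expansions as candidate laymen terms."""
    return set(nearest_neighbours(term, vectors, topn)) | wordnet_expansion(term)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_vectors = {w: rng.normal(size=16) for w in ["myocardial", "heart", "attack", "chest"]}
    print(enrich("myocardial", toy_vectors, topn=2))
```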
Training Heterogeneous Features in Sequence to Sequence Tasks: Latent
Enhanced Multi-filter Seq2Seq Model
|
In language processing, training data with extremely large variance may lead
to difficulty in the language model's convergence. It is difficult for the
network parameters to adapt to sentences with largely varied semantics or
grammatical structures. To resolve this problem, we introduce a model that
concentrates on each of the heterogeneous features in the input sentences.
Building upon the encoder-decoder architecture, we design a latent-enhanced
multi-filter seq2seq model (LEMS) that analyzes the input representations by
introducing a latent space transformation and clustering. The representations
are extracted from the final hidden state of the encoder and lie in the latent
space. A latent space transformation is applied for enhancing the quality of
the representations. Thus the clustering algorithm can easily separate samples
based on the features of these representations. Multiple filters are trained by
the features from their corresponding clusters, and the heterogeneity of the
training data can be resolved accordingly. We conduct two sets of comparative
experiments on semantic parsing and machine translation, using the Geo-query
dataset and Multi30k English-French to demonstrate the enhancement our model
has made respectively.
| 2,022 |
Computation and Language
|
Effective Attention Sheds Light On Interpretability
|
An attention matrix of a transformer self-attention sublayer can provably be
decomposed into two components and only one of them (effective attention)
contributes to the model output. This leads us to ask whether visualizing
effective attention gives different conclusions than interpretation of standard
attention. Using a subset of the GLUE tasks and BERT, we carry out an analysis
to compare the two attention matrices, and show that their interpretations
differ. Effective attention is less associated with the features related to the
language modeling pretraining such as the separator token, and it has more
potential to illustrate linguistic features captured by the model for solving
the end-task. Given the found differences, we recommend using effective
attention for studying a transformer's behavior since it is more pertinent to
the model output by design.
| 2,021 |
Computation and Language
|
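The effective-attention entry above uses the fact that the sublayer output is the product of the attention matrix and the value matrix, so any component of an attention row lying in the left null space of the values cannot affect the output. The sketch below is a plain NumPy rendering of that decomposition, not the authors' code; names and tolerances are illustrative.

```python
# Hedged sketch of "effective attention": strip from each attention row the
# component in the left null space of V, since that part cannot change A @ V.
import numpy as np

def effective_attention(A: np.ndarray, V: np.ndarray, tol: float = 1e-10) -> np.ndarray:
    """Project each attention row onto the subspace that actually influences A @ V."""
    U, S, _ = np.linalg.svd(V, full_matrices=True)      # V: (seq_len, head_dim)
    rank = int((S > tol).sum())
    null_basis = U[:, rank:]                             # left null space of V
    return A - A @ null_basis @ null_basis.T             # remove the inert component

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, head_dim = 6, 3                              # seq_len > head_dim => non-trivial null space
    A = rng.random((seq_len, seq_len))
    A /= A.sum(axis=1, keepdims=True)                     # row-stochastic, like softmax attention
    V = rng.normal(size=(seq_len, head_dim))
    A_eff = effective_attention(A, V)
    print(np.allclose(A @ V, A_eff @ V))                  # True: the output is unchanged
```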
Improving Adverse Drug Event Extraction with SpanBERT on Different Text
Typologies
|
In recent years, Internet users are reporting Adverse Drug Events (ADE) on
social media, blogs and health forums. Because of the large volume of reports,
pharmacovigilance is seeking to resort to NLP to monitor these outlets. We
propose for the first time the use of the SpanBERT architecture for the task of
ADE extraction: this new version of the popular BERT transformer showed
improved capabilities with multi-token text spans. We validate our hypothesis
with experiments on two datasets (SMM4H and CADEC) with different text
typologies (tweets and blog posts), finding that SpanBERT combined with a CRF
outperforms all the competitors on both of them.
| 2,021 |
Computation and Language
|
A Sequence-to-Set Network for Nested Named Entity Recognition
|
Named entity recognition (NER) is a widely studied task in natural language
processing. Recently, a growing number of studies have focused on the nested
NER. The span-based methods, considering the entity recognition as a span
classification task, can deal with nested entities naturally. But they suffer
from the huge search space and the lack of interactions between entities. To
address these issues, we propose a novel sequence-to-set neural network for
nested NER. Instead of specifying candidate spans in advance, we provide a
fixed set of learnable vectors to learn the patterns of the valuable spans. We
utilize a non-autoregressive decoder to predict the final set of entities in
one pass, in which we are able to capture dependencies between entities.
Compared with the sequence-to-sequence method, our model is more suitable for
such unordered recognition task as it is insensitive to the label order. In
addition, we utilize the loss function based on bipartite matching to compute
the overall training loss. Experimental results show that our proposed model
achieves state-of-the-art on three nested NER corpora: ACE 2004, ACE 2005 and
KBP 2017. The code is available at
https://github.com/zqtan1024/sequence-to-set.
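To make the bipartite matching idea concrete, here is a hedged sketch using the Hungarian algorithm from SciPy; the cost terms and the final loss are simplified placeholders, not the paper's actual training objective.
```python
# Hungarian matching between K predicted entity "slots" and the gold entity set,
# followed by a loss over the matched pairs.
import numpy as np
from scipy.optimize import linear_sum_assignment

def set_loss(pred_type_probs, pred_spans, gold_types, gold_spans):
    # pred_type_probs: (K, n_types); pred_spans: (K, 2); gold_types: list; gold_spans: (G, 2).
    K, G = pred_type_probs.shape[0], len(gold_types)
    cost = np.zeros((K, G))
    for i in range(K):
        for j in range(G):
            cls_cost = -pred_type_probs[i, gold_types[j]]            # classification term
            span_cost = np.abs(pred_spans[i] - gold_spans[j]).sum()  # boundary L1 term
            cost[i, j] = cls_cost + span_cost
    rows, cols = linear_sum_assignment(cost)      # optimal bipartite matching
    return cost[rows, cols].sum() / max(G, 1)

probs = np.random.dirichlet(np.ones(5), size=8)   # 8 learnable slots, 5 entity types
spans = np.random.randint(0, 20, size=(8, 2))
print(set_loss(probs, spans, gold_types=[1, 3], gold_spans=np.array([[2, 4], [7, 9]])))
```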
| 2,021 |
Computation and Language
|
OpenMEVA: A Benchmark for Evaluating Open-ended Story Generation Metrics
|
Automatic metrics are essential for developing natural language generation
(NLG) models, particularly for open-ended language generation tasks such as
story generation. However, existing automatic metrics are observed to correlate
poorly with human evaluation. The lack of standardized benchmark datasets makes
it difficult to fully evaluate the capabilities of a metric and fairly compare
different metrics. Therefore, we propose OpenMEVA, a benchmark for evaluating
open-ended story generation metrics. OpenMEVA provides a comprehensive test
suite to assess the capabilities of metrics, including (a) the correlation with
human judgments, (b) the generalization to different model outputs and
datasets, (c) the ability to judge story coherence, and (d) the robustness to
perturbations. To this end, OpenMEVA includes both manually annotated stories
and auto-constructed test examples. We evaluate existing metrics on OpenMEVA
and observe that they have poor correlation with human judgments, fail to
recognize discourse-level incoherence, and lack inferential knowledge (e.g.,
causal order between events), the generalization ability and robustness. Our
study presents insights for developing NLG models and metrics in further
research.
| 2,021 |
Computation and Language
|
Investigating Math Word Problems using Pretrained Multilingual Language
Models
|
In this paper, we revisit math word problems~(MWPs) from the cross-lingual
and multilingual perspective. We construct our MWP solvers over pretrained
multilingual language models using sequence-to-sequence model with copy
mechanism. We compare how the MWP solvers perform in cross-lingual and
multilingual scenarios. To facilitate the comparison of cross-lingual
performance, we first adapt the large-scale English dataset MathQA as a
counterpart of the Chinese dataset Math23K. Then we extend several English
datasets to bilingual datasets through machine translation plus human
annotation. Our experiments show that MWP solvers may not transfer to
a different language even if the target expressions use the same operator set
and constants. However, for both cross-lingual and multilingual cases, the
solvers generalize better when the problem types exist in both the source and
the target languages.
| 2,022 |
Computation and Language
|
Answering Product-Questions by Utilizing Questions from Other
Contextually Similar Products
|
Predicting the answer to a product-related question is an emerging field of
research that recently attracted a lot of attention. Answering subjective and
opinion-based questions is most challenging due to the dependency on
customer-generated content. Previous works mostly focused on review-aware
answer prediction; however, these approaches fail for new or unpopular
products, having no (or only a few) reviews at hand. In this work, we propose a
novel and complementary approach for predicting the answer for such questions,
based on the answers for similar questions asked on similar products. We
measure the contextual similarity between products based on the answers they
provide for the same question. A mixture-of-expert framework is used to predict
the answer by aggregating the answers from contextually similar products.
Empirical results demonstrate that our model outperforms strong baselines on
some segments of questions, namely those that have roughly ten or more similar
resolved questions in the corpus. We additionally publish two large-scale
datasets used in this work, one is of similar product question pairs, and the
second is of product question-answer pairs.
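As a toy illustration of the aggregation step, assuming a simple softmax gate over similarity scores; the paper's mixture-of-experts gating is learned, so this is only a sketch of the idea.
```python
# Treat each contextually similar product as an expert and combine their answers,
# weighting by contextual similarity.
import numpy as np

def predict_answer(similarities, expert_answers):
    # similarities: (k,) contextual similarity of each similar product;
    # expert_answers: (k,) probability of "yes" derived from each product's answers.
    weights = np.exp(similarities) / np.exp(similarities).sum()   # softmax gate
    return float(weights @ expert_answers)                        # aggregated P(answer = yes)

sims = np.array([0.9, 0.4, 0.7])
answers = np.array([1.0, 0.0, 1.0])
print(predict_answer(sims, answers))   # closer to 1 when the most similar products say "yes"
```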
| 2,021 |
Computation and Language
|
Long Text Generation by Modeling Sentence-Level and Discourse-Level
Coherence
|
Generating long and coherent text is an important but challenging task,
particularly for open-ended language generation tasks such as story generation.
Despite the success in modeling intra-sentence coherence, existing generation
models (e.g., BART) still struggle to maintain a coherent event sequence
throughout the generated text. We conjecture that this is because of the
difficulty for the decoder to capture the high-level semantics and discourse
structures in the context beyond token-level co-occurrence. In this paper, we
propose a long text generation model, which can represent the prefix sentences
at sentence level and discourse level in the decoding process. To this end, we
propose two pretraining objectives to learn the representations by predicting
inter-sentence semantic similarity and distinguishing between normal and
shuffled sentence orders. Extensive experiments show that our model can
generate more coherent texts than state-of-the-art baselines.
| 2,021 |
Computation and Language
|
QuatDE: Dynamic Quaternion Embedding for Knowledge Graph Completion
|
Knowledge graph embedding has been an active research topic for knowledge
base completion (KGC), with progressive improvement from the initial TransE,
TransH, RotatE, et al., to the current state-of-the-art QuatE. However, QuatE
ignores the multi-faceted nature of the entity and the complexity of the
relation, using only a rigorous operation in quaternion space to capture the
interaction between the entity pair and the relation, leaving opportunities for
better knowledge representation that would ultimately help KGC. In this paper, we propose
a novel model, QuatDE, with a dynamic mapping strategy to explicitly capture
the variety of relational patterns and separate different semantic information
of the entity, using transition vectors to adjust the point position of the
entity embedding vectors in the quaternion space via Hamilton product,
enhancing the feature interaction capability between elements of the triplet.
Experiment results show QuatDE achieves state-of-the-art performance on three
well-established knowledge graph completion benchmarks. In particular, the MR
metric has relatively improved by 26% on WN18 and 15% on WN18RR, which
demonstrates the generalization ability of QuatDE.
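For readers unfamiliar with the operation, the Hamilton product referenced above is the standard quaternion product; a small numpy sketch on toy vectors (not the model's learned entity, relation, or transition embeddings) is shown below.
```python
# Standard Hamilton product of two quaternions p = a1 + b1*i + c1*j + d1*k and q.
import numpy as np

def hamilton(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
        a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i component
        a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j component
        a1*d2 + b1*c2 - c1*b2 + d1*a2,   # k component
    ])

entity = np.array([1.0, 0.5, -0.3, 0.2])
transition = np.array([0.9, 0.1, 0.0, 0.4])
print(hamilton(entity, transition))      # the "adjusted" entity as one quaternion
```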
| 2,021 |
Computation and Language
|
Sentence Extraction-Based Machine Reading Comprehension for Vietnamese
|
The development of natural language processing (NLP) in general and machine
reading comprehension in particular has attracted the great attention of the
research community. In recent years, there are a few datasets for machine
reading comprehension tasks in Vietnamese with large sizes, such as UIT-ViQuAD
and UIT-ViNewsQA. However, the answers in these datasets are not diverse enough
to serve further research. In this paper, we introduce UIT-ViWikiQA, the first dataset for
evaluating sentence extraction-based machine reading comprehension in the
Vietnamese language. The UIT-ViWikiQA dataset is converted from the UIT-ViQuAD
dataset and comprises 23,074 question-answer pairs based on 5,109 passages from
174 Vietnamese Wikipedia articles. We propose a conversion
algorithm to create the dataset for sentence extraction-based machine reading
comprehension and three types of approaches for sentence extraction-based
machine reading comprehension in Vietnamese. Our experiments show that the best
machine model is XLM-R_Large, which achieves an exact match (EM) of 85.97% and
an F1-score of 88.77% on our dataset. Besides, we analyze experimental results
in terms of the question type in Vietnamese and the effect of context on the
performance of the MRC models, thereby showing the challenges from the
UIT-ViWikiQA dataset that we propose to the language processing community.
| 2,021 |
Computation and Language
|
Do Models Learn the Directionality of Relations? A New Evaluation:
Relation Direction Recognition
|
Deep neural networks such as BERT have made great progress in relation
classification. Although they can achieve good performance, it is still a
question of concern whether these models recognize the directionality of
relations, especially when they may lack interpretability. To explore the
question, a novel evaluation task, called Relation Direction Recognition (RDR),
is proposed to explore whether models learn the directionality of relations.
Three metrics for RDR are introduced to measure the degree to which models
recognize the directionality of relations. Several state-of-the-art models are
evaluated on RDR. Experimental results on a real-world dataset indicate that
there are clear gaps among them in recognizing the directionality of relations,
even though these models obtain similar performance in the traditional metric
(e.g. Macro-F1). Finally, some suggestions are discussed to enhance models to
recognize the directionality of relations from the perspective of model design
or training.
| 2,021 |
Computation and Language
|
Partner Matters! An Empirical Study on Fusing Personas for Personalized
Response Selection in Retrieval-Based Chatbots
|
Persona can function as the prior knowledge for maintaining the consistency
of dialogue systems. Most previous studies adopted only the persona of the self
speaker, whose response was to be selected from a set of candidates or
directly generated, but few have noticed the role of the partner in dialogue. This
paper makes an attempt to thoroughly explore the impact of utilizing personas
that describe either self or partner speakers on the task of response selection
in retrieval-based chatbots. Four persona fusion strategies are designed, which
assume personas interact with contexts or responses in different ways. These
strategies are implemented into three representative models for response
selection, which are based on the Hierarchical Recurrent Encoder (HRE),
Interactive Matching Network (IMN) and Bidirectional Encoder Representations
from Transformers (BERT) respectively. Empirical studies on the Persona-Chat
dataset show that the partner personas neglected in previous studies can
improve the accuracy of response selection in the IMN- and BERT-based models.
Besides, our BERT-based model implemented with the context-response-aware
persona fusion strategy outperforms previous methods by margins larger than
2.7% on original personas and 4.6% on revised personas in terms of hits@1
(top-1 accuracy), achieving a new state-of-the-art performance on the
Persona-Chat dataset.
| 2,021 |
Computation and Language
|
Methods for Detoxification of Texts for the Russian Language
|
We introduce the first study of automatic detoxification of Russian texts to
combat offensive language. Such a kind of textual style transfer can be used,
for instance, for processing toxic content in social media. While much work has
been done on this task for English, it has not yet been addressed for Russian.
We test two types of models - an unsupervised approach based on the BERT
architecture that performs local corrections, and a supervised approach based
on the pretrained GPT-2 language model - and compare them with several
baselines. In addition, we describe an evaluation setup, providing training
datasets and metrics for automatic evaluation. The results show that the tested
approaches can be successfully used for detoxification, although there is room
for improvement.
| 2,021 |
Computation and Language
|
Essay-BR: a Brazilian Corpus of Essays
|
Automatic Essay Scoring (AES) is defined as the computer technology that
evaluates and scores written essays, aiming to provide computational models
to grade essays either automatically or with minimal human involvement. While
there are several AES studies in a variety of languages, few of them are
focused on the Portuguese language. The main reason is the lack of a corpus
with manually graded essays. In order to bridge this gap, we create a large
corpus with several essays written by Brazilian high school students on an
online platform. All of the essays are argumentative and were scored across
five competencies by experts. Moreover, we conducted an experiment on the
created corpus and showed challenges posed by the Portuguese language. Our
corpus is publicly available at https://github.com/rafaelanchieta/essay.
| 2,021 |
Computation and Language
|
Combining GCN and Transformer for Chinese Grammatical Error Detection
|
This paper describes our system at NLPTEA-2020 Task: Chinese Grammatical
Error Diagnosis (CGED). The goal of CGED is to diagnose four types of
grammatical errors: word selection (S), redundant words (R), missing words (M),
and disordered words (W). An automatic CGED system contains two parts, error
detection and error correction; our system is designed to solve the error
detection problem. Our system is built on three models: 1) a
BERT-based model leveraging syntactic information; 2) a BERT-based model
leveraging contextual embeddings; 3) a lexicon-based graph neural network
leveraging lexical information. We also design an ensemble mechanism to improve
the single model's performance. Finally, our system achieves the highest F1
scores at detection level and identification level among all teams
participating in the CGED 2020 task.
| 2,021 |
Computation and Language
|
Explainable Tsetlin Machine framework for fake news detection with
credibility score assessment
|
The proliferation of fake news, i.e., news intentionally spread for
misinformation, poses a threat to individuals and society. Despite various
fact-checking websites such as PolitiFact, robust detection techniques are
required to deal with the increase in fake news. Several deep learning models
show promising results for fake news classification, however, their black-box
nature makes it difficult to explain their classification decisions and
quality-assure the models. We here address this problem by proposing a novel
interpretable fake news detection framework based on the recently introduced
Tsetlin Machine (TM). In brief, we utilize the conjunctive clauses of the TM to
capture lexical and semantic properties of both true and fake news text.
Further, we use the clause ensembles to calculate the credibility of fake news.
For evaluation, we conduct experiments on two publicly available datasets,
PolitiFact and GossipCop, and demonstrate that the TM framework significantly
outperforms previously published baselines by at least $5\%$ in terms of
accuracy, with the added benefit of an interpretable logic-based
representation. Further, our approach provides a higher F1-score than BERT and
XLNet, although with slightly lower accuracy. We finally present a case
study on our model's explainability, demonstrating how it decomposes into
meaningful words and their negations.
| 2,021 |
Computation and Language
|
TableZa -- A classical Computer Vision approach to Tabular Extraction
|
Computer aided Tabular Data Extraction has always been a very challenging and
error prone task because it demands both Spectral and Spatial Sanity of data.
In this paper we discuss an approach for Tabular Data Extraction in the realm
of document comprehension. Given the different kinds of the Tabular formats
that are often found across various documents, we discuss a novel approach
using Computer Vision for extraction of tabular data from images or vector
pdf(s) converted to image(s).
| 2,021 |
Computation and Language
|
Laughing Heads: Can Transformers Detect What Makes a Sentence Funny?
|
The automatic detection of humor poses a grand challenge for natural language
processing. Transformer-based systems have recently achieved remarkable results
on this task, but they usually (1)~were evaluated in setups where serious vs
humorous texts came from entirely different sources, and (2)~focused on
benchmarking performance without providing insights into how the models work.
We make progress in both respects by training and analyzing transformer-based
humor recognition models on a recently introduced dataset consisting of minimal
pairs of aligned sentences, one serious, the other humorous. We find that,
although our aligned dataset is much harder than previous datasets,
transformer-based models recognize the humorous sentence in an aligned pair
with high accuracy (78%). In a careful error analysis, we characterize easy vs
hard instances. Finally, by analyzing attention weights, we obtain important
insights into the mechanisms by which transformers recognize humor. Most
remarkably, we find clear evidence that one single attention head learns to
recognize the words that make a test sentence humorous, even without access to
this information at training time.
| 2,021 |
Computation and Language
|
A Privacy-Preserving Approach to Extraction of Personal Information
through Automatic Annotation and Federated Learning
|
We curated WikiPII, an automatically labeled dataset composed of Wikipedia
biography pages, annotated for personal information extraction. Although
automatic annotation can lead to a high degree of label noise, it is an
inexpensive process and can generate large volumes of annotated documents. We
trained a BERT-based NER model with WikiPII and showed that with an adequately
large training dataset, the model can significantly decrease the cost of manual
information extraction, despite the high level of label noise. In a similar
approach, organizations can leverage text mining techniques to create
customized annotated datasets from their historical data without sharing the
raw data for human annotation. Also, we explore collaborative training of NER
models through federated learning when the annotation is noisy. Our results
suggest that, depending on the level of trust in the ML operator and the volume
of the available data, distributed training can be an effective way of training
a personal information identifier in a privacy-preserved manner. Research
material is available at https://github.com/ratmcu/wikipiifed.
| 2,021 |
Computation and Language
|
Detection of Emotions in Hindi-English Code Mixed Text Data
|
In recent times, we have seen an increased use of text chat for communication
on social networks and smartphones. This particularly involves the use of
Hindi-English code-mixed text, which contains words that are not recognized in
the English vocabulary. We work on detecting emotions in this code-mixed data
and classifying the sentences into four human emotions: anger, fear, happiness,
and sadness. We use state-of-the-art natural language processing models and
compare their performance on a dataset comprising sentences in this code-mixed
data. The dataset was collected from various sources, annotated, and then used
to train the models.
| 2,021 |
Computation and Language
|
Retrieval-Augmented Transformer-XL for Close-Domain Dialog Generation
|
Transformer-based models have demonstrated excellent capabilities of
capturing patterns and structures in natural language generation and achieved
state-of-the-art results in many tasks. In this paper we present a
transformer-based model for multi-turn dialog response generation. Our solution
is based on a hybrid approach which augments a transformer-based generative
model with a novel retrieval mechanism, which leverages the memorized
information in the training data via k-Nearest Neighbor search. Our system is
evaluated on two datasets of customer/assistant dialogs: Taskmaster-1, released
by Google and containing high-quality, goal-oriented conversational data, and a
proprietary dataset collected from a real customer service call center. On
both, our model achieves better BLEU scores than strong baselines.
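A minimal sketch of the retrieval mechanism described above, assuming dialog contexts are encoded into fixed-size vectors and indexed with scikit-learn's NearestNeighbors; the encoder and index used in the paper may differ.
```python
# Index hidden representations of training contexts; at generation time, fetch the
# nearest memorized examples to condition the transformer on.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_context_vecs = rng.normal(size=(1000, 128))   # encoded training dialog contexts
train_responses = [f"response_{i}" for i in range(1000)]

index = NearestNeighbors(n_neighbors=5).fit(train_context_vecs)

def retrieve(context_vec, k=5):
    dists, idx = index.kneighbors(context_vec.reshape(1, -1), n_neighbors=k)
    return [train_responses[i] for i in idx[0]]      # candidates fed to the generator

print(retrieve(rng.normal(size=128)))
```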
| 2,021 |
Computation and Language
|
Learning Language Specific Sub-network for Multilingual Machine
Translation
|
Multilingual neural machine translation aims at learning a single translation
model for multiple languages. These jointly trained models often suffer from
performance degradation on rich-resource language pairs. We attribute this
degeneration to parameter interference. In this paper, we propose LaSS to
jointly train a single unified multilingual MT model. LaSS learns Language
Specific Sub-network (LaSS) for each language pair to counter parameter
interference. Comprehensive experiments on IWSLT and WMT datasets with various
Transformer architectures show that LaSS obtains gains on 36 language pairs by
up to 1.2 BLEU. Moreover, LaSS shows strong generalization: it extends easily
to new language pairs and to zero-shot translation, boosting zero-shot
translation by an average of 8.3 BLEU on 30 language pairs. Codes
and trained models are available at https://github.com/NLP-Playground/LaSS.
| 2,021 |
Computation and Language
|
Geographic Question Answering: Challenges, Uniqueness, Classification,
and Future Directions
|
As an important part of Artificial Intelligence (AI), Question Answering (QA)
aims at generating answers to questions phrased in natural language. While
there has been substantial progress in open-domain question answering, QA
systems are still struggling to answer questions which involve geographic
entities or concepts and that require spatial operations. In this paper, we
discuss the problem of geographic question answering (GeoQA). We first
investigate the reasons why geographic questions are difficult to answer by
analyzing challenges of geographic questions. We discuss the uniqueness of
geographic questions compared to general QA. Then we review existing work on
GeoQA and classify them by the types of questions they can address. Based on
this survey, we provide a generic classification framework for geographic
questions. Finally, we conclude our work by pointing out unique future research
directions for GeoQA.
| 2,021 |
Computation and Language
|
Computational Morphology with Neural Network Approaches
|
Neural network approaches have been applied to computational morphology with
great success, improving the performance of most tasks by a large margin and
providing new perspectives for modeling. This paper starts with a brief
introduction to computational morphology, followed by a review of recent work
on computational morphology with neural network approaches, to provide an
overview of the area. In the end, we will analyze the advantages and problems
of neural network approaches to computational morphology, and point out some
directions to be explored by future research and study.
| 2,021 |
Computation and Language
|
MLBiNet: A Cross-Sentence Collective Event Detection Network
|
We consider the problem of collectively detecting multiple events,
particularly in cross-sentence settings. The key to dealing with the problem is
to encode semantic information and model event inter-dependency at a
document-level. In this paper, we reformulate it as a Seq2Seq task and propose
a Multi-Layer Bidirectional Network (MLBiNet) to capture the document-level
association of events and semantic information simultaneously. Specifically, a
bidirectional decoder is firstly devised to model event inter-dependency within
a sentence when decoding the event tag vector sequence. Secondly, an
information aggregation module is employed to aggregate sentence-level semantic
and event tag information. Finally, we stack multiple bidirectional decoders
and feed cross-sentence information, forming a multi-layer bidirectional
tagging architecture to iteratively propagate information across sentences. We
show that our approach provides significant improvement in performance compared
to the current state-of-the-art results.
| 2,022 |
Computation and Language
|
Contrastive Learning for Many-to-many Multilingual Neural Machine
Translation
|
Existing multilingual machine translation approaches mainly focus on
English-centric directions, while the non-English directions still lag behind.
In this work, we aim to build a many-to-many translation system with an
emphasis on the quality of non-English language directions. Our intuition is
based on the hypothesis that a universal cross-language representation leads to
better multilingual translation performance. To this end, we propose mRASP2, a
training method to obtain a single unified multilingual translation model.
mRASP2 is empowered by two techniques: a) a contrastive learning scheme to
close the gap among representations of different languages, and b) data
augmentation on both multiple parallel and monolingual data to further align
token representations. For English-centric directions, mRASP2 outperforms
existing best unified model and achieves competitive or even better performance
than the pre-trained and fine-tuned model mBART on tens of WMT's translation
directions. For non-English directions, mRASP2 achieves an improvement of
average 10+ BLEU compared with the multilingual Transformer baseline. Code,
data and trained models are available at https://github.com/PANXiao1994/mRASP2.
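A hedged sketch of the kind of contrastive objective described in (a), treating parallel sentence pairs as positives and the other sentences in the batch as negatives; this is an InfoNCE-style loss, and mRASP2's exact formulation may differ.
```python
# Sentence-level contrastive loss that pulls parallel sentences together across languages.
import torch
import torch.nn.functional as F

def contrastive_loss(src_repr, tgt_repr, temperature=0.1):
    # src_repr, tgt_repr: (batch, dim) pooled representations of parallel sentences.
    src = F.normalize(src_repr, dim=-1)
    tgt = F.normalize(tgt_repr, dim=-1)
    logits = src @ tgt.t() / temperature      # (batch, batch) similarity matrix
    labels = torch.arange(src.size(0))        # the i-th source matches the i-th target
    return F.cross_entropy(logits, labels)

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```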
| 2,021 |
Computation and Language
|
Adaptive Knowledge-Enhanced Bayesian Meta-Learning for Few-shot Event
Detection
|
Event detection (ED) aims at detecting event trigger words in sentences and
classifying them into specific event types. In real-world applications, ED
typically does not have sufficient labelled data, thus can be formulated as a
few-shot learning problem. To tackle the issue of low sample diversity in
few-shot ED, we propose a novel knowledge-based few-shot event detection method
which uses a definition-based encoder to introduce external event knowledge as
the knowledge prior of event types. Furthermore, as external knowledge
typically provides limited and imperfect coverage of event types, we introduce
an adaptive knowledge-enhanced Bayesian meta-learning method to dynamically
adjust the knowledge prior of event types. Experiments show our method
consistently and substantially outperforms a number of baselines by at least 15
absolute F1 points under the same few-shot settings.
| 2,021 |
Computation and Language
|
Manual Evaluation Matters: Reviewing Test Protocols of Distantly
Supervised Relation Extraction
|
Distantly supervised (DS) relation extraction (RE) has attracted much
attention in the past few years as it can utilize large-scale auto-labeled
data. However, its evaluation has long been a problem: previous works either
took costly and inconsistent methods to manually examine a small sample of
model predictions, or directly test models on auto-labeled data -- which, by
our check, produce as much as 53% wrong labels at the entity pair level in the
popular NYT10 dataset. This problem has not only led to inaccurate evaluation,
but also made it hard to understand where we are and what's left to improve in
the research of DS-RE. To evaluate DS-RE models in a more credible way, we
build manually-annotated test sets for two DS-RE datasets, NYT10 and Wiki20,
and thoroughly evaluate several competitive models, especially the latest
pre-trained ones. The experimental results show that the manual evaluation can
indicate very different conclusions from automatic ones, especially some
unexpected observations, e.g., pre-trained models can achieve dominating
performance while being more susceptible to false-positives compared to
previous methods. We hope that both our manual test sets and novel observations
can help advance future DS-RE research.
| 2,021 |
Computation and Language
|
Unified Dual-view Cognitive Model for Interpretable Claim Verification
|
Recent studies constructing direct interactions between the claim and each
single user response (a comment or a relevant article) to capture evidence have
shown remarkable success in interpretable claim verification. Because different
individual responses convey the cognition of different users (i.e., audiences),
the captured evidence reflects the perspective of individual cognition.
However, individuals' cognition of social matters does not always reflect
objective reality; their opinions on a claim may carry one-sided or biased
semantics. The captured evidence correspondingly
contains some unobjective and biased evidence fragments, deteriorating task
performance. In this paper, we propose a Dual-view model based on the views of
Collective and Individual Cognition (CICD) for interpretable claim
verification. From the view of the collective cognition, we not only capture
the word-level semantics based on individual users, but also focus on
sentence-level semantics (i.e., the overall responses) among all users and
adjust the proportion between them to generate global evidence. From the view
of individual cognition, we select the top-$k$ articles with high degree of
difference and interact with the claim to explore the local key evidence
fragments. To weaken the bias of individual cognition-view evidence, we devise
inconsistent loss to suppress the divergence between global and local evidence
to strengthen the consistent shared evidence between the two. Experiments
on three benchmark datasets confirm that CICD achieves state-of-the-art
performance.
| 2,021 |
Computation and Language
|
Dependency Parsing with Bottom-up Hierarchical Pointer Networks
|
Dependency parsing is a crucial step towards deep language understanding and,
therefore, widely demanded by numerous Natural Language Processing
applications. In particular, left-to-right and top-down transition-based
algorithms that rely on Pointer Networks are among the most accurate approaches
for performing dependency parsing. Additionally, it has been observed for the
top-down algorithm that Pointer Networks' sequential decoding can be improved
by implementing a hierarchical variant, more adequate to model dependency
structures. Considering all this, we develop a bottom-up-oriented Hierarchical
Pointer Network for the left-to-right parser and propose two novel
transition-based alternatives: an approach that parses a sentence in
right-to-left order and a variant that does it from the outside in. We
empirically test the proposed neural architecture with the different algorithms
on a wide variety of languages, outperforming the original approach in
practically all of them and setting new state-of-the-art results on the English
and Chinese Penn Treebanks for non-contextualized and BERT-based embeddings.
| 2,022 |
Computation and Language
|
TF-IDF vs Word Embeddings for Morbidity Identification in Clinical
Notes: An Initial Study
|
Today, we are seeing an ever-increasing number of clinical notes that contain
clinical results, images, and textual descriptions of patient's health state.
All these data can be analyzed and employed to provide novel services that can
help people and domain experts with their common healthcare tasks. However,
many technologies such as Deep Learning and tools like Word Embeddings have
started to be investigated only recently, and many challenges remain open when
it comes to healthcare domain applications. To address these challenges, we
propose the use of Deep Learning and Word Embeddings for identifying sixteen
morbidity types within textual descriptions of clinical records. For this
purpose, we have used a Deep Learning model based on Bidirectional Long-Short
Term Memory (LSTM) layers which can exploit state-of-the-art vector
representations of data such as Word Embeddings. We have employed pre-trained
Word Embeddings namely GloVe and Word2Vec, and our own Word Embeddings trained
on the target domain. Furthermore, we have compared the performances of the
deep learning approaches against the traditional tf-idf using Support Vector
Machine and Multilayer Perceptron (our baselines). From the obtained results,
it seems that the traditional baselines outperform the Deep Learning approaches
regardless of the word embeddings used. Our preliminary results indicate that there are
specific features that make the dataset biased in favour of traditional machine
learning approaches.
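For concreteness, the tf-idf baseline can be reproduced in a few lines with scikit-learn; the notes and morbidity labels below are toy placeholders, not the clinical records used in the study.
```python
# tf-idf features with a linear SVM, the classical baseline discussed above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

notes = ["patient reports chest pain and shortness of breath",
         "long history of type 2 diabetes, on metformin",
         "obesity noted, BMI above 35, advised lifestyle changes",
         "hypertension controlled with lisinopril"]
labels = ["CAD", "Diabetes", "Obesity", "Hypertension"]   # toy morbidity labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1), LinearSVC())
clf.fit(notes, labels)
print(clf.predict(["elevated blood pressure despite medication"]))
```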
| 2,021 |
Computation and Language
|
Towards Detecting Need for Empathetic Response in Motivational
Interviewing
|
Empathetic response from the therapist is key to the success of clinical
psychotherapy, especially motivational interviewing. Previous work on
computational modelling of empathy in motivational interviewing has focused on
offline, session-level assessment of therapist empathy, where empathy captures
all efforts that the therapist makes to understand the client's perspective and
convey that understanding to the client. In this position paper, we propose a
novel task of turn-level detection of client need for empathy. Concretely, we
propose to leverage pre-trained language models and empathy-related general
conversation corpora in a unique labeller-detector framework, where the
labeller automatically annotates a motivational interviewing conversation
corpus with empathy labels to train the detector that determines the need for
therapist empathy. We also lay out our strategies of extending the detector
with additional-input and multi-task setups to improve its detection and
explainability.
| 2,021 |
Computation and Language
|
LAST at SemEval-2021 Task 1: Improving Multi-Word Complexity Prediction
Using Bigram Association Measures
|
This paper describes the system developed by the Laboratoire d'analyse
statistique des textes (LAST) for the Lexical Complexity Prediction shared task
at SemEval-2021. The proposed system is made up of a LightGBM model fed with
features obtained from many word frequency lists, published lexical norms and
psychometric data. For tackling the specificity of the multi-word task, it uses
bigram association measures. Although the only contextual feature used was
sentence length, the system achieved an honorable performance in the multi-word
task, but a poorer one in the single-word task. The bigram association measures were
found useful, but to a limited extent.
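As an illustration of bigram association measures, a short NLTK sketch computing PMI scores for the bigrams of a toy sentence; the actual feature set and corpora used by the system are richer than this example.
```python
# Pointwise mutual information for bigrams with NLTK's collocation tools.
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

tokens = ("the systemic inflammatory response syndrome is a systemic "
          "inflammatory response to infection").split()

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
for bigram, pmi in finder.score_ngrams(measures.pmi):   # sorted by association strength
    print(bigram, round(pmi, 2))
```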
| 2,021 |
Computation and Language
|
Towards Target-dependent Sentiment Classification in News Articles
|
Extensive research on target-dependent sentiment classification (TSC) has led
to strong classification performances in domains where authors tend to
explicitly express sentiment about specific entities or topics, such as in
reviews or on social media. We investigate TSC in news articles, a much less
researched domain, despite the importance of news as an essential information
source in individual and societal decision making. This article introduces
NewsTSC, a manually annotated dataset to explore TSC on news articles.
Investigating characteristics of sentiment in news and contrasting them to
popular TSC domains, we find that sentiment in the news is expressed less
explicitly, is more dependent on context and readership, and requires a greater
degree of interpretation. In an extensive evaluation, we find that the state of
the art in TSC performs worse on news articles than on other domains (average
recall AvgRec = 69.8 on NewsTSC compared to AvgRec = [75.6, 82.2] on
established TSC datasets). Reasons include incorrectly resolved relations
between targets and sentiment-bearing phrases, and dependence on off-context
information. As a major
improvement over previous news TSC, we find that BERT's natural language
understanding capabilities capture the less explicit sentiment used in news
articles.
| 2,021 |
Computation and Language
|
KLUE: Korean Language Understanding Evaluation
|
We introduce Korean Language Understanding Evaluation (KLUE) benchmark. KLUE
is a collection of 8 Korean natural language understanding (NLU) tasks,
including Topic Classification, SemanticTextual Similarity, Natural Language
Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing,
Machine Reading Comprehension, and Dialogue State Tracking. We build all of the
tasks from scratch from diverse source corpora while respecting copyrights, to
ensure accessibility for anyone without any restrictions. With ethical
considerations in mind, we carefully design annotation protocols. Along with
the benchmark tasks and data, we provide suitable evaluation metrics and
fine-tuning recipes for pretrained language models for each task. We
furthermore release the pretrained language models (PLM), KLUE-BERT and
KLUE-RoBERTa, to help reproduce baseline models on KLUE and thereby
facilitate future research. We make a few interesting observations from the
preliminary experiments using the proposed KLUE benchmark suite, already
demonstrating the usefulness of this new benchmark suite. First, we find
KLUE-RoBERTa-large outperforms other baselines, including multilingual PLMs and
existing open-source Korean PLMs. Second, we see minimal degradation in
performance even when we replace personally identifiable information in the
pretraining corpus, suggesting that privacy and NLU capability are not at odds
with each other. Lastly, we find that using BPE tokenization in combination
with morpheme-level pre-tokenization is effective in tasks involving
morpheme-level tagging, detection and generation. In addition to accelerating
Korean NLP research, our comprehensive documentation on creating KLUE will
facilitate creating similar resources for other languages in the future. KLUE
is available at https://klue-benchmark.com.
| 2,021 |
Computation and Language
|
A Case Study on Pros and Cons of Regular Expression Detection and
Dependency Parsing for Negation Extraction from German Medical Documents.
Technical Report
|
We describe our work on information extraction in medical documents written
in German, especially detecting negations using an architecture based on the
UIMA pipeline. Based on our previous work on software modules to cover medical
concepts like diagnoses, examinations, etc. we employ a version of the NegEx
regular expression algorithm with a large set of triggers as a baseline. We
show how a significantly smaller trigger set is sufficient to achieve similar
results, in order to reduce adaptation times to new text types. We elaborate on
the question whether dependency parsing (based on the Stanford CoreNLP model)
is a good alternative and describe the potentials and shortcomings of both
approaches.
| 2,021 |
Computation and Language
|
Robustness of end-to-end Automatic Speech Recognition Models -- A Case
Study using Mozilla DeepSpeech
|
When evaluating the performance of automatic speech recognition models,
usually word error rate within a certain dataset is used. Special care must be
taken in understanding the dataset in order to report realistic performance
numbers. We argue that many performance numbers reported probably underestimate
the expected error rate. We conduct experiments controlling for selection bias,
gender as well as overlap (between training and test data) in content, voices,
and recording conditions. We find that content overlap has the biggest impact,
but other factors like gender also play a role.
| 2,021 |
Computation and Language
|
A comparative evaluation and analysis of three generations of
Distributional Semantic Models
|
Distributional semantics has deeply changed in the last decades. First,
predict models stole the thunder from traditional count ones, and more recently
both of them were replaced in many NLP applications by contextualized vectors
produced by Transformer neural language models. Although an extensive body of
research has been devoted to Distributional Semantic Model (DSM) evaluation, we
still lack a thorough comparison with respect to tested models, semantic tasks,
and benchmark datasets. Moreover, previous work has mostly focused on
task-driven evaluation, instead of exploring the differences between the way
models represent the lexical semantic space. In this paper, we perform a
comprehensive evaluation of type distributional vectors, either produced by
static DSMs or obtained by averaging the contextualized vectors generated by
BERT. First of all, we investigate the performance of embeddings in several
semantic tasks, carrying out an in-depth statistical analysis to identify the
major factors influencing the behavior of DSMs. The results show that (i) the
alleged superiority of predict-based models is more apparent than real, and
surely not ubiquitous, and (ii) static DSMs surpass contextualized
representations in most out-of-context semantic tasks and datasets.
Furthermore, we borrow from cognitive neuroscience the methodology of
Representational Similarity Analysis (RSA) to inspect the semantic spaces
generated by distributional models. RSA reveals important differences related
to the frequency and part-of-speech of lexical items.
| 2,022 |
Computation and Language
|
Head-driven Phrase Structure Parsing in O($n^3$) Time Complexity
|
Constituent and dependency parsing, the two classic forms of syntactic
parsing, have been found to benefit from joint training and decoding under a
uniform formalism, Head-driven Phrase Structure Grammar (HPSG). However,
decoding this unified grammar has a higher time complexity ($O(n^5)$) than
decoding either form individually ($O(n^3)$) since more factors have to be
considered during decoding. We thus propose an improved head scorer that helps
achieve a novel performance-preserved parser in $O$($n^3$) time complexity.
Furthermore, on the basis of this proposed practical HPSG parser, we
investigated the strengths of HPSG-based parsing and explored the general
method of training an HPSG-based parser from only constituent or dependency
annotations in a multilingual scenario. We thus present a more effective, more
in-depth, and general work on HPSG parsing.
| 2,021 |
Computation and Language
|
A practical introduction to the Rational Speech Act modeling framework
|
Recent advances in computational cognitive science (i.e., simulation-based
probabilistic programs) have paved the way for significant progress in formal,
implementable models of pragmatics. Rather than describing a pragmatic
reasoning process in prose, these models formalize and implement one, deriving
both qualitative and quantitative predictions of human behavior -- predictions
that consistently prove correct, demonstrating the viability and value of the
framework. The current paper provides a practical introduction to and critical
assessment of the Bayesian Rational Speech Act modeling framework, unpacking
theoretical foundations, exploring technological innovations, and drawing
connections to issues beyond current applications.
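A minimal worked example of the framework, assuming the standard two-utterance, two-referent reference game; the literal listener, pragmatic speaker, and pragmatic listener are implemented directly from their usual defining equations.
```python
# Toy Rational Speech Act model: literal listener L0, pragmatic speaker S1,
# pragmatic listener L1.
import numpy as np

lexicon = np.array([[1., 1.],     # "glasses" is true of referents r1 and r2
                    [0., 1.]])    # "hat" is true only of r2
prior = np.array([0.5, 0.5])      # prior over referents
alpha = 1.0                       # speaker rationality

L0 = lexicon * prior                        # L0(r | u) proportional to [[u]](r) P(r)
L0 = L0 / L0.sum(axis=1, keepdims=True)

S1 = L0 ** alpha                            # S1(u | r) proportional to L0(r | u)^alpha
S1 = S1 / S1.sum(axis=0, keepdims=True)

L1 = S1 * prior                             # L1(r | u) proportional to S1(u | r) P(r)
L1 = L1 / L1.sum(axis=1, keepdims=True)

print(L1[0])   # hearing "glasses", L1 shifts belief toward r1 (the referent without a hat)
```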
| 2,021 |
Computation and Language
|
Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on
Twitter
|
Datasets with induced emotion labels are scarce but of utmost importance for
many NLP tasks. We present a new, automated method for collecting texts along
with their induced reaction labels. The method exploits the online use of
reaction GIFs, which capture complex affective states. We show how to augment
the data with induced emotion and induced sentiment labels. We use our method
to create and publish ReactionGIF, a first-of-its-kind affective dataset of 30K
tweets. We provide baselines for three new tasks, including induced sentiment
prediction and multilabel classification of induced emotions. Our method and
dataset open new research opportunities in emotion detection and affective
computing.
| 2,021 |
Computation and Language
|
Multi-modal Sarcasm Detection and Humor Classification in Code-mixed
Conversations
|
Sarcasm detection and humor classification are inherently subtle problems,
primarily due to their dependence on the contextual and non-verbal information.
Furthermore, existing studies on these two topics are scarce for non-English
languages such as Hindi, due to the unavailability of quality annotated
datasets. In this work, we make two major contributions considering
the above limitations: (1) we develop a Hindi-English code-mixed dataset,
MaSaC, for the multi-modal sarcasm detection and humor classification in
conversational dialog, which to our knowledge is the first dataset of its kind;
(2) we propose MSH-COMICS, a novel attention-rich neural architecture for the
utterance classification. We learn efficient utterance representation utilizing
a hierarchical attention mechanism that attends to a small portion of the input
sentence at a time. Further, we incorporate dialog-level contextual attention
mechanism to leverage the dialog history for the multi-modal classification. We
perform extensive experiments for both the tasks by varying multi-modal inputs
and various submodules of MSH-COMICS. We also conduct comparative analysis
against existing approaches. We observe that MSH-COMICS attains superior
performance over the existing models by > 1 F1-score point for the sarcasm
detection and 10 F1-score points in humor classification. We diagnose our model
and perform thorough analysis of the results to understand the superiority and
pitfalls.
| 2,021 |
Computation and Language
|
ASQ: Automatically Generating Question-Answer Pairs using AMRs
|
We introduce ASQ, a tool to automatically mine questions and answers from a
sentence using the Abstract Meaning Representation (AMR). Previous work has
used question-answer pairs to specify the predicate-argument structure of a
sentence using natural language, which does not require linguistic expertise or
training, and created datasets such as QA-SRL and QAMR, for which the
question-answer pair annotations were crowdsourced. Our goal is to build a tool
(ASQ) that maps from the traditional meaning representation AMR to a
question-answer meaning representation (QMR). This enables construction of QMR
datasets automatically in various domains using existing high-quality AMR
parsers, and provides an automatic mapping from AMR to QMR for ease of understanding
by non-experts. A qualitative evaluation of the output generated by ASQ from
the AMR 2.0 data shows that the question-answer pairs are natural and valid,
and demonstrate good coverage of the content. We run ASQ on the sentences from
the QAMR dataset, to observe that the semantic roles in QAMR are also captured
by ASQ. We intend to make this tool and the results publicly available for
others to use and build upon.
| 2,021 |
Computation and Language
|
Improving Generation and Evaluation of Visual Stories via Semantic
Consistency
|
Story visualization is an under-explored task that falls at the intersection
of many important research directions in both computer vision and natural
language processing. In this task, given a series of natural language captions
which compose a story, an agent must generate a sequence of images that
correspond to the captions. Prior work has introduced recurrent generative
models which outperform text-to-image synthesis models on this task. However,
there is room for improvement of generated images in terms of visual quality,
coherence and relevance. We present a number of improvements to prior modeling
approaches, including (1) the addition of a dual learning framework that
utilizes video captioning to reinforce the semantic alignment between the story
and generated images, (2) a copy-transform mechanism for
sequentially-consistent story visualization, and (3) MART-based transformers to
model complex interactions between frames. We present ablation studies to
demonstrate the effect of each of these techniques on the generative power of
the model for both individual images as well as the entire narrative.
Furthermore, due to the complexity and generative nature of the task, standard
evaluation metrics do not accurately reflect performance. Therefore, we also
provide an exploration of evaluation metrics for the model, focused on aspects
of the generated frames such as the presence/quality of generated characters,
the relevance to captions, and the diversity of the generated images. We also
present correlation experiments of our proposed automated metrics with human
evaluations. Code and data available at:
https://github.com/adymaharana/StoryViz
| 2,021 |
Computation and Language
|
A Streaming End-to-End Framework For Spoken Language Understanding
|
End-to-end spoken language understanding (SLU) has recently attracted
increasing interest. Compared to the conventional tandem-based approach that
combines speech recognition and language understanding as separate modules, the
new approach extracts users' intentions directly from the speech signals,
resulting in joint optimization and low latency. Such an approach, however, is
typically designed to process one intention at a time, which leads users to
take multiple rounds to fulfill their requirements while interacting with a
dialogue system. In this paper, we propose a streaming end-to-end framework
that can process multiple intentions in an online and incremental way. The
backbone of our framework is a unidirectional RNN trained with the
connectionist temporal classification (CTC) criterion. By this design, an
intention can be identified when sufficient evidence has been accumulated, and
multiple intentions can be identified sequentially. We evaluate our solution on
the Fluent Speech Commands (FSC) dataset and the intent detection accuracy is
about 97% in all multi-intent settings. This result is comparable to the
performance of the state-of-the-art non-streaming models, but is achieved in an
online and incremental way. We also apply our model to a keyword spotting task
using the Google Speech Commands dataset and the results are also highly
promising.
| 2,021 |
Computation and Language
|
Boosting Span-based Joint Entity and Relation Extraction via Sequence
Tagging Mechanism
|
Span-based joint extraction simultaneously conducts named entity recognition
(NER) and relation extraction (RE) in text span form. Recent studies have shown
that token labels can convey crucial task-specific information and enrich token
semantics. However, as far as we know, because they completely abstain from the
sequence tagging mechanism, all prior span-based works fail to use token label
information. To solve this problem, we propose the Sequence Tagging enhanced
Span-based Network (STSN), a span-based joint extraction network that is
enhanced by token BIO label information derived from sequence-tagging-based
NER. By stacking multiple attention layers in depth, we design a deep neural
architecture to build STSN, in which each attention layer consists of three
basic attention units. The deep neural architecture first learns semantic
representations for token labels and for span-based joint extraction, and then
constructs information interactions between them, which also realizes
bidirectional information interactions between span-based NER and RE.
Furthermore, we extend the BIO tagging scheme so that STSN can extract
overlapping entities. Experiments on three benchmark datasets show that our
model consistently outperforms previous optimal models by a large margin,
creating new state-of-the-art results.
| 2,022 |
Computation and Language
|
Towards Automatic Comparison of Data Privacy Documents: A Preliminary
Experiment on GDPR-like Laws
|
General Data Protection Regulation (GDPR) becomes a standard law for data
protection in many countries. Currently, twelve countries adopt the regulation
and establish their own GDPR-like regulations. However, evaluating the
differences and similarities of these GDPR-like regulations is time-consuming
and requires a lot of manual effort from legal experts. Moreover, GDPR-like
regulations from different countries are written in their own languages, making
the task even more difficult, since legal experts who know both languages are
essential. In this paper,
we investigate a simple natural language processing (NLP) approach to tackle
the problem. We first extract chunks of information from GDPR-like documents
and form structured data from natural language. Next, we use NLP methods to
compare documents to measure their similarity. Finally, we manually label a
small set of data to evaluate our approach. The empirical result shows that the
BERT model with cosine similarity outperforms other baselines. Our data and
code are publicly available.
| 2,021 |
Computation and Language
|
Training Bi-Encoders for Word Sense Disambiguation
|
Modern transformer-based neural architectures yield impressive results in
nearly every NLP task and Word Sense Disambiguation, the problem of discerning
the correct sense of a word in a given context, is no exception.
State-of-the-art approaches in WSD today leverage lexical information along
with pre-trained embeddings from these models to achieve results comparable to
human inter-annotator agreement on standard evaluation benchmarks. In the same
vein, we experiment with several strategies to optimize bi-encoders for this
specific task and propose alternative methods of presenting lexical information
to our model. Through our multi-stage pre-training and fine-tuning pipeline we
further the state of the art in Word Sense Disambiguation.
| 2,021 |
Computation and Language
|
Should We Trust This Summary? Bayesian Abstractive Summarization to The
Rescue
|
We explore the notion of uncertainty in the context of modern abstractive
summarization models, using the tools of Bayesian Deep Learning. Our approach
approximates Bayesian inference by first extending state-of-the-art
summarization models with Monte Carlo dropout and then using them to perform
multiple stochastic forward passes. Based on Bayesian inference we are able to
effectively quantify uncertainty at prediction time. Having a reliable
uncertainty measure, we can improve the experience of the end user by filtering
out generated summaries of high uncertainty. Furthermore, uncertainty
estimation could be used as a criterion for selecting samples for annotation,
and can be paired nicely with active learning and human-in-the-loop approaches.
Finally, Bayesian inference enables us to find a Bayesian summary which
performs better than a deterministic one and is more robust to uncertainty. In
practice, we show that our Variational Bayesian equivalents of BART and PEGASUS
can outperform their deterministic counterparts on multiple benchmark datasets.
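A hedged sketch of the Monte Carlo dropout recipe on top of an off-the-shelf summarizer from the transformers library; the model name, the number of passes, and the crude uncertainty proxy are illustrative choices, not the paper's exact setup.
```python
# Keep dropout active at inference and sample several stochastic summaries; their
# disagreement serves as a simple uncertainty signal.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

name = "facebook/bart-large-cnn"
tok = BartTokenizer.from_pretrained(name)
model = BartForConditionalGeneration.from_pretrained(name)
model.train()                              # leaves dropout layers active (MC dropout)

document = "Your long input document goes here."
inputs = tok(document, return_tensors="pt", truncation=True)

summaries = []
with torch.no_grad():
    for _ in range(10):                    # multiple stochastic forward passes
        ids = model.generate(**inputs, num_beams=4, max_length=60)
        summaries.append(tok.decode(ids[0], skip_special_tokens=True))

# A crude uncertainty proxy: how many distinct summaries the stochastic passes produce.
print(f"{len(set(summaries))} distinct summaries out of {len(summaries)} samples")
```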
| 2,022 |
Computation and Language
|
Revisiting the Negative Data of Distantly Supervised Relation Extraction
|
Distant supervision automatically generates plenty of training samples for
relation extraction. However, it also incurs two major problems: noisy labels
and imbalanced training data. Previous works focus more on reducing wrongly
labeled relations (false positives) while few explore the missing relations
that are caused by incompleteness of knowledge base (false negatives).
Furthermore, the quantity of negative labels overwhelmingly surpasses the
positive ones in previous problem formulations. In this paper, we first provide
a thorough analysis of the above challenges caused by negative data. Next, we
formulate the problem of relation extraction as a positive-unlabeled learning
task to alleviate the false negative problem. Thirdly, we propose a
pipeline approach, dubbed \textsc{ReRe}, that performs sentence-level relation
detection then subject/object extraction to achieve sample-efficient training.
Experimental results show that the proposed method consistently outperforms
existing approaches and maintains excellent performance even when learned with a large
quantity of false positive samples.
| 2,021 |
Computation and Language
|
Have you tried Neural Topic Models? Comparative Analysis of Neural and
Non-Neural Topic Models with Application to COVID-19 Twitter Data
|
Topic models are widely used in studying social phenomena. We conduct a
comparative study examining state-of-the-art neural versus non-neural topic
models, performing a rigorous quantitative and qualitative assessment on a
dataset of tweets about the COVID-19 pandemic. Our results show that not only
do neural topic models outperform their classical counterparts on standard
evaluation metrics, but they also produce more coherent topics, which are of
great benefit when studying complex social problems. We also propose a novel
regularization term for neural topic models, which is designed to address the
well-documented problem of mode collapse, and demonstrate its effectiveness.
| 2,021 |
Computation and Language
|
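The topic-model entry above mentions a regularization term against mode collapse but does not spell it out. The following is a common style of topic-diversity regularizer, shown purely as an illustration and not as the paper's proposed term:

```python
import torch

def topic_diversity_penalty(beta: torch.Tensor) -> torch.Tensor:
    """Generic diversity regularizer for a topic-word matrix beta of shape (K, V).

    Penalizes pairwise cosine similarity between topic vectors so that topics do
    not collapse onto each other. Offered only as an illustration.
    """
    normed = torch.nn.functional.normalize(beta, dim=1)   # unit-norm topic rows
    sims = normed @ normed.T                               # (K, K) cosine matrix
    off_diag = sims - torch.eye(beta.size(0))              # zero out self-similarity
    return off_diag.clamp(min=0).sum() / (beta.size(0) * (beta.size(0) - 1))
```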
A Non-Linear Structural Probe
|
Probes are models devised to investigate the encoding of knowledge -- e.g.
syntactic structure -- in contextual representations. Probes are often designed
for simplicity, which has led to restrictions on probe design that may not
allow for the full exploitation of the structure of encoded information; one
such restriction is linearity. We examine the case of a structural probe
(Hewitt and Manning, 2019), which aims to investigate the encoding of syntactic
structure in contextual representations through learning only linear
transformations. By observing that the structural probe learns a metric, we are
able to kernelize it and develop a novel non-linear variant with an identical
number of parameters. We test on 6 languages and find that the radial-basis
function (RBF) kernel, in conjunction with regularization, achieves a
statistically significant improvement over the baseline in all languages --
implying that at least part of the syntactic knowledge is encoded non-linearly.
We conclude by discussing how the RBF kernel resembles BERT's self-attention
layers and speculate that this resemblance leads to the RBF-based probe's
stronger performance.
| 2,021 |
Computation and Language
|
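The structural-probe entry above replaces the linear probe's learned metric with an RBF kernel. A small sketch of both distances, with the kernelized variant written via the usual kernel-induced distance (the paper's exact parameterization may differ):

```python
import torch

def linear_probe_sq_distance(B: torch.Tensor, h_i: torch.Tensor, h_j: torch.Tensor):
    """Squared distance of the linear structural probe (Hewitt and Manning, 2019):
    d^2 = (B(h_i - h_j))^T (B(h_i - h_j)), with B a learned (rank, dim) matrix."""
    diff = B @ (h_i - h_j)
    return diff @ diff

def rbf_probe_sq_distance(B, h_i, h_j, gamma: float = 1.0):
    """Kernelized variant: the distance induced by an RBF kernel on the
    transformed vectors, d^2 = k(x,x) - 2k(x,y) + k(y,y) with x = Bh_i, y = Bh_j.
    This only illustrates the kernel trick; the paper's parameterization may differ.
    """
    x, y = B @ h_i, B @ h_j
    k_xy = torch.exp(-gamma * torch.sum((x - y) ** 2))
    return 2.0 - 2.0 * k_xy   # k(x,x) = k(y,y) = 1 for the RBF kernel
```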
Semantic Representation for Dialogue Modeling
|
Although neural models have achieved competitive results in dialogue systems,
they have shown a limited ability to represent core semantics, for example by
ignoring important entities. To address this, we exploit Abstract Meaning
Representation (AMR) to help dialogue modeling. Compared with the textual
input, AMR explicitly provides core semantic knowledge and reduces data
sparsity. We develop an algorithm to construct dialogue-level AMR graphs from
sentence-level AMRs and explore two ways to incorporate AMRs into dialogue
systems. Experimental results on both dialogue understanding and response
generation tasks show the superiority of our model. To our knowledge, we are
the first to incorporate a formal semantic representation into neural dialogue
modeling.
| 2,021 |
Computation and Language
|
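The dialogue-AMR entry above builds dialogue-level AMR graphs from sentence-level ones. A hypothetical sketch with networkx is given below; the speaker and root edges here are illustrative assumptions, since the abstract does not specify the construction rules:

```python
import networkx as nx

def build_dialogue_amr(utterance_amrs, speakers):
    """Hypothetical sketch: connect sentence-level AMR graphs into one
    dialogue-level graph via speaker nodes and a dialogue root. The paper's
    actual construction may use different edges (e.g., coreference links)."""
    dialogue = nx.DiGraph()
    dialogue.add_node("dialogue-root")
    for idx, (amr, speaker) in enumerate(zip(utterance_amrs, speakers)):
        spk = f"speaker::{speaker}"
        dialogue.add_edge("dialogue-root", spk, label=":speaker")
        # prefix node names so nodes from different utterances do not collide
        mapping = {n: f"u{idx}::{n}" for n in amr.nodes}
        dialogue.update(nx.relabel_nodes(amr, mapping))
        root = mapping[next(iter(amr.nodes))]  # assume the first node is the AMR root
        dialogue.add_edge(spk, root, label=":utterance")
    return dialogue
```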
Rule Augmented Unsupervised Constituency Parsing
|
Recently, unsupervised parsing of syntactic trees has gained considerable
attention. A prototypical approach to such unsupervised parsing employs
reinforcement learning and auto-encoders. However, no mechanism ensures that
the learnt model leverages the well-understood grammar of the language. We propose an
approach that utilizes very generic linguistic knowledge of the language
present in the form of syntactic rules, thus inducing better syntactic
structures. We introduce a novel formulation that takes advantage of the
syntactic grammar rules and is independent of the base system. We achieve new
state-of-the-art results on two benchmark datasets, MNLI and WSJ. The source
code of the paper is available at https://github.com/anshuln/Diora_with_rules.
| 2,021 |
Computation and Language
|
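The rule-augmented parsing entry above injects generic syntactic rules into an unsupervised parser but leaves the formulation to the paper. One simple, purely illustrative way to score an induced tree against hand-written rules:

```python
def rule_agreement(spans, pos_tags, rules):
    """Hypothetical auxiliary score: fraction of induced spans whose POS
    sequence matches a hand-written rule. This only illustrates how generic
    grammar rules could guide an unsupervised parser; the paper's actual
    formulation is not specified in the abstract."""
    def matches(span):
        seq = tuple(pos_tags[span[0]:span[1] + 1])
        return any(rule(seq) for rule in rules)
    hits = sum(matches(s) for s in spans)
    return hits / max(len(spans), 1)

# Example rules: a determiner followed by a noun is a constituent, etc.
rules = [
    lambda seq: seq == ("DT", "NN"),
    lambda seq: len(seq) >= 2 and seq[0] == "IN",  # prepositional-phrase heuristic
]
spans = [(0, 1), (2, 4)]
pos_tags = ["DT", "NN", "VBZ", "IN", "NN"]
print(rule_agreement(spans, pos_tags, rules))  # -> 0.5
```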