Titles | Abstracts | Years | Categories
---|---|---|---|
Towards a Universal NLG for Dialogue Systems and Simulators with Future
Bridging
|
In a dialogue system pipeline, a natural language generation (NLG) unit
converts the dialogue direction and content to a corresponding natural language
realization. A recent trend for dialogue systems is to first pre-train on large
datasets and then fine-tune in a supervised manner using datasets annotated
with application-specific features. Though novel behaviours can be learned from
custom annotation, the required effort severely bounds the quantity of the
training set, and the application-specific nature limits the reuse. In light of
the recent success of data-driven approaches, we propose the novel future
bridging NLG (FBNLG) concept for dialogue systems and simulators. The critical
step is for an FBNLG to accept a future user or system utterance to bridge the
present context towards. Future bridging enables self-supervised training over
annotation-free datasets, decoupling the training of the NLG from the rest of the
system. An FBNLG, pre-trained with massive datasets, is expected to apply in
classical or new dialogue scenarios with minimal adaptation effort. We evaluate
a prototype FBNLG to show that future bridging can be a viable approach to a
universal few-shot NLG for task-oriented and chit-chat dialogues.
| 2,021 |
Computation and Language
|
Pretrained Language Models for Text Generation: A Survey
|
Text generation has become one of the most important yet challenging tasks in
natural language processing (NLP). The resurgence of deep learning has greatly
advanced this field by neural generation models, especially the paradigm of
pretrained language models (PLMs). In this paper, we present an overview of the
major advances achieved in the topic of PLMs for text generation. As the
preliminaries, we present the general task definition and briefly describe the
mainstream architectures of PLMs for text generation. As the core content, we
discuss how to adapt existing PLMs to model different input data and satisfy
special properties in the generated text. We further summarize several
important fine-tuning strategies for text generation. Finally, we present
several future directions and conclude this paper. Our survey aims to provide
text generation researchers with a synthesis of, and pointers to, related research.
| 2,021 |
Computation and Language
|
Learning from My Friends: Few-Shot Personalized Conversation Systems via
Social Networks
|
Personalized conversation models (PCMs) generate responses according to
speaker preferences. Existing personalized conversation tasks typically require
models to extract speaker preferences from user descriptions or their
conversation histories, which are scarce for newcomers and inactive users. In
this paper, we propose a few-shot personalized conversation task with an
auxiliary social network. The task requires models to generate personalized
responses for a speaker given a few conversations from the speaker and a social
network. Existing methods are mainly designed to incorporate descriptions or
conversation histories. Those methods can hardly model speakers with so few
conversations or connections between speakers. To better cater for newcomers
with few resources, we propose a personalized conversation model (PCM) that
learns to adapt to new speakers and enables new speakers to learn from
resource-rich speakers. Particularly, based on a meta-learning based PCM, we
propose a task aggregator (TA) to collect other speakers' information from the
social network. The TA provides prior knowledge of the new speaker in its
meta-learning. Experimental results show our methods outperform all baselines
in appropriateness, diversity, and consistency with speakers.
| 2,021 |
Computation and Language
|
Fact-driven Logical Reasoning for Machine Reading Comprehension
|
Recent years have witnessed an increasing interest in training machines with
reasoning ability, which deeply relies on accurately and clearly presented clue
forms. The clues are usually modeled as entity-aware knowledge in existing
studies. However, those entity-aware clues are primarily focused on
commonsense, making them insufficient for tasks that require knowledge of
temporary facts or events, particularly in logical reasoning for reading
comprehension. To address this challenge, we are motivated to cover both
commonsense and temporary knowledge clues hierarchically. Specifically, we
propose a general formalism of knowledge units by extracting backbone
constituents of the sentence, such as the subject-verb-object formed ``facts''.
We then construct a supergraph on top of the fact units, allowing for the
benefit of sentence-level (relations among fact groups) and entity-level
interactions (concepts or actions inside a fact). Experimental results on
logical reasoning benchmarks and dialogue modeling datasets show that our
approach improves the baselines substantially, and it is general across
backbone models. Code is available at
\url{https://github.com/ozyyshr/FocalReasoner}.
| 2,023 |
Computation and Language
|
Functionals in the Clouds: An abstract architecture of serverless
Cloud-Native Apps
|
A Cloud-Native Application (CNApp), viewed as a distributed system, is a collection of
independent components (microservices) interacting via communication protocols. This
gives rise to an abstract architecture of a CNApp as a dynamically re-configurable
acyclic directed multigraph whose vertices are microservices and whose edges are the
protocols. Generic mechanisms for such reconfigurations evidently correspond to
higher-level functions (functionals). This also implies an internal abstract
architecture of a microservice as a collection of event-triggered serverless functions
(including functions implementing the protocols) that are dynamically composed into
event-dependent data-flow graphs. Again, generic mechanisms for such compositions
correspond to a calculus of functionals and relations.
| 2,022 |
Computation and Language
|
Unsupervised Multilingual Sentence Embeddings for Parallel Corpus Mining
|
Existing models of multilingual sentence embeddings require large parallel
data resources which are not available for low-resource languages. We propose a
novel unsupervised method to derive multilingual sentence embeddings relying
only on monolingual data. We first produce a synthetic parallel corpus using
unsupervised machine translation, and use it to fine-tune a pretrained
cross-lingual masked language model (XLM) to derive the multilingual sentence
representations. The quality of the representations is evaluated on two
parallel corpus mining tasks with improvements of up to 22 F1 points over
vanilla XLM. In addition, we observe that a single synthetic bilingual corpus
is able to improve results for other language pairs.
| 2,020 |
Computation and Language
|
CEREC: A Corpus for Entity Resolution in Email Conversations
|
We present the first large scale corpus for entity resolution in email
conversations (CEREC). The corpus consists of 6001 email threads from the Enron
Email Corpus containing 36,448 email messages and 60,383 entity coreference
chains. The annotation is carried out as a two-step process with minimal manual
effort. Experiments are carried out for evaluating different features and
performance of four baselines on the created corpus. For the task of mention
identification and coreference resolution, a best performance of 59.2 F1 is
reported, highlighting the room for improvement. An in-depth qualitative and
quantitative error analysis is presented to understand the limitations of the
baselines considered.
| 2,020 |
Computation and Language
|
RST Parsing from Scratch
|
We introduce a novel top-down end-to-end formulation of document-level
discourse parsing in the Rhetorical Structure Theory (RST) framework. In this
formulation, we consider discourse parsing as a sequence of splitting decisions
at token boundaries and use a seq2seq network to model the splitting decisions.
Our framework facilitates discourse parsing from scratch without requiring
discourse segmentation as a prerequisite; rather, it yields segmentation as
part of the parsing process. Our unified parsing model adopts a beam search to
decode the best tree structure by searching through a space of high-scoring
trees. With extensive experiments on the standard English RST discourse
treebank, we demonstrate that our parser outperforms existing methods by a good
margin in both end-to-end parsing and parsing with gold segmentation. More
importantly, it does so without using any handcrafted features, making it
faster and easily adaptable to new languages and domains.
| 2,021 |
Computation and Language
|
CiteWorth: Cite-Worthiness Detection for Improved Scientific Document
Understanding
|
Scientific document understanding is challenging as the data is highly domain
specific and diverse. However, datasets for tasks with scientific text require
expensive manual annotation and tend to be small and limited to only one or a
few fields. At the same time, scientific documents contain many potential
training signals, such as citations, which can be used to build large labelled
datasets. Given this, we present an in-depth study of cite-worthiness detection
in English, where a sentence is labelled for whether or not it cites an
external source. To accomplish this, we introduce CiteWorth, a large,
contextualized, rigorously cleaned labelled dataset for cite-worthiness
detection built from a massive corpus of extracted plain-text scientific
documents. We show that CiteWorth is high-quality, challenging, and suitable
for studying problems such as domain adaptation. Our best performing
cite-worthiness detection model is a paragraph-level contextualized sentence
labelling model based on Longformer, exhibiting a 5 F1 point improvement over
SciBERT which considers only individual sentences. Finally, we demonstrate that
language model fine-tuning with cite-worthiness as a secondary task leads to
improved performance on downstream scientific document understanding tasks.
| 2,021 |
Computation and Language
|
Structural Pre-training for Dialogue Comprehension
|
Pre-trained language models (PrLMs) have demonstrated superior performance
due to their strong ability to learn universal language representations from
self-supervised pre-training. However, even with the help of the powerful
PrLMs, it is still challenging to effectively capture task-related knowledge
from dialogue texts which are enriched by correlations among speaker-aware
utterances. In this work, we present SPIDER, Structural Pre-traIned DialoguE
Reader, to capture dialogue exclusive features. To simulate the dialogue-like
features, we propose two training objectives in addition to the original LM
objectives: 1) utterance order restoration, which predicts the order of the
permuted utterances in dialogue context; 2) sentence backbone regularization,
which regularizes the model to improve the factual correctness of summarized
subject-verb-object triplets. Experimental results on widely used dialogue
benchmarks verify the effectiveness of the newly introduced self-supervised
tasks.
| 2,021 |
Computation and Language
|
Automatic Product Ontology Extraction from Textual Reviews
|
Ontologies have proven beneficial in different settings that make use of
textual reviews. However, manually constructing ontologies is a laborious and
time-consuming process in need of automation. We propose a novel methodology
for automatically extracting ontologies, in the form of meronomies, from
product reviews, using a very limited amount of hand-annotated training data.
We show that the ontologies generated by our method outperform hand-crafted
ontologies (WordNet) and ontologies extracted by existing methods (Text2Onto
and COMET) in several, diverse settings. Specifically, our generated ontologies
outperform the others when evaluated by human annotators as well as on an
existing Q&A dataset from Amazon. Moreover, our method is better able to
generalise, in capturing knowledge about unseen products. Finally, we consider
a real-world setting, showing that our method is better able to determine
recommended products based on their reviews, as an alternative to using Amazon's
standard score aggregations.
| 2,021 |
Computation and Language
|
Controlling Text Edition by Changing Answers of Specific Questions
|
In this paper, we introduce the new task of controllable text edition, in
which we take as input a long text, a question, and a target answer, and the
output is a minimally modified text, so that it fits the target answer. This
task is very important in many situations, such as changing some conditions,
consequences, or properties in a legal document, or changing some key
information of an event in a news text. This is very challenging, as it is hard
to obtain a parallel corpus for training, and we need to first find all text
positions that should be changed and then decide how to change them. We
constructed the new dataset WikiBioCTE for this task based on the existing
dataset WikiBio (originally created for table-to-text generation). We use
WikiBioCTE for training, and manually labeled a test set for testing. We also
propose novel evaluation metrics and a novel method for solving the new task.
Experimental results on the test set show that our proposed method is a good
fit for this novel NLP task.
| 2,021 |
Computation and Language
|
Unsupervised Speech Recognition
|
Despite rapid progress in the recent past, current speech recognition systems
still require labeled training data which limits this technology to a small
fraction of the languages spoken around the globe. This paper describes
wav2vec-U, short for wav2vec Unsupervised, a method to train speech recognition
models without any labeled data. We leverage self-supervised speech
representations to segment unlabeled audio and learn a mapping from these
representations to phonemes via adversarial training. The right representations
are key to the success of our method. Compared to the best previous
unsupervised work, wav2vec-U reduces the phoneme error rate on the TIMIT
benchmark from 26.1 to 11.3. On the larger English Librispeech benchmark,
wav2vec-U achieves a word error rate of 5.9 on test-other, rivaling some of the
best published systems trained on 960 hours of labeled data from only two years
ago. We also experiment on nine other languages, including low-resource
languages such as Kyrgyz, Swahili and Tatar.
| 2,022 |
Computation and Language
|
Prevent the Language Model from being Overconfident in Neural Machine
Translation
|
The Neural Machine Translation (NMT) model is essentially a joint language
model conditioned on both the source sentence and partial translation.
Therefore, the NMT model naturally involves the mechanism of the Language Model
(LM) that predicts the next token only based on partial translation. Despite
its success, NMT still suffers from the hallucination problem, generating
fluent but inadequate translations. The main reason is that NMT pays excessive
attention to the partial translation while neglecting the source sentence to
some extent, namely overconfidence of the LM. Accordingly, we define the Margin
between the NMT and the LM, calculated by subtracting the predicted probability
of the LM from that of the NMT model for each token. The Margin is negatively
correlated to the overconfidence degree of the LM. Based on the property, we
propose a Margin-based Token-level Objective (MTO) and a Margin-based
Sentence-level Objective (MSO) to maximize the Margin for preventing the LM from
being overconfident. Experiments on WMT14 English-to-German, WMT19
Chinese-to-English, and WMT14 English-to-French translation tasks demonstrate
the effectiveness of our approach, with 1.36, 1.50, and 0.63 BLEU improvements,
respectively, compared to the Transformer baseline. The human evaluation
further verifies that our approaches improve translation adequacy as well as
fluency.
| 2,021 |
Computation and Language
|
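A minimal sketch of the token-level Margin defined in the entry above (the NMT model's predicted probability of the gold token minus the LM's), assuming toy probability arrays rather than real model outputs; the MTO/MSO objectives then push this quantity up during training.

```python
import numpy as np

def token_margins(p_nmt: np.ndarray, p_lm: np.ndarray) -> np.ndarray:
    """Per-token Margin: P_NMT(y_t | x, y_<t) - P_LM(y_t | y_<t).

    A small or negative margin indicates that the LM component is
    overconfident relative to the source-conditioned NMT prediction.
    """
    return p_nmt - p_lm

# Toy numbers, invented for illustration (not from the paper).
p_nmt = np.array([0.62, 0.40, 0.55])
p_lm = np.array([0.30, 0.48, 0.20])
print(token_margins(p_nmt, p_lm))  # [ 0.32 -0.08  0.35]
```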
Self-Attention Networks Can Process Bounded Hierarchical Languages
|
Despite their impressive performance in NLP, self-attention networks were
recently proved to be limited for processing formal languages with hierarchical
structure, such as $\mathsf{Dyck}_k$, the language consisting of well-nested
parentheses of $k$ types. This suggested that natural language can be
approximated well with models that are too weak for formal languages, or that
the role of hierarchy and recursion in natural language might be limited. We
qualify this implication by proving that self-attention networks can process
$\mathsf{Dyck}_{k, D}$, the subset of $\mathsf{Dyck}_{k}$ with depth bounded by
$D$, which arguably better captures the bounded hierarchical structure of
natural language. Specifically, we construct a hard-attention network with
$D+1$ layers and $O(\log k)$ memory size (per token per layer) that recognizes
$\mathsf{Dyck}_{k, D}$, and a soft-attention network with two layers and
$O(\log k)$ memory size that generates $\mathsf{Dyck}_{k, D}$. Experiments show
that self-attention networks trained on $\mathsf{Dyck}_{k, D}$ generalize to
longer inputs with near-perfect accuracy, and also verify the theoretical
memory advantage of self-attention networks over recurrent networks.
| 2,023 |
Computation and Language
|
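As a concrete reference for the entry above, here is a minimal stack-based recognizer for $\mathsf{Dyck}_{k, D}$ (well-nested brackets of $k$ types with nesting depth at most $D$); the bracket alphabet is an illustrative assumption.

```python
def is_dyck_k_d(tokens, pairs, max_depth):
    """Return True iff `tokens` belongs to Dyck_{k,D}.

    `pairs` maps each opening bracket to its closing bracket
    (k = len(pairs)); `max_depth` is the depth bound D.
    """
    closers = set(pairs.values())
    stack = []
    for t in tokens:
        if t in pairs:                       # opening bracket
            stack.append(pairs[t])
            if len(stack) > max_depth:       # nesting exceeds D
                return False
        elif t in closers:                   # closing bracket
            if not stack or stack.pop() != t:
                return False
        else:
            return False                     # symbol outside the alphabet
    return not stack                         # every bracket must be closed

print(is_dyck_k_d(list("([])[]"), {"(": ")", "[": "]"}, max_depth=2))  # True
print(is_dyck_k_d(list("((()))"), {"(": ")"}, max_depth=2))            # False: depth 3 > 2
```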
Abusive Language Detection in Heterogeneous Contexts: Dataset Collection
and the Role of Supervised Attention
|
Abusive language is a massive problem in online social platforms. Existing
abusive language detection techniques are particularly ill-suited to comments
containing heterogeneous abusive language patterns, i.e., both abusive and
non-abusive parts. This is due in part to the lack of datasets that explicitly
annotate heterogeneity in abusive language. We tackle this challenge by
providing an annotated dataset of abusive language in over 11,000 comments from
YouTube. We account for heterogeneity in this dataset by separately annotating
both the comment as a whole and the individual sentences that comprise each
comment. We then propose an algorithm that uses a supervised attention
mechanism to detect and categorize abusive content using multi-task learning.
We empirically demonstrate the challenges of using traditional techniques on
heterogeneous content and the comparative gains in performance of the proposed
approach over state-of-the-art methods.
| 2,021 |
Computation and Language
|
One2Set: Generating Diverse Keyphrases as a Set
|
Recently, the sequence-to-sequence models have made remarkable progress on
the task of keyphrase generation (KG) by concatenating multiple keyphrases in a
predefined order as a target sequence during training. However, the keyphrases
are inherently an unordered set rather than an ordered sequence. Imposing a
predefined order will introduce a wrong bias during training, which can heavily
penalize shifts in the order between keyphrases. In this work, we propose a new
training paradigm One2Set without predefining an order to concatenate the
keyphrases. To fit this paradigm, we propose a novel model that utilizes a
fixed set of learned control codes as conditions to generate a set of
keyphrases in parallel. To solve the problem that there is no correspondence
between each prediction and target during training, we propose a $K$-step
target assignment mechanism via bipartite matching, which greatly increases the
diversity and reduces the duplication ratio of generated keyphrases. The
experimental results on multiple benchmarks demonstrate that our approach
significantly outperforms the state-of-the-art methods.
| 2,021 |
Computation and Language
|
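A rough sketch of the bipartite target assignment the entry above relies on: each prediction slot (one per control code) is matched to at most one gold keyphrase via minimum-cost matching. The Jaccard-overlap cost below is a placeholder, not the paper's actual matching objective.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_targets(pred_phrases, gold_phrases):
    """Match prediction slots to gold keyphrases (unmatched slots get no target)."""
    cost = np.ones((len(pred_phrases), len(gold_phrases)))
    for i, p in enumerate(pred_phrases):
        for j, g in enumerate(gold_phrases):
            ps, gs = set(p.split()), set(g.split())
            cost[i, j] = 1.0 - len(ps & gs) / len(ps | gs)  # 1 - Jaccard overlap
    rows, cols = linear_sum_assignment(cost)  # minimum-cost bipartite matching
    return {int(r): gold_phrases[c] for r, c in zip(rows, cols)}

preds = ["keyphrase generation", "set prediction", "neural network", "graph model"]
golds = ["keyphrase generation", "set prediction"]
print(assign_targets(preds, golds))  # e.g. {0: 'keyphrase generation', 1: 'set prediction'}
```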
Using Adversarial Attacks to Reveal the Statistical Bias in Machine
Reading Comprehension Models
|
Pre-trained language models have achieved human-level performance on many
Machine Reading Comprehension (MRC) tasks, but it remains unclear whether these
models truly understand language or answer questions by exploiting statistical
biases in datasets. Here, we demonstrate a simple yet effective method to
attack MRC models and reveal the statistical biases in these models. We apply
the method to the RACE dataset, for which the answer to each MRC question is
selected from 4 options. It is found that several pre-trained language models,
including BERT, ALBERT, and RoBERTa, show a consistent preference for some
options, even when these options are irrelevant to the question. When
interfered with by these irrelevant options, the performance of MRC models can
drop from human-level to chance-level performance. Human
readers, however, are not clearly affected by these irrelevant options.
Finally, we propose an augmented training method that can greatly reduce
models' statistical biases.
| 2,021 |
Computation and Language
|
Retrieval Enhanced Model for Commonsense Generation
|
Commonsense generation is a challenging task of generating a plausible
sentence describing an everyday scenario using provided concepts. Its
requirement of reasoning over commonsense knowledge and compositional
generalization ability even puzzles strong pre-trained language generation
models. We propose a novel framework using retrieval methods to enhance both
the pre-training and fine-tuning for commonsense generation. We retrieve
prototype sentence candidates by concept matching and use them as auxiliary
input. For fine-tuning, we further boost its performance with a trainable
sentence retriever. We demonstrate experimentally on the large-scale CommonGen
benchmark that our approach achieves new state-of-the-art results.
| 2,021 |
Computation and Language
|
Context-Preserving Text Simplification
|
We present a context-preserving text simplification (TS) approach that
recursively splits and rephrases complex English sentences into a semantic
hierarchy of simplified sentences. Using a set of linguistically principled
transformation patterns, input sentences are converted into a hierarchical
representation in the form of core sentences and accompanying contexts that are
linked via rhetorical relations. Hence, as opposed to previously proposed
sentence splitting approaches, which commonly do not take into account
discourse-level aspects, our TS approach preserves the semantic relationship of
the decomposed constituents in the output. A comparative analysis with the
annotations contained in the RST-DT shows that we are able to capture the
contextual hierarchy between the split sentences with a precision of 89% and
reach an average precision of 69% for the classification of the rhetorical
relations that hold between them.
| 2,021 |
Computation and Language
|
Towards Standard Criteria for human evaluation of Chatbots: A Survey
|
Human evaluation is becoming a necessity to test the performance of Chatbots.
However, off-the-shelf settings suffer from severe reliability and replication
issues, partly because of the extremely high diversity of criteria. It is high
time to come up with standard criteria and exact definitions. To this end, we
conduct a thorough investigation of 105 papers involving human evaluation for
Chatbots. Deriving from this, we propose five standard criteria along with
precise definitions.
| 2,021 |
Computation and Language
|
StructuralLM: Structural Pre-training for Form Understanding
|
Large pre-trained language models achieve state-of-the-art results when
fine-tuned on downstream NLP tasks. However, they almost exclusively focus on
text-only representation, while neglecting cell-level layout information that
is important for form image understanding. In this paper, we propose a new
pre-training approach, StructuralLM, to jointly leverage cell and layout
information from scanned documents. Specifically, we pre-train StructuralLM
with two new designs to make the most of the interactions of cell and layout
information: 1) each cell as a semantic unit; 2) classification of cell
positions. The pre-trained StructuralLM achieves new state-of-the-art results
in different types of downstream tasks, including form understanding (from
78.95 to 85.14), document visual question answering (from 72.59 to 83.94) and
document image classification (from 94.43 to 96.08).
| 2,021 |
Computation and Language
|
Hater-O-Genius Aggression Classification using Capsule Networks
|
Contending hate speech in social media is one of the most challenging social
problems of our time. There are various types of anti-social behavior in social
media. Foremost among them is aggressive behavior, which causes many social
issues and affects the social lives and mental health of social media
users. In this paper, we propose an end-to-end ensemble-based architecture to
automatically identify and classify aggressive tweets. Tweets are classified
into three categories - Covertly Aggressive, Overtly Aggressive, and
Non-Aggressive. The proposed architecture is an ensemble of smaller subnetworks
that are able to characterize the feature embeddings effectively. We
demonstrate qualitatively that each of the smaller subnetworks is able to learn
unique features. Our best model is an ensemble of Capsule Networks that achieves
a 65.2% F1 score on the Facebook test set, a performance gain of 0.95% over the
TRAC-2018 winners. The code and the model weights are
publicly available at
https://github.com/parthpatwa/Hater-O-Genius-Aggression-Classification-using-Capsule-Networks.
| 2,021 |
Computation and Language
|
De-identification of Privacy-related Entities in Job Postings
|
De-identification is the task of detecting privacy-related entities in text,
such as person names, emails and contact data. It has been well-studied within
the medical domain. The need for de-identification technology is increasing, as
privacy-preserving data handling is in high demand in many domains. In this
paper, we focus on job postings. We present JobStack, a new corpus for
de-identification of personal data in job vacancies on Stackoverflow. We
introduce baselines, comparing Long-Short Term Memory (LSTM) and Transformer
models. To improve upon these baselines, we experiment with contextualized
embeddings and distantly related auxiliary data via multi-task learning. Our
results show that auxiliary data improves de-identification performance.
Surprisingly, vanilla BERT turned out to be more effective than a BERT model
trained on other portions of Stackoverflow.
| 2,021 |
Computation and Language
|
Distantly-Supervised Long-Tailed Relation Extraction Using Constraint
Graphs
|
Label noise and long-tailed distributions are two major challenges in
distantly supervised relation extraction. Recent studies have shown great
progress on denoising, but paid little attention to the problem of long-tailed
relations. In this paper, we introduce a constraint graph to model the
dependencies between relation labels. On top of that, we further propose a
novel constraint graph-based relation extraction framework (CGRE) to handle the
two challenges simultaneously. CGRE employs graph convolution networks to
propagate information from data-rich relation nodes to data-poor relation
nodes, and thus boosts the representation learning of long-tailed relations. To
further improve the noise immunity, a constraint-aware attention module is
designed in CGRE to integrate the constraint information. Extensive
experimental results indicate that CGRE achieves significant improvements over
the previous methods for both denoising and long-tailed relation extraction.
The pre-processed datasets and source code are publicly available at
https://github.com/tmliang/CGRE.
| 2,022 |
Computation and Language
|
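A bare-bones illustration of the graph-convolution step that the entry above uses to propagate information between relation-label nodes over a constraint graph; the adjacency matrix and features below are invented for the example, and the layer follows the standard GCN formulation rather than CGRE's exact design.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # normalized degrees
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

# Hypothetical constraint graph over 4 relation labels; node 3 is a
# long-tailed relation connected to the data-rich node 0.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [1, 0, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))     # initial relation-node representations
weight = rng.normal(size=(8, 8))    # learnable projection
print(gcn_layer(adj, feats, weight).shape)  # (4, 8)
```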
Cross-lingual Text Classification with Heterogeneous Graph Neural
Network
|
Cross-lingual text classification aims at training a classifier on the source
language and transferring the knowledge to target languages, which is very
useful for low-resource languages. Recent multilingual pretrained language
models (mPLM) achieve impressive results in cross-lingual classification tasks,
but rarely consider factors beyond semantic similarity, causing performance
degradation between some language pairs. In this paper we propose a simple yet
effective method to incorporate heterogeneous information within and across
languages for cross-lingual text classification using graph convolutional
networks (GCN). In particular, we construct a heterogeneous graph by treating
documents and words as nodes, and linking nodes with different relations, which
include part-of-speech roles, semantic similarity, and document translations.
Extensive experiments show that our graph-based method significantly
outperforms state-of-the-art models on all tasks, and also achieves consistent
performance gain over baselines in low-resource settings where external tools
like translators are unavailable.
| 2,021 |
Computation and Language
|
PTR: Prompt Tuning with Rules for Text Classification
|
Fine-tuned pre-trained language models (PLMs) have achieved awesome
performance on almost all NLP tasks. By using additional prompts to fine-tune
PLMs, we can further stimulate the rich knowledge distributed in PLMs to better
serve downstream tasks. Prompt tuning has achieved promising results on some
few-class classification tasks such as sentiment classification and natural
language inference. However, manually designing lots of language prompts is
cumbersome and fallible. For those auto-generated prompts, it is also expensive
and time-consuming to verify their effectiveness in non-few-shot scenarios.
Hence, it is still challenging for prompt tuning to address many-class
classification tasks. To this end, we propose prompt tuning with rules (PTR)
for many-class text classification and apply logic rules to construct prompts
with several sub-prompts. In this way, PTR is able to encode prior knowledge of
each class into prompt tuning. We conduct experiments on relation
classification, a typical and complicated many-class classification task, and
the results show that PTR can significantly and consistently outperform
existing state-of-the-art baselines. This indicates that PTR is a promising
approach to take advantage of both human prior knowledge and PLMs for those
complicated classification tasks.
| 2,021 |
Computation and Language
|
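A toy illustration of composing sub-prompts into one cloze-style prompt for relation classification, in the spirit of the entry above; the template wording and [MASK] placement are made up for the example and are not PTR's actual templates.

```python
def build_relation_prompt(sentence, head, tail):
    """Compose entity-type and relation sub-prompts into a single prompt.

    Each [MASK] is later filled by the PLM; a logic rule over the predicted
    label words (head type, relation word, tail type) decides the class.
    """
    head_subprompt = f"the [MASK] {head}"   # entity-type sub-prompt for the head
    relation_subprompt = "[MASK]"           # relation sub-prompt
    tail_subprompt = f"the [MASK] {tail}"   # entity-type sub-prompt for the tail
    return f"{sentence} {head_subprompt} {relation_subprompt} {tail_subprompt}."

print(build_relation_prompt("Mark Twain wrote Tom Sawyer.", "Mark Twain", "Tom Sawyer"))
# Mark Twain wrote Tom Sawyer. the [MASK] Mark Twain [MASK] the [MASK] Tom Sawyer.
```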
Few-Shot Upsampling for Protest Size Detection
|
We propose a new task and dataset for a common problem in social science
research: "upsampling" coarse document labels to fine-grained labels or spans.
We pose the problem in a question answering format, with the answers providing
the fine-grained labels. We provide a benchmark dataset and baselines on a
socially impactful task: identifying the exact crowd size at protests and
demonstrations in the United States given only order-of-magnitude information
about protest attendance, a very small sample of fine-grained examples, and
English-language news text. We evaluate several baseline models, including
zero-shot results from rule-based and question-answering models, few-shot
models fine-tuned on a small set of documents, and weakly supervised models
using a larger set of coarsely-labeled documents. We find that our rule-based
model initially outperforms a zero-shot pre-trained transformer language model
but that further fine-tuning on a very small subset of 25 examples
substantially improves out-of-sample performance. We also demonstrate a method
for fine-tuning the transformer span on only the coarse labels that performs
similarly to our rule-based approach. This work will contribute to social
scientists' ability to generate data to understand the causes and successes of
collective action.
| 2,021 |
Computation and Language
|
Editorial introduction: The power of words and networks
|
According to Freud "words were originally magic and to this day words have
retained much of their ancient magical power". By words, behaviors are
transformed and problems are solved. The way we use words reveals our
intentions, goals and values. Novel tools for text analysis help understand the
magical power of words. This power is multiplied if it is combined with the
study of social networks, i.e. with the analysis of relationships among social
units. This special issue of the International Journal of Information
Management, entitled "Combining Social Network Analysis and Text Mining: from
Theory to Practice", includes heterogeneous and innovative research at the
nexus of text mining and social network analysis. It aims to enrich work at the
intersection of these fields, which still lags behind in theoretical,
empirical, and methodological foundations. The nine articles accepted for
inclusion in this special issue all present methods and tools that have
business applications. They are summarized in this editorial introduction.
| 2,020 |
Computation and Language
|
Neural Machine Translation with Monolingual Translation Memory
|
Prior work has proved that Translation memory (TM) can boost the performance
of Neural Machine Translation (NMT). In contrast to existing work that uses
bilingual corpus as TM and employs source-side similarity search for memory
retrieval, we propose a new framework that uses monolingual memory and performs
learnable memory retrieval in a cross-lingual manner. Our framework has unique
advantages. First, the cross-lingual memory retriever allows abundant
monolingual data to be TM. Second, the memory retriever and NMT model can be
jointly optimized for the ultimate translation goal. Experiments show that the
proposed method obtains substantial improvements. Remarkably, it even
outperforms strong TM-augmented NMT baselines using bilingual TM. Owing to the
ability to leverage monolingual data, our model also demonstrates effectiveness
in low-resource and domain adaptation scenarios.
| 2,021 |
Computation and Language
|
Assessing perceived organizational leadership styles through twitter
text mining
|
We propose a text classification tool based on support vector machines for
the assessment of organizational leadership styles as they appear to Twitter
users. We collected Twitter data over 51 days, related to the first 30 Italian
organizations in the 2015 ranking of Forbes Global 2000, out of which we
selected the five with the most relevant volumes of tweets. We analyzed the
communication of the company leaders, together with the dialogue among the
stakeholders of each company, to understand the association with perceived
leadership styles and dimensions. To assess leadership profiles, we referred to
the 10-factor model developed by Barchiesi and La Bella in 2007. We maintain
the distinctiveness of the approach we propose, as it allows a rapid assessment
of the perceived leadership capabilities of an enterprise, as they emerge from
its social media interactions. It can also be used to show how companies
respond and manage their communication when specific events take place, and to
assess their stakeholders' reactions.
| 2,018 |
Computation and Language
|
Introducing the Talk Markup Language (TalkML): Adding a little social
intelligence to industrial speech interfaces
|
Virtual Personal Assistants like Siri have great potential but such
developments hit the fundamental problem of how to make computational devices
that understand human speech. Natural language understanding is one of the more
disappointing failures of AI research and it seems there is something we
computer scientists don't get about the nature of language. Of course
philosophers and linguists think quite differently about language and this
paper describes how we have taken ideas from other disciplines and implemented
them. The background to the work is to take seriously the notion of language as
action and look at what people actually do with language using the techniques
of Conversation Analysis. The observation has been that human communication is
(behind the scenes) about the management of social relations as well as the
(foregrounded) passing of information. To claim this is one thing but to
implement it requires a mechanism. The mechanism described here is based on the
notion of language being intentional - we form intentions, talk about them,
and recognise them in others - and cooperative, in that we are compelled to help
out. The way we are compelled points to a solution to the ever-present problem
of keeping the human on topic. The approach has led to a recent success in
which we significantly improve user satisfaction independent of task
completion. Talk Markup Language (TalkML) is a draft alternative to VoiceXML
that, we propose, greatly simplifies the scripting of interaction by providing
default behaviours for no-input and not-recognised speech events.
| 2,021 |
Computation and Language
|
DaN+: Danish Nested Named Entities and Lexical Normalization
|
This paper introduces DaN+, a new multi-domain corpus and annotation
guidelines for Danish nested named entities (NEs) and lexical normalization to
support research on cross-lingual cross-domain learning for a less-resourced
language. We empirically assess three strategies to model the two-layer Named
Entity Recognition (NER) task. We compare transfer capabilities from German
versus in-language annotation from scratch. We examine language-specific versus
multilingual BERT, and study the effect of lexical normalization on NER. Our
results show that 1) the most robust strategy is multi-task learning which is
rivaled by multi-label decoding, 2) BERT-based NER models are sensitive to
domain shifts, and 3) in-language BERT and lexical normalization are the most
beneficial on the least canonical data. Our results also show that an
out-of-domain setup remains challenging, while performance on news plateaus
quickly. This highlights the importance of cross-domain evaluation of
cross-lingual transfer.
| 2,021 |
Computation and Language
|
RobeCzech: Czech RoBERTa, a monolingual contextualized language
representation model
|
We present RobeCzech, a monolingual RoBERTa language representation model
trained on Czech data. RoBERTa is a robustly optimized Transformer-based
pretraining approach. We show that RobeCzech considerably outperforms
equally-sized multilingual and Czech-trained contextualized language
representation models, surpasses current state of the art in all five evaluated
NLP tasks and reaches state-of-the-art results in four of them. The RobeCzech
model is released publicly at https://hdl.handle.net/11234/1-3691 and
https://huggingface.co/ufal/robeczech-base.
| 2,021 |
Computation and Language
|
Neural Language Models for Nineteenth-Century English
|
We present four types of neural language models trained on a large historical
dataset of books in English, published between 1760 and 1900 and comprising ~5.1
billion tokens. The language model architectures include static (word2vec and
fastText) and contextualized models (BERT and Flair). For each architecture, we
trained a model instance using the whole dataset. Additionally, we trained
separate instances on text published before 1850 for the two static models, and
four instances considering different time slices for BERT. Our models have
already been used in various downstream tasks where they consistently improved
performance. In this paper, we describe how the models have been created and
outline their reuse potential.
| 2,021 |
Computation and Language
|
Classifying Math KCs via Task-Adaptive Pre-Trained BERT
|
Educational content labeled with proper knowledge components (KCs) is
particularly useful to teachers or content organizers. However, manually
labeling educational content is labor intensive and error-prone. To address
this challenge, prior research proposed machine learning based solutions to
auto-label educational content with limited success. In this work, we
significantly improve prior research by (1) expanding the input types to
include KC descriptions, instructional video titles, and problem descriptions
(i.e., three types of prediction task), (2) doubling the granularity of the
prediction from 198 to 385 KC labels (i.e., more practical setting but much
harder multinomial classification problem), (3) improving the prediction
accuracies by 0.5-2.3% using Task-adaptive Pre-trained BERT, outperforming six
baselines, and (4) proposing a simple evaluation measure by which we can
recover 56-73% of mispredicted KC labels. All code and data sets in the
experiments are available at: https://github.com/tbs17/TAPT-BERT
| 2,021 |
Computation and Language
|
IITP at AILA 2019: System Report for Artificial Intelligence for Legal
Assistance Shared Task
|
In this article, we describe our systems submitted as part of our participation
in the shared task Artificial Intelligence for Legal Assistance (AILA 2019), an
integral event of the Forum for Information Retrieval Evaluation 2019. The
outcomes of this track would be helpful for automating the working processes of
the Indian Judiciary System. The manual working procedures and documentation at
every level of the judiciary system, from lower to higher courts, are very
complex in nature. The systems produced as a part of this track would assist law
practitioners and would be helpful for the general public as well. This kind of
track also opens a path for Natural Language Processing (NLP) research in the
judicial domain. The track defined two problems: Task 1, identifying relevant
prior cases for a given situation, and Task 2, identifying the most relevant
statutes for a given situation. We tackled both of them. Our proposed approaches
are based on BM25 and Doc2Vec. As per the results declared by the task
organizers, we placed 3rd in Task 1 and achieved a modest position in Task 2.
| 2,021 |
Computation and Language
|
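A minimal retrieval sketch in the spirit of the BM25 approach mentioned in the entry above, assuming the third-party rank_bm25 package and a toy corpus of prior-case summaries invented for illustration.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25 (third-party package)

# Toy prior-case summaries (invented for illustration).
cases = [
    "appeal against conviction for murder under section 302",
    "dispute over property inheritance between siblings",
    "petition challenging the validity of an arrest warrant",
]
bm25 = BM25Okapi([doc.split() for doc in cases])

query = "murder conviction appeal".split()
scores = bm25.get_scores(query)  # one relevance score per prior case
ranking = sorted(range(len(cases)), key=lambda i: scores[i], reverse=True)
print(ranking)  # indices of prior cases, most relevant first, e.g. [0, 2, 1]
```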
View Distillation with Unlabeled Data for Extracting Adverse Drug
Effects from User-Generated Data
|
We present an algorithm based on multi-layer transformers for identifying
Adverse Drug Reactions (ADR) in social media data. Our model relies on the
properties of the problem and the characteristics of contextual word embeddings
to extract two views from documents. Then a classifier is trained on each view
to label a set of unlabeled documents to be used as an initializer for a new
classifier in the other view. Finally, the initialized classifier in each view
is further trained using the initial training examples. We evaluated our model
on the largest publicly available ADR dataset. The experiments show that our
model significantly outperforms the transformer-based models pretrained on
domain-specific data.
| 2,021 |
Computation and Language
|
VANiLLa : Verbalized Answers in Natural Language at Large Scale
|
In the last years, there have been significant developments in the area of
Question Answering over Knowledge Graphs (KGQA). Despite all the notable
advancements, current KGQA datasets only provide the answers as the direct
output result of the formal query, rather than full sentences incorporating
question context. To achieve coherent answer sentences that use the question's
vocabulary, template-based verbalizations are usually employed for a better
representation of answers, which in turn require extensive expert intervention.
This makes way for machine learning approaches; however, there is a scarcity
of datasets that empower machine learning models in this area. Hence, we
provide the VANiLLa dataset which aims at reducing this gap by offering answers
in natural language sentences. The answer sentences in this dataset are
syntactically and semantically closer to the question than to the triple fact.
Our dataset consists of over 100k simple questions adapted from the CSQA and
SimpleQuestionsWikidata datasets and generated using a semi-automatic
framework. We also present results of training our dataset on multiple baseline
models adapted from current state-of-the-art Natural Language Generation (NLG)
architectures. We believe that this dataset will allow researchers to focus on
finding suitable methodologies and architectures for answer verbalization.
| 2,021 |
Computation and Language
|
Diacritics Restoration using BERT with Analysis on Czech language
|
We propose a new architecture for diacritics restoration based on
contextualized embeddings, namely BERT, and we evaluate it on 12 languages with
diacritics. Furthermore, we conduct a detailed error analysis on Czech, a
morphologically rich language with a high level of diacritization. Notably, we
manually annotate all mispredictions, showing that roughly 44% of them are
actually not errors, but either plausible variants (19%) or corrections of
erroneous data by the system (25%). Finally, we categorize the real errors in
detail. We release the code at
https://github.com/ufal/bert-diacritics-restoration.
| 2,021 |
Computation and Language
|
Reproducibility Report: Contextualizing Hate Speech Classifiers with
Post-hoc Explanation
|
This report evaluates the paper "Contextualizing Hate Speech Classifiers with
Post-hoc Explanation" within the scope of the ML Reproducibility Challenge
2020. Our work focuses on both aspects constituting the paper: the method
itself and the validity of the stated results. In the following sections, we
describe the paper, related work, the algorithmic frameworks, and our
experiments and evaluations.
| 2,021 |
Computation and Language
|
True Few-Shot Learning with Language Models
|
Pretrained language models (LMs) perform well on many tasks even when
learning from a few examples, but prior work uses many held-out examples to
tune various aspects of learning, such as hyperparameters, training objectives,
and natural language templates ("prompts"). Here, we evaluate the few-shot
ability of LMs when such held-out examples are unavailable, a setting we call
true few-shot learning. We test two model selection criteria, cross-validation
and minimum description length, for choosing LM prompts and hyperparameters in
the true few-shot setting. On average, both marginally outperform random
selection and greatly underperform selection based on held-out examples.
Moreover, selection criteria often prefer models that perform significantly
worse than randomly-selected ones. We find similar results even when taking
into account our uncertainty in a model's true performance during selection, as
well as when varying the amount of computation and number of examples used for
selection. Overall, our findings suggest that prior work significantly
overestimated the true few-shot ability of LMs given the difficulty of few-shot
model selection.
| 2,021 |
Computation and Language
|
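A schematic of choosing a prompt by cross-validation over only the few labeled examples, as studied in the entry above; `score_prompt` is a hypothetical stand-in for running the LM with the prompt and in-context examples and scoring it on the held-out fold.

```python
import random

def cross_validated_score(prompt, examples, score_prompt, k=4, seed=0):
    """Average held-out score of `prompt` over k folds of the few-shot examples."""
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        dev = folds[i]
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        scores.append(score_prompt(prompt, train, dev))
    return sum(scores) / k

def select_prompt(prompts, examples, score_prompt):
    """Pick the prompt with the best cross-validated score, using no held-out set."""
    return max(prompts, key=lambda p: cross_validated_score(p, examples, score_prompt))
```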
The advent and fall of a vocabulary learning bias from communicative
efficiency
|
Biosemiosis is a process of choice-making among simultaneously available alternative
options. It is well-known that, when sufficiently young children encounter a
new word, they tend to interpret it as pointing to a meaning that does not have
a word yet in their lexicon rather than to a meaning that already has a word
attached. In previous research, the strategy was shown to be optimal from an
information theoretic standpoint. In that framework, interpretation is
hypothesized to be driven by the minimization of a cost function: the option of
least communication cost is chosen. However, the information theoretic model
employed in that research neither explains the weakening of that vocabulary
learning bias in older children or polylinguals nor reproduces Zipf's
meaning-frequency law, namely the non-linear relationship between the number of
meanings of a word and its frequency. Here we consider a generalization of the
model that is channeled to reproduce that law. The analysis of the new model
reveals regions of the phase space where the bias disappears consistently with
the weakening or loss of the bias in older children or polylinguals. The model
is abstract enough to support future research on other levels of life that are
relevant to biosemiotics. In the deep learning era, the model is a transparent
low-dimensional tool for future experimental research and illustrates the
predictive power of a theoretical framework originally designed to shed light
on the origins of Zipf's rank-frequency law.
| 2,021 |
Computation and Language
|
TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference
|
Existing pre-trained language models (PLMs) are often computationally
expensive in inference, making them impractical in various resource-limited
real-world applications. To address this issue, we propose a dynamic token
reduction approach to accelerate PLMs' inference, named TR-BERT, which could
flexibly adapt the layer number of each token in inference to avoid redundant
calculation. Specifically, TR-BERT formulates the token reduction process as a
multi-step token selection problem and automatically learns the selection
strategy via reinforcement learning. The experimental results on several
downstream NLP tasks show that TR-BERT is able to speed up BERT by 2-5 times to
satisfy various performance demands. Moreover, TR-BERT can also achieve better
performance with less computation in a suite of long-text tasks since its
token-level layer-number adaptation greatly accelerates the self-attention
operation in PLMs. The source code and experiment details of this paper can be
obtained from https://github.com/thunlp/TR-BERT.
| 2,021 |
Computation and Language
|
A Survey on Complex Knowledge Base Question Answering: Methods,
Challenges and Solutions
|
Knowledge base question answering (KBQA) aims to answer a question over a
knowledge base (KB). Recently, a large number of studies focus on semantically
or syntactically complicated questions. In this paper, we elaborately summarize
the typical challenges and solutions for complex KBQA. We begin with
introducing the background about the KBQA task. Next, we present the two
mainstream categories of methods for complex KBQA, namely semantic
parsing-based (SP-based) methods and information retrieval-based (IR-based)
methods. We then review the advanced methods comprehensively from the
perspective of the two categories. Specifically, we explicate their solutions
to the typical challenges. Finally, we conclude and discuss some promising
directions for future research.
| 2,021 |
Computation and Language
|
ViBERTgrid: A Jointly Trained Multi-Modal 2D Document Representation for
Key Information Extraction from Documents
|
Recent grid-based document representations like BERTgrid allow the
simultaneous encoding of the textual and layout information of a document in a
2D feature map so that state-of-the-art image segmentation and/or object
detection models can be straightforwardly leveraged to extract key information
from documents. However, such methods have not yet achieved performance
comparable to state-of-the-art sequence- and graph-based methods such as LayoutLM
and PICK. In this paper, we propose a new multi-modal backbone network by
concatenating a BERTgrid to an intermediate layer of a CNN model, where the
input of CNN is a document image and the BERTgrid is a grid of word embeddings,
to generate a more powerful grid-based document representation, named
ViBERTgrid. Unlike BERTgrid, the parameters of BERT and CNN in our multimodal
backbone network are trained jointly. Our experimental results demonstrate that
this joint training strategy significantly improves the representation ability
of ViBERTgrid. Consequently, our ViBERTgrid-based key information extraction
approach has achieved state-of-the-art performance on real-world datasets.
| 2,021 |
Computation and Language
|
Multi-Task Learning of Generation and Classification for Emotion-Aware
Dialogue Response Generation
|
For a computer to naturally interact with a human, it needs to be human-like.
In this paper, we propose a neural response generation model with multi-task
learning of generation and classification, focusing on emotion. Our model, based
on BART (Lewis et al., 2020), a pre-trained transformer encoder-decoder model,
is trained to generate responses and recognize emotions simultaneously.
Furthermore, we weight the losses for the tasks to control the update of
parameters. Automatic evaluations and crowdsourced manual evaluations show that
the proposed model makes generated responses more emotionally aware.
| 2,021 |
Computation and Language
|
Guiding the Growth: Difficulty-Controllable Question Generation through
Step-by-Step Rewriting
|
This paper explores the task of Difficulty-Controllable Question Generation
(DCQG), which aims at generating questions with required difficulty levels.
Previous research on this task mainly defines the difficulty of a question as
whether it can be correctly answered by a Question Answering (QA) system,
lacking interpretability and controllability. In our work, we redefine question
difficulty as the number of inference steps required to answer it and argue
that Question Generation (QG) systems should have stronger control over the
logic of generated questions. To this end, we propose a novel framework that
progressively increases question difficulty through step-by-step rewriting
under the guidance of an extracted reasoning chain. A dataset is automatically
constructed to facilitate the research, on which extensive experiments are
conducted to test the performance of our method.
| 2,021 |
Computation and Language
|
ConSERT: A Contrastive Framework for Self-Supervised Sentence
Representation Transfer
|
Learning high-quality sentence representations benefits a wide range of
natural language processing tasks. Though BERT-based pre-trained language
models achieve high performance on many downstream tasks, the natively derived
sentence representations have been shown to collapse and thus produce poor
performance on semantic textual similarity (STS) tasks. In this paper, we
present ConSERT, a Contrastive Framework for Self-Supervised Sentence
Representation Transfer, that adopts contrastive learning to fine-tune BERT in
an unsupervised and effective way. By making use of unlabeled texts, ConSERT
solves the collapse issue of BERT-derived sentence representations and makes
them more applicable for downstream tasks. Experiments on STS datasets
demonstrate that ConSERT achieves an 8% relative improvement over the previous
state-of-the-art, even comparable to the supervised SBERT-NLI. And when further
incorporating NLI supervision, we achieve new state-of-the-art performance on
STS tasks. Moreover, ConSERT obtains comparable results with only 1000 samples
available, showing its robustness in data scarcity scenarios.
| 2,021 |
Computation and Language
|
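A compact sketch of an NT-Xent-style contrastive objective of the kind used for the unsupervised fine-tuning described in the entry above; batch construction and augmentation are omitted, and this loss form is a common choice rather than a claim about the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """NT-Xent-style loss for two augmented views z1, z2 of the same batch.

    z1, z2: (batch, dim) sentence embeddings; matching rows are positives,
    every other row in the concatenated batch acts as a negative.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    z = torch.cat([z1, z2], dim=0)                 # (2B, dim)
    sim = z @ z.t() / temperature                  # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))              # a row is never its own positive
    batch = z1.size(0)
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)])  # index of each row's positive
    return F.cross_entropy(sim, targets)

print(contrastive_loss(torch.randn(8, 768), torch.randn(8, 768)).item())
```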
Argument Undermining: Counter-Argument Generation by Attacking Weak
Premises
|
Text generation has recently received a lot of attention in computational
argumentation research. A particularly challenging task is the
generation of counter-arguments. So far, approaches primarily focus on
rebutting a given conclusion, yet other ways to counter an argument exist. In
this work, we go beyond previous research by exploring argument undermining,
that is, countering an argument by attacking one of its premises. We
hypothesize that identifying the argument's weak premises is key to effective
countering. Accordingly, we propose a pipeline approach that first assesses the
premises' strength and then generates a counter-argument targeting the weak
ones. On the one hand, both manual and automatic evaluation prove the
importance of identifying weak premises in counter-argument generation. On the
other hand, when considering correctness and content richness, human annotators
favored our approach over state-of-the-art counter-argument generation.
| 2,021 |
Computation and Language
|
Dynamic Semantic Graph Construction and Reasoning for Explainable
Multi-hop Science Question Answering
|
Knowledge retrieval and reasoning are two key stages in multi-hop question
answering (QA) at web scale. Existing approaches suffer from low confidence
when retrieving evidence facts to fill the knowledge gap and lack a transparent
reasoning process. In this paper, we propose a new framework to exploit more
valid facts while obtaining explainability for multi-hop QA by dynamically
constructing a semantic graph and reasoning over it. We employ Abstract Meaning
Representation (AMR) as semantic graph representation. Our framework contains
three new ideas: (a) {\tt AMR-SG}, an AMR-based Semantic Graph, constructed by
candidate fact AMRs to uncover any hop relations among question, answer and
multiple facts. (b) A novel path-based fact analytics approach exploiting {\tt
AMR-SG} to extract active facts from a large fact pool to answer questions. (c)
A fact-level relation modeling leveraging graph convolution network (GCN) to
guide the reasoning process. Results on two scientific multi-hop QA datasets
show that we can surpass recent approaches including those using additional
knowledge graphs while maintaining high explainability on OpenBookQA and
achieve a new state-of-the-art result on ARC-Challenge in a computationally
practicable setting.
| 2,021 |
Computation and Language
|
Look inside. Predicting stock prices by analysing an enterprise intranet
social network and using word co-occurrence networks
|
This study looks into employees' communication, offering novel metrics which
can help to predict a company's stock price. We studied the intranet forum of a
large Italian company, exploring the interactions and the use of language of
about 8,000 employees. We built a network linking words included in the general
discourse. In this network, we focused on the position of the node representing
the company brand. We found that a lower sentiment, a higher betweenness
centrality of the company brand, a denser word co-occurrence network and more
equally distributed centrality scores of employees (lower group betweenness
centrality) are all significant predictors of higher stock prices. Our findings
offer new metrics that can be helpful for scholars, company managers and
professional investors and could be integrated into existing forecasting models
to improve their accuracy. Lastly, we contribute to the research on word
co-occurrence networks by extending their field of application.
| 2,019 |
Computation and Language
|
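The network metrics described in the intranet study above can be sketched with networkx: build a word co-occurrence graph from posts and read off the betweenness centrality of the node for the company brand, plus the overall network density. The posts and the brand term ("acme") below are invented for illustration.

```python
import itertools
import networkx as nx

posts = [
    "great quarter for acme new products",
    "acme services team slow this week",
    "new acme products look promising",
]

G = nx.Graph()
for post in posts:
    tokens = set(post.split())
    # Link every pair of words co-occurring in the same post.
    for w1, w2 in itertools.combinations(tokens, 2):
        if G.has_edge(w1, w2):
            G[w1][w2]["weight"] += 1
        else:
            G.add_edge(w1, w2, weight=1)

brand = "acme"  # hypothetical company brand term
centrality = nx.betweenness_centrality(G, weight="weight")
density = nx.density(G)
print(f"brand betweenness: {centrality[brand]:.3f}, network density: {density:.3f}")
```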
Extending the Abstraction of Personality Types based on MBTI with
Machine Learning and Natural Language Processing
|
We present a data-centric approach with Natural Language Processing (NLP) to
predict personality types based on the MBTI (an introspective self-assessment
questionnaire that indicates different psychological preferences about how
people perceive the world and make decisions). The approach systematically
enriches the text representation with domain knowledge, generating features
from three types of analysis: sentiment, grammar and aspects. The experiments
used a robust baseline of stacked models, with early hyperparameter
optimization through grid search and gradual feedback for each of the four
MBTI classifiers (dichotomies). The results showed that attention to a data
iteration loop focused on quality, explanatory power and representativeness of
the features most relevant to the studied phenomenon made it possible to
improve the evaluation metrics more quickly and at lower cost than complex
models such as LSTMs or state-of-the-art ones such as BERT, and comparisons
from various perspectives underline the importance of these results. In
addition, the study showed broad scope for evolving and deepening the task,
and possible approaches for further extending the abstraction of personality
types.
| 2,021 |
Computation and Language
|
Estimating Redundancy in Clinical Text
|
The current mode of use of Electronic Health Record (EHR) elicits text
redundancy. Clinicians often populate new documents by duplicating existing
notes, then updating accordingly. Data duplication can lead to a propagation of
errors, inconsistencies and misreporting of care. Therefore, quantifying
information redundancy can play an essential role in evaluating innovations
that operate on clinical narratives.
This work is a quantitative examination of information redundancy in EHR
notes. We present and evaluate two strategies to measure redundancy: an
information-theoretic approach and a lexicosyntactic and semantic model. We
evaluate the measures by training large Transformer-based language models using
clinical text from a large openly available US-based ICU dataset and a large
multi-site UK-based Trust. By comparing the information-theoretic content of
the trained models with open-domain language models, we find that language
models trained on clinical text are ~1.5x to ~3x less efficient than those
trained on open-domain corpora. Manual evaluation shows a high correlation with
lexicosyntactic and semantic redundancy, with averages of ~43% to ~65%.
| 2,021 |
Computation and Language
|
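The information-theoretic redundancy idea above can be illustrated, very loosely, with compression: copy-pasted note collections compress far better than varied text. This is only an analogy sketch; the paper's actual measure trains Transformer language models on clinical text and compares them with open-domain models.

```python
import zlib

def compression_ratio(text: str) -> float:
    """Bytes after compression divided by bytes before; lower = more redundant."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

note = "Patient stable. Continue current medication. Follow up in two weeks. "
redundant_record = note * 20          # simulates copy-pasted EHR notes
varied_record = " ".join(
    f"Visit {i}: new findings recorded for problem {i}." for i in range(20)
)

print("copy-pasted notes :", round(compression_ratio(redundant_record), 3))
print("varied notes      :", round(compression_ratio(varied_record), 3))
```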
Empirical Error Modeling Improves Robustness of Noisy Neural Sequence
Labeling
|
Despite recent advances, standard sequence labeling systems often fail when
processing noisy user-generated text or consuming the output of an Optical
Character Recognition (OCR) process. In this paper, we improve the noise-aware
training method by proposing an empirical error generation approach that
employs a sequence-to-sequence model trained to perform translation from
error-free to erroneous text. Using an OCR engine, we generated a large
parallel text corpus for training and produced several real-world noisy
sequence labeling benchmarks for evaluation. Moreover, to overcome the data
sparsity problem, which is exacerbated by imperfect textual input, we
learned noisy language model-based embeddings. Our approach outperformed the
baseline noise generation and error correction techniques on the erroneous
sequence labeling data sets. To facilitate future research on robustness, we
make our code, embeddings, and data conversion scripts publicly available.
| 2,021 |
Computation and Language
|
Unsupervised Sentiment Analysis by Transferring Multi-source Knowledge
|
Sentiment analysis (SA) is an important research area in cognitive
computation, so in-depth studies of patterns of sentiment analysis are
necessary. At present, rich-resource data-based SA has been well developed,
while the more challenging and practical multi-source unsupervised SA (i.e.,
target-domain SA by transferring from multiple source domains) is seldom
studied. The challenges behind this problem mainly lie in the lack of
supervision information, the semantic gaps among domains (i.e., domain shifts),
and the loss of knowledge. However, existing methods either fail to
distinguish the semantic gaps among domains or lose private
knowledge. To alleviate these problems, we propose a two-stage domain
adaptation framework. In the first stage, a multi-task methodology-based
shared-private architecture is employed to explicitly model the domain common
features and the domain-specific features for the labeled source domains. In
the second stage, two elaborate mechanisms are embedded in the shared private
architecture to transfer knowledge from multiple source domains. The first
mechanism is a selective domain adaptation (SDA) method, which transfers
knowledge from the closest source domain. And the second mechanism is a
target-oriented ensemble (TOE) method, in which knowledge is transferred
through a well-designed ensemble method. Extensive experiment evaluations
verify that the performance of the proposed framework outperforms unsupervised
state-of-the-art competitors. What can be concluded from the experiments is
that transferring from very different distributed source domains may degrade
the target-domain performance, and it is crucial to choose the proper source
domains to transfer from.
| 2,021 |
Computation and Language
|
Towards an Online Empathetic Chatbot with Emotion Causes
|
Existing emotion-aware conversational models usually focus on controlling the
response content to align with a specific emotion class, whereas empathy is
the ability to understand and care about the feelings and experiences of others.
Hence, it is critical to learn the causes that evoke the users' emotions for
empathetic responding, a.k.a. emotion causes. To gather emotion causes in
online environments, we leverage counseling strategies and develop an
empathetic chatbot to utilize the causal emotion information. On a real-world
online dataset, we verify the effectiveness of the proposed approach by
comparing our chatbot with several SOTA methods using automatic metrics,
expert-based human judgements as well as user-based online evaluation.
| 2,021 |
Computation and Language
|
Ensemble Making Few-Shot Learning Stronger
|
Few-shot learning has been proposed and is rapidly emerging as a viable means
for completing various tasks. Many few-shot models have been widely used for
relation learning tasks. However, each of these models falls short of capturing
certain aspects of semantic features, for example, CNNs on long-range
dependencies and Transformers on local features. It is difficult for a single
model to adapt to various relation learning tasks, which results in a high
variance problem. An ensemble strategy can be competitive in improving the accuracy of
few-shot relation extraction and mitigating high variance risks. This paper
explores an ensemble approach to reduce the variance and introduces fine-tuning
and feature attention strategies to calibrate relation-level features. Results
on several few-shot relation learning tasks show that our model significantly
outperforms the previous state-of-the-art models.
| 2,021 |
Computation and Language
|
Exploiting Adapters for Cross-lingual Low-resource Speech Recognition
|
Cross-lingual speech adaptation aims to solve the problem of leveraging
multiple rich-resource languages to build models for a low-resource target
language. Since the low-resource language has limited training data, speech
recognition models can easily overfit. In this paper, we investigate the
performance of multiple adapters for parameter-efficient cross-lingual speech
adaptation. Based on our previous MetaAdapter, which implicitly leverages
adapters, we propose a novel algorithm called SimAdapter for explicitly
learning knowledge from adapters. Both algorithms leverage adapters which can
be easily integrated into the Transformer structure. MetaAdapter leverages
meta-learning to transfer general knowledge from the training data to the test
language. SimAdapter aims to learn the similarities between the source and
target languages during fine-tuning using the adapters. We conduct extensive
experiments on five low-resource languages in the Common Voice dataset.
Results demonstrate that
our MetaAdapter and SimAdapter methods can reduce WER by 2.98% and 2.55% with
only 2.5% and 15.5% of trainable parameters compared to the strong full-model
fine-tuning baseline. Moreover, we also show that these two novel algorithms
can be integrated for better performance with up to 3.55% relative WER
reduction.
| 2,021 |
Computation and Language
|
Analysis of GraphSum's Attention Weights to Improve the Explainability
of Multi-Document Summarization
|
Modern multi-document summarization (MDS) methods are based on transformer
architectures. They generate state of the art summaries, but lack
explainability. We focus on graph-based transformer models for MDS as they
gained recent popularity. We aim to improve the explainability of the
graph-based MDS by analyzing their attention weights. In a graph-based MDS such
as GraphSum, vertices represent the textual units, while the edges form some
similarity graph over the units. We compare GraphSum's performance utilizing
different textual units, i. e., sentences versus paragraphs, on two news
benchmark datasets, namely WikiSum and MultiNews. Our experiments show that
paragraph-level representations provide the best summarization performance.
Thus, we subsequently focus on analyzing the paragraph-level attention
weights of GraphSum's multi-heads and decoding layers in order to improve the
explainability of a transformer-based MDS model. As a reference metric, we
calculate the ROUGE scores between the input paragraphs and each sentence in
the generated summary, which indicate source origin information via text
similarity. We observe a high correlation between the attention weights and
this reference metric, especially in the later decoding layers of the
transformer architecture. Finally, we investigate if the generated summaries
follow a pattern of positional bias by extracting which paragraph provided the
most information for each generated summary. Our results show that there is a
high correlation between the position in the summary and the source origin.
| 2,022 |
Computation and Language
|
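A rough sketch of the reference-metric analysis described in the GraphSum study above: score each input paragraph against each generated summary sentence with a simple unigram-overlap proxy (standing in for ROUGE), then correlate those scores with attention weights. The attention matrix here is random placeholder data rather than GraphSum's actual decoder attention.

```python
import numpy as np

def unigram_recall(source: str, summary_sent: str) -> float:
    """Crude ROUGE-1-recall-style overlap between a paragraph and a summary sentence."""
    src, ref = set(source.lower().split()), summary_sent.lower().split()
    return sum(tok in src for tok in ref) / max(len(ref), 1)

paragraphs = ["the company reported record profits this quarter",
              "analysts expect slower growth next year",
              "the new factory will open in spring"]
summary = ["record profits were reported", "growth may slow next year"]

# Reference metric: overlap of each paragraph with each summary sentence.
overlap = np.array([[unigram_recall(p, s) for p in paragraphs] for s in summary])

# Placeholder attention weights (summary sentences x paragraphs); in the study
# these would be read from the graph-attention heads of the decoder.
attention = np.random.dirichlet(np.ones(len(paragraphs)), size=len(summary))

corr = np.corrcoef(overlap.ravel(), attention.ravel())[0, 1]
print(f"correlation between attention and overlap proxy: {corr:.2f}")
```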
MBIC -- A Media Bias Annotation Dataset Including Annotator
Characteristics
|
Many people consider news articles to be a reliable source of information on
current events. However, due to the range of factors influencing news agencies,
such coverage may not always be impartial. Media bias, or slanted news
coverage, can have a substantial impact on public perception of events, and,
accordingly, can potentially alter the beliefs and views of the public. The
main data gap in current research on media bias detection is a robust,
representative, and diverse dataset containing annotations of biased words and
sentences. In particular, existing datasets do not control for the individual
background of annotators, which may affect their assessment and, thus,
represents critical information for contextualizing their annotations. In this
poster, we present a matrix-based methodology to crowdsource such data using a
self-developed annotation platform. We also present MBIC (Media Bias Including
Characteristics) - the first sample of 1,700 statements representing various
media bias instances. The statements were reviewed by ten annotators each and
contain labels for media bias identification both on the word and sentence
level. MBIC is the first available dataset about media bias reporting detailed
information on annotator characteristics and their individual background. The
current dataset already significantly extends existing data in this domain
providing unique and more reliable insights into the perception of bias. In
future, we will further extend it both with respect to the number of articles
and annotators per article.
| 2,021 |
Computation and Language
|
Focus Attention: Promoting Faithfulness and Diversity in Summarization
|
Professional summaries are written with document-level information, such as
the theme of the document, in mind. This is in contrast with most seq2seq
decoders which simultaneously learn to focus on salient content, while deciding
what to generate, at each decoding step. With the motivation to narrow this
gap, we introduce Focus Attention Mechanism, a simple yet effective method to
encourage decoders to proactively generate tokens that are similar or topical
to the input document. Further, we propose a Focus Sampling method to enable
generation of diverse summaries, an area currently understudied in
summarization. When evaluated on the BBC extreme summarization task, two
state-of-the-art models augmented with Focus Attention generate summaries that
are closer to the target and more faithful to their input documents,
outperforming their vanilla counterparts on ROUGE and multiple faithfulness
measures. We also empirically demonstrate that Focus Sampling is more effective
in generating diverse and faithful summaries than top-$k$ or nucleus
sampling-based decoding methods.
| 2,021 |
Computation and Language
|
Extending rational models of communication from beliefs to actions
|
Speakers communicate to influence their partner's beliefs and shape their
actions. Belief- and action-based objectives have been explored independently
in recent computational models, but it has been challenging to explicitly
compare or integrate them. Indeed, we find that they are conflated in standard
referential communication tasks. To distinguish these accounts, we introduce a
new paradigm called signaling bandits, generalizing classic Lewis signaling
games to a multi-armed bandit setting where all targets in the context have
some relative value. We develop three speaker models: a belief-oriented speaker
with a purely informative objective; an action-oriented speaker with an
instrumental objective; and a combined speaker which integrates the two by
inducing listener beliefs that generally lead to desirable actions. We then
present a series of simulations demonstrating that grounding production choices
in future listener actions results in relevance effects and flexible uses of
nonliteral language. More broadly, our findings suggest that language games
based on richer decision problems are a promising avenue for insight into
rational communication.
| 2,021 |
Computation and Language
|
BASS: Boosting Abstractive Summarization with Unified Semantic Graph
|
Abstractive summarization for long-document or multi-document remains
challenging for the Seq2Seq architecture, as Seq2Seq is not good at analyzing
long-distance relations in text. In this paper, we present BASS, a novel
framework for Boosting Abstractive Summarization based on a unified Semantic
graph, which aggregates co-referent phrases distributing across a long range of
context and conveys rich relations between phrases. Further, a graph-based
encoder-decoder model is proposed to improve both the document representation
and summary generation process by leveraging the graph structure. Specifically,
several graph augmentation methods are designed to encode both the explicit and
implicit relations in the text while the graph-propagation attention mechanism
is developed in the decoder to select salient content into the summary.
Empirical results show that the proposed architecture brings substantial
improvements for both long-document and multi-document summarization tasks.
| 2,021 |
Computation and Language
|
NEUer at SemEval-2021 Task 4: Complete Summary Representation by Filling
Answers into Question for Matching Reading Comprehension
|
SemEval task 4 aims to find a proper option from multiple candidates to
resolve the task of machine reading comprehension. Most existing approaches
propose to concatenate the question and option together to form a context-aware model.
However, we argue that straightforward concatenation can only provide a
coarse-grained context for the MRC task, ignoring the specific positions of the
option relative to the question. In this paper, we propose a novel MRC model by
filling options into the question to produce a fine-grained context (defined as
summary) which can better reveal the relationship between option and question.
We conduct a series of experiments on the given dataset, and the results show
that our approach outperforms other counterparts to a large extent.
| 2,021 |
Computation and Language
|
IntelliCAT: Intelligent Machine Translation Post-Editing with Quality
Estimation and Translation Suggestion
|
We present IntelliCAT, an interactive translation interface with neural
models that streamline the post-editing process on machine translation output.
We leverage two quality estimation (QE) models at different granularities:
sentence-level QE, to predict the quality of each machine-translated sentence,
and word-level QE, to locate the parts of the machine-translated sentence that
need correction. Additionally, we introduce a novel translation suggestion
model conditioned on both the left and right contexts, providing alternatives
for specific words or phrases for correction. Finally, with word alignments,
IntelliCAT automatically preserves the original document's styles in the
translated document. The experimental results show that post-editing based on
the proposed QE and translation suggestions can significantly improve
translation quality. Furthermore, a user study reveals that three features
provided in IntelliCAT significantly accelerate the post-editing task,
achieving a 52.9\% speedup in translation time compared to translating from
scratch. The interface is publicly available at
https://intellicat.beringlab.com/.
| 2,021 |
Computation and Language
|
NukeLM: Pre-Trained and Fine-Tuned Language Models for the Nuclear and
Energy Domains
|
Natural language processing (NLP) tasks (text classification, named entity
recognition, etc.) have seen revolutionary improvements over the last few
years. This is due to language models such as BERT that achieve deep knowledge
transfer by using a large pre-trained model, then fine-tuning the model on
specific tasks. The BERT architecture has shown even better performance on
domain-specific tasks when the model is pre-trained using domain-relevant
texts. Inspired by these recent advancements, we have developed NukeLM, a
nuclear-domain language model pre-trained on 1.5 million abstracts from the
U.S. Department of Energy Office of Scientific and Technical Information (OSTI)
database. This NukeLM model is then fine-tuned for the classification of
research articles into either binary classes (related to the nuclear fuel cycle
[NFC] or not) or multiple categories related to the subject of the article. We
show that continued pre-training of a BERT-style architecture prior to
fine-tuning yields greater performance on both article classification tasks.
This information is critical for properly triaging manuscripts, a necessary
task for better understanding citation networks that publish in the nuclear
space, and for uncovering new areas of research in the nuclear (or
nuclear-relevant) domains.
| 2,021 |
Computation and Language
|
Context-Sensitive Visualization of Deep Learning Natural Language
Processing Models
|
The introduction of Transformer neural networks has changed the landscape of
Natural Language Processing (NLP) in recent years. So far, no
visualization system has managed to examine all the facets of
Transformers. This motivated the current work. We propose a new
NLP Transformer context-sensitive visualization method that leverages existing
NLP tools to find the most significant groups of tokens (words) that have the
greatest effect on the output, thus preserving some context from the original
text. First, we use a sentence-level dependency parser to highlight promising
word groups. The dependency parser creates a tree of relationships between the
words in the sentence. Next, we systematically remove adjacent and non-adjacent
tuples of \emph{n} tokens from the input text, producing several new texts with
those tokens missing. The resulting texts are then passed to a pre-trained BERT
model. The classification output is compared with that of the full text, and
the difference in the activation strength is recorded. The modified texts that
produce the largest difference in the target classification output neuron are
selected, and the combination of removed words are then considered to be the
most influential on the model's output. Finally, the most influential word
combinations are visualized in a heatmap.
| 2,021 |
Computation and Language
|
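The occlusion procedure in the visualization abstract above can be sketched independently of any specific model: remove n-token spans, re-score the text, and rank spans by how much the target class score drops. The scorer below is a stand-in; in the paper a pre-trained BERT classifier plays that role and candidate spans come from a dependency parse rather than all adjacent windows.

```python
from typing import Callable, List, Tuple

def span_importance(tokens: List[str],
                    score: Callable[[List[str]], float],
                    n: int = 2) -> List[Tuple[Tuple[str, ...], float]]:
    """Rank adjacent n-token spans by the drop in classifier score when removed."""
    base = score(tokens)
    results = []
    for i in range(len(tokens) - n + 1):
        occluded = tokens[:i] + tokens[i + n:]
        results.append((tuple(tokens[i:i + n]), base - score(occluded)))
    return sorted(results, key=lambda x: x[1], reverse=True)

# Stand-in scorer: pretends the positive class hinges on the words "not bad".
def toy_score(tokens: List[str]) -> float:
    return 0.9 if "not" in tokens and "bad" in tokens else 0.3

tokens = "the movie was not bad at all".split()
for span, drop in span_importance(tokens, toy_score, n=2)[:3]:
    print(span, round(drop, 2))
```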
Impact of detecting clinical trial elements in exploration of COVID-19
literature
|
The COVID-19 pandemic has driven ever-greater demand for tools which enable
efficient exploration of biomedical literature. Although semi-structured
information resulting from concept recognition and detection of the defining
elements of clinical trials (e.g. PICO criteria) has been commonly used to
support literature search, the contributions of this abstraction remain poorly
understood, especially in relation to text-based retrieval. In this study, we
compare the results retrieved by a standard search engine with those filtered
using clinically-relevant concepts and their relations. With analysis based on
the annotations from the TREC-COVID shared task, we obtain quantitative as well
as qualitative insights into characteristics of relational and concept-based
literature exploration. Most importantly, we find that the relational concept
selection filters the original retrieved collection in a way that decreases the
proportion of unjudged documents and increases the precision, which means that
the user is likely to be exposed to a larger number of relevant documents.
| 2,021 |
Computation and Language
|
Word Embedding Transformation for Robust Unsupervised Bilingual Lexicon
Induction
|
Great progress has been made in unsupervised bilingual lexicon induction
(UBLI) by aligning the source and target word embeddings independently trained
on monolingual corpora. The common assumption of most UBLI models is that the
embedding spaces of two languages are approximately isomorphic. Therefore the
performance is bound by the degree of isomorphism, especially on etymologically
and typologically distant languages. To address this problem, we propose a
transformation-based method to increase the isomorphism. Embeddings of two
languages are made to match with each other by rotating and scaling. The method
does not require any form of supervision and can be applied to any language
pair. On a benchmark data set of bilingual lexicon induction, our approach can
achieve competitive or superior performance compared to state-of-the-art
methods, with particularly strong results being found on distant languages.
| 2,021 |
Computation and Language
|
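A small sketch of the rotation-and-scaling step described in the UBLI abstract above, using an orthogonal Procrustes solution in numpy. Unlike the paper's fully unsupervised method, this toy version assumes a small set of aligned embedding pairs purely to demonstrate the transformation.

```python
import numpy as np

def rotate_and_scale(X: np.ndarray, Y: np.ndarray):
    """Fit an orthogonal rotation W and scalar s so that s * X @ W ~ Y.

    X, Y: (n, d) embeddings of aligned word pairs (row i of X corresponds
    to row i of Y). Returns (W, s).
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    W = U @ Vt                                        # closest orthogonal map
    s = np.trace((X @ W).T @ Y) / np.trace((X @ W).T @ (X @ W))
    return W, s

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 50))
true_rot, _ = np.linalg.qr(rng.normal(size=(50, 50)))
tgt = 1.7 * src @ true_rot + 0.01 * rng.normal(size=(100, 50))

W, s = rotate_and_scale(src, tgt)
err = np.linalg.norm(s * src @ W - tgt) / np.linalg.norm(tgt)
print(f"recovered scale {s:.2f}, relative alignment error {err:.3f}")
```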
SGPT: Semantic Graphs based Pre-training for Aspect-based Sentiment
Analysis
|
Previous studies show the effectiveness of pre-trained language models for
sentiment analysis. However, most of these studies ignore the importance of
sentiment information for pre-trained models. Therefore, we fully investigate
sentiment information for pre-trained models and enhance pre-trained language
models with semantic graphs for sentiment analysis. In particular, we introduce
Semantic Graphs based Pre-training (SGPT), which uses semantic graphs to obtain
synonym knowledge for aspect-sentiment pairs and similar aspect/sentiment
terms. We then optimize the pre-trained language model with the semantic
graphs. Empirical studies on several downstream tasks show that the proposed
model outperforms strong pre-trained baselines. The results also show the
effectiveness of the proposed semantic graphs for the pre-trained model.
| 2,021 |
Computation and Language
|
Read, Listen, and See: Leveraging Multimodal Information Helps Chinese
Spell Checking
|
Chinese Spell Checking (CSC) aims to detect and correct erroneous characters
for user-generated text in the Chinese language. Most Chinese spelling
errors involve the misuse of semantically, phonetically or graphically similar
characters. Previous attempts noticed this phenomenon and tried to use the
similarity for this task. However, these methods use either heuristics or
handcrafted confusion sets to predict the correct character. In this paper, we
propose a Chinese spell checker called ReaLiSe, by directly leveraging the
multimodal information of the Chinese characters. The ReaLiSe model tackles the
CSC task by (1) capturing the semantic, phonetic and graphic information of the
input characters, and (2) selectively mixing the information in these
modalities to predict the correct output. Experiments on the SIGHAN benchmarks
show that the proposed model outperforms strong baselines by a large margin.
| 2,021 |
Computation and Language
|
Unsupervised Pronoun Resolution via Masked Noun-Phrase Prediction
|
In this work, we propose Masked Noun-Phrase Prediction (MNPP), a pre-training
strategy to tackle pronoun resolution in a fully unsupervised setting. First,
we evaluate our pre-trained model on various pronoun resolution datasets
without any finetuning. Our method outperforms all previous unsupervised
methods on all datasets by large margins. Second, we proceed to a few-shot
setting where we finetune our pre-trained model on WinoGrande-S and XS
separately. Our method outperforms the RoBERTa-large baseline by large margins,
while achieving a higher AUC score after further finetuning on the
remaining three official splits of WinoGrande.
| 2,021 |
Computation and Language
|
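A minimal sketch of how a masked noun-phrase prediction training example could be constructed, following the abstract above. The noun phrases are listed by hand here; in practice they would come from a parser, and the masked example would be fed to a masked language model.

```python
import random

def make_mnpp_example(sentence, noun_phrases, mask_token="[MASK]"):
    """Turn a sentence into a masked-noun-phrase prediction example.

    One noun phrase is replaced by a single mask token; a model trained to
    recover it mimics choosing the right antecedent for a pronoun slot.
    """
    target = random.choice([np for np in noun_phrases if np in sentence])
    return sentence.replace(target, mask_token, 1), target

random.seed(0)
sent = "the trophy did not fit into the suitcase because it was too big"
# In practice noun phrases would come from a parser; listed by hand here.
masked, answer = make_mnpp_example(sent, ["the trophy", "the suitcase"])
print(masked)
print("gold span:", answer)
```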
SentEmojiBot: Empathising Conversations Generation with Emojis
|
The increasing use of dialogue agents makes it extremely desirable for them
to understand and acknowledge the implied emotions to respond like humans with
empathy. Chatbots using traditional techniques analyze emotions based on the
context and meaning of the text and lack the understanding of emotions
expressed through face. Emojis representing facial expressions present a
promising way to express emotions. However, no existing AI system utilizes
emojis for empathetic conversation generation. We propose SentEmojiBot, based
on the SentEmoji dataset, to generate empathetic conversations with a
combination of emojis and text. Evaluation metrics show that the BERT-based
model outperforms the vanilla transformer model. A user study indicates that
the dialogues generated by our model were understandable and that adding emojis
improved empathetic traits in conversations by 9.8%.
| 2,021 |
Computation and Language
|
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger
|
Backdoor attacks are a kind of insidious security threat against machine
learning models. After being injected with a backdoor in training, the victim
model will produce adversary-specified outputs on the inputs embedded with
predesigned triggers but behave properly on normal inputs during inference. As
a sort of emergent attack, backdoor attacks in natural language processing
(NLP) have been insufficiently investigated. As far as we know, almost all existing
textual backdoor attack methods insert additional contents into normal samples
as triggers, which causes the trigger-embedded samples to be detected and the
backdoor attacks to be blocked without much effort. In this paper, we propose
to use the syntactic structure as the trigger in textual backdoor attacks. We
conduct extensive experiments to demonstrate that the syntactic trigger-based
attack method can achieve comparable attack performance (almost 100% success
rate) to the insertion-based methods but possesses much higher invisibility and
stronger resistance to defenses. These results also reveal the significant
insidiousness and harmfulness of textual backdoor attacks. All the code and
data of this paper can be obtained at https://github.com/thunlp/HiddenKiller.
| 2,021 |
Computation and Language
|
Joint Optimization of Tokenization and Downstream Model
|
Since traditional tokenizers are isolated from a downstream task and model,
they cannot output an appropriate tokenization depending on the task and model,
although recent studies imply that the appropriate tokenization improves the
performance. In this paper, we propose a novel method to find an appropriate
tokenization to a given downstream model by jointly optimizing a tokenizer and
the model. The proposed method has no restriction except for using loss values
computed by the downstream model to train the tokenizer, and thus, we can apply
the proposed method to any NLP task. Moreover, the proposed method can be used
to explore the appropriate tokenization for an already trained model as
post-processing. Therefore, the proposed method is applicable to various
situations. We evaluated whether our method contributes to improving
performance on text classification in three languages and machine translation
in eight language pairs. Experimental results show that our proposed method
improves the performance by determining appropriate tokenizations.
| 2,021 |
Computation and Language
|
Neural Morphology Dataset and Models for Multiple Languages, from the
Large to the Endangered
|
We train neural models for morphological analysis, generation and
lemmatization for morphologically rich languages. We present a method for
automatically extracting a substantially large amount of training data from FSTs
for 22 languages, of which 17 are endangered. The neural models follow the
same tagset as the FSTs in order to make it possible to use them as fallback
systems together with the FSTs. The source code, models and datasets have been
released on Zenodo.
| 2,021 |
Computation and Language
|
The statistical advantage of automatic NLG metrics at the system level
|
Estimating the expected output quality of generation systems is central to
NLG. This paper qualifies the notion that automatic metrics are not as good as
humans in estimating system-level quality. Statistically, humans are unbiased,
high variance estimators, while metrics are biased, low variance estimators. We
compare these estimators by their error in pairwise prediction (which
generation system is better?) using the bootstrap. Measuring this error is
complicated: predictions are evaluated against noisy, human predicted labels
instead of the ground truth, and metric predictions fluctuate based on the test
sets they were calculated on. By applying a bias-variance-noise decomposition,
we adjust this error to a noise-free, infinite test set setting. Our analysis
compares the adjusted error of metrics to humans and a derived, perfect
segment-level annotator, both of which are unbiased estimators dependent on the
number of judgments collected. In MT, we identify two settings where metrics
outperform humans due to a statistical advantage in variance: when the number
of human judgments used is small, and when the quality difference between
compared systems is small. The data and code to reproduce our analyses are
available at https://github.com/johntzwei/metric-statistical-advantage .
| 2,021 |
Computation and Language
|
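The bootstrap comparison of estimators described above can be sketched on synthetic data: simulate an unbiased, high-variance "human" estimator and a biased, low-variance "metric" estimator, then count how often each misorders two systems across bootstrap resamples. The numbers below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 200

# Synthetic per-segment quality: system A is truly slightly better than B.
true_a = rng.normal(0.55, 0.1, n_items)
true_b = rng.normal(0.50, 0.1, n_items)

# Humans: unbiased but high-variance judgments; metric: biased but low-variance.
human_a = true_a + rng.normal(0, 0.3, n_items)
human_b = true_b + rng.normal(0, 0.3, n_items)
metric_a = true_a - 0.02 + rng.normal(0, 0.05, n_items)
metric_b = true_b + rng.normal(0, 0.05, n_items)

def pairwise_error(score_a, score_b, n_boot=2000):
    """Fraction of bootstrap resamples in which the estimator ranks B above A."""
    errors = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n_items, n_items)
        errors += score_a[idx].mean() <= score_b[idx].mean()
    return errors / n_boot

print("human pairwise error :", pairwise_error(human_a, human_b))
print("metric pairwise error:", pairwise_error(metric_a, metric_b))
```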
LMMS Reloaded: Transformer-based Sense Embeddings for Disambiguation and
Beyond
|
Distributional semantics based on neural approaches is a cornerstone of
Natural Language Processing, with surprising connections to human meaning
representation as well. Recent Transformer-based Language Models have proven
capable of producing contextual word representations that reliably convey
sense-specific information, simply as a product of self-supervision. Prior work
has shown that these contextual representations can be used to accurately
represent large sense inventories as sense embeddings, to the extent that a
distance-based solution to Word Sense Disambiguation (WSD) tasks outperforms
models trained specifically for the task. Still, there remains much to
understand on how to use these Neural Language Models (NLMs) to produce sense
embeddings that can better harness each NLM's meaning representation abilities.
In this work we introduce a more principled approach to leverage information
from all layers of NLMs, informed by a probing analysis on 14 NLM variants. We
also emphasize the versatility of these sense embeddings in contrast to
task-specific models, applying them on several sense-related tasks, besides
WSD, while demonstrating improved performance using our proposed approach over
prior work focused on sense embeddings. Finally, we discuss unexpected findings
regarding layer and model performance variations, and potential applications
for downstream tasks.
| 2,022 |
Computation and Language
|
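A minimal sketch of distance-based word sense disambiguation with sense embeddings, as discussed in the LMMS abstract above: assign the sense whose embedding is closest in cosine similarity to the contextual vector. The sense vectors here are random placeholders; the paper builds them by pooling contextual representations across NLM layers.

```python
import numpy as np

def disambiguate(context_vec, sense_embeddings):
    """Pick the sense whose embedding is closest (cosine) to the contextual vector."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(sense_embeddings, key=lambda s: cos(context_vec, sense_embeddings[s]))

rng = np.random.default_rng(0)
dim = 768  # typical NLM hidden size

# Sense embeddings would normally be built by pooling contextual embeddings of
# annotated (or gloss) examples across NLM layers; random placeholders here.
senses = {"bank%finance": rng.normal(size=dim), "bank%river": rng.normal(size=dim)}

# Contextual embedding of "bank" in "she sat on the bank of the river";
# nudged toward the river sense for the sake of the demo.
context = senses["bank%river"] + 0.3 * rng.normal(size=dim)
print(disambiguate(context, senses))
```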
Bilingual Mutual Information Based Adaptive Training for Neural Machine
Translation
|
Recently, token-level adaptive training has achieved promising improvement in
machine translation, where the cross-entropy loss function is adjusted by
assigning different training weights to different tokens, in order to alleviate
the token imbalance problem. However, previous approaches only use static word
frequency information in the target language without considering the source
language, which is insufficient for bilingual tasks like machine translation.
In this paper, we propose a novel bilingual mutual information (BMI) based
adaptive objective, which measures the learning difficulty for each target
token from the perspective of bilingualism, and assigns an adaptive weight
accordingly to improve token-level adaptive training. This method assigns
larger training weights to tokens with higher BMI, so that easy tokens are
updated with coarse granularity while difficult tokens are updated with fine
granularity. Experimental results on WMT14 English-to-German and WMT19
Chinese-to-English demonstrate the superiority of our approach compared with
the Transformer baseline and previous token-level adaptive training approaches.
Further analyses confirm that our method can improve the lexical diversity.
| 2,021 |
Computation and Language
|
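A toy sketch of the bilingual mutual information idea above: estimate pointwise mutual information between aligned source and target words from co-occurrence counts; in the adaptive objective, target tokens with higher BMI would receive larger training weights. The counts and the exact weight mapping here are illustrative, not the paper's formulation.

```python
import math
from collections import Counter

# Toy word-aligned parallel data: (source_word, target_word) pairs.
aligned_pairs = [("haus", "house"), ("haus", "house"), ("haus", "home"),
                 ("hund", "dog"), ("hund", "dog"), ("katze", "cat"),
                 ("garten", "house")]

pair_counts = Counter(aligned_pairs)
src_counts = Counter(s for s, _ in aligned_pairs)
tgt_counts = Counter(t for _, t in aligned_pairs)
total = len(aligned_pairs)

def bmi(src_word, tgt_word):
    """Pointwise mutual information between an aligned source/target word pair."""
    p_joint = pair_counts[(src_word, tgt_word)] / total
    p_src, p_tgt = src_counts[src_word] / total, tgt_counts[tgt_word] / total
    return math.log(p_joint / (p_src * p_tgt))

# Higher-BMI tokens would receive larger training weights in the adaptive loss.
for pair in [("haus", "house"), ("haus", "home")]:
    print(pair, round(bmi(*pair), 3))
```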
Deception detection in text and its relation to the cultural dimension
of individualism/collectivism
|
Deception detection is a task with many applications both in direct physical
and in computer-mediated communication. Our focus is on automatic deception
detection in text across cultures. We view culture through the prism of the
individualism/collectivism dimension and we approximate culture by using
country as a proxy. Having as a starting point recent conclusions drawn from
the social psychology discipline, we explore if differences in the usage of
specific linguistic features of deception across cultures can be confirmed and
attributed to norms in respect to the individualism/collectivism divide. We
also investigate if a universal feature set for cross-cultural text deception
detection tasks exists. We evaluate the predictive power of different feature
sets and approaches. We create culture/language-aware classifiers by
experimenting with a wide range of n-gram features based on phonology,
morphology and syntax, other linguistic cues like word and phoneme counts,
pronoun use, etc., and token embeddings. We conducted our experiments over 11
datasets from 5 languages, i.e., English, Dutch, Russian, Spanish and Romanian,
from six countries (US, Belgium, India, Russia, Mexico and Romania), and we
applied two classification methods, i.e., logistic regression and fine-tuned BERT
models. The results showed that our task is fairly complex and demanding. There
are indications that some linguistic cues of deception have cultural origins,
and are consistent in the context of diverse domains and dataset settings for
the same language. This is more evident for the usage of pronouns and the
expression of sentiment in deceptive language. The results of this work show
that the automatic deception detection across cultures and languages cannot be
handled in a unified manner, and that such approaches should be augmented with
knowledge about cultural differences and the domains of interest.
| 2,021 |
Computation and Language
|
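A minimal sketch of the kind of culture/language-aware baseline the deception study above describes: character n-gram features fed to a logistic regression classifier via scikit-learn. The texts and labels are invented placeholders; the study uses 11 datasets across 5 languages.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data; 1 = deceptive, 0 = truthful (illustrative only).
texts = ["I absolutely never touched the money, I swear",
         "I took the money and I am sorry",
         "We definitely were at home the whole evening",
         "We went out for an hour around nine"]
labels = [1, 0, 1, 0]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["I swear I never saw that document"]))
```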
Language Model as an Annotator: Exploring DialoGPT for Dialogue
Summarization
|
Current dialogue summarization systems usually encode the text with a number
of general semantic features (e.g., keywords and topics) to gain more powerful
dialogue modeling capabilities. However, these features are obtained via
open-domain toolkits that are dialog-agnostic or rely heavily on human
annotations. In this paper, we show how DialoGPT, a pre-trained model for
conversational response generation, can be developed as an unsupervised
dialogue annotator, which takes advantage of dialogue background knowledge
encoded in DialoGPT. We apply DialoGPT to label three types of features on two
dialogue summarization datasets, SAMSum and AMI, and employ pre-trained and non
pre-trained models as our summarizers. Experimental results show that our
proposed method can obtain remarkable improvements on both datasets and
achieves new state-of-the-art performance on the SAMSum dataset.
| 2,021 |
Computation and Language
|
Automatic Construction of Sememe Knowledge Bases via Dictionaries
|
A sememe is defined as the minimum semantic unit in linguistics. Sememe
knowledge bases (SKBs), which comprise words annotated with sememes, enable
sememes to be applied to natural language processing. So far a large body of
research has showcased the unique advantages and effectiveness of SKBs in
various tasks. However, most languages have no SKBs, and manual construction of
SKBs is time-consuming and labor-intensive. To tackle this challenge, we
propose a simple and fully automatic method of building an SKB via an existing
dictionary. We use this method to build an English SKB and a French SKB, and
conduct comprehensive evaluations from both intrinsic and extrinsic
perspectives. Experimental results demonstrate that the automatically built
English SKB is even superior to HowNet, the most widely used SKB that takes
decades to build manually. And both the English and French SKBs can bring
obvious performance enhancement in multiple downstream tasks. All the code and
data of this paper (except the copyrighted dictionaries) can be obtained at
https://github.com/thunlp/DictSKB.
| 2,021 |
Computation and Language
|
Prosodic segmentation for parsing spoken dialogue
|
Parsing spoken dialogue poses unique difficulties, including disfluencies and
unmarked boundaries between sentence-like units. Previous work has shown that
prosody can help with parsing disfluent speech (Tran et al. 2018), but has
assumed that the input to the parser is already segmented into sentence-like
units (SUs), which isn't true in existing speech applications. We investigate
how prosody affects a parser that receives an entire dialogue turn as input (a
turn-based model), instead of gold standard pre-segmented SUs (an SU-based
model). In experiments on the English Switchboard corpus, we find that when
using transcripts alone, the turn-based model has trouble segmenting SUs,
leading to worse parse performance than the SU-based model. However, prosody
can effectively replace gold standard SU boundaries: with prosody, the
turn-based model performs as well as the SU-based model (90.79 vs. 90.65 F1
score, respectively), despite performing two tasks (SU segmentation and
parsing) rather than one (parsing alone). Analysis shows that pitch and
intensity features are the most important for this corpus, since they allow the
model to correctly distinguish an SU boundary from a speech disfluency -- a
distinction that the model otherwise struggles to make.
| 2,021 |
Computation and Language
|
Zero-shot Medical Entity Retrieval without Annotation: Learning From
Rich Knowledge Graph Semantics
|
Medical entity retrieval is an integral component for understanding and
communicating information across various health systems. Current approaches
tend to work well on specific medical domains but generalize poorly to unseen
sub-specialties. This is of increasing concern under a public health crisis as
new medical conditions and drug treatments come to light frequently. Zero-shot
retrieval is challenging due to the high degree of ambiguity and variability in
medical corpora, making it difficult to build an accurate similarity measure
between mentions and concepts. Medical knowledge graphs (KG), however, contain
rich semantics including large numbers of synonyms as well as its curated
graphical structures. To take advantage of this valuable information, we
propose a suite of learning tasks designed for training efficient zero-shot
entity retrieval models. Without requiring any human annotation, our knowledge
graph enriched architecture significantly outperforms common zero-shot
benchmarks including BM25 and Clinical BERT with 7% to 30% higher recall across
multiple major medical ontologies, such as UMLS, SNOMED, and ICD-10.
| 2,021 |
Computation and Language
|
Multitask Learning for Grapheme-to-Phoneme Conversion of Anglicisms in
German Speech Recognition
|
Anglicisms are a challenge in German speech recognition. Due to their
irregular pronunciation compared to native German words, automatically
generated pronunciation dictionaries often include faulty phoneme sequences for
Anglicisms. In this work, we propose a multitask sequence-to-sequence approach
for grapheme-to-phoneme conversion to improve the phonetization of Anglicisms.
We extended a grapheme-to-phoneme model with a classifier to distinguish
Anglicisms from native German words. With this approach, the model learns to
generate pronunciations differently depending on the classification result. We
used our model to create supplementary Anglicism pronunciation dictionaries
that are added to an existing German speech recognition model. Tested on a
dedicated Anglicism evaluation set, we improved the recognition of Anglicisms
compared to a baseline model, reducing the word error rate by 1% and the
Anglicism error rate by 3%. We show that multitask learning can help solve
the challenge of Anglicisms in German speech recognition.
| 2,022 |
Computation and Language
|
Quantifying and Avoiding Unfair Qualification Labour in Crowdsourcing
|
Extensive work has argued in favour of paying crowd workers a wage that is at
least equivalent to the U.S. federal minimum wage. Meanwhile, research on
collecting high quality annotations suggests using a qualification that
requires workers to have previously completed a certain number of tasks. If
most requesters who pay fairly require workers to have completed a large number
of tasks already, then workers need to complete a substantial amount of poorly
paid work before they can earn a fair wage. Through analysis of worker
discussions and guidance for researchers, we estimate that workers spend
approximately 2.25 months of full time effort on poorly paid tasks in order to
get the qualifications needed for better paid tasks. We discuss alternatives to
this qualification and conduct a study of the correlation between
qualifications and work quality on two NLP tasks. We find that it is possible
to reduce the burden on workers while still collecting high quality data.
| 2,021 |
Computation and Language
|
TexRel: a Green Family of Datasets for Emergent Communications on
Relations
|
We propose a new dataset TexRel as a playground for the study of emergent
communications, in particular for relations. By comparison with other relations
datasets, TexRel provides rapid training and experimentation, whilst being
sufficiently large to avoid overfitting in the context of emergent
communications. By comparison with using symbolic inputs, TexRel provides a
more realistic alternative whilst remaining efficient and fast to learn. We
compare the performance of TexRel with a related relations dataset Shapeworld.
We provide baseline performance results on TexRel for sender architectures,
receiver architectures and end-to-end architectures. We examine the effect of
multitask learning in the context of shapes, colors and relations on accuracy,
topological similarity and clustering precision. We investigate whether
increasing the size of the latent meaning space improves metrics of
compositionality. We carry out a case study on using TexRel to reproduce the
results of an experiment in a recent paper that used symbolic inputs, using
our own non-symbolic inputs from TexRel instead.
| 2,021 |
Computation and Language
|
Trade the Event: Corporate Events Detection for News-Based Event-Driven
Trading
|
In this paper, we introduce an event-driven trading strategy that predicts
stock movements by detecting corporate events from news articles. Unlike
existing models that utilize textual features (e.g., bag-of-words) and
sentiments to directly make stock predictions, we consider corporate events as
the driving force behind stock movements and aim to profit from the temporary
stock mispricing that may occur when corporate events take place. The core of
the proposed strategy is a bi-level event detection model. The low-level event
detector identifies events' existences from each token, while the high-level
event detector incorporates the entire article's representation and the
low-level detected results to discover events at the article-level. We also
develop an elaborately annotated dataset, EDT, for corporate event detection and
a news-based stock prediction benchmark. EDT includes 9721 news articles with
token-level event labels as well as 303893 news articles with minute-level
timestamps and comprehensive stock price labels. Experiments on EDT indicate
that the proposed strategy outperforms all the baselines in winning rate,
excess returns over the market, and the average return on each transaction.
| 2,021 |
Computation and Language
|
BERTifying the Hidden Markov Model for Multi-Source Weakly Supervised
Named Entity Recognition
|
We study the problem of learning a named entity recognition (NER) tagger
using noisy labels from multiple weak supervision sources. Though cheap to
obtain, the labels from weak supervision sources are often incomplete,
inaccurate, and contradictory, making it difficult to learn an accurate NER
model. To address this challenge, we propose a conditional hidden Markov model
(CHMM), which can effectively infer true labels from multi-source noisy labels
in an unsupervised way. CHMM enhances the classic hidden Markov model with the
contextual representation power of pre-trained language models. Specifically,
CHMM learns token-wise transition and emission probabilities from the BERT
embeddings of the input tokens to infer the latent true labels from noisy
observations. We further refine CHMM with an alternate-training approach
(CHMM-ALT). It fine-tunes a BERT-NER model with the labels inferred by CHMM,
and this BERT-NER's output is regarded as an additional weak source to train
the CHMM in return. Experiments on four NER benchmarks from various domains
show that our method outperforms state-of-the-art weakly supervised NER models
by wide margins.
| 2,021 |
Computation and Language
|
Multi-turn Dialog System on Single-turn Data in Medical Domain
|
Recently there has been huge interest in dialog systems. This interest has
also extended to the medical domain, where researchers are focusing on building
medical dialog systems. This research focuses on multi-turn dialog systems
trained on multi-turn dialog data.
It is difficult to gather a huge amount of multi-turn conversational data in
the medical domain that is verified by professionals and can be trusted.
However, there are several frequently asked questions (FAQs) or single-turn QA
pairs that have information that is verified by the experts and can be used to
build a multi-turn dialog system.
| 2,021 |
Computation and Language
|
How Does Distilled Data Complexity Impact the Quality and Confidence of
Non-Autoregressive Machine Translation?
|
While non-autoregressive (NAR) models are showing great promise for machine
translation, their use is limited by their dependence on knowledge distillation
from autoregressive models. To address this issue, we seek to understand why
distillation is so effective. Prior work suggests that distilled training data
is less complex than manual translations. Based on experiments with the
Levenshtein Transformer and the Mask-Predict NAR models on the WMT14
German-English task, this paper shows that different types of complexity have
different impacts: while reducing lexical diversity and decreasing reordering
complexity both help NAR learn better alignment between source and target, and
thus improve translation quality, lexical diversity is the main reason why
distillation increases model confidence, which affects the calibration of
different NAR models differently.
| 2,021 |
Computation and Language
|
Directed Acyclic Graph Network for Conversational Emotion Recognition
|
The modeling of conversational context plays a vital role in emotion
recognition from conversation (ERC). In this paper, we put forward a novel idea
of encoding the utterances with a directed acyclic graph (DAG) to better model
the intrinsic structure within a conversation, and design a directed acyclic
neural network, namely DAG-ERC, to implement this idea. In an attempt to
combine the strengths of conventional graph-based neural models and
recurrence-based neural models, DAG-ERC provides a more intuitive way to model
the information flow between long-distance conversation background and nearby
context. Extensive experiments are conducted on four ERC benchmarks with
state-of-the-art models employed as baselines for comparison. The empirical
results demonstrate the superiority of this new model and confirm the
motivation of the directed acyclic graph architecture for ERC.
| 2,021 |
Computation and Language
|
Corpus-Level Evaluation for Event QA: The IndiaPoliceEvents Corpus
Covering the 2002 Gujarat Violence
|
Automated event extraction in social science applications often requires
corpus-level evaluations: for example, aggregating text predictions across
metadata and obtaining unbiased estimates of recall. We combine corpus-level evaluation
requirements with a real-world, social science setting and introduce the
IndiaPoliceEvents corpus--all 21,391 sentences from 1,257 English-language
Times of India articles about events in the state of Gujarat during March 2002.
Our trained annotators read and label every document for mentions of police
activity events, allowing for unbiased recall evaluations. In contrast to other
datasets with structured event representations, we gather annotations by posing
natural questions, and evaluate off-the-shelf models for three different tasks:
sentence classification, document ranking, and temporal aggregation of target
events. We present baseline results from zero-shot BERT-based models fine-tuned
on natural language inference and passage retrieval tasks. Our novel
corpus-level evaluations and annotation approach can guide creation of similar
social-science-oriented resources in the future.
| 2,021 |
Computation and Language
|
Selective Knowledge Distillation for Neural Machine Translation
|
Neural Machine Translation (NMT) models achieve state-of-the-art performance
on many translation benchmarks. As an active research field in NMT, knowledge
distillation is widely applied to enhance the model's performance by
transferring the teacher model's knowledge on each training sample. However,
previous work rarely discusses the different impacts and connections among
these samples, which serve as the medium for transferring teacher knowledge. In
this paper, we design a novel protocol that can effectively analyze the
different impacts of samples by comparing various sample partitions. Based on
the above protocol, we conduct extensive experiments and find that more teacher
knowledge is not always better: knowledge from specific samples may even
hurt the overall performance of knowledge distillation. Finally, to address these
issues, we propose two simple yet effective strategies, i.e., batch-level and
global-level selections, to pick suitable samples for distillation. We evaluate
our approaches on two large-scale machine translation tasks, WMT'14
English->German and WMT'19 Chinese->English. Experimental results show that our
approaches yield up to +1.28 and +0.89 BLEU points improvements over the
Transformer baseline, respectively.
| 2,021 |
Computation and Language
|
Improve Query Focused Abstractive Summarization by Incorporating Answer
Relevance
|
Query focused summarization (QFS) models aim to generate summaries from
source documents that can answer the given query. Most previous work on QFS
only considers the query relevance criterion when producing the summary.
However, studying the effect of answer relevance in the summary generating
process is also important. In this paper, we propose QFS-BART, a model that
incorporates the explicit answer relevance of the source documents given the
query via a question answering model, to generate coherent and answer-related
summaries. Furthermore, our model can take advantage of large pre-trained
models which improve the summarization performance significantly. Empirical
results on the Debatepedia dataset show that the proposed model achieves the
new state-of-the-art performance.
| 2,021 |
Computation and Language
|
Investigating label suggestions for opinion mining in German Covid-19
social media
|
This work investigates the use of interactively updated label suggestions to
improve upon the efficiency of gathering annotations on the task of opinion
mining in German Covid-19 social media data. We develop guidelines to conduct a
controlled annotation study with social science students and find that
suggestions from a model trained on a small, expert-annotated dataset already
lead to a substantial improvement - in terms of inter-annotator agreement (+.14
Fleiss' $\kappa$) and annotation quality - compared to students who do not
receive any label suggestions. We further find that label suggestions from
interactively trained models do not lead to an improvement over suggestions
from a static model. Nonetheless, our analysis of suggestion bias shows that
annotators remain capable of reflecting upon the suggested label in general.
Finally, we confirm the quality of the annotated data in transfer learning
experiments between different annotator groups. To facilitate further research
in opinion mining on social media data, we release our collected data
consisting of 200 expert and 2,785 student annotations.
| 2,021 |
Computation and Language
|
ProtAugment: Unsupervised diverse short-texts paraphrasing for intent
detection meta-learning
|
Recent research considers few-shot intent detection as a meta-learning
problem: the model is learning to learn from a consecutive set of small tasks
named episodes. In this work, we propose ProtAugment, a meta-learning algorithm
for short texts classification (the intent detection task). ProtAugment is a
novel extension of Prototypical Networks that limits overfitting on the bias
introduced by the few-shot classification objective at each episode. It relies
on diverse paraphrasing: a conditional language model is first fine-tuned for
paraphrasing, and diversity is later introduced at the decoding stage at each
meta-learning episode. The diverse paraphrasing is unsupervised as it is
applied to unlabelled data, and then fed into the Prototypical Network
training objective as a consistency loss. ProtAugment is the state-of-the-art
method for intent detection meta-learning, at no extra labeling efforts and
without the need to fine-tune a conditional language model on a given
application domain.
| 2,021 |
Computation and Language
|
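A compact sketch of a prototypical-network episode with an added consistency term, in the spirit of the ProtAugment abstract above. The "sentence embeddings" and the paraphrase embeddings are random placeholders; in the paper they come from an encoder and a fine-tuned paraphrasing language model.

```python
import torch
import torch.nn.functional as F

def prototypical_loss(support, support_labels, query, query_labels):
    """One meta-learning episode: classify queries by distance to class prototypes."""
    classes = support_labels.unique()
    prototypes = torch.stack([support[support_labels == c].mean(0) for c in classes])
    logits = -torch.cdist(query, prototypes)        # closer prototype = higher score
    targets = torch.tensor([(classes == y).nonzero().item() for y in query_labels])
    return F.cross_entropy(logits, targets)

def consistency_loss(query_embeddings, paraphrase_embeddings):
    """Pull embeddings of unlabeled utterances toward their diverse paraphrases."""
    return F.mse_loss(query_embeddings, paraphrase_embeddings)

# Toy episode with random "sentence embeddings" standing in for encoder outputs.
torch.manual_seed(0)
support, query = torch.randn(10, 64), torch.randn(6, 64)
support_labels = torch.tensor([0] * 5 + [1] * 5)
query_labels = torch.tensor([0, 0, 0, 1, 1, 1])
paraphrases = query + 0.1 * torch.randn(6, 64)      # stand-in for paraphrase encodings

loss = prototypical_loss(support, support_labels, query, query_labels) \
       + 0.5 * consistency_loss(query, paraphrases)
print(float(loss))
```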
Adaptive Nearest Neighbor Machine Translation
|
kNN-MT, recently proposed by Khandelwal et al. (2020a), successfully combines a
pre-trained neural machine translation (NMT) model with token-level
k-nearest-neighbor (kNN) retrieval to improve translation accuracy.
However, the traditional kNN algorithm used in kNN-MT simply retrieves the same
number of nearest neighbors for each target token, which may cause prediction
errors when the retrieved neighbors include noise. In this paper, we propose
Adaptive kNN-MT to dynamically determine the value of k for each target token.
We achieve this by introducing a light-weight Meta-k Network, which can be
efficiently trained with only a few training samples. On four benchmark machine
translation datasets, we demonstrate that the proposed method is able to
effectively filter out the noises in retrieval results and significantly
outperforms the vanilla kNN-MT model. Even more noteworthy is that the Meta-k
Network learned on one domain could be directly applied to other domains and
obtain consistent improvements, illustrating the generality of our method. Our
implementation is open-sourced at https://github.com/zhengxxn/adaptive-knn-mt.
| 2,021 |
Computation and Language
|
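A simplified sketch of the kNN-MT interpolation that Adaptive kNN-MT builds on, per the abstract above: retrieve the k nearest datastore entries for the decoder state, turn them into a distribution over the vocabulary, and interpolate with the NMT distribution. The fixed interpolation weight below stands in for the paper's learned Meta-k Network.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, dim, store = 100, 32, 500

# Datastore of decoder hidden states (keys) and the target tokens they produced.
keys = torch.randn(store, dim)
values = torch.randint(0, vocab, (store,))

def knn_distribution(query, k=8, temperature=10.0):
    """Turn the k nearest datastore entries into a distribution over the vocabulary."""
    dists = torch.cdist(query.unsqueeze(0), keys).squeeze(0)       # (store,)
    nn_dists, nn_idx = dists.topk(k, largest=False)
    weights = F.softmax(-nn_dists / temperature, dim=0)
    p_knn = torch.zeros(vocab)
    p_knn.scatter_add_(0, values[nn_idx], weights)
    return p_knn

query = torch.randn(dim)                  # decoder hidden state at this step
p_nmt = F.softmax(torch.randn(vocab), 0)  # NMT model's prediction (placeholder)

# In Adaptive kNN-MT a small Meta-k Network predicts how much (and how many
# neighbors) to trust per token; a fixed interpolation weight stands in here.
lam = 0.4
p_final = lam * knn_distribution(query) + (1 - lam) * p_nmt
print(int(p_final.argmax()))
```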
Put your money where your mouth is: Using deep learning to identify
consumer tribes from word usage
|
The Internet and social media offer firms novel ways of managing their marketing
strategy and gaining a competitive advantage. The groups of users expressing
themselves on the Internet about a particular topic, product, or brand are
frequently called a virtual tribe or E-tribe. However, there are no automatic
tools for identifying and studying the characteristics of these virtual tribes.
Towards this aim, this paper presents Tribefinder, a system to reveal Twitter
users' tribal affiliations, by analyzing their tweets and language use. To show
the potential of this instrument, we provide an example considering three
specific tribal macro-categories: alternative realities, lifestyle, and
recreation. In addition, we discuss the different characteristics of each
identified tribe, in terms of use of language and social interaction metrics.
Tribefinder illustrates the importance of adopting a new lens for studying
virtual tribes, which is crucial for firms to properly design their marketing
strategy, and for scholars to extend prior marketing research.
| 2,020 |
Computation and Language
|