Titles | Abstracts | Years | Categories
---|---|---|---|
How Does Selective Mechanism Improve Self-Attention Networks? | Self-attention networks (SANs) with a selective mechanism have produced
substantial improvements in various NLP tasks by concentrating on a subset of
input words. However, the underlying reasons for their strong performance have
not been well explained. In this paper, we bridge the gap by assessing the
strengths of selective SANs (SSANs), which are implemented with a flexible and
universal Gumbel-Softmax. Experimental results on several representative NLP
tasks, including natural language inference, semantic role labelling, and
machine translation, show that SSANs consistently outperform the standard SANs.
Through well-designed probing experiments, we empirically validate that the
improvement of SSANs can be attributed in part to mitigating two commonly-cited
weaknesses of SANs: word order encoding and structure modeling. Specifically,
the selective mechanism improves SANs by paying more attention to content words
that contribute to the meaning of the sentence. The code and data are released
at https://github.com/xwgeng/SSAN.
| 2020 | Computation and Language |
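To make the selective mechanism concrete, here is a minimal PyTorch sketch in which a hard two-way Gumbel-Softmax gate per token decides which inputs self-attention may attend to. The gating parameterization and the masking scheme are illustrative assumptions, not the exact SSAN architecture from the paper.

```python
import torch
import torch.nn.functional as F

def selective_self_attention(x, w_q, w_k, w_v, w_gate, tau=1.0):
    """Toy selective self-attention: a hard Gumbel-Softmax gate picks the subset
    of input tokens that attention is allowed to concentrate on."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v                        # each (batch, seq, dim)
    gate_logits = x @ w_gate                                   # (batch, seq, 2): drop vs keep
    keep = F.gumbel_softmax(gate_logits, tau=tau, hard=True)[..., 1]   # (batch, seq) in {0, 1}
    scores = q @ k.transpose(-1, -2) / k.shape[-1] ** 0.5      # (batch, seq, seq)
    # Large negative (rather than -inf) keeps the softmax finite even if a row drops everything.
    scores = scores.masked_fill(keep.unsqueeze(1) == 0, -1e9)
    return F.softmax(scores, dim=-1) @ v

torch.manual_seed(0)
dim = 8
x = torch.randn(2, 5, dim)
w_q, w_k, w_v = torch.randn(dim, dim), torch.randn(dim, dim), torch.randn(dim, dim)
w_gate = torch.randn(dim, 2)
print(selective_self_attention(x, w_q, w_k, w_v, w_gate).shape)  # torch.Size([2, 5, 8])
```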
Encoder-Decoder Models Can Benefit from Pre-trained Masked Language
Models in Grammatical Error Correction | This paper investigates how to effectively incorporate a pre-trained masked
language model (MLM), such as BERT, into an encoder-decoder (EncDec) model for
grammatical error correction (GEC). The answer to this question is not as
straightforward as one might expect because the previous common methods for
incorporating an MLM into an EncDec model have potential drawbacks when applied
to GEC. For example, the distribution of the inputs to a GEC model can be
considerably different (erroneous, clumsy, etc.) from that of the corpora used
for pre-training MLMs; however, this issue is not addressed in the previous
methods. Our experiments show that our proposed method, where we first
fine-tune an MLM with a given GEC corpus and then use the output of the
fine-tuned MLM as additional features in the GEC model, maximizes the benefit
of the MLM. The best-performing model achieves state-of-the-art performance on
the BEA-2019 and CoNLL-2014 benchmarks. Our code is publicly available at:
https://github.com/kanekomasahiro/bert-gec.
| 2020 | Computation and Language |
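A rough sketch of the feature-based incorporation described above, assuming the fine-tuned MLM's hidden states are simply concatenated to the EncDec encoder states; the checkpoint name stands in for the GEC-fine-tuned MLM, and the alignment of the two tokenizations is glossed over.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# "bert-base-cased" stands in for the MLM after fine-tuning on a GEC corpus.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModel.from_pretrained("bert-base-cased")

def fuse_mlm_features(source_sentence, encdec_hidden_size=512):
    """Concatenate the MLM's hidden states to the EncDec encoder states,
    giving the GEC decoder additional features for the (possibly erroneous) input."""
    inputs = tokenizer(source_sentence, return_tensors="pt")
    with torch.no_grad():
        mlm_states = mlm(**inputs).last_hidden_state            # (1, T, 768)
    # Stand-in for the GEC model's own encoder output over the same wordpieces.
    encdec_states = torch.randn(1, mlm_states.shape[1], encdec_hidden_size)
    return torch.cat([encdec_states, mlm_states], dim=-1)       # (1, T, 512 + 768)

print(fuse_mlm_features("She go to school yesterday .").shape)
```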
An Accurate Model for Predicting the (Graded) Effect of Context in Word
Similarity Based on Bert | Natural Language Processing (NLP) has been widely used in semantic
analysis in recent years. Our paper mainly discusses a methodology to analyze
the effect that context has on human perception of similar words, which is the
third task of SemEval 2020. We apply several methods to calculate the
distance between two embedding vectors generated by Bidirectional Encoder
Representations from Transformers (BERT). Our team will_go won first place in
the Finnish language track of subtask 1 and second place in the English track
of subtask 1.
| 2020 | Computation and Language |
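A hedged sketch of one of the distance computations: cosine similarity between the contextual embeddings of the same word in two contexts, using an off-the-shelf BERT. The checkpoint, the choice of the last layer, and mean pooling over wordpieces are assumptions for illustration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    """Mean of the last-layer hidden states of the wordpieces that make up `word`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]               # (num_wordpieces, 768)
    piece_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    start = next(i for i in range(len(ids)) if ids[i:i + len(piece_ids)] == piece_ids)
    return hidden[start:start + len(piece_ids)].mean(dim=0)

v1 = word_vector("He sat on the bank of the river.", "bank")
v2 = word_vector("She deposited money at the bank.", "bank")
# Lower similarity indicates a larger (graded) effect of context on the word.
print(float(torch.cosine_similarity(v1, v2, dim=0)))
```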
A Position Aware Decay Weighted Network for Aspect based Sentiment
Analysis | Aspect Based Sentiment Analysis (ABSA) is the task of identifying sentiment
polarity of a text given another text segment or aspect. In ABSA, a text can
have multiple sentiments depending upon each aspect. Aspect Term Sentiment
Analysis (ATSA) is a subtask of ABSA, in which aspect terms are contained
within the given sentence. Most of the existing approaches proposed for ATSA,
incorporate aspect information through a different subnetwork thereby
overlooking the advantage of aspect terms' presence within the sentence. In
this paper, we propose a model that leverages the positional information of the
aspect. The proposed model introduces a decay mechanism based on position. This
decay function weights the contribution of input words to the prediction: the
contribution of a word declines the farther it is positioned from the aspect
terms in the sentence. The performance is measured on two standard datasets
from SemEval 2014 Task 4. In comparison with recent architectures, the
effectiveness of the proposed model is demonstrated.
| 2020 | Computation and Language |
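A minimal sketch of a position-based decay weighting; the linear decay form and the rate alpha are illustrative assumptions rather than the paper's exact function.

```python
import numpy as np

def decay_weights(num_tokens, aspect_positions, alpha=0.1):
    """weight_i = max(0, 1 - alpha * distance of token i to the nearest aspect term)."""
    positions = np.arange(num_tokens)
    dist = np.abs(positions[:, None] - np.asarray(aspect_positions)[None, :]).min(axis=1)
    return np.maximum(0.0, 1.0 - alpha * dist)

tokens = "the battery life is great but the screen is dim".split()
aspect_positions = [1, 2]  # "battery life"
for tok, w in zip(tokens, decay_weights(len(tokens), aspect_positions)):
    print(f"{tok:8s}{w:.1f}")
# These weights would scale the token representations fed to the sentiment classifier.
```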
A Two-Stage Masked LM Method for Term Set Expansion | We tackle the task of Term Set Expansion (TSE): given a small seed set of
example terms from a semantic class, finding more members of that class. The
task is of great practical utility, and also of theoretical utility as it
requires generalization from few examples. Previous approaches to the TSE task
can be characterized as either distributional or pattern-based. We harness the
power of neural masked language models (MLM) and propose a novel TSE algorithm,
which combines the pattern-based and distributional approaches. Due to the
small size of the seed set, fine-tuning methods are not effective, calling for
more creative use of the MLM. The gist of the idea is to use the MLM to first
mine for informative patterns with respect to the seed set, and then to obtain
more members of the seed class by generalizing these patterns. Our method
outperforms state-of-the-art TSE algorithms. Implementation is available at:
https://github.com/guykush/TermSetExpansion-MPB/
| 2020 | Computation and Language |
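A rough sketch of the two stages with a HuggingFace fill-mask pipeline: keep the pattern whose mask predictions best recover the seed terms, then expand the class with that pattern's most frequent fillers. The candidate patterns and the scoring are simplified illustrations of the idea, not the paper's algorithm.

```python
from collections import Counter
from transformers import pipeline

unmask = pipeline("fill-mask", model="bert-base-uncased")
seeds = ["paris", "london", "berlin"]
patterns = ["cities such as {} and [MASK].", "companies such as {} and [MASK]."]

def pattern_score(pattern):
    """Stage 1: a pattern is informative if its mask predictions recover seed terms."""
    preds = [p["token_str"] for seed in seeds for p in unmask(pattern.format(seed))]
    return sum(pred in seeds for pred in preds)

best_pattern = max(patterns, key=pattern_score)

# Stage 2: expand the seed class with the most frequent mask fillers of the best pattern.
votes = Counter(p["token_str"] for seed in seeds for p in unmask(best_pattern.format(seed)))
print([w for w, _ in votes.most_common(10) if w not in seeds])
```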
Neural Data-to-Text Generation via Jointly Learning the Segmentation and
Correspondence | The neural attention model has achieved great success in data-to-text
generation tasks. Though usually excelling at producing fluent text, it suffers
from the problems of missing information, repetition and "hallucination". Due to
the black-box nature of the neural attention architecture, avoiding these
problems in a systematic way is non-trivial. To address this concern, we
propose to explicitly segment target text into fragment units and align them
with their data correspondences. The segmentation and correspondence are
jointly learned as latent variables without any human annotations. We further
impose a soft statistical constraint to regularize the segmental granularity.
The resulting architecture maintains the same expressive power as neural
attention models, while being able to generate fully interpretable outputs with
several times less computational cost. On both E2E and WebNLG benchmarks, we
show the proposed model consistently outperforms its neural attention
counterparts.
| 2020 | Computation and Language |
Simplifying Paragraph-level Question Generation via Transformer Language
Models | Question generation (QG) is a natural language generation task where a model
is trained to ask questions corresponding to some input text. Most recent
approaches frame QG as a sequence-to-sequence problem and rely on additional
features and mechanisms to increase performance; however, these often increase
model complexity, and can rely on auxiliary data unavailable in practical use.
A single Transformer-based unidirectional language model leveraging transfer
learning can be used to produce high quality questions while disposing of
additional task-specific complexity. Our QG model, finetuned from GPT-2 Small,
outperforms several paragraph-level QG baselines on the SQuAD dataset by 0.95
METEOR points. Human evaluators rated questions as easy to answer, relevant to
their context paragraph, and corresponding well to natural human speech. Also
introduced is a new set of baseline scores on the RACE dataset, which has not
previously been used for QG tasks. Further experimentation with varying model
capacities and datasets with non-identification type questions is recommended
in order to further verify the robustness of pretrained Transformer-based LMs
as question generators.
| 2021 | Computation and Language |
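A hedged sketch of paragraph-level question generation with a unidirectional Transformer LM; the base `gpt2` checkpoint and the separator string stand in for the paper's GPT-2 Small fine-tuned on SQuAD-style (paragraph, question) pairs.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# "gpt2" is a stand-in; the QG model would be this architecture fine-tuned for QG.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

paragraph = ("The Amazon rainforest covers much of the Amazon basin of South America. "
             "The basin encompasses seven million square kilometres.")
prompt = paragraph + " [QUESTION] "   # assumed separator used during fine-tuning

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
generated = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```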
Emergence of Syntax Needs Minimal Supervision | This paper is a theoretical contribution to the debate on the learnability of
syntax from a corpus without explicit syntax-specific guidance. Our approach
originates in the observable structure of a corpus, which we use to define and
isolate grammaticality (syntactic information) and meaning/pragmatics
information. We describe the formal characteristics of an autonomous syntax and
show that it becomes possible to search for syntax-based lexical categories
with a simple optimization process, without any prior hypothesis on the form of
the model.
| 2020 | Computation and Language |
Let Me Choose: From Verbal Context to Font Selection | In this paper, we aim to learn associations between visual attributes of
fonts and the verbal context of the texts they are typically applied to.
Compared to related work leveraging the surrounding visual context, we choose
to focus only on the input text as this can enable new applications for which
the text is the only visual element in the document. We introduce a new
dataset, containing examples of different topics in social media posts and ads,
labeled through crowd-sourcing. Due to the subjective nature of the task,
multiple fonts might be perceived as acceptable for an input text, which makes
this problem challenging. To this end, we investigate different end-to-end
models to learn label distributions on crowd-sourced data and capture
inter-subjectivity across all annotations.
| 2020 | Computation and Language |
Out of the Echo Chamber: Detecting Countering Debate Speeches | An educated and informed consumption of media content has become a challenge
in modern times. With the shift from traditional news outlets to social media
and similar venues, a major concern is that readers are becoming encapsulated
in "echo chambers" and may fall prey to fake news and disinformation, lacking
easy access to dissenting views. We suggest a novel task aiming to alleviate
some of these concerns -- that of detecting articles that most effectively
counter the arguments -- and not just the stance -- made in a given text. We
study this problem in the context of debate speeches. Given such a speech, we
aim to identify, from among a set of speeches on the same topic and with an
opposing stance, the ones that directly counter it. We provide a large dataset
of 3,685 such speeches (in English), annotated for this relation, which
hopefully would be of general interest to the NLP community. We explore several
algorithms addressing this task, and while some are successful, all fall short
of expert human performance, suggesting room for further research. All data
collected during this work is freely available for research.
| 2020 | Computation and Language |
Correcting the Autocorrect: Context-Aware Typographical Error Correction
via Training Data Augmentation | In this paper, we explore the artificial generation of typographical errors
based on real-world statistics. We first draw on a small set of annotated data
to compute spelling error statistics. These are then invoked to introduce
errors into substantially larger corpora. The generation methodology allows us
to generate particularly challenging errors that require context-aware error
detection. We use it to create a set of English language error detection and
correction datasets. Finally, we examine the effectiveness of machine learning
models for detecting and correcting errors based on this data. The datasets are
available at http://typo.nlproc.org
| 2020 | Computation and Language |
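A minimal sketch of statistics-driven error injection into clean text; the substitution table and the error rates below are illustrative stand-ins for the statistics estimated from the annotated corpus.

```python
import random

# Illustrative keyboard-adjacency substitutions; real tables would be estimated
# from the small annotated corpus of spelling errors.
SUBSTITUTIONS = {"a": "sq", "e": "rw", "i": "ou", "t": "ry", "n": "mb"}

def corrupt(text, sub_rate=0.05, del_rate=0.02, seed=0):
    """Introduce substitution and deletion errors at corpus-derived rates."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        r = rng.random()
        if ch in SUBSTITUTIONS and r < sub_rate:
            out.append(rng.choice(SUBSTITUTIONS[ch]))    # substitution error
        elif r < sub_rate + del_rate:
            continue                                     # deletion error
        else:
            out.append(ch)
    return "".join(out)

print(corrupt("the quick brown fox jumps over the lazy dog", sub_rate=0.2, del_rate=0.05))
```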
Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven
Cloze Reward | Sequence-to-sequence models for abstractive summarization have been studied
extensively, yet the generated summaries commonly suffer from fabricated
content, and are often found to be near-extractive. We argue that, to address
these issues, the summarizer should acquire semantic interpretation over input,
e.g., via structured representation, to allow the generation of more
informative summaries. In this paper, we present ASGARD, a novel framework for
Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.
We propose the use of dual encoders---a sequential document encoder and a
graph-structured encoder---to maintain the global context and local
characteristics of entities, complementing each other. We further design a
reward based on a multiple choice cloze test to drive the model to better
capture entity interactions. Results show that our models produce significantly
higher ROUGE scores than a variant without a knowledge graph as input on both the New
York Times and CNN/Daily Mail datasets. We also obtain better or comparable
performance compared to systems that are fine-tuned from large pretrained
language models. Human judges further rate our model outputs as more
informative and containing fewer unfaithful errors.
| 2020 | Computation and Language |
Similarity Analysis of Contextual Word Representation Models | This paper investigates contextual word representation models from the lens
of similarity analysis. Given a collection of trained models, we measure the
similarity of their internal representations and attention. Critically, these
models come from vastly different architectures. We use existing and novel
similarity measures that aim to gauge the level of localization of information
in the deep models, and facilitate the investigation of which design factors
affect model similarity, without requiring any external linguistic annotation.
The analysis reveals that models within the same family are more similar to one
another, as may be expected. Surprisingly, different architectures have rather
similar representations, but different individual neurons. We also observed
differences in information localization in lower and higher layers and found
that higher layers are more affected by fine-tuning on downstream tasks.
| 2020 | Computation and Language |
Tailoring and Evaluating the Wikipedia for in-Domain Comparable Corpora
Extraction | We propose an automatic language-independent graph-based method to build
à-la-carte article collections on user-defined domains from the Wikipedia.
The core model is based on the exploration of the encyclopaedia's category
graph and can produce both monolingual and multilingual comparable collections.
We run thorough experiments to assess the quality of the obtained corpora in 10
languages and 743 domains. According to an extensive manual evaluation, our
graph-based model outperforms a retrieval-based approach and reaches an average
precision of 84% on in-domain articles. As manual evaluations are costly, we
introduce the concept of "domainness" and design several automatic metrics to
account for the quality of the collections. Our best metric for domainness
shows a strong correlation with the human-judged precision, representing a
reasonable automatic alternative to assess the quality of domain-specific
corpora. We release the WikiTailor toolkit with the implementation of the
extraction methods, the evaluation measures and several utilities. WikiTailor
makes obtaining multilingual in-domain data from the Wikipedia easy.
| 2020 | Computation and Language |
Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM
Language Models | LSTM-based recurrent neural networks are the state-of-the-art for many
natural language processing (NLP) tasks. Despite their performance, it is
unclear whether, or how, LSTMs learn structural features of natural languages
such as subject-verb number agreement in English. Lacking this understanding,
the generality of LSTM performance on this task and their suitability for
related tasks remains uncertain. Further, errors cannot be properly attributed
to a lack of structural capability, training data omissions, or other
exceptional faults. We introduce *influence paths*, a causal account of
structural properties as carried by paths across gates and neurons of a
recurrent neural network. The approach refines the notion of influence (the
subject's grammatical number has influence on the grammatical number of the
subsequent verb) into a set of gate or neuron-level paths. The set localizes
and segments the concept (e.g., subject-verb agreement), its constituent
elements (e.g., the subject), and related or interfering elements (e.g.,
attractors). We exemplify the methodology on a widely-studied multi-layer LSTM
language model, demonstrating its accounting for subject-verb number agreement.
The results offer both a finer and a more complete view of an LSTM's handling
of this structural aspect of the English language than prior results based on
diagnostic classifiers and ablation.
| 2020 | Computation and Language |
On the Limitations of Cross-lingual Encoders as Exposed by
Reference-Free Machine Translation Evaluation | Evaluation of cross-lingual encoders is usually performed either via
zero-shot cross-lingual transfer in supervised downstream tasks or via
unsupervised cross-lingual textual similarity. In this paper, we concern
ourselves with reference-free machine translation (MT) evaluation where we
directly compare source texts to (sometimes low-quality) system translations,
which represents a natural adversarial setup for multilingual encoders.
Reference-free evaluation holds the promise of web-scale comparison of MT
systems. We systematically investigate a range of metrics based on
state-of-the-art cross-lingual semantic representations obtained with
pretrained M-BERT and LASER. We find that they perform poorly as semantic
encoders for reference-free MT evaluation and identify their two key
limitations, namely, (a) a semantic mismatch between representations of mutual
translations and, more prominently, (b) the inability to punish
"translationese", i.e., low-quality literal translations. We propose two
partial remedies: (1) post-hoc re-alignment of the vector spaces and (2)
coupling of semantic-similarity based metrics with target-side language
modeling. In segment-level MT evaluation, our best metric surpasses
reference-based BLEU by 5.7 correlation points.
| 2020 | Computation and Language |
On the Relationships Between the Grammatical Genders of Inanimate Nouns
and Their Co-Occurring Adjectives and Verbs | We use large-scale corpora in six different gendered languages, along with
tools from NLP and information theory, to test whether there is a relationship
between the grammatical genders of inanimate nouns and the adjectives used to
describe those nouns. For all six languages, we find that there is a
statistically significant relationship. We also find that there are
statistically significant relationships between the grammatical genders of
inanimate nouns and the verbs that take those nouns as direct objects, as
indirect objects, and as subjects. We defer a deeper investigation of these
relationships for future work.
| 2020 | Computation and Language |
Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop
Question Answering | Evidence retrieval is a critical stage of question answering (QA), necessary
not only to improve performance, but also to explain the decisions of the
corresponding QA method. We introduce a simple, fast, and unsupervised
iterative evidence retrieval method, which relies on three ideas: (a) an
unsupervised alignment approach to soft-align questions and answers with
justification sentences using only GloVe embeddings, (b) an iterative process
that reformulates queries focusing on terms that are not covered by existing
justifications, and (c) a stopping criterion that terminates retrieval when
the terms in the given question and candidate answers are covered by the
retrieved justifications. Despite its simplicity, our approach outperforms all
the previous methods (including supervised methods) on the evidence selection
task on two datasets: MultiRC and QASC. When these evidence sentences are fed
into a RoBERTa answer classification component, we achieve state-of-the-art QA
performance on these two datasets.
| 2020 | Computation and Language |
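A compact sketch of the three ideas: a soft-alignment score, query reformulation on uncovered terms, and a coverage-based stopping criterion. Toy random vectors stand in for GloVe, and the coverage test is simplified.

```python
import numpy as np

# Toy vectors standing in for pretrained GloVe embeddings.
rng = np.random.default_rng(0)
vocab = "rabbit female ill pregnant babies population forest grass".split()
emb = {w: rng.normal(size=8) for w in vocab}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def align_score(query_terms, sentence_terms):
    """Sum over query terms of the max similarity to any sentence term (soft alignment)."""
    sent = [emb[t] for t in sentence_terms if t in emb]
    return sum(max(cos(emb[q], s) for s in sent) for q in query_terms if q in emb)

def retrieve(query_terms, sentences, max_steps=3):
    chosen, query = [], list(query_terms)
    for _ in range(max_steps):
        best = max((s for s in sentences if s not in chosen),
                   key=lambda s: align_score(query, s.split()))
        chosen.append(best)
        covered = {w for s in chosen for w in s.split()}
        query = [q for q in query if q not in covered]   # reformulate on uncovered terms
        if not query:                                    # stop once everything is covered
            break
    return chosen

sentences = ["the female rabbit is ill",
             "rabbits like to eat grass",
             "the population lives in the forest"]
print(retrieve(["female", "rabbit", "population"], sentences))
```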
Robust Encodings: A Framework for Combating Adversarial Typos | Despite excellent performance on many tasks, NLP systems are easily fooled by
small adversarial perturbations of inputs. Existing procedures to defend
against such perturbations are either (i) heuristic in nature and susceptible
to stronger attacks or (ii) provide guaranteed robustness to worst-case
attacks, but are incompatible with state-of-the-art models like BERT. In this
work, we introduce robust encodings (RobEn): a simple framework that confers
guaranteed robustness, without making compromises on model architecture. The
core component of RobEn is an encoding function, which maps sentences to a
smaller, discrete space of encodings. Systems using these encodings as a
bottleneck confer guaranteed robustness with standard training, and the same
encodings can be used across multiple tasks. We identify two desiderata to
construct robust encoding functions: perturbations of a sentence should map to
a small set of encodings (stability), and models using encodings should still
perform well (fidelity). We instantiate RobEn to defend against a large family
of adversarial typos. Across six tasks from GLUE, our instantiation of RobEn
paired with BERT achieves an average robust accuracy of 71.3% against all
adversarial typos in the family considered, while previous work using a
typo-corrector achieves only 35.3% accuracy against a simple greedy attack.
| 2020 | Computation and Language |
Noise Pollution in Hospital Readmission Prediction: Long Document
Classification with Reinforcement Learning | This paper presents a reinforcement learning approach to extract noise in
long clinical documents for the task of readmission prediction after kidney
transplant. We face the challenges of developing robust models on a small
dataset where each document may consist of over 10K tokens full of noise,
including tabular text and task-irrelevant sentences. We first experiment with four
types of encoders to empirically decide the best document representation, and
then apply reinforcement learning to remove noisy text from the long documents,
which models the noise extraction process as a sequential decision problem. Our
results show that the old bag-of-words encoder outperforms deep learning-based
encoders on this task, and reinforcement learning is able to improve upon
the baseline while pruning out 25% of the text segments. Our analysis shows that
reinforcement learning is able to identify both typical noisy tokens and
task-specific noisy text.
| 2020 | Computation and Language |
A New Data Normalization Method to Improve Dialogue Generation by
Minimizing Long Tail Effect | Recent neural models have shown significant progress in dialogue generation.
Most generation models are based on language models. However, due to the Long
Tail Phenomenon in linguistics, the trained models tend to generate words that
appear frequently in training datasets, leading to monotonous responses. To
address this issue, we analyze a large corpus from Wikipedia and propose three
frequency-based data normalization methods. We conduct extensive experiments
based on transformers and three datasets respectively collected from social
media, subtitles, and an industrial application. Experimental results
demonstrate significant improvements in diversity and informativeness (defined
as the numbers of nouns and verbs) of generated responses. More specifically,
the unigram and bigram diversity are increased by 2.6%-12.6% and 2.2%-18.9% on
the three datasets, respectively. Moreover, the informativeness, i.e. the
numbers of nouns and verbs, are increased by 4.0%-7.0% and 1.4%-12.1%,
respectively. Additionally, the simplicity and effectiveness enable our methods
to be adapted to different generation models without much extra computational
cost.
| 2020 | Computation and Language |
Improving Adversarial Text Generation by Modeling the Distant Future | Auto-regressive text generation models usually focus on local fluency, and
may cause inconsistent semantic meaning in long text generation. Further,
automatically generating words with similar semantics is challenging, and
hand-crafted linguistic rules are difficult to apply. We consider a text
planning scheme and present a model-based imitation-learning approach to
alleviate the aforementioned issues. Specifically, we propose a novel guider
network to focus on the generative process over a longer horizon, which can
assist next-word prediction and provide intermediate rewards for generator
optimization. Extensive experiments demonstrate that the proposed method leads
to improved performance.
| 2020 | Computation and Language |
WikiUMLS: Aligning UMLS to Wikipedia via Cross-lingual Neural Ranking | We present our work on aligning the Unified Medical Language System (UMLS) to
Wikipedia, to facilitate manual alignment of the two resources. We propose a
cross-lingual neural reranking model to match a UMLS concept with a Wikipedia
page, which achieves a recall@1 of 72%, a substantial improvement of 20% over
word- and char-level BM25, enabling manual alignment with minimal effort. We
release our resources, including ranked Wikipedia pages for 700k UMLS concepts,
and WikiUMLS, a dataset for training and evaluation of alignment models between
UMLS and Wikipedia. This will provide easier access to Wikipedia for health
professionals, patients, and NLP systems, including in multilingual settings.
| 2020 | Computation and Language |
Distributional Discrepancy: A Metric for Unconditional Text Generation | The purpose of unconditional text generation is to train a model with real
sentences, then generate novel sentences of the same quality and diversity as
the training data. However, when different metrics are used for comparing the
methods of unconditional text generation, contradictory conclusions are drawn.
The difficulty is that both the diversity and quality of the sample should be
considered simultaneously when the models are evaluated. To solve this problem,
a novel metric of distributional discrepancy (DD) is designed to evaluate
generators based on the discrepancy between the generated and real training
sentences. However, the DD cannot be computed directly because the distribution
of real sentences is unavailable. Thus, we propose a method for estimating the
DD by training a neural-network-based text classifier. For comparison, three
existing metrics, bi-lingual evaluation understudy (BLEU) versus self-BLEU,
language model score versus reverse language model score, and Fréchet
embedding distance, along with the proposed DD, are used to evaluate two
popular generative models of long short-term memory and generative pretrained
transformer 2 on both syntactic and real data. Experimental results show that
DD is significantly better than the three existing metrics for ranking these
generative models.
| 2020 | Computation and Language |
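A hedged sketch of the estimation idea: train a classifier to separate real from generated sentences and read the discrepancy off its held-out accuracy. TF-IDF with logistic regression and the accuracy-to-discrepancy mapping below are simplifications of the neural estimator in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

real = ["the cat sat on the mat", "she reads a book every night",
        "rain fell over the quiet town", "he made coffee before the meeting"]
generated = ["the the cat cat sat mat", "book night reads she book the",
             "rain rain town the over", "coffee the the he meeting made made"]

texts = real + generated
labels = [1] * len(real) + [0] * len(generated)
X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.5,
                                          random_state=0, stratify=labels)

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(X_tr), y_tr)
acc = clf.score(vec.transform(X_te), y_te)

# A generator matching the real distribution would leave the classifier near
# chance (acc ~ 0.5); the excess over chance serves as the discrepancy estimate here.
print("estimated discrepancy:", 2 * acc - 1)
```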
pyBART: Evidence-based Syntactic Transformations for IE | Syntactic dependencies can be predicted with high accuracy, and are useful
for both machine-learned and pattern-based information extraction tasks.
However, their utility can be improved. These syntactic dependencies are
designed to accurately reflect syntactic relations, and they do not make
semantic relations explicit. Therefore, these representations lack many
explicit connections between content words, that would be useful for downstream
applications. Proposals like English Enhanced UD improve the situation by
extending universal dependency trees with additional explicit arcs. However,
they are not available to Python users, and are also limited in coverage. We
introduce a broad-coverage, data-driven and linguistically sound set of
transformations, that makes event-structure and many lexical relations
explicit. We present pyBART, an easy-to-use open-source Python library for
converting English UD trees either to Enhanced UD graphs or to our
representation. The library can work as a standalone package or be integrated
within a spaCy NLP pipeline. When evaluated in a pattern-based relation
extraction scenario, our representation results in higher extraction scores
than Enhanced UD, while requiring fewer patterns.
| 2020 | Computation and Language |
NLP in FinTech Applications: Past, Present and Future | Financial Technology (FinTech) is one of the worldwide rapidly-rising topics
in the past five years according to the statistics of FinTech from Google
Trends. In this position paper, we focus on research applying natural
language processing (NLP) technologies in the finance domain. Our goal is to
indicate where we stand now and provide a blueprint for future
research. We go through the application scenarios from three aspects
including Know Your Customer (KYC), Know Your Product (KYP), and Satisfy Your
Customer (SYC). Both formal documents and informal textual data are analyzed to
understand corporate customers and personal customers. Furthermore, we discuss
how to dynamically update the features of products from the prospect and
the risk points of view. Finally, we discuss satisfying the customers in both
B2C and C2C business models. After summarizing the past and the recent
challenges, we highlight several promising future research directions in the
trend of FinTech and the open finance tendency.
| 2020 | Computation and Language |
DoQA -- Accessing Domain-Specific FAQs via Conversational QA | The goal of this work is to build conversational Question Answering (QA)
interfaces for the large body of domain-specific information available in FAQ
sites. We present DoQA, a dataset with 2,437 dialogues and 10,917 QA pairs. The
dialogues are collected from three Stack Exchange sites using the Wizard of Oz
method with crowdsourcing. Compared to previous work, DoQA comprises
well-defined information needs, leading to more coherent and natural
conversations with fewer factoid questions, and is multi-domain. In addition, we
introduce a more realistic information retrieval (IR) scenario where the system
needs to find the answer in any of the FAQ documents. The results of an
existing, strong, system show that, thanks to transfer learning from a
Wikipedia QA dataset and fine-tuning on a single FAQ domain, it is possible to
build high quality conversational QA systems for FAQs without in-domain
training data. The good results carry over into the more challenging IR
scenario. In both cases, there is still ample room for improvement, as
indicated by the higher human upper bound.
| 2020 | Computation and Language |
From SPMRL to NMRL: What Did We Learn (and Unlearn) in a Decade of
Parsing Morphologically-Rich Languages (MRLs)? | It has been exactly a decade since the first establishment of SPMRL, a
research initiative unifying multiple research efforts to address the peculiar
challenges of Statistical Parsing for Morphologically-Rich Languages
(MRLs). Here we reflect on parsing MRLs in that decade, highlight the solutions
and lessons learned for the architectural, modeling and lexical challenges in
the pre-neural era, and argue that similar challenges re-emerge in neural
architectures for MRLs. We then aim to offer a climax, suggesting that
incorporating symbolic ideas proposed in SPMRL terms into nowadays neural
architectures has the potential to push NLP for MRLs to a new level. We sketch
strategies for designing Neural Models for MRLs (NMRL), and showcase
preliminary support for these strategies via investigating the task of
multi-tagging in Hebrew, a morphologically rich, high-fusion language.
| 2020 | Computation and Language |
The Sensitivity of Language Models and Humans to Winograd Schema
Perturbations | Large-scale pretrained language models are the major driving force behind
recent improvements in performance on the Winograd Schema Challenge, a widely
employed test of common sense reasoning ability. We show, however, with a new
diagnostic dataset, that these models are sensitive to linguistic perturbations
of the Winograd examples that minimally affect human understanding. Our results
highlight interesting differences between humans and language models: language
models are more sensitive to number or gender alternations and synonym
replacements than humans, and humans are more stable and consistent in their
predictions, maintain a much higher absolute performance, and perform better on
non-associative instances than associative ones. Overall, humans are correct
more often than out-of-the-box models, and the models are sometimes right for
the wrong reasons. Finally, we show that fine-tuning on a large, task-specific
dataset can offer a solution to these issues.
| 2020 | Computation and Language |
Introducing the VoicePrivacy Initiative | The VoicePrivacy initiative aims to promote the development of privacy
preservation tools for speech technology by gathering a new community to define
the tasks of interest and the evaluation methodology, and benchmarking
solutions through a series of challenges. In this paper, we formulate the voice
anonymization task selected for the VoicePrivacy 2020 Challenge and describe
the datasets used for system development and evaluation. We also present the
attack models and the associated objective and subjective evaluation metrics.
We introduce two anonymization baselines and report objective evaluation
results.
| 2021 | Computation and Language |
Using Context in Neural Machine Translation Training Objectives | We present Neural Machine Translation (NMT) training using document-level
metrics with batch-level documents. Previous sequence-objective approaches to
NMT training focus exclusively on sentence-level metrics like sentence BLEU
which do not correspond to the desired evaluation metric, typically document
BLEU. Meanwhile research into document-level NMT training focuses on data or
model architecture rather than training procedure. We find that each of these
lines of research has a clear space in it for the other, and propose merging
them with a scheme that allows a document-level evaluation metric to be used in
the NMT training objective.
We first sample pseudo-documents from sentence samples. We then approximate
the expected document BLEU gradient with Monte Carlo sampling for use as a cost
function in Minimum Risk Training (MRT). This two-level sampling procedure
gives NMT performance gains over sequence MRT and maximum-likelihood training.
We demonstrate that training is more robust for document-level metrics than
with sequence metrics. We further demonstrate improvements on NMT with TER and
Grammatical Error Correction (GEC) using GLEU, both metrics used at the
document level for evaluations.
| 2020 | Computation and Language |
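A small sketch of the Monte Carlo risk estimate over sampled pseudo-document translations, with sacreBLEU providing document BLEU; the probability normalisation and the omission of the gradient step are simplifications of the full MRT procedure.

```python
import numpy as np
import sacrebleu

def expected_doc_bleu_risk(sampled_docs, reference_doc, log_probs):
    """Monte Carlo MRT risk: sum_i p_i * (1 - docBLEU_i / 100), where p_i are the
    normalised model probabilities of the sampled pseudo-document translations."""
    p = np.exp(log_probs - np.max(log_probs))
    p /= p.sum()
    risk = 0.0
    for doc, prob in zip(sampled_docs, p):
        bleu = sacrebleu.corpus_bleu(doc, [reference_doc]).score   # document-level BLEU
        risk += prob * (1.0 - bleu / 100.0)
    return risk

reference = ["the cat sat on the mat .", "it was raining heavily ."]
samples = [["the cat sat on the mat .", "it rained heavily ."],
           ["a cat is on a mat .", "heavy rain fell ."]]
log_probs = np.array([-3.2, -4.1])   # stand-in sequence log-probabilities from the NMT model
print(expected_doc_bleu_risk(samples, reference, log_probs))
# During training, the gradient of this risk with respect to the model parameters drives MRT.
```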
Towards A Sign Language Gloss Representation Of Modern Standard Arabic | Over 5% of the world's population (466 million people) has disabling hearing
loss, 4 million of whom are children. They can be hard of hearing or deaf. Deaf people
mostly have profound hearing loss, which implies very little or no hearing.
Around the world, deaf people often communicate using a sign language with
gestures of both hands and facial expressions. A sign language is a
full-fledged natural language with its own grammar and lexicon. Therefore,
there is a need for translation models from and to sign languages. In this
work, we are interested in the translation of Modern Standard Arabic (MSAr) into
sign language. We generated a gloss representation from MSAr that extracts the
features mandatory for the generation of animation signs. Our approach locates
the most pertinent features that maintain the meaning of the input Arabic
sentence.
| 2020 | Computation and Language |
To Test Machine Comprehension, Start by Defining Comprehension | Many tasks aim to measure machine reading comprehension (MRC), often focusing
on question types presumed to be difficult. Rarely, however, do task designers
start by considering what systems should in fact comprehend. In this paper we
make two key contributions. First, we argue that existing approaches do not
adequately define comprehension; they are too unsystematic about what content
is tested. Second, we present a detailed definition of comprehension -- a
"Template of Understanding" -- for a widely useful class of texts, namely short
narratives. We then conduct an experiment that strongly suggests existing
systems are not up to the task of narrative understanding as we define it.
| 2020 | Computation and Language |
What-if I ask you to explain: Explaining the effects of perturbations in
procedural text | We address the task of explaining the effects of perturbations in procedural
text, an important test of process comprehension. Consider a passage describing
a rabbit's life-cycle: humans can easily explain the effect on the rabbit
population if a female rabbit becomes ill -- i.e., the female rabbit would not
become pregnant and, as a result, not have babies, leading to a decrease in the
rabbit population. We present QUARTET, a system that constructs such
explanations from paragraphs, by modeling the explanation task as a multitask
learning problem. QUARTET provides better explanations (based on the sentences
in the procedural text) compared to several strong baselines on a recent
process comprehension benchmark. We also present a surprising secondary effect:
our model also achieves a new SOTA with a 7% absolute F1 improvement on a
downstream QA task. This illustrates that good explanations do not have to come
at the expense of end task performance.
| 2020 | Computation and Language |
Compose Like Humans: Jointly Improving the Coherence and Novelty for
Modern Chinese Poetry Generation | Chinese poetry is an important part of worldwide culture, and classical and
modern sub-branches are quite different. The former is a unique genre and has
strict constraints, while the latter is very flexible in length, optional to
have rhymes, and similar to modern poetry in other languages. Thus, it requires
more effort to control coherence and improve novelty. In this paper, we
propose a generate-retrieve-then-refine paradigm to jointly improve the
coherence and novelty. In the first stage, a draft is generated given keywords
(i.e., topics) only. The second stage produces a "refining vector" from
retrieval lines. At last, we take into consideration both the draft and the
"refining vector" to generate a new poem. The draft provides future
sentence-level information for a line to be generated. Meanwhile, the "refining
vector" points out the direction of refinement based on impressive words
detection mechanism which can learn good patterns from references and then
create new ones via insertion operation. Experimental results on a collected
large-scale modern Chinese poetry dataset show that our proposed approach can
not only generate more coherent poems, but also improve the diversity and
novelty.
| 2020 | Computation and Language |
Reward Constrained Interactive Recommendation with Natural Language
Feedback | Text-based interactive recommendation provides richer user feedback and has
demonstrated advantages over traditional interactive recommender systems.
However, recommendations can easily violate preferences of users from their
past natural-language feedback, since the recommender needs to explore new
items for further improvement. To alleviate this issue, we propose a novel
constraint-augmented reinforcement learning (RL) framework to efficiently
incorporate user preferences over time. Specifically, we leverage a
discriminator to detect recommendations violating user historical preference,
which is incorporated into the standard RL objective of maximizing expected
cumulative future rewards. Our proposed framework is general and is further
extended to the task of constrained text generation. Empirical results show
that the proposed method yields consistent improvement relative to standard RL
methods.
| 2020 | Computation and Language |
From Arguments to Key Points: Towards Automatic Argument Summarization | Generating a concise summary from a large collection of arguments on a given
topic is an intriguing yet understudied problem. We propose to represent such
summaries as a small set of talking points, termed "key points", each scored
according to its salience. We show, by analyzing a large dataset of
crowd-contributed arguments, that a small number of key points per topic is
typically sufficient for covering the vast majority of the arguments.
Furthermore, we found that a domain expert can often predict these key points
in advance. We study the task of argument-to-key point mapping, and introduce a
novel large-scale dataset for this task. We report empirical results for an
extensive set of experiments with this dataset, showing promising performance.
| 2020 | Computation and Language |
The Paradigm Discovery Problem | This work treats the paradigm discovery problem (PDP), the task of learning
an inflectional morphological system from unannotated sentences. We formalize
the PDP and develop evaluation metrics for judging systems. Using currently
available resources, we construct datasets for the task. We also devise a
heuristic benchmark for the PDP and report empirical results on five diverse
languages. Our benchmark system first makes use of word embeddings and string
similarity to cluster forms by cell and by paradigm. Then, we bootstrap a
neural transducer on top of the clustered data to predict words to realize the
empty paradigm slots. An error analysis of our system suggests clustering by
cell across different inflection classes is the most pressing challenge for
future work. Our code and data are available for public use.
| 2020 | Computation and Language |
Code and Named Entity Recognition in StackOverflow | There is an increasing interest in studying natural language and computer
code together, as large corpora of programming texts become readily available
on the Internet. For example, StackOverflow currently has over 15 million
programming related questions written by 8.5 million users. Meanwhile, there is
still a lack of fundamental NLP techniques for identifying code tokens or
software-related named entities that appear within natural language sentences.
In this paper, we introduce a new named entity recognition (NER) corpus for the
computer programming domain, consisting of 15,372 sentences annotated with 20
fine-grained entity types. We trained in-domain BERT representations
(BERTOverflow) on 152 million sentences from StackOverflow, which leads to an
absolute increase of +10 F1 score over off-the-shelf BERT. We also present the
SoftNER model which achieves an overall 79.10 F$_1$ score for code and named
entity recognition on StackOverflow data. Our SoftNER model incorporates a
context-independent code token classifier with corpus-level features to improve
the BERT-based tagging model. Our code and data are available at:
https://github.com/jeniyat/StackOverflowNER/
| 2020 | Computation and Language |
A Tale of a Probe and a Parser | Measuring what linguistic information is encoded in neural models of language
has become popular in NLP. Researchers approach this enterprise by training
"probes" - supervised models designed to extract linguistic structure from
another model's output. One such probe is the structural probe (Hewitt and
Manning, 2019), designed to quantify the extent to which syntactic information
is encoded in contextualised word representations. The structural probe has a
novel design, unattested in the parsing literature, the precise benefit of
which is not immediately obvious. To explore whether syntactic probes would do
better to make use of existing techniques, we compare the structural probe to a
more traditional parser with an identical lightweight parameterisation. The
parser outperforms the structural probe on UUAS in seven of nine analysed
languages, often by a substantial amount (e.g. by 11.1 points in English).
Under a second less common metric, however, there is the opposite trend - the
structural probe outperforms the parser. This begs the question: which metric
should we prefer?
| 2020 | Computation and Language |
Words aren't enough, their order matters: On the Robustness of Grounding
Visual Referring Expressions | Visual referring expression recognition is a challenging task that requires
natural language understanding in the context of an image. We critically
examine RefCOCOg, a standard benchmark for this task, using a human study and
show that 83.7% of test instances do not require reasoning on linguistic
structure, i.e., the words alone are enough to identify the target object and word order
doesn't matter. To measure the true progress of existing models, we split the
test set into two sets, one which requires reasoning on linguistic structure
and the other which doesn't. Additionally, we create an out-of-distribution
dataset Ref-Adv by asking crowdworkers to perturb in-domain examples such that
the target object changes. Using these datasets, we empirically show that
existing methods fail to exploit linguistic structure and are 12% to 23% lower
in performance than the established progress for this task. We also propose two
methods, one based on contrastive learning and the other based on multi-task
learning, to increase the robustness of ViLBERT, the current state-of-the-art
model for this task. Our datasets are publicly available at
https://github.com/aws/aws-refcocog-adv
| 2020 | Computation and Language |
Evaluating Explanation Methods for Neural Machine Translation | Recently many efforts have been devoted to interpreting the black-box NMT
models, but little progress has been made on metrics to evaluate explanation
methods. Word Alignment Error Rate can be used as such a metric that matches
human understanding; however, it cannot measure explanation methods on those
target words that are not aligned to any source word. This paper thereby makes
an initial attempt to evaluate explanation methods from an alternative
viewpoint. To this end, it proposes a principled metric based on fidelity in
regard to the predictive behavior of the NMT model. As the exact computation
for this metric is intractable, we employ an efficient approach as its
approximation. On six standard translation tasks, we quantitatively evaluate
several explanation methods in terms of the proposed metric and we reveal some
valuable findings for these explanation methods in our experiments.
| 2020 | Computation and Language |
Fast and Robust Unsupervised Contextual Biasing for Speech Recognition | Automatic speech recognition (ASR) systems are becoming a ubiquitous
technology. Although their accuracy is closing the gap with human-level
performance under certain settings, one area that can be further improved is
incorporating user-specific information or context to bias their predictions. A common framework
is to dynamically construct a small language model from the provided contextual
mini corpus and interpolate its score with the main language model during the
decoding process.
Here we propose an alternative approach that does not entail explicit
contextual language model. Instead, we derive the bias score for every word in
the system vocabulary from the training corpus. The method is unique in that 1)
it does not require meta-data or class-label annotation for the context or the
training corpus. 2) The bias score is proportional to the word's
log-probability, so not only does it bias toward the provided context, it is also
robust against irrelevant context (e.g. user mis-specified context or cases where it
is hard to quantify a tight scope). 3) The bias score for the entire vocabulary
is pre-determined during the training stage, thereby eliminating
computationally expensive language model construction during inference.
We show significant improvement in recognition accuracy when the relevant
context is available. Additionally, we also demonstrate that the proposed
method exhibits high tolerance to false-triggering errors in the presence of
irrelevant context.
| 2020 | Computation and Language |
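A minimal sketch of pre-computing a bias score for every vocabulary word from the training corpus and applying it only to words that appear in the provided context. How the score enters the decoder (sign and weight) is an assumption made for illustration.

```python
import math
from collections import Counter

training_corpus = "play some jazz music please play the news call mom".split()
counts = Counter(training_corpus)
total = sum(counts.values())

# Pre-computed once at training time, with no meta-data or class labels and no
# contextual LM built at inference: a score per word proportional to its log-probability.
bias_score = {w: math.log(c / total) for w, c in counts.items()}

def rescore(word, base_score, context, weight=1.0):
    """Apply the pre-computed bias only to words present in the provided context.
    Treating the negative log-probability as a boost (rarer context words get more
    help) is an illustrative assumption about how the score is used in decoding."""
    if word in context:
        return base_score + weight * (-bias_score[word])
    return base_score

context = {"jazz", "mom"}
for w in ["play", "jazz", "mom"]:
    print(w, round(rescore(w, base_score=-5.0, context=context), 2))
```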
What is Learned in Visually Grounded Neural Syntax Acquisition | Visual features are a promising signal for bootstrapping textual models.
However, blackbox learning models make it difficult to isolate the specific
contribution of visual components. In this analysis, we consider the case study
of the Visually Grounded Neural Syntax Learner (Shi et al., 2019), a recent
approach for learning syntax from a visual training signal. By constructing
simplified versions of the model, we isolate the core factors that yield the
model's strong performance. Contrary to what the model might be capable of
learning, we find significantly less expressive versions produce similar
predictions and perform just as well, or even better. We also find that a
simple lexical signal of noun concreteness plays the main role in the model's
predictions as opposed to more complex syntactic reasoning.
| 2020 | Computation and Language |
ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and
Socially-engaged Conversational Agents | We present ADVISER - an open-source, multi-domain dialog system toolkit that
enables the development of multi-modal (incorporating speech, text and vision),
socially-engaged (e.g. emotion recognition, engagement level prediction and
backchanneling) conversational agents. The final Python-based implementation of
our toolkit is flexible, easy to use, and easy to extend not only for
technically experienced users, such as machine learning researchers, but also
for less technically experienced users, such as linguists or cognitive
scientists, thereby providing a flexible platform for collaborative research.
Link to open-source code: https://github.com/DigitalPhonetics/adviser
| 2020 | Computation and Language |
Discrete Optimization for Unsupervised Sentence Summarization with
Word-Level Extraction | Automatic sentence summarization produces a shorter version of a sentence,
while preserving its most important information. A good summary is
characterized by language fluency and high information overlap with the source
sentence. We model these two aspects in an unsupervised objective function,
consisting of language modeling and semantic similarity metrics. We search for
a high-scoring summary by discrete optimization. Our proposed method achieves a
new state-of-the art for unsupervised sentence summarization according to ROUGE
scores. Additionally, we demonstrate that the commonly reported ROUGE F1 metric
is sensitive to summary length. Since this is inadvertently exploited in recent
work, we emphasize that future evaluation should explicitly group summarization
systems by output length brackets.
| 2020 | Computation and Language |
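A toy sketch of the discrete search: hill-climb over a binary word-selection mask under a fluency-plus-similarity objective. Both scoring functions below are crude stand-ins for the language model and semantic similarity terms used in the paper.

```python
import random

def fluency(words):
    # Stand-in for a language-model score; mildly prefers shorter outputs.
    return -0.1 * len(words)

def similarity(words, source_words):
    # Stand-in for semantic similarity: unigram overlap with the source sentence.
    return len(set(words) & set(source_words)) / max(1, len(set(source_words)))

def objective(mask, source_words):
    summary = [w for w, keep in zip(source_words, mask) if keep]
    return fluency(summary) + similarity(summary, source_words)

def hill_climb(sentence, steps=200, seed=0):
    rng = random.Random(seed)
    words = sentence.split()
    mask = [rng.random() < 0.5 for _ in words]
    for _ in range(steps):
        i = rng.randrange(len(words))
        flipped = mask[:i] + [not mask[i]] + mask[i + 1:]
        if objective(flipped, words) > objective(mask, words):   # greedy word-level flips
            mask = flipped
    return " ".join(w for w, keep in zip(words, mask) if keep)

print(hill_climb("the committee announced on tuesday that the new policy will take effect next month"))
```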
Generating SOAP Notes from Doctor-Patient Conversations Using Modular
Summarization Techniques | Following each patient visit, physicians draft long semi-structured clinical
summaries called SOAP notes. While invaluable to clinicians and researchers,
creating digital SOAP notes is burdensome, contributing to physician burnout.
In this paper, we introduce the first complete pipelines to leverage deep
summarization models to generate these notes based on transcripts of
conversations between physicians and patients. After exploring a spectrum of
methods across the extractive-abstractive spectrum, we propose Cluster2Sent, an
algorithm that (i) extracts important utterances relevant to each summary
section; (ii) clusters together related utterances; and then (iii) generates
one summary sentence per cluster. Cluster2Sent outperforms its purely
abstractive counterpart by 8 ROUGE-1 points, and produces significantly more
factual and coherent sentences as assessed by expert human evaluators. For
reproducibility, we demonstrate similar benefits on the publicly available AMI
dataset. Our results speak to the benefits of structuring summaries into
sections and annotating supporting evidence when constructing summarization
corpora.
| 2021 | Computation and Language |
Spying on your neighbors: Fine-grained probing of contextual embeddings
for information about surrounding words | Although models using contextual word embeddings have achieved
state-of-the-art results on a host of NLP tasks, little is known about exactly
what information these embeddings encode about the context words that they are
understood to reflect. To address this question, we introduce a suite of
probing tasks that enable fine-grained testing of contextual embeddings for
encoding of information about surrounding words. We apply these tasks to
examine the popular BERT, ELMo and GPT contextual encoders, and find that each
of our tested information types is indeed encoded as contextual information
across tokens, often with near-perfect recoverability, but the encoders vary in
which features they distribute to which tokens, how nuanced their distributions
are, and how robust the encoding of each feature is to distance. We discuss
implications of these results for how different types of models break down and
prioritize word-level context information when constructing token embeddings.
| 2020 | Computation and Language |
Exploring Controllable Text Generation Techniques | Neural controllable text generation is an important area gaining attention
due to its plethora of applications. Although there is a large body of prior
work in controllable text generation, there is no unifying theme. In this work,
we provide a new schema of the pipeline of the generation process by
classifying it into five modules. The control of attributes in the generation
process requires modification of these modules. We present an overview of
different techniques used to perform the modulation of these modules. We also
provide an analysis of the advantages and disadvantages of these techniques. We
further pave ways to develop new architectures based on the combination of the
modules described in this paper.
| 2020 | Computation and Language |
Understanding Scanned Receipts | Tasking machines with understanding receipts can have important applications
such as enabling detailed analytics on purchases, enforcing expense policies,
and inferring patterns of purchase behavior on large collections of receipts.
In this paper, we focus on the task of Named Entity Linking (NEL) of scanned
receipt line items; specifically, the task entails associating shorthand text
from OCR'd receipts with a knowledge base (KB) of grocery products. For
example, the scanned item "STO BABY SPINACH" should be linked to the catalog
item labeled "Simple Truth Organic Baby Spinach". Experiments that employ a
variety of Information Retrieval techniques in combination with statistical
phrase detection show promise for effective understanding of scanned receipt
data.
| 2020 | Computation and Language |
Evaluating Explainable AI: Which Algorithmic Explanations Help Users
Predict Model Behavior? | Algorithmic approaches to interpreting machine learning models have
proliferated in recent years. We carry out human subject tests that are the
first of their kind to isolate the effect of algorithmic explanations on a key
aspect of model interpretability, simulatability, while avoiding important
confounding experimental factors. A model is simulatable when a person can
predict its behavior on new inputs. Through two kinds of simulation tests
involving text and tabular data, we evaluate five explanation methods: (1)
LIME, (2) Anchor, (3) Decision Boundary, (4) a Prototype model, and (5) a
Composite approach that combines explanations from each method. Clear evidence
of method effectiveness is found in very few cases: LIME improves
simulatability in tabular classification, and our Prototype method is effective
in counterfactual simulation tests. We also collect subjective ratings of
explanations, but we do not find that ratings are predictive of how helpful
explanations are. Our results provide the first reliable and comprehensive
estimates of how explanations influence simulatability across a variety of
explanation methods and data domains. We show that (1) we need to be careful
about the metrics we use to evaluate explanation methods, and (2) there is
significant room for improvement in current methods. All our supporting code,
data, and models are publicly available at:
https://github.com/peterbhase/InterpretableNLP-ACL2020
| 2020 | Computation and Language |
Exploring Content Selection in Summarization of Novel Chapters | We present a new summarization task, generating summaries of novel chapters
using summary/chapter pairs from online study guides. This is a harder task
than the news summarization task, given the chapter length as well as the
extreme paraphrasing and generalization found in the summaries. We focus on
extractive summarization, which requires the creation of a gold-standard set of
extractive summaries. We present a new metric for aligning reference summary
sentences with chapter sentences to create gold extracts and also experiment
with different alignment methods. Our experiments demonstrate significant
improvement over prior alignment approaches for our task as shown through
automatic metrics and a crowd-sourced pyramid analysis. We make our data
collection scripts available at
https://github.com/manestay/novel-chapter-dataset.
| 2,021 | Computation and Language |
Data Augmentation for Hypernymy Detection | The automatic detection of hypernymy relationships represents a challenging
problem in NLP. The successful application of state-of-the-art supervised
approaches using distributed representations has generally been impeded by the
limited availability of high quality training data. We have developed two novel
data augmentation techniques which generate new training examples from existing
ones. First, we combine the linguistic principles of hypernym transitivity and
intersective modifier-noun composition to generate additional pairs of vectors,
such as "small dog - dog" or "small dog - animal", for which a hypernymy
relationship can be assumed. Second, we use generative adversarial networks
(GANs) to generate pairs of vectors for which the hypernymy relation can also
be assumed. We furthermore present two complementary strategies for extending
an existing dataset by leveraging linguistic resources such as WordNet. Using
an evaluation across 3 different datasets for hypernymy detection and 2
different vector spaces, we demonstrate that both of the proposed automatic
data augmentation and dataset extension strategies substantially improve
classifier performance.
| 2,021 | Computation and Language |
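The two augmentation rules above lend themselves to a simple illustration. The sketch below is an assumption about how such pairs might be composed (vector averaging stands in for the paper's modifier-noun composition function), not the authors' exact implementation.

```python
# Hedged sketch: generating extra hypernymy training pairs from an existing
# (hyponym, hypernym) pair and an intersective modifier. Composition by
# averaging is an illustrative placeholder only.
import numpy as np

def compose(modifier_vec: np.ndarray, noun_vec: np.ndarray) -> np.ndarray:
    """Toy intersective modifier-noun composition (simple average)."""
    return (modifier_vec + noun_vec) / 2.0

def augment(pair, modifier, vectors):
    """Emit two new positive pairs: ('small dog', 'dog') via composition,
    and ('small dog', 'animal') via transitivity of hypernymy."""
    hypo, hyper = pair
    composed = compose(vectors[modifier], vectors[hypo])
    return [
        (composed, vectors[hypo]),   # e.g. "small dog" -> "dog"
        (composed, vectors[hyper]),  # e.g. "small dog" -> "animal"
    ]

# Toy usage with random vectors standing in for real distributional embeddings.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=50) for w in ["small", "dog", "animal"]}
print(len(augment(("dog", "animal"), "small", vectors)), "augmented pairs")
```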
Soft Gazetteers for Low-Resource Named Entity Recognition | Traditional named entity recognition models use gazetteers (lists of
entities) as features to improve performance. Although modern neural network
models do not require such hand-crafted features for strong performance, recent
work has demonstrated their utility for named entity recognition on English
data. However, designing such features for low-resource languages is
challenging, because exhaustive entity gazetteers do not exist in these
languages. To address this problem, we propose a method of "soft gazetteers"
that incorporates ubiquitously available information from English knowledge
bases, such as Wikipedia, into neural named entity recognition models through
cross-lingual entity linking. Our experiments on four low-resource languages
show an average improvement of 4 points in F1 score. Code and data are
available at https://github.com/neulab/soft-gazetteers.
| 2,020 | Computation and Language |
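A minimal sketch of what a "soft gazetteer" feature might look like: instead of a binary gazetteer match, each span receives a vector of scores over entity types, derived from candidate links into an English knowledge base. The linking scores and the type inventory below are hypothetical placeholders, not the paper's feature set.

```python
# Hedged sketch: turning cross-lingual entity-linking candidates into soft
# gazetteer features for an NER model. Candidate scores are assumed to come
# from an external linker; here they are hard-coded examples.
from typing import List, Tuple

ENTITY_TYPES = ["PER", "ORG", "LOC", "MISC"]  # assumed type inventory

def soft_gazetteer_features(candidates: List[Tuple[str, float, str]]) -> List[float]:
    """candidates: (KB entity, linker score, entity type).
    Returns one feature per type: the highest-scoring candidate of that type."""
    feats = {t: 0.0 for t in ENTITY_TYPES}
    for _, score, etype in candidates:
        if etype in feats:
            feats[etype] = max(feats[etype], score)
    return [feats[t] for t in ENTITY_TYPES]

# Example: candidates for a low-resource-language span linked to English Wikipedia.
cands = [("Barack_Obama", 0.83, "PER"), ("Obama,_Fukui", 0.11, "LOC")]
print(soft_gazetteer_features(cands))  # -> [0.83, 0.0, 0.11, 0.0]
```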
FarsBase-KBP: A Knowledge Base Population System for the Persian
Knowledge Graph | While most of the knowledge bases already support the English language, there
is only one knowledge base for the Persian language, known as FarsBase, which
is automatically created via semi-structured web information. Unlike English
knowledge bases such as Wikidata, which have tremendous community support, the
population of a knowledge base like FarsBase must rely on automatically
extracted knowledge. Knowledge base population can let FarsBase keep growing in
size, as the system continues working. In this paper, we present a knowledge
base population system for the Persian language, which extracts knowledge from
unlabeled raw text, crawled from the Web. The proposed system consists of a set
of state-of-the-art modules such as an entity linking module as well as
information and relation extraction modules designed for FarsBase. Moreover, a
canonicalization system is introduced to link extracted relations to FarsBase
properties. Then, the system uses knowledge fusion techniques with minimal
intervention of human experts to integrate and filter the proper knowledge
instances, extracted by each module. To evaluate the performance of the
presented knowledge base population system, we present the first gold dataset
for benchmarking knowledge base population in the Persian language, which
consists of 22,015 FarsBase triples verified by human experts. The
evaluation results demonstrate the efficiency of the proposed system.
| 2,020 | Computation and Language |
Fine-grained Financial Opinion Mining: A Survey and Research Agenda | Opinion mining is a prevalent research issue in many domains. In the
financial domain, however, it is still in the early stages. Most research on
this topic focuses only on coarse-grained market sentiment analysis, i.e., 2-way
classification into bullish/bearish. Thanks to recent developments in financial
technology (FinTech), some interdisciplinary researchers have begun to engage in
in-depth analysis of investors' opinions. In this position paper, we first define
financial opinions from both coarse-grained and fine-grained points of view, and
then provide an overview of the issues already tackled. In addition to listing
research issues within the existing topics, we further propose a road map of
fine-grained financial opinion mining for future research, and point out several
challenges yet to be explored. Moreover, we provide possible directions for
dealing with the proposed research issues.
| 2,020 | Computation and Language |
Probabilistic Assumptions Matter: Improved Models for
Distantly-Supervised Document-Level Question Answering | We address the problem of extractive question answering using document-level
distant supervision, pairing questions and relevant documents with answer
strings. We compare previously used probability space and distant supervision
assumptions (assumptions on the correspondence between the weak answer string
labels and possible answer mention spans). We show that these assumptions
interact, and that different configurations provide complementary benefits. We
demonstrate that a multi-objective model can efficiently combine the advantages
of multiple assumptions and outperform the best individual formulation. Our
approach outperforms previous state-of-the-art models by 4.3 points in F1 on
TriviaQA-Wiki and 1.7 points in Rouge-L on NarrativeQA summaries.
| 2,020 | Computation and Language |
OpinionDigest: A Simple Framework for Opinion Summarization | We present OpinionDigest, an abstractive opinion summarization framework,
which does not rely on gold-standard summaries for training. The framework uses
an Aspect-based Sentiment Analysis model to extract opinion phrases from
reviews, and trains a Transformer model to reconstruct the original reviews
from these extractions. At summarization time, we merge extractions from
multiple reviews and select the most popular ones. The selected opinions are
used as input to the trained Transformer model, which verbalizes them into an
opinion summary. OpinionDigest can also generate customized summaries, tailored
to specific user needs, by filtering the selected opinions according to their
aspect and/or sentiment. Automatic evaluation on Yelp data shows that our
framework outperforms competitive baselines. Human studies on two corpora
verify that OpinionDigest produces informative summaries and shows promising
customization capabilities.
| 2,020 | Computation and Language |
ExpBERT: Representation Engineering with Natural Language Explanations | Suppose we want to specify the inductive bias that married couples typically
go on honeymoons for the task of extracting pairs of spouses from text. In this
paper, we allow model developers to specify these types of inductive biases as
natural language explanations. We use BERT fine-tuned on MultiNLI to
``interpret'' these explanations with respect to the input sentence, producing
explanation-guided representations of the input. Across three relation
extraction tasks, our method, ExpBERT, matches a BERT baseline but with 3--20x
less labeled data and improves on the baseline by 3--10 F1 points with the same
amount of labeled data.
| 2,020 | Computation and Language |
End-to-end Whispered Speech Recognition with Frequency-weighted
Approaches and Pseudo Whisper Pre-training | Whispering is an important mode of human speech, but no end-to-end
recognition results for it have been reported yet, probably due to the scarcity of
available whispered speech data. In this paper, we present several approaches
for end-to-end (E2E) recognition of whispered speech considering the special
characteristics of whispered speech and the scarcity of data. This includes a
frequency-weighted SpecAugment policy and a frequency-divided CNN feature
extractor for better capturing the high-frequency structures of whispered
speech, and a layer-wise transfer learning approach to pre-train a model with
normal or normal-to-whispered converted speech and then fine-tune it with whispered
speech to bridge the gap between whispered and normal speech. We achieve an
overall relative reduction of 19.8% in PER and 44.4% in CER on a relatively
small whispered TIMIT corpus. The results indicate that as long as we have a good
E2E model pre-trained on normal or pseudo-whispered speech, a relatively small
set of whispered speech may suffice to obtain a reasonably good E2E whispered
speech recognizer.
| 2,021 | Computation and Language |
Dynamically Adjusting Transformer Batch Size by Monitoring Gradient
Direction Change | The choice of hyper-parameters affects the performance of neural models.
While much previous research (Sutskever et al., 2013; Duchi et al., 2011;
Kingma and Ba, 2015) focuses on accelerating convergence and reducing the
effects of the learning rate, comparatively few papers concentrate on the
effect of batch size. In this paper, we analyze how increasing batch size
affects gradient direction, and propose to evaluate the stability of gradients
with their angle change. Based on our observations, the angle change of
gradient direction first tends to stabilize (i.e. gradually decrease) while
accumulating mini-batches, and then starts to fluctuate. We propose to
automatically and dynamically determine batch sizes by accumulating gradients
of mini-batches and performing an optimization step at just the time when the
direction of gradients starts to fluctuate. To improve the efficiency of our
approach for large models, we propose a sampling approach to select gradients
of parameters sensitive to the batch size. Our approach dynamically determines
proper and efficient batch sizes during training. In our experiments on the WMT
14 English to German and English to French tasks, our approach improves the
Transformer with a fixed 25k batch size by +0.73 and +0.82 BLEU respectively.
| 2,020 | Computation and Language |
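The idea of stepping only once the gradient direction stops stabilizing can be sketched in a few lines. The sketch below is a rough illustration under several assumptions (PyTorch, cosine similarity as the angle proxy, a naive "angle increased again" test), and omits the paper's sampling strategy for large models.

```python
# Hedged sketch of angle-monitored gradient accumulation, not the authors'
# exact criterion. Gradients accumulate across mini-batches; an optimizer
# step fires once the angle between successive accumulated gradients grows.
import torch
from torch.nn.utils import parameters_to_vector

def train_with_dynamic_batching(model, optimizer, loss_fn, batches):
    prev_grad, prev_angle = None, None
    for x, y in batches:
        loss_fn(model(x), y).backward()          # gradients accumulate in .grad
        grad = parameters_to_vector(
            [p.grad for p in model.parameters() if p.grad is not None])
        if prev_grad is not None:
            cos = torch.nn.functional.cosine_similarity(grad, prev_grad, dim=0)
            angle = torch.arccos(cos.clamp(-1.0, 1.0)).item()
            # Step once the angle change stops shrinking and starts to fluctuate.
            if prev_angle is not None and angle > prev_angle:
                optimizer.step()
                optimizer.zero_grad()
                prev_grad, prev_angle = None, None
                continue
            prev_angle = angle
        prev_grad = grad.clone()
```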
Neural Syntactic Preordering for Controlled Paraphrase Generation | Paraphrasing natural language sentences is a multifaceted process: it might
involve replacing individual words or short phrases, local rearrangement of
content, or high-level restructuring like topicalization or passivization. Past
approaches struggle to cover this space of paraphrase possibilities in an
interpretable manner. Our work, inspired by pre-ordering literature in machine
translation, uses syntactic transformations to softly ``reorder'' the source
sentence and guide our neural paraphrasing model. First, given an input
sentence, we derive a set of feasible syntactic rearrangements using an
encoder-decoder model. This model operates over a partially lexical, partially
syntactic view of the sentence and can reorder big chunks. Next, we use each
proposed rearrangement to produce a sequence of position embeddings, which
encourages our final encoder-decoder paraphrase model to attend to the source
words in a particular order. Our evaluation, both automatic and human, shows
that the proposed system retains the quality of the baseline approaches while
giving a substantial increase in the diversity of the generated paraphrases.
| 2,020 | Computation and Language |
Exploring Contextual Word-level Style Relevance for Unsupervised Style
Transfer | Unsupervised style transfer aims to change the style of an input sentence
while preserving its original content without using parallel training data. In
current dominant approaches, owing to the lack of fine-grained control over the
influence of the target style, they are unable to yield desirable output
sentences. In this paper, we propose a novel attentional sequence-to-sequence
(Seq2seq) model that dynamically exploits the relevance of each output word to
the target style for unsupervised style transfer. Specifically, we first
pretrain a style classifier, where the relevance of each input word to the
original style can be quantified via layer-wise relevance propagation. In a
denoising auto-encoding manner, we train an attentional Seq2seq model to
reconstruct input sentences and re-predict the previously quantified word-level
style relevance simultaneously. In this way, this model is endowed with the
ability to automatically predict the style relevance of each output word. Then,
we equip the decoder of this model with a neural style component to exploit the
predicted word-level style relevance for better style transfer. In particular, we
fine-tune this model using a carefully-designed objective function involving
style transfer, style relevance consistency, content preservation and fluency
modeling loss terms. Experimental results show that our proposed model achieves
state-of-the-art performance in terms of both transfer accuracy and content
preservation.
| 2,022 | Computation and Language |
Establishing Baselines for Text Classification in Low-Resource Languages | While transformer-based finetuning techniques have proven effective in tasks
that involve low-resource, low-data environments, a lack of properly
established baselines and benchmark datasets makes it hard to compare different
approaches that are aimed at tackling the low-resource setting. In this work,
we provide three contributions. First, we introduce two previously unreleased
datasets as benchmark datasets for text classification and low-resource
multilabel text classification for the low-resource language Filipino. Second,
we pretrain better BERT and DistilBERT models for use within the Filipino
setting. Third, we introduce a simple degradation test that benchmarks a
model's resistance to performance degradation as the number of training samples
is reduced. We analyze our pretrained model's degradation speeds and look
towards the use of this method for comparing models aimed at operating within
the low-resource setting. We release all our models and datasets for the
research community to use.
| 2,020 | Computation and Language |
Self-organizing Pattern in Multilayer Network for Words and Syllables | One of the ultimate goals for linguists is to find universal properties in
human languages. Although words are generally considered as representing
arbitrary mapping between linguistic forms and meanings, we propose a new
universal law that highlights the equally important role of syllables, which is
complementary to Zipf's law. By plotting the rank-rank frequency distribution of word
and syllable for English and Chinese corpora, visible lines appear and can be
fit to a master curve. We discover that the multi-layer network for words and
syllables based on this analysis exhibits self-organization, which relies
heavily on the inclusion of syllables and their connections.
An analytic form for the scaling structure is derived and used to quantify how
Internet slang becomes fashionable, demonstrating its usefulness as a new
tool for evolutionary linguistics.
| 2,020 | Computation and Language |
Artemis: A Novel Annotation Methodology for Indicative Single Document
Summarization | We describe Artemis (Annotation methodology for Rich, Tractable, Extractive,
Multi-domain, Indicative Summarization), a novel hierarchical annotation
process that produces indicative summaries for documents from multiple domains.
Current summarization evaluation datasets are single-domain and focused on a
few domains for which naturally occurring summaries can be easily found, such
as news and scientific articles. These are not sufficient for training and
evaluation of summarization models for use in document management and
information retrieval systems, which need to deal with documents from multiple
domains. Compared to other annotation methods such as Relative Utility and
Pyramid, Artemis is more tractable because judges don't need to look at all the
sentences in a document when making an importance judgment for one of the
sentences, while providing similarly rich sentence importance annotations. We
describe the annotation process in detail and compare it with other similar
evaluation systems. We also present analysis and experimental results over a
sample set of 532 annotated documents.
| 2,020 | Computation and Language |
IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization | Fine-tuning pre-trained language models (PTLMs), such as BERT and its better
variant RoBERTa, has been a common practice for advancing performance in
natural language understanding (NLU) tasks. Recent advances in representation
learning show that isotropic (i.e., unit-variance and uncorrelated) embeddings
can significantly improve performance on downstream tasks with faster
convergence and better generalization. The isotropy of the pre-trained
embeddings in PTLMs, however, is relatively under-explored. In this paper, we
analyze the isotropy of the pre-trained [CLS] embeddings of PTLMs with
straightforward visualization, and point out two major issues: high variance in
their standard deviation, and high correlation between different dimensions. We
also propose a new network regularization method, isotropic batch normalization
(IsoBN) to address the issues, towards learning more isotropic representations
in fine-tuning by dynamically penalizing dominating principal components. This
simple yet effective fine-tuning method yields an absolute improvement of about
1.0 point on average across seven NLU tasks.
| 2,021 | Computation and Language |
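To make the two diagnosed issues concrete, the sketch below inspects per-dimension standard deviation and pairwise correlation of a batch of [CLS] embeddings and rescales dimensions toward unit variance. This is only a simplified illustration of the isotropy intuition, not the paper's IsoBN regularizer (which penalizes dominating principal components during fine-tuning).

```python
# Hedged, simplified illustration of measuring and reducing anisotropy in
# [CLS] embeddings. NOT the exact IsoBN method.
import torch

def isotropy_stats(cls_embeddings: torch.Tensor):
    """cls_embeddings: (batch, dim) [CLS] vectors from a fine-tuned encoder."""
    std = cls_embeddings.std(dim=0)                  # per-dimension std
    corr = torch.corrcoef(cls_embeddings.T)          # (dim, dim) correlations
    off_diag = corr - torch.diag(torch.diag(corr))
    return std, off_diag.abs().mean()

def rescale_toward_isotropy(cls_embeddings: torch.Tensor, eps: float = 1e-6):
    std = cls_embeddings.std(dim=0)
    return cls_embeddings / (std + eps)              # push toward unit variance

x = torch.randn(32, 768) * torch.linspace(0.1, 5.0, 768)  # toy anisotropic batch
std, mean_abs_corr = isotropy_stats(x)
print(std.min().item(), std.max().item(), mean_abs_corr.item())
```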
Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term
Importance Estimation and Neural Query Rewriting | Conversational search plays a vital role in conversational information
seeking. As queries in information seeking dialogues are ambiguous for
traditional ad-hoc information retrieval (IR) systems due to the coreference
and omission resolution problems inherent in natural language dialogue,
resolving these ambiguities is crucial. In this paper, we tackle conversational
passage retrieval (ConvPR), an important component of conversational search, by
addressing query ambiguities with query reformulation integrated into a
multi-stage ad-hoc IR system. Specifically, we propose two conversational query
reformulation (CQR) methods: (1) term importance estimation and (2) neural
query rewriting. For the former, we expand conversational queries using
important terms extracted from the conversational context with frequency-based
signals. For the latter, we reformulate conversational queries into natural,
standalone, human-understandable queries with a pretrained sequence-to-sequence
model. Detailed analyses of the two CQR methods are provided quantitatively and
qualitatively, explaining their advantages, disadvantages, and distinct
behaviors. Moreover, to leverage the strengths of both CQR methods, we propose
combining their output with reciprocal rank fusion, yielding state-of-the-art
retrieval effectiveness, a 30% improvement in terms of NDCG@3 compared to the
best submission of TREC CAsT 2019.
| 2,021 | Computation and Language |
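Reciprocal rank fusion, used here to combine the two CQR methods' rankings, has a standard generic formulation; the snippet below sketches it under common assumptions (the constant k=60 is the conventional default, not necessarily the value used in the paper).

```python
# Hedged sketch of reciprocal rank fusion (RRF) over two ranked lists of
# passage ids, e.g. one from term-importance expansion and one from neural
# query rewriting.
from collections import defaultdict
from typing import Dict, List

def reciprocal_rank_fusion(rankings: List[List[str]], k: int = 60) -> Dict[str, float]:
    scores: Dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)  # each run contributes 1/(k + rank)
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

run_a = ["p3", "p1", "p7"]   # e.g. term-importance expansion run
run_b = ["p1", "p7", "p2"]   # e.g. neural rewriting run
print(reciprocal_rank_fusion([run_a, run_b]))
```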
A Survey on Dialog Management: Recent Advances and Challenges | Dialog management (DM) is a crucial component in a task-oriented dialog
system. Given the dialog history, DM predicts the dialog state and decides the
next action that the dialog agent should take. Recently, dialog policy learning
has been widely formulated as a Reinforcement Learning (RL) problem, and more
works focus on the applicability of DM. In this paper, we survey recent
advances and challenges within three critical topics for DM: (1) improving
model scalability to facilitate dialog system modeling in new scenarios, (2)
dealing with the data scarcity problem for dialog policy learning, and (3)
enhancing training efficiency to achieve better task-completion performance.
We believe that this survey can shed light on future research in dialog
management.
| 2,021 | Computation and Language |
Creating a Multimodal Dataset of Images and Text to Study Abusive
Language | In order to study online hate speech, the availability of datasets containing
the linguistic phenomena of interest is of crucial importance. However, when
it comes to specific target groups, for example teenagers, collecting such data
may be problematic due to issues with consent and privacy restrictions.
Furthermore, while text-only datasets of this kind have been widely used,
limitations set by image-based social media platforms like Instagram make it
difficult for researchers to experiment with multimodal hate speech data. We
therefore developed CREENDER, an annotation tool that has been used in school
classes to create a multimodal dataset of images and abusive comments, which we
make freely available under Apache 2.0 license. The corpus, with Italian
comments, has been analysed from different perspectives, to investigate whether
the subject of the images plays a role in triggering a comment. We find that
users judge the same images in different ways, although the presence of a
person in the picture increases the probability of receiving an offensive comment.
| 2,020 | Computation and Language |
Code-switching patterns can be an effective route to improve performance
of downstream NLP applications: A case study of humour, sarcasm and hate
speech detection | In this paper we demonstrate how code-switching patterns can be utilised to
improve various downstream NLP applications. In particular, we encode different
switching features to improve humour, sarcasm and hate speech detection tasks.
We believe that this simple linguistic observation can also be potentially
helpful in improving other similar NLP applications.
| 2,020 | Computation and Language |
Neural CRF Model for Sentence Alignment in Text Simplification | The success of a text simplification system heavily depends on the quality
and quantity of complex-simple sentence pairs in the training corpus, which are
extracted by aligning sentences between parallel articles. To evaluate and
improve sentence alignment quality, we create two manually annotated
sentence-aligned datasets from two commonly used text simplification corpora,
Newsela and Wikipedia. We propose a novel neural CRF alignment model which not
only leverages the sequential nature of sentences in parallel documents but
also utilizes a neural sentence pair model to capture semantic similarity.
Experiments demonstrate that our proposed approach outperforms all the previous
work on monolingual sentence alignment task by more than 5 points in F1. We
apply our CRF aligner to construct two new text simplification datasets,
Newsela-Auto and Wiki-Auto, which are much larger and of better quality
compared to the existing datasets. A Transformer-based seq2seq model trained on
our datasets establishes a new state-of-the-art for text simplification in both
automatic and human evaluation.
| 2,021 | Computation and Language |
Digraph of Senegal's local languages: issues, challenges and prospects
of their transliteration | The local languages in Senegal, like those of West African countries in
general, are written using two alphabets: a supplemented Arabic alphabet
(called Ajami) and the Latin alphabet. Each script has its own applications. Ajami
writing is generally used by people educated in Koranic schools for
communication, business, literature (religious texts, poetry, etc.),
traditional religious medicine, etc. Writing with Latin characters is used for
localization of ICT (Web, dictionaries, Windows and Google tools translated in
Wolof, etc.), the translation of legal texts (commercial code and constitution
translated into Wolof) and religious ones (Quran and Bible in Wolof), book
publishing, etc. To facilitate both populations' general access to knowledge, it is
useful to set up transliteration tools between these two scripts. This work
falls within the framework of a project for a collaborative online Wolof
dictionary (Nguer E. M., Khoule M., Thiam M. N., Mbaye B. T.,
Thiare O., Cisse M. T., Mangeot M., 2014), which will involve people who use Ajami
writing. Our goal is, on the one hand, to raise the issues related to
transliteration and the challenges it poses, and on the other, to present the
prospects.
| 2,015 | Computation and Language |
It's Easier to Translate out of English than into it: Measuring Neural
Translation Difficulty by Cross-Mutual Information | The performance of neural machine translation systems is commonly evaluated
in terms of BLEU. However, due to its reliance on target language properties
and generation, the BLEU metric does not allow an assessment of which
translation directions are more difficult to model. In this paper, we propose
cross-mutual information (XMI): an asymmetric information-theoretic metric of
machine translation difficulty that exploits the probabilistic nature of most
neural machine translation models. XMI allows us to better evaluate the
difficulty of translating text into the target language while controlling for
the difficulty of the target-side generation component independent of the
translation task. We then present the first systematic and controlled study of
cross-lingual translation difficulties using modern neural translation systems.
Code for replicating our experiments is available online at
https://github.com/e-bug/nmt-difficulty.
| 2,020 | Computation and Language |
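For readers unfamiliar with the metric, one plausible way to write cross-mutual information (a hedged formulation; consult the paper for the exact definition) is as the reduction in target-side cross-entropy obtained by conditioning on the source:

```latex
% Hedged formulation of cross-mutual information (XMI).
\[
  \mathrm{XMI}(S \rightarrow T) \;=\; H_{q_{\mathrm{LM}}}(T) \;-\; H_{q_{\mathrm{MT}}}(T \mid S),
\]
% where H_{q}(\cdot) denotes cross-entropy under model q: a target-side language
% model q_LM and the translation model q_MT. Larger XMI means the source sentence
% S makes the target text T easier to model.
```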
CODA-19: Using a Non-Expert Crowd to Annotate Research Aspects on
10,000+ Abstracts in the COVID-19 Open Research Dataset | This paper introduces CODA-19, a human-annotated dataset that codes the
Background, Purpose, Method, Finding/Contribution, and Other sections of 10,966
English abstracts in the COVID-19 Open Research Dataset. CODA-19 was created by
248 crowd workers from Amazon Mechanical Turk within 10 days, and achieved
labeling quality comparable to that of experts. Each abstract was annotated by
nine different workers, and the final labels were acquired by majority vote.
The inter-annotator agreement (Cohen's kappa) between the crowd and the
biomedical expert (0.741) is comparable to inter-expert agreement (0.788).
CODA-19's labels have an accuracy of 82.2% when compared to the biomedical
expert's labels, while the accuracy between experts was 85.0%. Reliable human
annotations help scientists access and integrate the rapidly accelerating
coronavirus literature, and also serve as the battery of AI/NLP research, but
obtaining expert annotations can be slow. We demonstrated that a non-expert
crowd can be rapidly employed at scale to join the fight against COVID-19.
| 2,020 | Computation and Language |
Automated Personalized Feedback Improves Learning Gains in an
Intelligent Tutoring System | We investigate how automated, data-driven, personalized feedback in a
large-scale intelligent tutoring system (ITS) improves student learning
outcomes. We propose a machine learning approach to generate personalized
feedback, which takes individual needs of students into account. We utilize
state-of-the-art machine learning and natural language processing techniques to
provide the students with personalized hints, Wikipedia-based explanations, and
mathematical hints. Our model is used in Korbit, a large-scale dialogue-based
ITS with thousands of students launched in 2019, and we demonstrate that the
personalized feedback leads to considerable improvement in student learning
outcomes and in the subjective evaluation of the feedback.
| 2,020 | Computation and Language |
Contextualizing Hate Speech Classifiers with Post-hoc Explanation | Hate speech classifiers trained on imbalanced datasets struggle to determine
if group identifiers like "gay" or "black" are used in offensive or prejudiced
ways. Such biases manifest in false positives when these identifiers are
present, due to models' inability to learn the contexts which constitute a
hateful usage of identifiers. We extract SOC post-hoc explanations from
fine-tuned BERT classifiers to efficiently detect bias towards identity terms.
Then, we propose a novel regularization technique based on these explanations
that encourages models to learn from the context of group identifiers in
addition to the identifiers themselves. Our approach improved over baselines in
limiting false positives on out-of-domain data while maintaining or improving
in-domain performance. Project page:
https://inklab.usc.edu/contextualize-hate-speech/.
| 2,020 | Computation and Language |
Russian Natural Language Generation: Creation of a Language Modelling
Dataset and Evaluation with Modern Neural Architectures | Generating coherent, grammatically correct, and meaningful text is very
challenging; however, it is crucial to many modern NLP systems. So far,
research has mostly focused on the English language; for other languages, both
standardized datasets and experiments with state-of-the-art models are
rare. In this work, we i) provide a novel reference dataset for Russian
language modeling, ii) experiment with popular modern methods for text
generation, namely variational autoencoders, and generative adversarial
networks, which we trained on the new dataset. We evaluate the generated text
regarding metrics such as perplexity, grammatical correctness and lexical
diversity.
| 2,020 | Computation and Language |
Efficient strategies for hierarchical text classification: External
knowledge and auxiliary tasks | In hierarchical text classification, we perform a sequence of inference steps
to predict the category of a document from top to bottom of a given class
taxonomy. Most of the studies have focused on developing novel neural network
architectures to deal with the hierarchical structure, but we prefer to look
for efficient ways to strengthen a baseline model. We first define the task as
a sequence-to-sequence problem. Afterwards, we propose an auxiliary synthetic
task of bottom-up-classification. Then, from external dictionaries, we retrieve
textual definitions for the classes of all the hierarchy's layers, and map them
into the word vector space. We use the class-definition embeddings as an
additional input to condition the prediction of the next layer and in an
adapted beam search. Whereas the modified search did not provide large gains,
the combination of the auxiliary task and the additional input of
class-definitions significantly enhances the classification accuracy. With our
efficient approaches, we outperform previous studies, using a drastically
reduced number of parameters, in two well-known English datasets.
| 2,020 | Computation and Language |
MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering
Models | Retrieval question answering (ReQA) is the task of retrieving a
sentence-level answer to a question from an open corpus (Ahmad et
al., 2019). This paper presents MultiReQA, a new multi-domain ReQA evaluation
suite composed of eight retrieval QA tasks drawn from publicly available QA
datasets. We provide the first systematic retrieval-based evaluation over these
datasets using two supervised neural models, based on fine-tuning BERT
and USE-QA models respectively, as well as a surprisingly strong information
retrieval baseline, BM25. Five of these tasks contain both training and test
data, while three contain test data only. Performance on the five tasks with
training data shows that while a general model covering all domains is
achievable, the best performance is often obtained by training exclusively on
in-domain data.
| 2,020 | Computation and Language |
Phonetic and Visual Priors for Decipherment of Informal Romanization | Informal romanization is an idiosyncratic process used by humans in informal
digital communication to encode non-Latin script languages into Latin character
sets found on common keyboards. Character substitution choices differ between
users but have been shown to be governed by the same main principles observed
across a variety of languages---namely, character pairs are often associated
through phonetic or visual similarity. We propose a noisy-channel WFST cascade
model for deciphering the original non-Latin script from observed romanized
text in an unsupervised fashion. We train our model directly on romanized data
from two languages: Egyptian Arabic and Russian. We demonstrate that adding
inductive bias through phonetic and visual priors on character mappings
substantially improves the model's performance on both languages, yielding
results much closer to the supervised skyline. Finally, we introduce a new
dataset of romanized Russian, collected from a Russian social network website
and partially annotated for our experiments.
| 2,020 | Computation and Language |
The Cascade Transformer: an Application for Efficient Answer Sentence
Selection | Large transformer-based language models have been shown to be very effective
in many classification tasks. However, their computational complexity prevents
their use in applications requiring the classification of a large set of
candidates. While previous works have investigated approaches to reduce model
size, relatively little attention has been paid to techniques to improve batch
throughput during inference. In this paper, we introduce the Cascade
Transformer, a simple yet effective technique to adapt transformer-based models
into a cascade of rankers. Each ranker is used to prune a subset of candidates
in a batch, thus dramatically increasing throughput at inference time. Partial
encodings from the transformer model are shared among rerankers, providing
further speed-up. When compared to a state-of-the-art transformer model, our
approach reduces computation by 37% with almost no impact on accuracy, as
measured on two English Question Answering datasets.
| 2,020 | Computation and Language |
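The cascade idea, pruning a fraction of the batch with cheap scores before running the full model, can be sketched independently of the transformer details. The scoring functions below are placeholders for rankers built on partial encodings; the keep fractions are illustrative, not the paper's schedule.

```python
# Hedged sketch of cascaded candidate pruning. score_fns stand in for rankers
# attached at increasingly deep transformer layers (cheapest first).
from typing import Callable, List

def cascade_rank(candidates: List[str],
                 score_fns: List[Callable[[str], float]],
                 keep_fracs: List[float]) -> List[str]:
    survivors = candidates
    for score_fn, frac in zip(score_fns, keep_fracs):
        scored = sorted(survivors, key=score_fn, reverse=True)
        keep = max(1, int(len(scored) * frac))
        survivors = scored[:keep]          # prune the rest before the next stage
    return survivors

# Toy usage: length as a stand-in for a cheap ranker, vowel count for a later one.
cands = ["short answer", "a much longer candidate answer sentence", "mid answer"]
print(cascade_rank(cands,
                   [len, lambda s: sum(c in "aeiou" for c in s)],
                   [0.7, 0.5]))
```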
Speak to your Parser: Interactive Text-to-SQL with Natural Language
Feedback | We study the task of semantic parse correction with natural language
feedback. Given a natural language utterance, most semantic parsing systems
pose the problem as one-shot translation where the utterance is mapped to a
corresponding logical form. In this paper, we investigate a more interactive
scenario where humans can further interact with the system by providing
free-form natural language feedback to correct the system when it generates an
inaccurate interpretation of an initial utterance. We focus on natural language
to SQL systems and construct SPLASH, a dataset of utterances, incorrect SQL
interpretations and the corresponding natural language feedback. We compare
various reference models for the correction task and show that incorporating
such a rich form of feedback can significantly improve the overall semantic
parsing accuracy while retaining the flexibility of natural language
interaction. While we estimate that human correction accuracy is 81.5%, our best
model achieves only 25.1%, which leaves a large gap for improvement in future
research. SPLASH is publicly available at https://aka.ms/Splash_dataset.
| 2,020 | Computation and Language |
Crossing Variational Autoencoders for Answer Retrieval | Answer retrieval is the task of finding the most aligned answer from a large set of
candidates given a question. Learning vector representations of
questions/answers is the key factor. Question-answer alignment and
question/answer semantics are two important signals for learning the
representations. Existing methods learned semantic representations with dual
encoders or dual variational auto-encoders. The semantic information was
learned from language models or question-to-question (answer-to-answer)
generative processes. However, alignment and semantics were modeled too separately to
capture the aligned semantics between question and answer. In this work, we
propose to cross variational auto-encoders by generating questions with aligned
answers and generating answers with aligned questions. Experiments show that
our method outperforms the state-of-the-art answer retrieval method on SQuAD.
| 2,020 | Computation and Language |
Moving Down the Long Tail of Word Sense Disambiguation with
Gloss-Informed Biencoders | A major obstacle in Word Sense Disambiguation (WSD) is that word senses are
not uniformly distributed, causing existing models to generally perform poorly
on senses that are either rare or unseen during training. We propose a
bi-encoder model that independently embeds (1) the target word with its
surrounding context and (2) the dictionary definition, or gloss, of each sense.
The encoders are jointly optimized in the same representation space, so that
sense disambiguation can be performed by finding the nearest sense embedding
for each target word embedding. Our system outperforms previous
state-of-the-art models on English all-words WSD; these gains predominantly
come from improved performance on rare senses, leading to a 31.1% error
reduction on less frequent senses over prior work. This demonstrates that rare
senses can be more effectively disambiguated by modeling their definitions.
| 2,020 | Computation and Language |
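The scoring rule of the bi-encoder, choosing the sense whose gloss embedding is nearest to the contextual target-word embedding, is easy to sketch. The embedding function below is a toy deterministic stand-in (hashed random projections), not the jointly trained BERT encoders used in the paper, so its output on the example is arbitrary; only the decision rule is the point.

```python
# Hedged sketch of gloss-informed bi-encoder scoring for WSD: embed the context
# and each sense gloss, score by dot product, take the argmax.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a learned encoder: deterministic hashed random vector."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    vec = np.random.default_rng(seed).normal(size=dim)
    return vec / np.linalg.norm(vec)

def disambiguate(context: str, glosses: dict) -> str:
    """glosses: sense id -> dictionary definition. Returns the predicted sense."""
    ctx = embed(context)
    scores = {sense: float(ctx @ embed(gloss)) for sense, gloss in glosses.items()}
    return max(scores, key=scores.get)

senses = {
    "bank%1": "a financial institution that accepts deposits",
    "bank%2": "sloping land beside a body of water",
}
print(disambiguate("she sat on the bank of the river", senses))
```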
Building A User-Centric and Content-Driven Socialbot | To build Sounding Board, we develop a system architecture that is capable of
accommodating dialog strategies that we designed for socialbot conversations.
The architecture consists of a multi-dimensional language understanding module
for analyzing user utterances, a hierarchical dialog management framework for
dialog context tracking and complex dialog control, and a language generation
process that realizes the response plan and makes adjustments for speech
synthesis. Additionally, we construct a new knowledge base to power the
socialbot by collecting social chat content from a variety of sources. An
important contribution of the system is the synergy between the knowledge base
and the dialog management, i.e., the use of a graph structure to organize the
knowledge base that makes dialog control very efficient in bringing related
content to the discussion. Using the data collected from Sounding Board during
the competition, we carry out in-depth analyses of socialbot conversations and
user ratings which provide valuable insights into evaluation methods for
socialbots. We additionally investigate a new approach for system evaluation
and diagnosis that allows scoring individual dialog segments in the
conversation. Finally, observing that socialbots suffer from the issue of
shallow conversations about topics associated with unstructured data, we study
the problem of enabling extended socialbot conversations grounded on a
document. To bring together machine reading and dialog control techniques, a
graph-based document representation is proposed, together with methods for
automatically constructing the graph. Using the graph-based representation,
dialog control can be carried out by retrieving nodes or moving along edges in
the graph. To illustrate the usage, a mixed-initiative dialog strategy is
designed for socialbot conversations on news articles.
| 2,020 | Computation and Language |
A Top-Down Neural Architecture towards Text-Level Parsing of Discourse
Rhetorical Structure | Due to its great importance in deep natural language understanding and
various down-stream applications, text-level parsing of discourse rhetorical
structure (DRS) has been drawing more and more attention in recent years.
However, all the previous studies on text-level discourse parsing adopt
bottom-up approaches, which largely limit DRS determination to local
information and fail to fully benefit from the global information of the overall
discourse. In this paper, we justify from both computational and perceptive
points-of-view that the top-down architecture is more suitable for text-level
DRS parsing. On the basis, we propose a top-down neural architecture toward
text-level DRS parsing. In particular, we cast discourse parsing as a recursive
split point ranking task, where a split point is classified to different levels
according to its rank and the elementary discourse units (EDUs) associated with
it are arranged accordingly. In this way, we can determine the complete DRS as
a hierarchical tree structure via an encoder-decoder with an internal stack.
Experimentation on both the English RST-DT corpus and the Chinese CDTB corpus
shows the great effectiveness of our proposed top-down approach towards
text-level DRS parsing.
| 2,021 | Computation and Language |
Shape of synth to come: Why we should use synthetic data for English
surface realization | The Surface Realization Shared Tasks of 2018 and 2019 were Natural Language
Generation shared tasks with the goal of exploring approaches to surface
realization from Universal-Dependency-like trees to surface strings for several
languages. In the 2018 shared task there was very little difference in the
absolute performance of systems trained with and without additional,
synthetically created data, and a new rule prohibiting the use of synthetic
data was introduced for the 2019 shared task. Contrary to the findings of the
2018 shared task, we show, in experiments on the English 2018 dataset, that the
use of synthetic data can have a substantial positive effect - an improvement
of almost 8 BLEU points for a previously state-of-the-art system. We analyse
the effects of synthetic data, and we argue that its use should be encouraged
rather than prohibited so that future research efforts continue to explore
systems that can take advantage of such data.
| 2,020 | Computation and Language |
Learning to Understand Child-directed and Adult-directed Speech | Speech directed to children differs from adult-directed speech in linguistic
aspects such as repetition, word choice, and sentence length, as well as in
aspects of the speech signal itself, such as prosodic and phonemic variation.
Human language acquisition research indicates that child-directed speech helps
language learners. This study explores the effect of child-directed speech when
learning to extract semantic information from speech directly. We compare the
task performance of models trained on adult-directed speech (ADS) and
child-directed speech (CDS). We find indications that CDS helps in the initial
stages of learning, but eventually, models trained on ADS reach comparable task
performance, and generalize better. The results suggest that this is at least
partially due to linguistic rather than acoustic properties of the two
registers, as we see the same pattern when looking at models trained on
acoustically comparable synthetic speech.
| 2,021 | Computation and Language |
Unsupervised Neural Aspect Search with Related Terms Extraction | The tasks of aspect identification and term extraction remain challenging in
natural language processing. While supervised methods achieve high scores, it
is hard to use them in real-world applications due to the lack of labelled
datasets. Unsupervised approaches outperform these methods on several tasks,
but it is still a challenge to extract both an aspect and a corresponding term,
particularly in the multi-aspect setting. In this work, we present a novel
unsupervised neural network with a convolutional multi-attention mechanism that
allows extracting (aspect, term) pairs simultaneously, and demonstrate its
effectiveness on a real-world dataset. We apply a special loss aimed at
improving the quality of multi-aspect extraction. The experimental study
demonstrates that with this loss we increase precision not only in this
joint setting but also for aspect prediction alone.
| 2,020 | Computation and Language |
An Empirical Study of Multi-Task Learning on BERT for Biomedical Text
Mining | Multi-task learning (MTL) has achieved remarkable success in natural language
processing applications. In this work, we study a multi-task learning model
with multiple decoders on varieties of biomedical and clinical natural language
processing tasks such as text similarity, relation extraction, named entity
recognition, and text inference. Our empirical results demonstrate that the MTL
fine-tuned models outperform state-of-the-art transformer models (e.g., BERT
and its variants) by 2.0% and 1.3% in biomedical and clinical domains,
respectively. Pairwise MTL further demonstrates more details about which tasks
can improve or degrade others. This is particularly helpful when
researchers face the difficulty of choosing a suitable model for new
problems. The code and models are publicly available at
https://github.com/ncbi-nlp/bluebert
| 2,020 | Computation and Language |
Digraphia of West African languages: Latin2Ajami: an automatic
transliteration algorithm | The national languages of Senegal, like those of West African countries in
general, are written with two alphabets: the Latin alphabet, which draws its
strength from official decrees, and the supplemented Arabic script (Ajami),
widespread and well integrated, which has little institutional support. This
digraphia has created two worlds that ignore each other. Indeed, Ajami writing is
generally used daily by populations educated in Koranic schools, while writing with
the Latin alphabet is used by people from the public school system. To solve this
problem, it is useful to establish transliteration tools between these two
scripts. Preliminary work (Nguer, Bao-Diop, Fall, Khoule, 2015) was
performed to identify the problems, challenges and prospects. The present work
follows on from that effort. Its objective is the study and creation
of a transliteration algorithm from Latin to Ajami.
| 2,020 | Computation and Language |
TAG : Type Auxiliary Guiding for Code Comment Generation | Existing leading code comment generation approaches with the
structure-to-sequence framework ignore the type information in the
interpretation of the code, e.g., operator, string, etc. However, introducing
the type information into the existing framework is non-trivial due to the
hierarchical dependence among the type information. In order to address the
issues above, we propose a Type Auxiliary Guiding encoder-decoder framework for
the code comment generation task which considers the source code as an N-ary
tree with type information associated with each node. Specifically, our
framework features a Type-associated Encoder and a Type-restricted
Decoder which enables adaptive summarization of the source code. We further
propose a hierarchical reinforcement learning method to resolve the training
difficulties of our proposed framework. Extensive evaluations demonstrate the
state-of-the-art performance of our framework with both the auto-evaluated
metrics and case studies.
| 2,020 | Computation and Language |
TripPy: A Triple Copy Strategy for Value Independent Neural Dialog State
Tracking | Task-oriented dialog systems rely on dialog state tracking (DST) to monitor
the user's goal during the course of an interaction. Multi-domain and
open-vocabulary settings complicate the task considerably and demand scalable
solutions. In this paper we present a new approach to DST which makes use of
various copy mechanisms to fill slots with values. Our model has no need to
maintain a list of candidate values. Instead, all values are extracted from the
dialog context on-the-fly. A slot is filled by one of three copy mechanisms:
(1) Span prediction may extract values directly from the user input; (2) a
value may be copied from a system inform memory that keeps track of the
system's inform operations; (3) a value may be copied over from a different
slot that is already contained in the dialog state to resolve coreferences
within and across domains. Our approach combines the advantages of span-based
slot filling methods with memory methods to avoid the use of value picklists
altogether. We argue that our strategy simplifies the DST task while at the
same time achieving state-of-the-art performance on various popular evaluation
sets including Multiwoz 2.1, where we achieve a joint goal accuracy beyond 55%.
| 2,020 | Computation and Language |
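A rough sketch of the triple-copy decision per slot is given below. The predictor inputs are hypothetical placeholders (the real model scores these choices with classification heads over BERT encodings); only the control flow between the three copy sources mirrors the idea.

```python
# Hedged sketch of TripPy-style slot filling with three copy sources:
# (1) span copied from user input, (2) value copied from the system inform
# memory, (3) value copied from another slot in the dialog state (coreference).
from typing import Dict, Optional

def fill_slot(slot: str,
              inform_memory: Dict[str, str],
              dialog_state: Dict[str, str],
              span_prediction: Optional[str],
              coref_slot: Optional[str]) -> Optional[str]:
    if span_prediction:                      # (1) value extracted from user input
        return span_prediction
    if slot in inform_memory:                # (2) value the system itself offered
        return inform_memory[slot]
    if coref_slot and coref_slot in dialog_state:
        return dialog_state[coref_slot]      # (3) value copied from another slot
    return None

state = {"restaurant-area": "centre"}
print(fill_slot("hotel-area", inform_memory={}, dialog_state=state,
                span_prediction=None, coref_slot="restaurant-area"))
```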
Review of Text Style Transfer Based on Deep Learning | Text style transfer is a topical issue in natural language
processing; it studies how to adapt a text to different specific
situations, audiences and purposes by making certain changes. The style of a
text usually includes many aspects such as morphology, grammar, emotion,
complexity, fluency, tense, tone and so on. Traditional text style
transfer models generally rely on expert knowledge and
hand-designed rules, but with the application of deep learning in the field of
natural language processing, text style transfer methods based on deep
learning have started to be heavily researched, and text style transfer
has become a prominent topic in natural language processing research. This article
summarizes recent research on deep-learning-based text style transfer models,
and summarizes, analyzes and compares the main research
directions and progress. In addition, the article also introduces public
datasets and evaluation metrics commonly used for text style transfer. Finally,
the existing characteristics of text style transfer models are summarized,
and the future development trend of deep-learning-based text style transfer
models is analyzed and forecast.
| 2,021 | Computation and Language |
Harvesting and Refining Question-Answer Pairs for Unsupervised QA | Question Answering (QA) has shown great success thanks to the availability of
large-scale datasets and the effectiveness of neural models. Recent research
works have attempted to extend these successes to the settings with few or no
labeled data available. In this work, we introduce two approaches to improve
unsupervised QA. First, we harvest lexically and syntactically divergent
questions from Wikipedia to automatically construct a corpus of question-answer
pairs (named RefQA). Second, we take advantage of the QA model to extract
more appropriate answers, which iteratively refines data over RefQA. We conduct
experiments on SQuAD 1.1 and NewsQA by fine-tuning BERT without access to
manually annotated data. Our approach outperforms previous unsupervised
approaches by a large margin and is competitive with early supervised models.
We also show the effectiveness of our approach in the few-shot learning
setting.
| 2,020 | Computation and Language |
Multitask Models for Supervised Protests Detection in Texts | The CLEF 2019 ProtestNews Lab tasks participants to identify text relating to
political protests within larger corpora of news data. Three tasks include
article classification, sentence detection, and event extraction. I apply
multitask neural networks capable of producing predictions for two and three of
these tasks simultaneously. The multitask framework allows the model to learn
relevant features from the training data of all three tasks. This paper
demonstrates performance near or above the reported state-of-the-art for
automated political event coding though noted differences in research design
make direct comparisons difficult.
| 2,019 | Computation and Language |
Seeing the Forest and the Trees: Detection and Cross-Document
Coreference Resolution of Militarized Interstate Disputes | Previous efforts to automate the detection of social and political events in
text have primarily focused on identifying events described within single
sentences or documents. Within a corpus of documents, these automated systems
are unable to link event references -- recognize singular events across
multiple sentences or documents. A separate literature in computational
linguistics on event coreference resolution attempts to link known events to
one another within (and across) documents. I provide a data set for evaluating
methods to identify certain political events in text and to link related texts
to one another based on shared events. The data set, Headlines of War, is built
on the Militarized Interstate Disputes data set and offers headlines classified
by dispute status and headline pairs labeled with coreference indicators.
Additionally, I introduce a model capable of accomplishing both tasks. The
multi-task convolutional neural network is shown to be capable of recognizing
events and event coreferences given the headlines' texts and publication dates.
| 2,020 | Computation and Language |
What are the Goals of Distributional Semantics? | Distributional semantic models have become a mainstay in NLP, providing
useful features for downstream tasks. However, assessing long-term progress
requires explicit long-term goals. In this paper, I take a broad linguistic
perspective, looking at how well current models can deal with various semantic
challenges. Given stark differences between models proposed in different
subfields, a broad perspective is needed to see how we could integrate them. I
conclude that, while linguistic insights can guide the design of model
architectures, future progress will require balancing the often conflicting
demands of linguistic expressiveness and computational tractability.
| 2,020 | Computation and Language |
PeTra: A Sparsely Supervised Memory Model for People Tracking | We propose PeTra, a memory-augmented neural network designed to track
entities in its memory slots. PeTra is trained using sparse annotation from the
GAP pronoun resolution dataset and outperforms a prior memory model on the task
while using a simpler architecture. We empirically compare key modeling
choices, finding that we can simplify several aspects of the design of the
memory module while retaining strong performance. To measure the people
tracking capability of memory models, we (a) propose a new diagnostic
evaluation based on counting the number of unique entities in text, and (b)
conduct a small scale human evaluation to compare evidence of people tracking
in the memory logs of PeTra relative to a previous approach. PeTra is highly
effective in both evaluations, demonstrating its ability to track people in its
memory despite being trained with limited annotation.
| 2,020 | Computation and Language |