Titles | Abstracts | Years | Categories
---|---|---|---|
Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine
Translation | We propose Semantic Parser Localizer (SPL), a toolkit that leverages Neural
Machine Translation (NMT) systems to localize a semantic parser for a new
language. Our methodology is to (1) generate training data automatically in the
target language by augmenting machine-translated datasets with local entities
scraped from public websites, (2) add a few-shot boost of human-translated
sentences and train a novel XLMR-LSTM semantic parser, and (3) test the model
on natural utterances curated using human translators.
We assess the effectiveness of our approach by extending the current
capabilities of Schema2QA, a system for English Question Answering (QA) on the
open web, to 10 new languages for the restaurants and hotels domains. Our
models achieve an overall test accuracy ranging between 61% and 69% for the
hotels domain and between 64% and 78% for the restaurants domain, which compares
favorably to the 69% and 80% obtained for an English parser trained on gold English
data and a few examples from the validation set. We show that our approach outperforms
the previous state-of-the-art methodology by more than 30% for hotels and 40%
for restaurants with localized ontologies for the subset of languages tested.
Our methodology enables any software developer to add a new language
capability to a QA system for a new domain, leveraging machine translation, in
less than 24 hours.
| 2020 | Computation and Language |
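To make the SPL localization step above concrete, here is a minimal, hypothetical sketch of generating target-language training data by machine-translating a templated utterance and re-grounding it with locally scraped entities. The placeholder scheme, the `translate` callable, and the entity list are our assumptions, not the released toolkit's API.
```python
# Hypothetical sketch of SPL-style data localization: translate an English
# utterance with a placeholder protecting the entity, then substitute a
# locally scraped entity. All names here are illustrative.
import random

def localize_example(utterance_en, local_entities, translate):
    """Translate an English utterance and re-ground it with a local entity."""
    templated = utterance_en.replace("Hilton", "ENTITY_0")  # shield the entity
    translated = translate(templated)           # e.g. an en->it NMT call
    entity = random.choice(local_entities)      # scraped from public websites
    return translated.replace("ENTITY_0", entity)

italian_hotels = ["Hotel Santa Croce", "Albergo del Senato"]
print(localize_example(
    "find a hotel like Hilton near me",
    italian_hotels,
    translate=lambda s: s,  # identity stand-in for a real NMT system
))
```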
Hierarchical Evidence Set Modeling for Automated Fact Extraction and
Verification | Automated fact extraction and verification is a challenging task that
involves finding relevant evidence sentences from a reliable corpus to verify
the truthfulness of a claim. Existing models either (i) concatenate all the
evidence sentences, leading to the inclusion of redundant and noisy
information; or (ii) process each claim-evidence sentence pair separately and
aggregate all of them later, missing the early combination of related sentences
for more accurate claim verification. Unlike prior work, in this paper, we
propose Hierarchical Evidence Set Modeling (HESM), a framework to extract
evidence sets (each of which may contain multiple evidence sentences) and
verify whether a claim is supported, refuted, or has not enough info, by encoding
and attending to the claim and evidence sets at different levels of the hierarchy. Our
experimental results show that HESM outperforms 7 state-of-the-art methods for
fact extraction and claim verification. Our source code is available at
https://github.com/ShyamSubramanian/HESM.
| 2020 | Computation and Language |
SJTU-NICT's Supervised and Unsupervised Neural Machine Translation
Systems for the WMT20 News Translation Task | In this paper, we describe our joint team SJTU-NICT's participation in the
WMT 2020 machine translation shared task. We participated
in four translation directions across three language pairs: English-Chinese and
English-Polish on the supervised machine translation track, and German-Upper Sorbian
on the low-resource and unsupervised machine translation tracks. Based on the different
conditions of these language pairs, we experimented with diverse neural machine
translation (NMT) techniques: document-enhanced NMT, XLM pre-trained language
model enhanced NMT, bidirectional translation as a pre-training, reference
language based UNMT, a data-dependent Gaussian prior objective, and BT-BLEU
collaborative filtering self-training. We also used the TF-IDF algorithm to
filter the training set for a subset whose domain is more similar to the test set,
for fine-tuning. In our submissions, the primary systems won first place in the
English to Chinese, Polish to English, and German to Upper Sorbian translation
directions.
| 2020 | Computation and Language |
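The TF-IDF domain filtering mentioned in the abstract above can be sketched as follows; the paper does not specify an implementation, so the scikit-learn usage and the centroid-similarity criterion here are our assumptions.
```python
# Sketch: keep the training sentences whose TF-IDF vectors are most similar
# to the centroid of the test set, approximating domain filtering.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_by_domain(train_sents, test_sents, keep=1000):
    """Keep the `keep` training sentences most similar to the test domain."""
    vec = TfidfVectorizer().fit(train_sents + test_sents)
    train_m = vec.transform(train_sents)
    # Represent the test-set domain as the centroid of its TF-IDF vectors.
    centroid = np.asarray(vec.transform(test_sents).mean(axis=0))
    sims = cosine_similarity(train_m, centroid).ravel()
    top = sims.argsort()[::-1][:keep]
    return [train_sents[i] for i in top]
```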
Document-Level Definition Detection in Scholarly Documents: Existing
Models, Error Analyses, and Future Directions | The task of definition detection is important for scholarly papers, because
papers often make use of technical terminology that may be unfamiliar to
readers. Despite prior work on definition detection, current approaches are far
from being accurate enough to use in real-world applications. In this paper, we
first perform in-depth error analysis of the current best performing definition
detection system and discover major causes of errors. Based on this analysis,
we develop a new definition detection system, HEDDEx, that utilizes syntactic
features, transformer encoders, and heuristic filters, and evaluate it on a
standard sentence-level benchmark. Because current benchmarks evaluate randomly
sampled sentences, we propose an alternative evaluation that assesses every
sentence within a document. This allows for evaluating recall in addition to
precision. HEDDEx outperforms the leading system on both the sentence-level and
the document-level tasks, by 12.7 F1 points and 14.4 F1 points, respectively.
We note that performance on the high-recall document-level task is much lower
than in the standard evaluation approach, due to the necessity of incorporating
document structure as a feature. We discuss remaining challenges in
document-level definition detection, ideas for improvements, and potential
issues for the development of reading aid applications.
| 2020 | Computation and Language |
CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural
Summarization Systems | Neural network-based models augmented with unsupervised pre-trained knowledge
have achieved impressive performance on text summarization. However, most
existing evaluation methods are limited to an in-domain setting, where
summarizers are trained and evaluated on the same dataset. We argue that this
approach can narrow our understanding of the generalization ability for
different summarization systems. In this paper, we perform an in-depth analysis
of characteristics of different datasets and investigate the performance of
different summarization models under a cross-dataset setting, in which a
summarizer trained on one corpus will be evaluated on a range of out-of-domain
corpora. A comprehensive study of 11 representative summarization systems on 5
datasets from different domains reveals the effect of model architectures and
generation paradigms (i.e., abstractive and extractive) on model generalization
ability. Further, experimental results shed light on the limitations of
existing summarizers. A brief introduction and supplementary code can be found at
https://github.com/zide05/CDEvalSumm.
| 2020 | Computation and Language |
Plan ahead: Self-Supervised Text Planning for Paragraph Completion Task | Despite the recent success of contextualized language models on various NLP
tasks, a language model by itself cannot capture the textual coherence of a long,
multi-sentence document (e.g., a paragraph). Humans often make structural
decisions about what to say, and how, before making utterances. Guiding
surface realization with such high-level decisions and structuring text in a
coherent way is essentially a planning process. Where can a model
learn such high-level coherence? A paragraph itself contains various forms of
inductive coherence signals, called self-supervision in this work, such as
sentence order, topical keywords, rhetorical structures, and so on. Motivated
by this, this work proposes a new paragraph completion task, PARCOM: predicting
masked sentences in a paragraph. The task is challenging, however, as it requires
predicting and selecting appropriate topical content with respect to the given context. To
address that, we propose a self-supervised text planner SSPlanner that predicts
what to say first (content prediction), then guides the pretrained language
model (surface realization) using the predicted content. SSPlanner outperforms
the baseline generation models on the paragraph completion task in both
automatic and human evaluation. We also find that a combination of noun and
verb keywords is the most effective for content selection. As more
content keywords are provided, the overall generation quality also
increases.
| 2020 | Computation and Language |
PHICON: Improving Generalization of Clinical Text De-identification
Models via Data Augmentation | De-identification is the task of identifying protected health information
(PHI) in clinical text. Existing neural de-identification models often fail
to generalize to a new dataset. We propose a simple yet effective data
augmentation method PHICON to alleviate the generalization issue. PHICON
consists of PHI augmentation and Context augmentation, which creates augmented
training corpora by replacing PHI entities with named-entities sampled from
external sources, and by changing background context with synonym replacement
or random word insertion, respectively. Experimental results on the i2b2 2006
and 2014 de-identification challenge datasets show that PHICON can help three
selected de-identification models boost F1-score (by up to 8.6%) in the
cross-dataset test setting. We also discuss how much augmentation to use and
how each augmentation method influences the performance.
| 2020 | Computation and Language |
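A hedged sketch of the two PHICON operations described above: PHI augmentation by replacing annotated PHI spans with externally sampled names, and context augmentation by random word insertion. The name and filler-word lists are illustrative, not from the paper.
```python
# Toy implementation of the two augmentation operations.
import random

EXTERNAL_NAMES = ["Jane Doe", "John Smith"]   # e.g. sampled from census lists
FILLER_WORDS = ["currently", "reportedly"]

def phi_augment(tokens, phi_spans):
    """Replace each annotated PHI span with a sampled external entity."""
    out, i = [], 0
    for start, end in phi_spans:
        out += tokens[i:start] + random.choice(EXTERNAL_NAMES).split()
        i = end
    return out + tokens[i:]

def context_augment(tokens, p=0.1):
    """Randomly insert filler words into the background context."""
    out = []
    for tok in tokens:
        out.append(tok)
        if random.random() < p:
            out.append(random.choice(FILLER_WORDS))
    return out
```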
Safe Reinforcement Learning with Natural Language Constraints | While safe reinforcement learning (RL) holds great promise for many practical
applications like robotics or autonomous cars, current approaches require
specifying constraints in mathematical form. Such specifications demand domain
expertise, limiting the adoption of safe RL. In this paper, we propose learning
to interpret natural language constraints for safe RL. To this end, we first
introduce HazardWorld, a new multi-task benchmark that requires an agent to
optimize reward while not violating constraints specified in free-form text. We
then develop an agent with a modular architecture that can interpret and adhere
to such textual constraints while learning new tasks. Our model consists of (1)
a constraint interpreter that encodes textual constraints into spatial and
temporal representations of forbidden states, and (2) a policy network that
uses these representations to produce a policy achieving minimal constraint
violations during training. Across different domains in HazardWorld, we show
that our method achieves higher rewards (up to 11x) and fewer constraint
violations (by 1.8x) compared to existing approaches. However, in terms of
absolute performance, HazardWorld still poses significant challenges for agents
to learn efficiently, motivating the need for future work.
| 2021 | Computation and Language |
A General Model of Conversational Dynamics and an Example Application in
Serious Illness Communication | Conversation has been a primary means for the exchange of information since
ancient times. Understanding patterns of information flow in conversations is a
critical step in assessing and improving communication quality. In this paper,
we describe COnversational DYnamics Model (CODYM) analysis, a novel approach
for studying patterns of information flow in conversations. CODYMs are Markov
Models that capture sequential dependencies in the lengths of speaker turns.
The proposed method is automated and scalable, and preserves the privacy of the
conversational participants. The primary function of CODYM analysis is to
quantify and visualize patterns of information flow, concisely summarized over
sequential turns from one or more conversations. Our approach is general and
complements existing methods, providing a new tool for use in the analysis of
any type of conversation. As an important first application, we demonstrate the
model on transcribed conversations between palliative care clinicians and
seriously ill patients. These conversations are dynamic and complex, taking
place amidst heavy emotions, and include difficult topics such as end-of-life
preferences and patient values. We perform a versatile set of CODYM analyses
that (a) establish the validity of the model by confirming known patterns of
conversational turn-taking and word usage, (b) identify normative patterns of
information flow in serious illness conversations, and (c) show how these
patterns vary across narrative time and differ under expressions of anger, fear
and sadness. Potential applications of CODYMs range from assessment and
training of effective healthcare communication to comparing conversational
dynamics across language and culture, with the prospect of identifying
universal similarities and unique "fingerprints" of information flow.
| 2021 | Computation and Language |
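As we read the CODYM abstract above, the core object is a Markov model over speaker-turn lengths. A minimal sketch follows, with bin boundaries that are our own assumption.
```python
# Discretize turn lengths into bins and estimate a row-normalized Markov
# transition matrix over consecutive turns.
import numpy as np

def turn_length_transitions(turn_lengths, bins=(5, 15)):
    """Return a transition matrix over turn-length bins (short/medium/long)."""
    states = np.digitize(turn_lengths, bins)
    n = len(bins) + 1
    counts = np.zeros((n, n))
    for a, b in zip(states, states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

print(turn_length_transitions([3, 22, 7, 30, 2, 12]))
```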
fairseq S2T: Fast Speech-to-Text Modeling with fairseq | We introduce fairseq S2T, a fairseq extension for speech-to-text (S2T)
modeling tasks such as end-to-end speech recognition and speech-to-text
translation. It follows fairseq's careful design for scalability and
extensibility. We provide end-to-end workflows from data pre-processing and model
training to offline (and online) inference. We implement state-of-the-art
RNN-based, Transformer-based as well as Conformer-based models and open-source
detailed training recipes. Fairseq's machine translation models and language
models can be seamlessly integrated into S2T workflows for multi-task learning
or transfer learning. Fairseq S2T documentation and examples are available at
https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text.
| 2022 | Computation and Language |
Learning Adaptive Language Interfaces through Decomposition | Our goal is to create an interactive natural language interface that
efficiently and reliably learns from users to complete tasks in simulated
robotics settings. We introduce a neural semantic parsing system that learns
new high-level abstractions through decomposition: users interactively teach
the system by breaking down high-level utterances describing novel behavior
into low-level steps that it can understand. Unfortunately, existing methods
either rely on grammars which parse sentences with limited flexibility, or
neural sequence-to-sequence models that do not learn efficiently or reliably
from individual examples. Our approach bridges this gap, demonstrating the
flexibility of modern neural systems, as well as the one-shot reliable
generalization of grammar-based methods. Our crowdsourced interactive
experiments suggest that over time, users complete complex tasks more
efficiently while using our system by leveraging what they just taught. At the
same time, getting users to trust the system enough to be incentivized to teach
high-level utterances is still an ongoing challenge. We end with a discussion
of some of the obstacles we need to overcome to fully realize the potential of
the interactive paradigm.
| 2020 | Computation and Language |
Lexically Cohesive Neural Machine Translation with Copy Mechanism | Lexically cohesive translations preserve consistency in word choices in
document-level translation. We incorporate a copy mechanism into a context-aware
neural machine translation model to allow copying words from previous
translation outputs. Different from previous context-aware neural machine
translation models that handle all the discourse phenomena implicitly, our
model explicitly addresses the lexical cohesion problem by boosting the
probabilities to output words consistently. We conduct experiments on Japanese
to English translation using an evaluation dataset for discourse translation.
The results show that the proposed model significantly improves lexical
cohesion compared to previous context-aware models.
| 2020 | Computation and Language |
Detecting Foodborne Illness Complaints in Multiple Languages Using
English Annotations Only | Health departments have been deploying text classification systems for the
early detection of foodborne illness complaints in social media documents such
as Yelp restaurant reviews. Current systems have been successfully applied for
documents in English and, as a result, a promising direction is to increase
coverage and recall by considering documents in additional languages, such as
Spanish or Chinese. Training previous systems for more languages, however,
would be expensive, as it would require the manual annotation of many documents
for each new target language. To address this challenge, we consider
cross-lingual learning and train multilingual classifiers using only the
annotations for English-language reviews. Recent zero-shot approaches based on
pre-trained multi-lingual BERT (mBERT) have been shown to effectively align
languages for aspects such as sentiment. Interestingly, we show that those
approaches are less effective for capturing the nuances of foodborne illness,
our public health application of interest. To improve performance without extra
annotations, we create artificial training documents in the target language
through machine translation and train mBERT jointly for the source (English)
and target language. Furthermore, we show that translating labeled documents to
multiple languages leads to additional performance improvements for some target
languages. We demonstrate the benefits of our approach through extensive
experiments with Yelp restaurant reviews in seven languages. Our classifiers
identify foodborne illness complaints in multilingual reviews from the Yelp
Challenge dataset, which highlights the potential of our general approach for
deployment in health departments.
| 2020 | Computation and Language |
Connecting the Dots Between Fact Verification and Fake News Detection | Fact verification models have enjoyed a fast advancement in the last two
years with the development of pre-trained language models like BERT and the
release of large-scale datasets such as FEVER. However, the challenging and
closely related problem of fake news detection has not benefited from these
improvements in fact verification models. In this
paper, we propose a simple yet effective approach to connect the dots between
fact verification and fake news detection. Our approach first employs a text
summarization model pre-trained on news corpora to summarize the long news
article into a short claim. Then we use a fact verification model pre-trained
on the FEVER dataset to detect whether the input news article is real or fake.
Our approach makes use of the recent success of fact verification models and
enables zero-shot fake news detection, alleviating the need for large-scale
training data to train fake news detection models. Experimental results on
FakeNewsNet, a benchmark dataset for fake news detection, demonstrate the
effectiveness of our proposed approach.
| 2020 | Computation and Language |
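The summarize-then-verify pipeline described above can be approximated with off-the-shelf components; the checkpoints below (a generic summarizer and an MNLI model standing in for a FEVER-trained verifier) are our substitutions, not the authors' exact models.
```python
# Two-stage zero-shot sketch: distill the article into a claim, then score
# the claim against trusted evidence with an NLI classifier.
from transformers import pipeline

summarizer = pipeline("summarization")           # generic summarizer stand-in
verifier = pipeline("text-classification",
                    model="roberta-large-mnli")  # NLI proxy for a FEVER model

def check_article(article, evidence):
    # Stage 1: compress the long news article into a short claim.
    claim = summarizer(article, max_length=60)[0]["summary_text"]
    # Stage 2: verify the claim against the evidence text.
    verdict = verifier({"text": evidence, "text_pair": claim})
    return claim, verdict
```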
Machine Translation of Mathematical Text | We have implemented a machine translation system, the PolyMath Translator,
for LaTeX documents containing mathematical text. The current implementation
translates English LaTeX to French LaTeX, attaining a BLEU score of 53.5 on a
held-out test corpus of mathematical sentences. It produces LaTeX documents
that can be compiled to PDF without further editing. The system first converts
the body of an input LaTeX document into English sentences containing math
tokens, using the pandoc universal document converter to parse LaTeX input. We
have trained a Transformer-based translator model, using OpenNMT, on a combined
corpus containing a small proportion of domain-specific sentences. Our full
system uses both this Transformer model and Google Translate, the latter being
used as a backup to better handle linguistic features that do not appear in our
training dataset. If the Transformer model does not have confidence in its
translation, as determined by a high perplexity score, then we use Google
Translate with a custom glossary. This backup was used 26% of the time on our
test corpus of mathematical sentences. The PolyMath Translator is available as
a web service at www.polymathtrans.ai.
| 2020 | Computation and Language |
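The perplexity-gated backup routing described in the abstract above might look like the following sketch; the threshold value and the model interfaces are illustrative assumptions.
```python
# Route to a generic backup translator when the in-domain model's
# perplexity on its own output is high (low confidence).
import math

def translate_with_backup(src, domain_model, generic_model, ppl_threshold=40.0):
    hyp, logprobs = domain_model(src)   # hypothesis + token log-probabilities
    ppl = math.exp(-sum(logprobs) / max(len(logprobs), 1))
    if ppl > ppl_threshold:             # low confidence: fall back
        return generic_model(src)
    return hyp

# Tiny demo with stand-in callables:
print(translate_with_backup(
    "Soit f une fonction continue",
    domain_model=lambda s: ("Let f be a continuous function", [-0.2, -0.3]),
    generic_model=lambda s: "generic backup translation",
))
```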
Controllable Multi-Character Psychology-Oriented Story Generation | Story generation, which aims to generate a long and coherent story
automatically based on the title or an input sentence, is an important research
area in the field of natural language generation. There is relatively little
work on story generation with appointed emotions. Most existing works focus on
using only one specific emotion to control the generation of a whole story and
ignore the emotional changes in the characters in the course of the story. In
our work, we aim to design an emotional line for each character that considers
multiple emotions common in psychological theories, with the goal of generating
stories with richer emotional changes in the characters. To the best of our
knowledge, this work is the first to focus on characters' emotional lines in
story generation. We present a novel attention-based model that we
call SoCP (Storytelling of multi-Character Psychology). We show that the
proposed model can generate stories considering the changes in the
psychological state of different characters. To take into account the
particularity of the model, in addition to commonly used evaluation
indicators (BLEU, ROUGE, etc.), we introduce the accuracy of psychological
state control as a novel evaluation metric. The new indicator reflects the
effect of the model on the psychological state control of story characters.
Experiments show that with SoCP, the generated stories follow the psychological
state for each character according to both automatic and human evaluations.
| 2021 | Computation and Language |
Towards Accurate and Reliable Energy Measurement of NLP Models | Accurate and reliable measurement of energy consumption is critical for
making well-informed design choices when choosing and training large scale NLP
models. In this work, we show that existing software-based energy measurements
are not accurate because they do not take into account hardware differences and
how resource utilization affects energy consumption. We conduct energy
measurement experiments with four different models for a question answering
task. We quantify the error of existing software-based energy measurements by
using a hardware power meter that provides highly accurate energy measurements.
Our key takeaway is the need for a more accurate energy estimation model that
takes into account hardware variabilities and the non-linear relationship
between resource utilization and energy consumption. We release the code and
data at https://github.com/csarron/sustainlp2020-energy.
| 2020 | Computation and Language |
Few-shot Learning for Multi-label Intent Detection | In this paper, we study the few-shot multi-label classification for user
intent detection. For multi-label intent detection, state-of-the-art work
estimates label-instance relevance scores and uses a threshold to select
multiple associated intent labels. To determine appropriate thresholds with
only a few examples, we first learn universal thresholding experience from
data-rich domains, and then adapt the thresholds to specific few-shot domains
with a calibration based on nonparametric learning. To better compute the
label-instance relevance scores, we introduce label name embeddings as anchor
points in representation space, which refines representations of different
classes to be well-separated from each other. Experiments on two datasets show
that the proposed model significantly outperforms strong baselines in both
one-shot and five-shot settings.
| 2020 | Computation and Language |
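The abstract above selects all labels whose relevance score clears a calibrated threshold. Below is a simplified sketch of that selection, with a toy nonparametric calibration rule of our own devising; the paper's actual procedure is more involved.
```python
# Threshold-based multi-label selection plus a toy calibration that picks
# the threshold best reproducing label counts in the few-shot support set.
import numpy as np

def predict_labels(scores, labels, threshold):
    return [l for l, s in zip(labels, scores) if s >= threshold]

def calibrate_threshold(support_scores, support_label_counts):
    """support_scores: list of score arrays; counts: gold labels per example."""
    cands = np.unique(np.concatenate(support_scores))
    def err(t):
        return sum(abs(int((s >= t).sum()) - k)
                   for s, k in zip(support_scores, support_label_counts))
    return min(cands, key=err)

t = calibrate_threshold([np.array([0.9, 0.4, 0.1])], [2])
print(predict_labels([0.8, 0.5, 0.2], ["play_music", "set_alarm", "chat"], t))
```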
Unsupervised Distillation of Syntactic Information from Contextualized
Word Representations | Contextualized word representations, such as ELMo and BERT, were shown to
perform well on various semantic and syntactic tasks. In this work, we tackle
the task of unsupervised disentanglement between semantics and structure in
neural language representations: we aim to learn a transformation of the
contextualized vectors that discards the lexical semantics but keeps the
structural information. To this end, we automatically generate groups of
sentences which are structurally similar but semantically different, and use a
metric-learning approach to learn a transformation that emphasizes the
structural component that is encoded in the vectors. We demonstrate that our
transformation clusters vectors in space by structural properties, rather than
by lexical semantics. Finally, we demonstrate the utility of our distilled
representations by showing that they outperform the original contextualized
representations in a few-shot parsing setting.
| 2021 | Computation and Language |
Automated Prediction of Medieval Arabic Diacritics | This study uses a character-level neural machine translation approach, trained
on a long short-term memory-based bi-directional recurrent neural network
architecture, for the diacritization of Medieval Arabic. The results improve on
the online tool used as a baseline. The diacritization model has been published
openly as an easy-to-use Python package available on PyPI and Zenodo. We
found that context size should be considered when optimizing a feasible
prediction model.
| 2020 | Computation and Language |
Weakly Supervised Medication Regimen Extraction from Medical
Conversations | Automated Medication Regimen (MR) extraction from medical conversations can
not only improve recall and help patients follow through with their care plan,
but also reduce the documentation burden for doctors. In this paper, we focus
on extracting spans for frequency, route and change, corresponding to
medications discussed in the conversation. We first describe a unique dataset
of annotated doctor-patient conversations and then present a weakly supervised
model architecture that can perform span extraction using noisy classification
data. The model utilizes an attention bottleneck inside a classification model
to perform the extraction. We experiment with several variants of attention
scoring and projection functions and propose a novel transformer-based
attention scoring function (TAScore). The proposed combination of TAScore and
Fusedmax projection achieves a 10 point increase in Longest Common Substring F1
compared to the baseline of additive scoring plus softmax projection.
| 2020 | Computation and Language |
TransQuest at WMT2020: Sentence-Level Direct Assessment | This paper presents the team TransQuest's participation in Sentence-Level
Direct Assessment shared task in WMT 2020. We introduce a simple QE framework
based on cross-lingual transformers, and we use it to implement and evaluate
two different neural architectures. The proposed methods achieve
state-of-the-art results surpassing the results obtained by OpenKiwi, the
baseline used in the shared task. We further improve the QE framework by
performing ensembling and data augmentation. Our approach is the winning solution
in all of the language pairs according to the WMT 2020 official results.
| 2020 | Computation and Language |
Multilingual Offensive Language Identification with Cross-lingual
Embeddings | Offensive content is pervasive in social media and a reason for concern to
companies and government organizations. Several studies have been recently
published investigating methods to detect the various forms of such content
(e.g. hate speech, cyberbullying, and cyberaggression). The clear majority of
these studies deal with English, partly because most available annotated
datasets contain English data. In this paper, we take advantage of the available
English data by applying cross-lingual contextual word embeddings and
transfer learning to make predictions in languages with fewer resources. We
project predictions on comparable data in Bengali, Hindi, and Spanish and we
report results of 0.8415 F1 macro for Bengali, 0.8568 F1 macro for Hindi, and
0.7513 F1 macro for Spanish. Finally, we show that our approach compares
favorably to the best systems submitted to recent shared tasks on these three
languages, confirming the robustness of cross-lingual contextual embeddings and
transfer learning for this task.
| 2020 | Computation and Language |
InfoMiner at WNUT-2020 Task 2: Transformer-based Covid-19 Informative
Tweet Extraction | Identifying informative tweets is an important step when building information
extraction systems based on social media. WNUT-2020 Task 2 was organised to
distinguish informative tweets from noisy ones. In this paper, we present our
approach to tackling the task objective using transformers. Overall, our approach
achieves 10th place in the final rankings, with an F1 score of 0.9004 on the test
set.
| 2020 | Computation and Language |
Incremental Processing in the Age of Non-Incremental Encoders: An
Empirical Assessment of Bidirectional Models for Incremental NLU | While humans process language incrementally, the best language encoders
currently used in NLP do not. Both bidirectional LSTMs and Transformers assume
that the sequence that is to be encoded is available in full, to be processed
either forwards and backwards (BiLSTMs) or as a whole (Transformers). We
investigate how they behave under incremental interfaces, when partial output
must be provided based on partial input seen up to a certain time step, which
may happen in interactive systems. We test five models on various NLU datasets
and compare their performance using three incremental evaluation metrics. The
results support the possibility of using bidirectional encoders in incremental
mode while retaining most of their non-incremental quality. The
"omni-directional" BERT model, which achieves better non-incremental
performance, is impacted more by incremental access. This can be alleviated
by adapting the training regime (truncated training), or the testing procedure,
by delaying the output until some right context is available or by
incorporating hypothetical right contexts generated by a language model like
GPT-2.
| 2020 | Computation and Language |
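The incremental-interface setup described above (partial output from partial input, with optional output delay) can be sketched as follows; the `tagger` callable stands in for any non-incremental encoder, and the restart-on-each-prefix strategy is our reading of the evaluation.
```python
# Feed growing prefixes to a non-incremental tagger, withholding the newest
# `delay` labels so some right context is available before committing.
def incremental_outputs(tokens, tagger, delay=1):
    emitted = []
    for t in range(1, len(tokens) + 1):
        partial = tagger(tokens[:t])       # re-run on each prefix
        ready = t - delay                  # labels safe to commit so far
        if ready > len(emitted):
            emitted.extend(partial[len(emitted):ready])
    emitted.extend(tagger(tokens)[len(emitted):])  # flush at end of input
    return emitted

# Demo with a trivial "tagger" that labels tokens by their length:
print(incremental_outputs(["the", "cat", "sat"], lambda ts: [len(w) for w in ts]))
```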
Neural Machine Translation Doesn't Translate Gender Coreference Right
Unless You Make It | Neural Machine Translation (NMT) has been shown to struggle with grammatical
gender that is dependent on the gender of human referents, which can cause
gender bias effects. Many existing approaches to this problem seek to control
gender inflection in the target language by explicitly or implicitly adding a
gender feature to the source sentence, usually at the sentence level.
In this paper we propose schemes for incorporating explicit word-level gender
inflection tags into NMT. We explore the potential of this gender-inflection
controlled translation when the gender feature can be determined from a human
reference, or when a test sentence can be automatically gender-tagged,
assessing on English-to-Spanish and English-to-German translation.
We find that simple existing approaches can over-generalize a gender feature
to multiple entities in a sentence, and suggest effective alternatives in the
form of tagged coreference adaptation data. We also propose an extension to
assess translations of gender-neutral entities from English given a
corresponding linguistic convention, such as a non-binary inflection, in the
target language.
| 2020 | Computation and Language |
Addressing Exposure Bias With Document Minimum Risk Training: Cambridge
at the WMT20 Biomedical Translation Task | The 2020 WMT Biomedical translation task evaluated Medline abstract
translations. This is a small-domain translation task, meaning limited relevant
training data with very distinct style and vocabulary. Models trained on such
data are susceptible to exposure bias effects, particularly when training
sentence pairs are imperfect translations of each other. This can result in
poor behaviour during inference if the model learns to neglect the source
sentence.
The UNICAM entry addresses this problem during fine-tuning using a robust
variant of Minimum Risk Training (MRT). We contrast this approach with data filtering
to remove `problem' training examples. Under MRT fine-tuning we obtain good
results for both directions of English-German and English-Spanish biomedical
translation. In particular we achieve the best English-to-Spanish translation
result and second-best Spanish-to-English result, despite using only single
models with no ensembling.
| 2020 | Computation and Language |
We Can Detect Your Bias: Predicting the Political Ideology of News
Articles | We explore the task of predicting the leading political ideology or bias of
news articles. First, we collect and release a large dataset of 34,737 articles
that were manually annotated for political ideology (left, center, or right),
which is well-balanced across both topics and media. We further use a
challenging experimental setup where the test examples come from media that
were not seen during training, which prevents the model from learning to detect
the source of the target news article instead of predicting its political
ideology. From a modeling perspective, we propose an adversarial media
adaptation, as well as a specially adapted triplet loss. We further add
background information about the source, and we show that it is quite helpful
for improving article-level prediction. Our experimental results show very
sizable improvements over using state-of-the-art pre-trained Transformers in
this challenging setup.
| 2020 | Computation and Language |
Do Language Embeddings Capture Scales? | Pretrained Language Models (LMs) have been shown to possess significant
linguistic, common sense, and factual knowledge. One form of knowledge that has
not been studied yet in this context is information about the scalar magnitudes
of objects. We show that pretrained language models capture a significant
amount of this information but are short of the capability required for general
common-sense reasoning. We identify contextual information in pre-training and
numeracy as two key factors affecting their performance and show that a simple
method of canonicalizing numbers can have a significant effect on the results.
| 2020 | Computation and Language |
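The abstract above credits a simple number canonicalization with improving scale probing. One plausible form maps each literal number to its order of magnitude; the regex and exponent notation below are our assumptions, not the paper's exact method.
```python
# Replace literal numbers with their order of magnitude, e.g. "2,300" -> "1e3",
# before feeding text to the language model.
import math
import re

def canonicalize_numbers(text):
    def repl(m):
        value = float(m.group().replace(",", ""))
        exponent = int(math.floor(math.log10(abs(value)))) if value else 0
        return f"1e{exponent}"
    return re.sub(r"\d[\d,]*\.?\d*", repl, text)

print(canonicalize_numbers("The elephant weighs 2,300 kg"))
# -> "The elephant weighs 1e3 kg"
```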
A Knowledge-Driven Approach to Classifying Object and Attribute
Coreferences in Opinion Mining | Classifying and resolving coreferences of objects (e.g., product names) and
attributes (e.g., product aspects) in opinionated reviews is crucial for
improving the opinion mining performance. However, the task is challenging as
one often needs to consider domain-specific knowledge (e.g., iPad is a tablet
and has 'resolution' as an aspect) to identify coreferences in opinionated reviews.
Also, compiling a handcrafted and curated domain-specific knowledge base for
each domain is very time-consuming and arduous. This paper proposes an approach
to automatically mine and leverage domain-specific knowledge for classifying
objects and attribute coreferences. The approach extracts domain-specific
knowledge from unlabeled review data and trains a knowledge-aware neural
coreference classification model to leverage (useful) domain knowledge together
with general commonsense knowledge for the task. Experimental evaluation on
real-world datasets involving five domains (product types) shows the
effectiveness of the approach.
| 2021 | Computation and Language |
Learning Which Features Matter: RoBERTa Acquires a Preference for
Linguistic Generalizations (Eventually) | One reason pretraining on self-supervised linguistic tasks is effective is
that it teaches models features that are helpful for language understanding.
However, we want pretrained models to learn not only to represent linguistic
features, but also to use those features preferentially during fine-tuning.
With this goal in mind, we introduce a new English-language diagnostic set
called MSGS (the Mixed Signals Generalization Set), which consists of 20
ambiguous binary classification tasks that we use to test whether a pretrained
model prefers linguistic or surface generalizations during fine-tuning. We
pretrain RoBERTa models from scratch on quantities of data ranging from 1M to
1B words and compare their performance on MSGS to the publicly available
RoBERTa-base. We find that models can learn to represent linguistic features
with little pretraining data, but require far more data to learn to prefer
linguistic generalizations over surface ones. Eventually, with about 30B words
of pretraining data, RoBERTa-base does demonstrate a linguistic bias with some
regularity. We conclude that while self-supervised pretraining is an effective
way to learn helpful inductive biases, there is likely room to improve the rate
at which models learn which features matter.
| 2020 | Computation and Language |
Quantitative Argument Summarization and Beyond: Cross-Domain Key Point
Analysis | When summarizing a collection of views, arguments or opinions on some topic,
it is often desirable not only to extract the most salient points, but also to
quantify their prevalence. Work on multi-document summarization has
traditionally focused on creating textual summaries, which lack this
quantitative aspect. Recent work has proposed to summarize arguments by mapping
them to a small set of expert-generated key points, where the salience of each
key point corresponds to the number of its matching arguments. The current work
advances key point analysis in two important respects: first, we develop a
method for automatic extraction of key points, which enables fully automatic
analysis, and is shown to achieve performance comparable to a human expert.
Second, we demonstrate that the applicability of key point analysis goes well
beyond argumentation data. Using models trained on publicly available
argumentation datasets, we achieve promising results in two additional domains:
municipal surveys and user reviews. An additional contribution is an in-depth
evaluation of argument-to-key point matching models, where we substantially
outperform previous results.
| 2020 | Computation and Language |
MAF: Multimodal Alignment Framework for Weakly-Supervised Phrase
Grounding | Phrase localization is a task that studies the mapping from textual phrases
to regions of an image. Given difficulties in annotating phrase-to-object
datasets at scale, we develop a Multimodal Alignment Framework (MAF) to
leverage more widely-available caption-image datasets, which can then be used
as a form of weak supervision. We first present algorithms to model
phrase-object relevance by leveraging fine-grained visual representations and
visually-aware language representations. By adopting a contrastive objective,
our method uses information in caption-image pairs to boost the performance in
weakly-supervised scenarios. Experiments conducted on the widely-adopted
Flickr30k dataset show a significant improvement over existing
weakly-supervised methods. With the help of the visually-aware language
representations, we can also improve the previous best unsupervised result by
5.56%. We conduct ablation studies to show that both our novel model and our
weakly-supervised strategies significantly contribute to our strong results.
| 2020 | Computation and Language |
A BERT-based Distractor Generation Scheme with Multi-tasking and
Negative Answer Training Strategies | In this paper, we investigate the following two limitations for the existing
distractor generation (DG) methods. First, the quality of the existing DG
methods is still far from practical use; there is still room for DG quality
improvement. Second, the existing DG designs are mainly for single distractor
generation. However, for practical MCQ preparation, multiple distractors are
desired. Aiming at these goals, in this paper, we present a new distractor
generation scheme with multi-tasking and negative answer training strategies
for effectively generating \textit{multiple} distractors. The experimental
results show that (1) our model advances the state-of-the-art result from 28.65
to 39.81 (BLEU 1 score) and (2) the generated multiple distractors are diverse
and show strong distracting power for multiple-choice questions.
| 2020 | Computation and Language |
VMSMO: Learning to Generate Multimodal Summary for Video-based News
Articles | A popular multimedia news format nowadays is providing users with a lively
video and a corresponding news article, which is employed by influential news
media including CNN, BBC, and social media including Twitter and Weibo. In such
a case, automatically choosing a proper cover frame of the video and generating
an appropriate textual summary of the article can help editors save time and
help readers make decisions more effectively. Hence, in this paper, we propose
the task of Video-based Multimodal Summarization with Multimodal Output (VMSMO)
to tackle such a problem. The main challenge in this task is to jointly model
the temporal dependency of the video with the semantic meaning of the article. To this end,
we propose a Dual-Interaction-based Multimodal Summarizer (DIMS), consisting of
a dual interaction module and multimodal generator. In the dual interaction
module, we propose a conditional self-attention mechanism that captures local
semantic information within the video and a global-attention mechanism that handles
the semantic relationship between news text and video from a high level.
Extensive experiments conducted on a large-scale real-world VMSMO dataset show
that DIMS achieves the state-of-the-art performance in terms of both automatic
metrics and human evaluations.
| 2020 | Computation and Language |
Gradient-based Analysis of NLP Models is Manipulable | Gradient-based analysis methods, such as saliency map visualizations and
adversarial input perturbations, have found widespread use in interpreting
neural NLP models due to their simplicity, flexibility, and most importantly,
their faithfulness. In this paper, however, we demonstrate that the gradients
of a model are easily manipulable, and thus bring into question the reliability
of gradient-based analyses. In particular, we merge the layers of a target
model with a Facade that overwhelms the gradients without affecting the
predictions. This Facade can be trained to have gradients that are misleading
and irrelevant to the task, such as focusing only on the stop words in the
input. On a variety of NLP tasks (text classification, NLI, and QA), we show
that our method can manipulate numerous gradient-based analysis techniques:
saliency maps, input reduction, and adversarial perturbations all identify
unimportant or targeted tokens as being highly important. The code and a
tutorial for this paper are available at http://ucinlp.github.io/facade.
| 2020 | Computation and Language |
It's not a Non-Issue: Negation as a Source of Error in Machine
Translation | As machine translation (MT) systems progress at a rapid pace, questions of
their adequacy linger. In this study we focus on negation, a universal, core
property of human language that significantly affects the semantics of an
utterance. We investigate whether translating negation is an issue for modern
MT systems using 17 translation directions as test bed. Through thorough
analysis, we find that indeed the presence of negation can significantly impact
downstream quality, in some cases resulting in quality reductions of more than
60%. We also provide a linguistically motivated analysis that directly explains
the majority of our findings. We release our annotations and code to replicate
our analysis here: https://github.com/mosharafhossain/negation-mt.
| 2020 | Computation and Language |
OCNLI: Original Chinese Natural Language Inference | Despite the tremendous recent progress on natural language inference (NLI),
driven largely by large-scale investment in new datasets (e.g., SNLI, MNLI) and
advances in modeling, most progress has been limited to English due to a lack
of reliable datasets for most of the world's languages. In this paper, we
present the first large-scale NLI dataset (consisting of ~56,000 annotated
sentence pairs) for Chinese called the Original Chinese Natural Language
Inference dataset (OCNLI). Unlike recent attempts at extending NLI to other
languages, our dataset does not rely on any automatic translation or non-expert
annotation. Instead, we elicit annotations from native speakers specializing in
linguistics. We follow closely the annotation protocol used for MNLI, but
create new strategies for eliciting diverse hypotheses. We establish several
baseline results on our dataset using state-of-the-art pre-trained models for
Chinese, and find even the best performing models to be far outpaced by human
performance (~12% absolute performance gap), making it a challenging new
resource that we hope will help to accelerate progress in Chinese NLU. To the
best of our knowledge, this is the first human-elicited MNLI-style corpus for a
non-English language.
| 2020 | Computation and Language |
Collective Wisdom: Improving Low-resource Neural Machine Translation
using Adaptive Knowledge Distillation | Scarcity of parallel sentence-pairs poses a significant hurdle for training
high-quality Neural Machine Translation (NMT) models in bilingually
low-resource scenarios. A standard approach is transfer learning, which
involves taking a model trained on a high-resource language-pair and
fine-tuning it on the data of the low-resource MT condition of interest.
However, it is not clear generally which high-resource language-pair offers the
best transfer learning for the target MT setting. Furthermore, different
transferred models may have complementary semantic and/or syntactic strengths,
hence using only one model may be sub-optimal. In this paper, we tackle this
problem using knowledge distillation, where we propose to distill the knowledge
of an ensemble of teacher models into a single student model. As the quality of
these teacher models varies, we propose an effective adaptive knowledge
distillation approach to dynamically adjust the contribution of the teacher
models during the distillation process. Experiments on transferring from a
collection of six language pairs from IWSLT to five low-resource language-pairs
from TED Talks demonstrate the effectiveness of our approach, achieving up to
+0.9 BLEU score improvement compared to strong baselines.
| 2020 | Computation and Language |
COGS: A Compositional Generalization Challenge Based on Semantic
Interpretation | Natural language is characterized by compositionality: the meaning of a
complex expression is constructed from the meanings of its constituent parts.
To facilitate the evaluation of the compositional abilities of language
processing architectures, we introduce COGS, a semantic parsing dataset based
on a fragment of English. The evaluation portion of COGS contains multiple
systematic gaps that can only be addressed by compositional generalization;
these include new combinations of familiar syntactic structures, or new
combinations of familiar words and familiar structures. In experiments with
Transformers and LSTMs, we found that in-distribution accuracy on the COGS test
set was near-perfect (96--99%), but generalization accuracy was substantially
lower (16--35%) and showed high sensitivity to random seed ($\pm$6--8%). These
findings indicate that contemporary standard NLP models are limited in their
compositional generalization capacity, and position COGS as a good way to
measure progress.
| 2020 | Computation and Language |
Unseen Target Stance Detection with Adversarial Domain Generalization | Although stance detection has made great progress in the past few years, it
is still facing the problem of unseen targets. In this study, we investigate
the domain difference between targets and thus incorporate attention-based
conditional encoding with adversarial domain generalization to perform unseen
target stance detection. Experimental results show that our approach achieves
new state-of-the-art performance on the SemEval-2016 dataset, demonstrating the
importance of domain difference between targets in unseen target stance
detection.
| 2020 | Computation and Language |
Evaluating Factuality in Generation with Dependency-level Entailment | Despite significant progress in text generation models, a serious limitation
is their tendency to produce text that is factually inconsistent with
information in the input. Recent work has studied whether textual entailment
systems can be used to identify factual errors; however, these sentence-level
entailment models are trained to solve a different problem than generation
filtering and they do not localize which part of a generation is non-factual.
In this paper, we propose a new formulation of entailment that decomposes it at
the level of dependency arcs. Rather than focusing on aggregate decisions, we
instead ask whether the semantic relationship manifested by individual
dependency arcs in the generated output is supported by the input. Human
judgments on this task are difficult to obtain; we therefore propose a method
to automatically create data based on existing entailment or paraphrase
corpora. Experiments show that our dependency arc entailment model trained on
this data can identify factual inconsistencies in paraphrasing and
summarization better than sentence-level methods or those based on question
generation, while additionally localizing the erroneous parts of the
generation.
| 2020 | Computation and Language |
Feature Extraction of Text for Deep Learning Algorithms: Application on
Fake News Detection | Feature extraction is an important process of machine learning and deep
learning, as it makes algorithms function more efficiently and more accurately.
In natural language processing for deception detection, such as
fake news detection, several statistical feature extraction methods
have been introduced (e.g. n-grams). In this research, we show that
deep learning algorithms using only the letter frequencies of a news article's
original text, without any information about letter order, can
classify fake news and trustworthy news with high accuracy (85%). As
this pre-processing method makes the data notably compact while retaining the
features needed by the classifier, it seems that letter frequencies
contain some useful features for understanding the complex context or meaning of
the original text.
| 2020 | Computation and Language |
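The letter-frequency featurization described above is easy to make concrete: each article becomes a 26-dimensional normalized count vector, with letter order discarded entirely. The downstream classifier is left abstract here.
```python
# Build a normalized letter-frequency vector for a text.
from collections import Counter
import string

def letter_frequencies(text):
    counts = Counter(c for c in text.lower() if c in string.ascii_lowercase)
    total = sum(counts.values()) or 1
    return [counts.get(c, 0) / total for c in string.ascii_lowercase]

features = letter_frequencies("Scientists discover water on Mars.")
print(len(features), round(sum(features), 2))  # 26 1.0
```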
A Sentiment-Controllable Topic-to-Essay Generator with Topic Knowledge
Graph | Generating a vivid, novel, and diverse essay with only several given topic
words is a challenging task of natural language generation. In previous work,
there are two problems left unsolved: neglect of sentiment beneath the text and
insufficient utilization of topic-related knowledge. Therefore, we propose a
novel Sentiment-Controllable topic-to-essay generator with a Topic Knowledge
Graph enhanced decoder, named SCTKG, which is based on the conditional
variational autoencoder (CVAE) framework. We first inject sentiment
information into the generator to control the sentiment of each sentence,
which leads to more varied generated essays. Then we design a Topic Knowledge Graph
enhanced decoder. Unlike existing models that use knowledge entities
separately, our model treats the knowledge graph as a whole and encodes more
structured, connected semantic information in the graph to generate a more
relevant essay. Experimental results show that our SCTKG can generate sentiment
controllable essays and outperform the state-of-the-art approach in terms of
topic relevance, fluency, and diversity on both automatic and human evaluation.
| 2020 | Computation and Language |
Pre-trained Language Model Based Active Learning for Sentence Matching | Active learning is able to significantly reduce the annotation cost for
data-driven techniques. However, previous active learning approaches for
natural language processing mainly depend on the entropy-based uncertainty
criterion, and ignore the characteristics of natural language. In this paper,
we propose a pre-trained language model based active learning approach for
sentence matching. Unlike previous active learning approaches, ours can provide
linguistic criteria to measure instances and help select more useful
instances for annotation. Experiments demonstrate our approach can achieve
greater accuracy with fewer labeled training instances.
| 2020 | Computation and Language |
FILM: A Fast, Interpretable, and Low-rank Metric Learning Approach for
Sentence Matching | Detection of semantic similarity plays a vital role in sentence matching. It
requires learning discriminative representations of natural language. Recently,
owing to increasingly sophisticated model architectures, impressive progress
has been made, albeit with a time-consuming training process and
uninterpretable inference. To alleviate this problem, we explore a metric
learning approach, named FILM (Fast, Interpretable, and Low-rank Metric
learning) to efficiently find a highly discriminative projection of the
high-dimensional data. We construct this metric learning problem as a manifold
optimization problem and solve it with the Cayley transformation method with
the Barzilai-Borwein step size. In experiments, we apply FILM with triplet loss
minimization objective to the Quora Challenge and Semantic Textual Similarity
(STS) Task. The results demonstrate that the FILM method achieves superior
performance as well as the fastest computation speed, which is consistent with
our theoretical analysis of time complexity.
| 2020 | Computation and Language |
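To illustrate the form of the metric in FILM as described above: distances are computed after a learned low-rank projection. The training loop (Cayley transform with Barzilai-Borwein steps) is omitted; the random projection below merely stands in for the learned one.
```python
# Low-rank Mahalanobis-style distance: d(x, y) = ||L x - L y||_2 with a
# rank-r projection L that would be learned via triplet-loss minimization.
import numpy as np

rng = np.random.default_rng(0)
d, r = 300, 32                                  # embedding dim, low rank
L = rng.standard_normal((r, d)) / np.sqrt(d)    # learned in practice

def film_distance(x, y):
    return float(np.linalg.norm(L @ x - L @ y))

x, y = rng.standard_normal(d), rng.standard_normal(d)
print(film_distance(x, y))
```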
Toward Cross-Lingual Definition Generation for Language Learners | Generating dictionary definitions automatically can prove useful for language
learners. However, cross-lingual definition generation remains a challenging
task. In this work, we propose to generate definitions in English for
words in various languages. To achieve this, we present a simple yet effective
approach based on publicly available pretrained language models. In this
approach, models can be directly applied to other languages after being trained on
the English dataset. We demonstrate the effectiveness of this approach on
zero-shot definition generation. Experiments and manual analyses on newly
constructed datasets show that our models have a strong cross-lingual transfer
ability and can generate fluent English definitions for Chinese words. We
further measure the lexical complexity of generated and reference definitions.
The results show that the generated definitions are much simpler, which is more
suitable for language learners.
| 2020 | Computation and Language |
The National Corpus of Contemporary Welsh: Project Report | Y Corpws
Cenedlaethol Cymraeg Cyfoes: Adroddiad y Prosiect | This report provides an overview of the CorCenCC project and the online
corpus resource that was developed as a result of work on the project. The
report lays out the theoretical underpinnings of the research, demonstrating
how the project has built on and extended this theory. We also raise and
discuss some of the key operational questions that arose during the course of
the project, outlining the ways in which they were answered, the impact of
these decisions on the resource that has been produced and the longer-term
contribution they will make to practices in corpus-building. Finally, we
discuss some of the applications and the utility of the work, outlining the
impact that CorCenCC is set to have on a range of different individuals and
user groups.
| 2020 | Computation and Language |
Improving Low Resource Code-switched ASR using Augmented Code-switched
TTS | Building Automatic Speech Recognition (ASR) systems for code-switched speech
has recently gained renewed attention due to the widespread use of speech
technologies in multilingual communities worldwide. End-to-end ASR systems are
a natural modeling choice due to their ease of use and superior performance in
monolingual settings. However, it is well known that end-to-end systems require
large amounts of labeled speech. In this work, we investigate improving
code-switched ASR in low resource settings via data augmentation using
code-switched text-to-speech (TTS) synthesis. We propose two targeted
techniques to effectively leverage TTS speech samples: 1) Mixup, an existing
technique to create new training samples via linear interpolation of existing
samples, applied to TTS and real speech samples, and 2) a new loss function,
used in conjunction with TTS samples, to encourage code-switched predictions.
We report significant improvements in ASR performance achieving absolute word
error rate (WER) reductions of up to 5%, and measurable improvement in code
switching using our proposed techniques on a Hindi-English code-switched ASR
task.
| 2020 | Computation and Language |
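Mixup, the first technique in the abstract above, linearly interpolates training samples. Below is a sketch applied to paired TTS and real speech feature sequences; the equal-length shapes, the Beta(0.2, 0.2) prior, and the label handling are our assumptions.
```python
# Mixup over feature sequences: interpolate a real and a TTS sample with a
# Beta-distributed weight, keeping both labels with the mixing coefficient.
import numpy as np

def mixup(real_feats, tts_feats, real_label, tts_label, alpha=0.2,
          rng=np.random.default_rng()):
    lam = rng.beta(alpha, alpha)
    feats = lam * real_feats + (1 - lam) * tts_feats
    # For ASR targets, the training loss can be a lam-weighted pair of
    # losses; here we simply return both labels with the weight.
    return feats, (real_label, tts_label, lam)

feats, targets = mixup(np.ones((100, 80)), np.zeros((100, 80)), "real", "tts")
print(feats.shape, targets[2])
```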
Joint Semantic Analysis with Document-Level Cross-Task Coherence Rewards | Coreference resolution and semantic role labeling are NLP tasks that capture
different aspects of semantics, indicating, respectively, which expressions
refer to the same entity, and what semantic roles expressions serve in the
sentence. However, they are often closely interdependent, and both generally
necessitate natural language understanding. Do they form a coherent abstract
representation of documents? We present a neural network architecture for joint
coreference resolution and semantic role labeling for English, and train graph
neural networks to model the 'coherence' of the combined shallow semantic
graph. Using the resulting coherence score as a reward for our joint semantic
analyzer, we use reinforcement learning to encourage global coherence over the
document and between semantic annotations. This leads to improvements on both
tasks in multiple datasets from different domains, and across a range of
encoders of different expressivity, calling, we believe, for a more holistic
approach to semantics in NLP.
| 2,020 | Computation and Language |
Carbon to Diamond: An Incident Remediation Assistant System From Site
Reliability Engineers' Conversations in Hybrid Cloud Operations | Conversational channels are changing the landscape of hybrid cloud service
management. These channels are becoming important avenues for Site Reliability
Engineers (SREs) to collaboratively work together to resolve an incident or
issue. Identifying segmented conversations and extracting key insights or
artefacts from them can help engineers to improve the efficiency of the
incident remediation process by using information retrieval mechanisms for
similar incidents. However, it has been empirically observed that, due to the
semi-formal nature of such conversations (human language), they are highly
idiosyncratic and contain a lot of domain-specific terms. This makes it
difficult to directly apply the standard natural language processing
frameworks that are popularly used in standard NLP tasks. In this paper, we
build a framework that taps into the conversational channels and uses various
learning methods to (a) understand and extract key artefacts from conversations
like diagnostic steps and resolution actions taken, and (b) present an approach
to identify past conversations about similar issues. Experimental results on
our dataset show the efficacy of our proposed method.
| 2,020 | Computation and Language |
Meta-Context Transformers for Domain-Specific Response Generation | Despite the tremendous success of neural dialogue models in recent years,
they often suffer from a lack of relevance, diversity, and sometimes coherence
in the generated responses. Lately, transformer-based models, such as GPT-2,
have revolutionized the landscape of dialogue generation by capturing
long-range structures through language modeling. Though these models have
exhibited excellent language coherence, they often lack relevance and
domain-specific terms when used for domain-specific response generation. In
this paper, we present DSRNet (Domain
Specific Response Network), a transformer-based model for dialogue response
generation by reinforcing domain-specific attributes. In particular, we extract
meta attributes from context and infuse them with the context utterances for
better attention over domain-specific key terms and relevance. We study the use
of DSRNet in a multi-turn multi-interlocutor environment for domain-specific
response generation. In our experiments, we evaluate DSRNet on Ubuntu dialogue
datasets, which are mainly composed of various technical domain related
dialogues for IT domain issue resolutions and also on CamRest676 dataset, which
contains restaurant domain conversations. Trained with maximum likelihood
objective, our model shows significant improvement over the state-of-the-art
for multi-turn dialogue systems supported by better BLEU and semantic
similarity (BertScore) scores. Besides, we also observe that the responses
produced by our model carry higher relevance due to the presence of
domain-specific key attributes that exhibit better overlap with the attributes
of the context. Our analysis shows that the performance improvement is mostly
due to the infusion of key terms along with dialogues which result in better
attention over domain-relevant terms. Other contributing factors include joint
modeling of dialogue context with the domain-specific meta attributes and
topics.
| 2,020 | Computation and Language |
Counterfactual Variable Control for Robust and Interpretable Question
Answering | Deep neural network based question answering (QA) models are neither robust
nor explainable in many cases. For example, a multiple-choice QA model, tested
without any question input, is surprisingly "capable" of predicting most of the
correct options. In this paper, we inspect such spurious "capability" of QA
models using causal inference. We find the crux is the shortcut correlation,
e.g., unrobust word alignment between passage and options learned by the
models. We propose a novel approach called Counterfactual Variable Control
(CVC) that explicitly mitigates any shortcut correlation and preserves the
comprehensive reasoning for robust QA. Specifically, we leverage a multi-branch
architecture that allows us to disentangle robust and shortcut correlations
during the training process of QA. We then apply two novel CVC inference
methods (on trained models) to capture the effect of comprehensive reasoning
as the final
prediction. For evaluation, we conduct extensive experiments using two BERT
backbones on both multi-choice and span-extraction QA benchmarks. The results
show that our CVC achieves high robustness against a variety of adversarial
attacks in QA while maintaining good interpretation ability.
| 2,020 | Computation and Language |
Social Commonsense Reasoning with Multi-Head Knowledge Attention | Social Commonsense Reasoning requires understanding of text, knowledge about
social events and their pragmatic implications, as well as commonsense
reasoning skills. In this work we propose a novel multi-head knowledge
attention model that encodes semi-structured commonsense inference rules and
learns to incorporate them in a transformer-based reasoning cell. We assess the
model's performance on two tasks that require different reasoning skills:
Abductive Natural Language Inference and Counterfactual Invariance Prediction
as a new task. We show that our proposed model improves performance over strong
state-of-the-art models (i.e., RoBERTa) across both reasoning tasks. Notably we
are, to the best of our knowledge, the first to demonstrate that a model that
learns to perform counterfactual reasoning helps predicting the best
explanation in an abductive reasoning task. We validate the robustness of the
model's reasoning capabilities by perturbing the knowledge and provide
qualitative analysis on the model's knowledge incorporation capabilities.
| 2,020 | Computation and Language |
MultiWOZ 2.3: A multi-domain task-oriented dialogue dataset enhanced
with annotation corrections and co-reference annotation | Task-oriented dialogue systems have made unprecedented progress with multiple
state-of-the-art (SOTA) models underpinned by a number of publicly available
MultiWOZ datasets. Dialogue state annotations are error-prone, leading to
sub-optimal performance. Various efforts have been put into rectifying the
annotation errors present in the original MultiWOZ dataset. In this paper, we
introduce MultiWOZ 2.3, in which we differentiate incorrect annotations in
dialogue acts from dialogue states, identifying a lack of co-reference when
publishing the updated dataset. To ensure consistency between dialogue acts and
dialogue states, we implement co-reference features and unify annotations of
dialogue acts and dialogue states. We update the state-of-the-art performance
of natural language understanding and dialogue state tracking on MultiWOZ 2.3,
where the results show significant improvements over previous versions of the
MultiWOZ datasets (2.0-2.2).
| 2,021 | Computation and Language |
The elephant in the interpretability room: Why use attention as
explanation when we have saliency methods? | There is a recent surge of interest in using attention as explanation of
model predictions, with mixed evidence on whether attention can be used as
such. While attention conveniently gives us one weight per input token and is
easily extracted, it is often unclear toward what goal it is used as
explanation. We find that this goal, whether explicitly stated or not, is often
to find out which input tokens are the most relevant to a prediction, and that
the implied user for the explanation is a model developer. For this goal and
user, we argue that input saliency methods are better suited, and that there
are no compelling reasons to use attention, despite the coincidence that it
provides a weight for each input. With this position paper, we hope to shift
some of the recent focus on attention to saliency methods, and for authors to
clearly state the goal and user for their explanations.
| 2,020 | Computation and Language |
Load What You Need: Smaller Versions of Multilingual BERT | Pre-trained Transformer-based models are achieving state-of-the-art results
on a variety of Natural Language Processing data sets. However, the size of
these models is often a drawback for their deployment in real production
applications. In the case of multilingual models, most of the parameters are
located in the embeddings layer. Therefore, reducing the vocabulary size should
have an important impact on the total number of parameters. In this paper, we
propose to generate smaller models that handle a smaller number of languages,
selected according to the targeted corpora. We present an evaluation of smaller
versions
of multilingual BERT on the XNLI data set, but we believe that this method may
be applied to other multilingual transformers. The obtained results confirm
that we can generate smaller models that keep comparable results, while
reducing up to 45% of the total number of parameters. We compared our models
with DistilmBERT (a distilled version of multilingual BERT) and showed that
unlike language reduction, distillation induced a 1.7% to 6% drop in the
overall accuracy on the XNLI data set. The presented models and code are
publicly available.
| 2,020 | Computation and Language |
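The vocabulary-reduction idea above can be made concrete: keep only the
embedding rows for tokens that occur in the targeted corpora. The sketch below
shows the core matrix surgery; the token selection and handling of special
tokens are simplified assumptions, not the authors' released code:

```python
import torch

def shrink_embeddings(embedding_matrix, old_vocab, kept_tokens):
    """Build a reduced embedding matrix and vocabulary that contain only
    the tokens observed in the target-language corpora."""
    kept = [tok for tok in old_vocab if tok in kept_tokens]
    new_vocab = {tok: i for i, tok in enumerate(kept)}
    rows = torch.stack([embedding_matrix[old_vocab[tok]] for tok in kept])
    return rows, new_vocab  # rows: [len(kept), hidden_size]
```

Since most of a multilingual model's parameters sit in this matrix, dropping
unused rows shrinks the model without touching the Transformer layers, which
is consistent with the parameter reductions reported above.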
Contextual Modulation for Relation-Level Metaphor Identification | Identifying metaphors in text is very challenging and requires comprehending
the underlying comparison. The automation of this cognitive process has gained
wide attention lately. However, the majority of existing approaches concentrate
on word-level identification by treating the task as either single-word
classification or sequential labelling without explicitly modelling the
interaction between the metaphor components. On the other hand, while existing
relation-level approaches implicitly model this interaction, they ignore the
context where the metaphor occurs. In this work, we address these limitations
by introducing a novel architecture for identifying relation-level metaphoric
expressions of certain grammatical relations based on contextual modulation. In
a methodology inspired by works in visual reasoning, our approach is based on
conditioning the neural network computation on the deep contextualised features
of the candidate expressions using feature-wise linear modulation. We
demonstrate that the proposed architecture achieves state-of-the-art results on
benchmark datasets. The proposed methodology is generic and could be applied to
other textual classification problems that benefit from contextual interaction.
| 2,020 | Computation and Language |
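Feature-wise linear modulation (FiLM), the conditioning mechanism referenced
above, scales and shifts intermediate features with parameters predicted from
a conditioning input. A generic PyTorch sketch; the dimensions are
placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Modulate features h with (gamma, beta) predicted from context c."""
    def __init__(self, context_dim, feature_dim):
        super().__init__()
        self.to_gamma_beta = nn.Linear(context_dim, 2 * feature_dim)

    def forward(self, h, c):
        gamma, beta = self.to_gamma_beta(c).chunk(2, dim=-1)
        return gamma * h + beta

film = FiLM(context_dim=768, feature_dim=256)
h = torch.randn(4, 256)  # features of the candidate expression
c = torch.randn(4, 768)  # deep contextualised features of the sentence
print(film(h, c).shape)  # torch.Size([4, 256])
```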
Predicting Clinical Trial Results by Implicit Evidence Integration | Clinical trials provide essential guidance for practicing Evidence-Based
Medicine, though they often come with prohibitive costs and risks. To
optimize the design of clinical trials, we introduce a novel Clinical Trial
Result Prediction (CTRP) task. In the CTRP framework, a model takes a
PICO-formatted clinical trial proposal with its background as input and
predicts the result, i.e. how the Intervention group compares with the
Comparison group in terms of the measured Outcome in the studied Population.
While structured clinical evidence is prohibitively expensive for manual
collection, we exploit large-scale unstructured sentences from medical
literature that implicitly contain PICOs and results as evidence. Specifically,
we pre-train a model to predict the disentangled results from such implicit
evidence and fine-tune the model with limited data on the downstream datasets.
Experiments on the benchmark Evidence Integration dataset show that the
proposed model outperforms the baselines by large margins, e.g., with a 10.7%
relative gain over BioBERT in macro-F1. Moreover, the performance improvement
is also validated on another dataset composed of clinical trials related to
COVID-19.
| 2,020 | Computation and Language |
Improving Compositional Generalization in Semantic Parsing | Generalization of models to out-of-distribution (OOD) data has captured
tremendous attention recently. Specifically, compositional generalization,
i.e., whether a model generalizes to new structures built of components
observed during training, has sparked substantial interest. In this work, we
investigate compositional generalization in semantic parsing, a natural
test-bed for compositional generalization, as output programs are constructed
from sub-components. We analyze a wide variety of models and propose multiple
extensions to the attention module of the semantic parser, aiming to improve
compositional generalization. We find that the following factors improve
compositional generalization: (a) using contextual representations, such as
ELMo and BERT, (b) informing the decoder what input tokens have previously been
attended to, (c) training the decoder attention to agree with pre-computed
token alignments, and (d) downsampling examples corresponding to frequent
program templates. While we substantially reduce the gap between
in-distribution and OOD generalization, performance on OOD compositions is
still substantially lower.
| 2,020 | Computation and Language |
From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks | Adversarial attacks are label-preserving modifications to inputs of machine
learning classifiers designed to fool machines but not humans. Natural Language
Processing (NLP) has mostly focused on high-level attack scenarios such as
paraphrasing input texts. We argue that these are less realistic in typical
application scenarios such as in social media, and instead focus on low-level
attacks on the character-level. Guided by human cognitive abilities and human
robustness, we propose the first large-scale catalogue and benchmark of
low-level adversarial attacks, which we dub Zéroe, encompassing nine
different attack modes including visual and phonetic adversaries. We show that
RoBERTa, NLP's current workhorse, fails on our attacks. Our dataset provides a
benchmark for testing robustness of future more human-like NLP models.
| 2,020 | Computation and Language |
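For intuition about what a low-level, label-preserving attack looks like, here
is a toy character-swap perturbation in the spirit of the benchmark; the nine
actual Zéroe attack modes (visual, phonetic, etc.) are more varied than this
illustrative example:

```python
import random

def swap_attack(sentence, p=0.3, seed=0):
    """Swap two adjacent inner characters of a word with probability p,
    keeping first and last characters fixed so humans can still read it."""
    rng = random.Random(seed)
    out = []
    for w in sentence.split():
        if len(w) > 3 and rng.random() < p:
            i = rng.randrange(1, len(w) - 2)
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]
        out.append(w)
    return " ".join(out)

print(swap_attack("adversarial attacks fool machines but not humans"))
```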
Modelling Lexical Ambiguity with Density Matrices | Words can have multiple senses. Compositional distributional models of
meaning have been argued to deal well with finer shades of meaning variation
known as polysemy, but are not so well equipped to handle word senses that are
etymologically unrelated, or homonymy. Moving from vectors to density matrices
allows us to encode a probability distribution over different senses of a word,
and can also be accommodated within a compositional distributional model of
meaning. In this paper we present three new neural models for learning density
matrices from a corpus, and test their ability to discriminate between word
senses on a range of compositional datasets. When paired with a particular
composition method, our best model outperforms existing vector-based
compositional models as well as strong sentence encoders.
| 2,020 | Computation and Language |
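The move from vectors to density matrices can be illustrated numerically: an
ambiguous word becomes a probability-weighted mixture of outer products of its
sense vectors. The paper's models learn such matrices neurally; the toy below
only shows the representation itself, with hypothetical senses:

```python
import numpy as np

def density_matrix(sense_vectors, probs):
    """rho = sum_i p_i * |v_i><v_i| with unit-norm sense vectors v_i."""
    dim = sense_vectors.shape[1]
    rho = np.zeros((dim, dim))
    for v, p in zip(sense_vectors, probs):
        v = v / np.linalg.norm(v)
        rho += p * np.outer(v, v)
    return rho

senses = np.array([[1.0, 0.0, 0.0],   # e.g. "bank" (river) sense
                   [0.0, 1.0, 1.0]])  # e.g. "bank" (finance) sense
rho = density_matrix(senses, probs=[0.4, 0.6])
print(np.trace(rho))  # ~1.0: a valid density matrix has unit trace
```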
Reformulating Unsupervised Style Transfer as Paraphrase Generation | Modern NLP defines the task of style transfer as modifying the style of a
given sentence without appreciably changing its semantics, which implies that
the outputs of style transfer systems should be paraphrases of their inputs.
However, many existing systems purportedly designed for style transfer
inherently warp the input's meaning through attribute transfer, which changes
semantic properties such as sentiment. In this paper, we reformulate
unsupervised style transfer as a paraphrase generation problem, and present a
simple methodology based on fine-tuning pretrained language models on
automatically generated paraphrase data. Despite its simplicity, our method
significantly outperforms state-of-the-art style transfer systems on both human
and automatic evaluations. We also survey 23 style transfer papers and discover
that existing automatic metrics can be easily gamed and propose fixed variants.
Finally, we pivot to a more real-world style transfer setting by collecting a
large dataset of 15M sentences in 11 diverse styles, which we use for an
in-depth analysis of our system.
| 2,020 | Computation and Language |
HUJI-KU at MRP~2020: Two Transition-based Neural Parsers | This paper describes the HUJI-KU system submission to the shared task on
Cross-Framework Meaning Representation Parsing (MRP) at the 2020 Conference on
Computational Natural Language Learning (CoNLL), employing TUPA and the HIT-SCIR
parser, which were, respectively, the baseline system and winning system in the
2019 MRP shared task. Both are transition-based parsers using BERT
contextualized embeddings. We generalized TUPA to support the newly-added MRP
frameworks and languages, and experimented with multitask learning with the
HIT-SCIR parser. We reached 4th place in both the cross-framework and
cross-lingual tracks.
| 2,020 | Computation and Language |
Structural Supervision Improves Few-Shot Learning and Syntactic
Generalization in Neural Language Models | Humans can learn structural properties about a word from minimal experience,
and deploy their learned syntactic representations uniformly in different
grammatical contexts. We assess the ability of modern neural language models to
reproduce this behavior in English and evaluate the effect of structural
supervision on learning outcomes. First, we assess few-shot learning
capabilities by developing controlled experiments that probe models' syntactic
nominal number and verbal argument structure generalizations for tokens seen as
few as two times during training. Second, we assess invariance properties of
learned representation: the ability of a model to transfer syntactic
generalizations from a base context (e.g., a simple declarative active-voice
sentence) to a transformed context (e.g., an interrogative sentence). We test
four models trained on the same dataset: an n-gram baseline, an LSTM, and two
LSTM-variants trained with explicit structural supervision (Dyer et al., 2016;
Charniak et al., 2016). We find that in most cases, the neural models are able
to induce the proper syntactic generalizations after minimal exposure, often
from just two examples during training, and that the two structurally
supervised models generalize more accurately than the LSTM model. All neural
models are able to leverage information learned in base contexts to drive
expectations in transformed contexts, indicating that they have learned some
invariance properties of syntax.
| 2,020 | Computation and Language |
Probing Pretrained Language Models for Lexical Semantics | The success of large pretrained language models (LMs) such as BERT and
RoBERTa has sparked interest in probing their representations, in order to
unveil what types of knowledge they implicitly capture. While prior research
focused on morphosyntactic, semantic, and world knowledge, it remains unclear
to which extent LMs also derive lexical type-level knowledge from words in
context. In this work, we present a systematic empirical analysis across six
typologically diverse languages and five different lexical tasks, addressing
the following questions: 1) How do different lexical knowledge extraction
strategies (monolingual versus multilingual source LM, out-of-context versus
in-context encoding, inclusion of special tokens, and layer-wise averaging)
impact performance? How consistent are the observed effects across tasks and
languages? 2) Is lexical knowledge stored in few parameters, or is it scattered
throughout the network? 3) How do these representations fare against
traditional static word vectors in lexical tasks? 4) Does the lexical
information emerging from independently trained monolingual LMs display latent
similarities? Our main results indicate patterns and best practices that hold
universally, but also point to prominent variations across languages and tasks.
Moreover, we validate the claim that lower Transformer layers carry more
type-level lexical knowledge, but also show that this knowledge is distributed
across multiple layers.
| 2,020 | Computation and Language |
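One of the extraction strategies compared above, layer-wise averaging of
in-context representations, can be sketched with the transformers library; the
checkpoint, layer range, and pooling below are illustrative choices, not the
paper's exact protocol:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)

def word_vector(text, n_layers=6):
    """Average hidden states over the lowest n_layers (embeddings excluded)
    and over all tokens, yielding a type-level word representation."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states  # embeddings + one per layer
    stacked = torch.stack(hidden[1:n_layers + 1])  # [n_layers, 1, seq, dim]
    return stacked.mean(dim=(0, 2)).squeeze(0)     # [dim]

print(word_vector("bank").shape)
```

Restricting the average to lower layers follows the finding quoted above that
type-level lexical knowledge concentrates there, though the abstract also notes
this knowledge remains distributed across multiple layers.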
On the Complementary Nature of Knowledge Graph Embedding, Fine Grain
Entity Types, and Language Modeling | We demonstrate the complementary natures of neural knowledge graph embedding,
fine-grain entity type prediction, and neural language modeling. We show that a
language model-inspired knowledge graph embedding approach yields both improved
knowledge graph embeddings and fine-grain entity type representations. Our work
also shows that jointly modeling both structured knowledge tuples and language
improves both.
| 2,020 | Computation and Language |
EFSG: Evolutionary Fooling Sentences Generator | Large pre-trained language representation models (LMs) have recently
achieved numerous successes in many NLP tasks.
In 2018 BERT, and later its successors (e.g. RoBERTa), obtained
state-of-the-art results on classical benchmark tasks, such as the GLUE
benchmark. Since then, work on adversarial attacks has been published to test
their generalization properties and robustness.
In this work, we design Evolutionary Fooling Sentences Generator (EFSG), a
model- and task-agnostic adversarial attack algorithm built using an
evolutionary approach to generate false-positive sentences for binary
classification tasks.
We successfully apply EFSG to CoLA and MRPC tasks, on BERT and RoBERTa,
comparing performances. Results prove the presence of weak spots in
state-of-the-art LMs.
We finally test adversarial training as a data augmentation defence approach
against EFSG, obtaining stronger models with no loss of accuracy when
tested on the original datasets.
| 2,020 | Computation and Language |
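A generic evolutionary loop of the kind EFSG builds on: keep a population of
candidate sentences, mutate them, and select those the classifier most
confidently labels positive. The mutation operator, fitness, and selection
scheme below are simplified assumptions, not the published algorithm:

```python
import random

def evolve(classifier, vocab, pop_size=50, length=8, generations=100, seed=0):
    """Search for false-positive sentences by maximizing the classifier's
    positive-class probability over random word sequences."""
    rng = random.Random(seed)
    pop = [[rng.choice(vocab) for _ in range(length)] for _ in range(pop_size)]
    fitness = lambda s: classifier(" ".join(s))
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        for p in parents:  # mutation: resample one random word position
            c = list(p)
            c[rng.randrange(length)] = rng.choice(vocab)
            children.append(c)
        pop = parents + children
    return " ".join(max(pop, key=fitness))
```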
Using Type Information to Improve Entity Coreference Resolution | Coreference resolution (CR) is an essential part of discourse analysis. Most
recently, neural approaches have been proposed to improve over SOTA models from
earlier paradigms. So far none of the published neural models leverage external
semantic knowledge such as type information. This paper offers the first such
model and evaluation, demonstrating modest gains in accuracy by introducing
either gold standard or predicted types. In the proposed approach, type
information serves both to (1) improve mention representation and (2) create a
soft type consistency check between coreference candidate mentions. Our
evaluation covers two different grain sizes of types over four different
benchmark corpora.
| 2,020 | Computation and Language |
Contextualize Knowledge Bases with Transformer for End-to-end
Task-Oriented Dialogue Systems | Incorporating knowledge bases (KB) into end-to-end task-oriented dialogue
systems is challenging, since it requires to properly represent the entity of
KB, which is associated with its KB context and dialogue context. Existing
works represent the entity while perceiving only a part of its KB context,
which can lead to less effective representations due to information loss, and
adversely affects KB reasoning and response generation. To tackle this issue,
we explore fully contextualizing the entity representation by dynamically
perceiving all the relevant entities and dialogue history. To achieve this, we
propose a COntext-aware Memory Enhanced Transformer framework (COMET), which
treats the KB as a sequence and leverages a novel Memory Mask to enforce the
entity to only focus on its relevant entities and dialogue history, while
avoiding the distraction from the irrelevant entities. Through extensive
experiments, we show that our COMET framework can achieve superior performance
over the state of the arts.
| 2,021 | Computation and Language |
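The Memory Mask described above restricts attention so that each KB token
attends only to its relevant entities and the dialogue history. A schematic of
masked scaled dot-product attention; constructing the mask from KB relations
is domain logic and is simply passed in here:

```python
import torch
import torch.nn.functional as F

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention with a boolean mask:
    mask[i, j] = True where position i may attend to position j."""
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

seq, dim = 6, 16
q = k = v = torch.randn(seq, dim)
mask = torch.eye(seq, dtype=torch.bool)       # toy: each entity sees itself...
mask[:, -2:] = True                           # ...plus the dialogue history
print(masked_attention(q, k, v, mask).shape)  # torch.Size([6, 16])
```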
Layer-wise Guided Training for BERT: Learning Incrementally Refined
Document Representations | Although BERT is widely used by the NLP community, little is known about its
inner workings. Several attempts have been made to shed light on certain
aspects of BERT, often with contradicting conclusions. A much raised concern
focuses on BERT's over-parameterization and under-utilization issues. To this
end, we propose a novel approach to fine-tune BERT in a structured manner.
Specifically, we focus on Large Scale Multilabel Text Classification (LMTC)
where documents are assigned one or more labels from a large predefined
set of hierarchically organized labels. Our approach guides specific BERT
layers to predict labels from specific hierarchy levels. Experimenting with two
LMTC datasets, we show that this structured fine-tuning approach not only yields
better classification results but also leads to better parameter utilization.
| 2,020 | Computation and Language |
Human-centric Dialog Training via Offline Reinforcement Learning | How can we train a dialog model to produce better conversations by learning
from human feedback, without the risk of humans teaching it harmful chat
behaviors? We start by hosting models online, and gather human feedback from
real-time, open-ended conversations, which we then use to train and improve the
models using offline reinforcement learning (RL). We identify implicit
conversational cues including language similarity, elicitation of laughter,
sentiment, and more, which indicate positive human feedback, and embed these in
multiple reward functions. A well-known challenge is that learning an RL policy
in an offline setting usually fails due to the lack of ability to explore and
the tendency to make over-optimistic estimates of future reward. These problems
become even harder when using RL for language models, which can easily have a
20,000 action vocabulary and many possible reward functions. We solve the
challenge by developing a novel class of offline RL algorithms. These
algorithms use KL-control to penalize divergence from a pre-trained prior
language model, and use a new strategy to make the algorithm pessimistic,
instead of optimistic, in the face of uncertainty. We test the resulting dialog
model with ratings from 80 users in an open-domain setting and find it achieves
significant improvements over existing deep offline RL approaches. The novel
offline RL method is viable for improving any existing generative dialog model
using a static dataset of human feedback.
| 2,020 | Computation and Language |
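The KL-control idea above penalizes divergence of the dialog policy from the
pre-trained prior language model. A per-token sketch of the penalized reward;
the variable names and reward shape are assumptions, and the paper develops
full offline RL algorithms around this term:

```python
import torch

def kl_controlled_reward(reward, logp_policy, logp_prior, kl_coef=0.1):
    """r'(a_t) = r(a_t) - c * (log pi(a_t|s_t) - log p_prior(a_t|s_t)),
    applied per generated token."""
    return reward - kl_coef * (logp_policy - logp_prior)

r = torch.tensor([0.0, 0.0, 1.0])            # sparse human-feedback reward
lp_pi = torch.tensor([-2.1, -0.7, -1.3])     # policy log-probs
lp_prior = torch.tensor([-2.0, -1.5, -1.2])  # pre-trained LM log-probs
print(kl_controlled_reward(r, lp_pi, lp_prior))
```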
Exemplar-Controllable Paraphrasing and Translation using Bitext | Most prior work on exemplar-based syntactically controlled paraphrase
generation relies on automatically-constructed large-scale paraphrase datasets,
which are costly to create. We sidestep this prerequisite by adapting models
from prior work to be able to learn solely from bilingual text (bitext).
Despite only using bitext for training, and in near zero-shot conditions, our
single proposed model can perform four tasks: controlled paraphrase generation
in both languages and controlled machine translation in both language
directions. To evaluate these tasks quantitatively, we create three novel
evaluation datasets. Our experimental results show that our models achieve
competitive results on controlled paraphrase generation and strong performance
on controlled machine translation. Analysis shows that our models learn to
disentangle semantics and syntax in their latent representations, but still
suffer from semantic drift.
| 2,021 | Computation and Language |
Controlled Hallucinations: Learning to Generate Faithfully from Noisy
Data | Neural text generation (data- or text-to-text) demonstrates remarkable
performance when training data is abundant, which is not the case for many
applications. To collect a large corpus of parallel data, heuristic rules are
often
used but they inevitably let noise into the data, such as phrases in the output
which cannot be explained by the input. Consequently, models pick up on the
noise and may hallucinate--generate fluent but unsupported text. Our
contribution is a simple but powerful technique to treat such hallucinations as
a controllable aspect of the generated text, without dismissing any input and
without modifying the model architecture. On the WikiBio corpus (Lebret et al.,
2016), a particularly noisy dataset, we demonstrate the efficacy of the
technique both in an automatic and in a human evaluation.
| 2,020 | Computation and Language |
Gradient Vaccine: Investigating and Improving Multi-task Optimization in
Massively Multilingual Models | Massively multilingual models subsuming tens or even hundreds of languages
pose great challenges to multi-task optimization. While it is a common practice
to apply a language-agnostic procedure optimizing a joint multilingual task
objective, how to properly characterize and take advantage of its underlying
problem structure for improving optimization efficiency remains under-explored.
In this paper, we attempt to peek into the black-box of multilingual
optimization through the lens of loss function geometry. We find that gradient
similarity measured along the optimization trajectory is an important signal,
which correlates well with not only language proximity but also the overall
model performance. Such observation helps us to identify a critical limitation
of existing gradient-based multi-task learning methods, and thus we derive a
simple and scalable optimization procedure, named Gradient Vaccine, which
encourages more geometrically aligned parameter updates for close tasks.
Empirically, our method obtains significant model performance gains on
multilingual machine translation and XTREME benchmark tasks for multilingual
language models. Our work reveals the importance of properly measuring and
utilizing language proximity in multilingual optimization, and has broader
implications for multi-task learning beyond multilingual modeling.
| 2,020 | Computation and Language |
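The signal Gradient Vaccine exploits, cosine similarity between task
gradients, is easy to compute; the snippet below measures it and applies a
simple projection-style de-conflicting step. Note the actual method sets
adaptive similarity targets rather than projecting, so treat this as an
illustrative simplification:

```python
import torch

def cosine(g1, g2):
    return torch.dot(g1, g2) / (g1.norm() * g2.norm() + 1e-12)

def deconflict(g1, g2, target=0.0):
    """If two task gradients conflict (cosine below target), remove the
    conflicting component of g2 from g1."""
    if cosine(g1, g2) < target:
        g1 = g1 - (torch.dot(g1, g2) / g2.norm() ** 2) * g2
    return g1

g_en = torch.tensor([1.0, 0.5])   # gradient from an English batch
g_zh = torch.tensor([-1.0, 0.8])  # gradient from a Chinese batch
print(cosine(g_en, g_zh))         # negative: the two updates conflict
print(deconflict(g_en, g_zh))     # de-conflicted English gradient
```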
Multi-Stage Pre-training for Low-Resource Domain Adaptation | Transfer learning techniques are particularly useful in NLP tasks where a
sizable amount of high-quality annotated data is difficult to obtain. Current
approaches directly adapt a pre-trained language model (LM) on in-domain text
before fine-tuning to downstream tasks. We show that extending the vocabulary
of the LM with domain-specific terms leads to further gains. To an even greater
effect, we utilize structure in the unlabeled data to create auxiliary
synthetic tasks, which helps the LM transfer to downstream tasks. We apply
these approaches incrementally on a pre-trained Roberta-large LM and show
considerable performance gain on three tasks in the IT domain: Extractive
Reading Comprehension, Document Ranking and Duplicate Question Detection.
| 2,020 | Computation and Language |
Back to the Future: Unsupervised Backprop-based Decoding for
Counterfactual and Abductive Commonsense Reasoning | Abductive and counterfactual reasoning, core abilities of everyday human
cognition, require reasoning about what might have happened at time t, while
conditioning on multiple contexts from the relative past and future. However,
simultaneous incorporation of past and future contexts using generative
language models (LMs) can be challenging, as they are trained either to
condition only on the past context or to perform narrowly scoped
text-infilling. In this paper, we propose DeLorean, a new unsupervised decoding
algorithm that can flexibly incorporate both the past and future contexts using
only off-the-shelf, left-to-right language models and no supervision. The key
intuition of our algorithm is incorporating the future through
back-propagation, during which, we only update the internal representation of
the output while fixing the model parameters. By alternating between forward
and backward propagation, DeLorean can decode the output representation that
reflects both the left and right contexts. We demonstrate that our approach is
general and applicable to two nonmonotonic reasoning tasks: abductive text
generation and counterfactual story revision, where DeLorean outperforms a
range of unsupervised and some supervised methods, based on automatic and human
evaluation.
| 2,021 | Computation and Language |
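The key DeLorean step, refining the output's internal representation by
backpropagating from the future context while freezing model parameters, can
be sketched as gradient descent on soft output logits; the loss function and
shapes below are schematic assumptions:

```python
import torch

def backward_pass(logits, future_loss_fn, lr=0.1, steps=5):
    """Refine soft output logits so the continuation better explains the
    known future context; no model parameters are updated."""
    logits = logits.clone().requires_grad_(True)
    for _ in range(steps):
        loss = future_loss_fn(logits)  # e.g. NLL of the future context
        loss.backward()
        with torch.no_grad():
            logits -= lr * logits.grad
        logits.grad = None
    return logits.detach()

# Toy future constraint: push probability mass toward token id 2.
toy_logits = torch.randn(1, 4, 10)  # [batch, length, vocab]
loss_fn = lambda l: -torch.log_softmax(l, -1)[..., 2].mean()
print(backward_pass(toy_logits, loss_fn, steps=50).argmax(-1))
```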
COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs | Recent years have brought about a renewed interest in commonsense
representation and reasoning in the field of natural language understanding.
The development of new commonsense knowledge graphs (CSKG) has been central to
these advances as their diverse facts can be used and referenced by machine
learning models for tackling new and challenging tasks. At the same time, there
remain questions about the quality and coverage of these resources due to the
massive scale required to comprehensively encompass general commonsense
knowledge.
In this work, we posit that manually constructed CSKGs will never achieve the
coverage necessary to be applicable in all situations encountered by NLP
agents. Therefore, we propose a new evaluation framework for testing the
utility of KGs based on how effectively implicit knowledge representations can
be learned from them.
With this new goal, we propose ATOMIC 2020, a new CSKG of general-purpose
commonsense knowledge containing knowledge that is not readily available in
pretrained language models. We evaluate its properties in comparison with other
leading CSKGs, performing the first large-scale pairwise study of commonsense
knowledge resources. Next, we show that ATOMIC 2020 is better suited for
training knowledge models that can generate accurate, representative knowledge
for new, unseen entities and events. Finally, through human evaluation, we show
that the few-shot performance of GPT-3 (175B parameters), while impressive,
remains ~12 absolute points lower than a BART-based knowledge model trained on
ATOMIC 2020 despite using over 430x fewer parameters.
| 2,021 | Computation and Language |
Towards Induction of Structured Phoneme Inventories | This extended abstract surveying the work on phonological typology was
prepared for "SIGTYP 2020: The Second Workshop on Computational Research in
Linguistic Typology" to be held at EMNLP 2020.
| 2,020 | Computation and Language |
Perceptimatic: A human speech perception benchmark for unsupervised
subword modelling | In this paper, we present a data set and methods to compare speech processing
models and human behaviour on a phone discrimination task. We provide
Perceptimatic, an open data set which consists of French and English speech
stimuli, as well as the results of 91 English- and 93 French-speaking
listeners. The stimuli test a wide range of French and English contrasts, and
are extracted directly from corpora of natural running read speech, used for
the 2017 Zero Resource Speech Challenge. We provide a method to compare humans'
perceptual space with models' representational space, and we apply it to models
previously submitted to the Challenge. We show that, unlike unsupervised models
and supervised multilingual models, a standard supervised monolingual HMM-GMM
phone recognition system, while good at discriminating phones, yields a
representational space very different from that of human native listeners.
| 2,020 | Computation and Language |
The Zero Resource Speech Challenge 2020: Discovering discrete subword
and word units | We present the Zero Resource Speech Challenge 2020, which aims at learning
speech representations from raw audio signals without any labels. It combines
the data sets and metrics from two previous benchmarks (2017 and 2019) and
features two tasks which tap into two levels of speech representation. The
first task is to discover low bit-rate subword representations that optimize
the quality of speech synthesis; the second one is to discover word-like units
from unsegmented raw speech. We present the results of the twenty submitted
models and discuss the implications of the main findings for unsupervised
speech learning.
| 2,020 | Computation and Language |
The Extraordinary Failure of Complement Coercion Crowdsourcing | Crowdsourcing has eased and scaled up the collection of linguistic annotation
in recent years. In this work, we follow known methodologies of collecting
labeled data for the complement coercion phenomenon. These are constructions
with an implied action -- e.g., "I started a new book I bought last week",
where the implied action is reading. We aim to collect annotated data for this
phenomenon by reducing it to either of two known tasks: Explicit Completion and
Natural Language Inference. However, in both cases, crowdsourcing resulted in
low agreement scores, even though we followed the same methodologies as in
previous work. Why does the same process fail to yield high agreement scores?
We specify our modeling schemes, highlight the differences with previous work
and provide some insights about the task and possible explanations for the
failure. We conclude that specific phenomena require tailored solutions, not
only in specialized algorithms, but also in data collection methods.
| 2,020 | Computation and Language |
NEMO: Frequentist Inference Approach to Constrained Linguistic Typology
Feature Prediction in SIGTYP 2020 Shared Task | This paper describes the NEMO submission to SIGTYP 2020 shared task which
deals with prediction of linguistic typological features for multiple languages
using data derived from the World Atlas of Language Structures (WALS). We
employ frequentist inference to represent correlations between typological
features and use this representation to train simple multi-class estimators
that predict individual features. We describe two submitted ridge
regression-based configurations which ranked second and third overall in the
constrained task. Our best configuration achieved the micro-averaged accuracy
score of 0.66 on 149 test languages.
| 2,022 | Computation and Language |
SLEDGE-Z: A Zero-Shot Baseline for COVID-19 Literature Search | With worldwide concerns surrounding the Severe Acute Respiratory Syndrome
Coronavirus 2 (SARS-CoV-2), there is a rapidly growing body of scientific
literature on the virus. Clinicians, researchers, and policy-makers need to be
able to search these articles effectively. In this work, we present a zero-shot
ranking algorithm that adapts to COVID-related scientific literature. Our
approach filters training data from another collection down to medical-related
queries, uses a neural re-ranking model pre-trained on scientific text
(SciBERT), and filters the target document collection. This approach ranks top
among zero-shot methods on the TREC COVID Round 1 leaderboard, and exhibits a
P@5 of 0.80 and an nDCG@10 of 0.68 when evaluated on both Round 1 and 2
judgments. Despite not relying on TREC-COVID data, our method outperforms
models that do. As one of the first search methods to thoroughly evaluate
COVID-19 search, we hope that this serves as a strong baseline and helps in the
global crisis.
| 2,020 | Computation and Language |
Chatbot Interaction with Artificial Intelligence: Human Data
Augmentation with T5 and Language Transformer Ensemble for Text
Classification | In this work, we present the Chatbot Interaction with Artificial Intelligence
(CI-AI) framework as an approach to the training of deep learning chatbots for
task classification. The intelligent system augments human-sourced data via
artificial paraphrasing in order to generate a large set of training data for
further classical, attention, and language transformation-based learning
approaches for Natural Language Processing. Human beings are asked to
paraphrase commands and questions for task identification for further execution
of a machine. The commands and questions are split into training and validation
sets. A total of 483 responses were recorded. Secondly, the training set is
paraphrased by the T5 model in order to augment it with further data. Seven
state-of-the-art transformer-based text classification algorithms (BERT,
DistilBERT, RoBERTa, DistilRoBERTa, XLM, XLM-RoBERTa, and XLNet) are
benchmarked for both sets after fine-tuning on the training data for two
epochs. We find that all models improve when the training data is augmented by
the T5 model, with an average increase in classification accuracy of 4.01%. The
best result was the RoBERTa model trained on T5 augmented data which achieved
98.96% classification accuracy. Finally, we found that an ensemble of the five
best-performing transformer models via Logistic Regression of output label
predictions led to an accuracy of 99.59% on the dataset of human responses. A
highly-performing model allows the intelligent system to interpret human
commands at the social-interaction level through a chatbot-like interface (e.g.
"Robot, can we have a conversation?") and allows for better accessibility to AI
by non-technical users.
| 2,020 | Computation and Language |
Vulgaris: Analysis of a Corpus for Middle-Age Varieties of Italian
Language | Italian is a Romance language that has its roots in Vulgar Latin. The birth
of modern Italian started in Tuscany around the 14th century, and it is
mainly attributed to the works of Dante Alighieri, Francesco Petrarca and
Giovanni Boccaccio, who are among the most acclaimed authors of the medieval
age in Tuscany. However, Italy has been characterized by a high variety of
dialects, which are often loosely related to each other, due to the past
fragmentation of the territory. Italian has absorbed influences from many of
these dialects, as well as from other languages, due to the dominion over
portions of the country by other nations, such as Spain and France. In this
work we present
Vulgaris, a project aimed at studying a corpus of Italian textual resources
from authors of different regions, ranging in a time period between 1200 and
1600. Each composition is associated to its author, and authors are also
grouped in families, i.e. sharing similar stylistic/chronological
characteristics. Hence, the dataset is not only a valuable resource for
studying the diachronic evolution of Italian and the differences between its
dialects, but it is also useful to investigate stylistic aspects between single
authors. We provide a detailed statistical analysis of the data, and a
corpus-driven study in dialectology and diachronic varieties.
| 2,020 | Computation and Language |
Improving Text Generation with Student-Forcing Optimal Transport | Neural language models are often trained with maximum likelihood estimation
(MLE), where the next word is generated conditioned on the ground-truth word
tokens. During testing, however, the model is instead conditioned on previously
generated tokens, resulting in what is termed exposure bias. To reduce this gap
between training and testing, we propose using optimal transport (OT) to match
the sequences generated in these two modes. An extension is further proposed to
improve the OT learning, based on the structural and contextual information of
the text sequences. The effectiveness of the proposed method is validated on
machine translation, text summarization, and text generation tasks.
| 2,020 | Computation and Language |
Look It Up: Bilingual Dictionaries Improve Neural Machine Translation | Despite advances in neural machine translation (NMT) quality, rare words
continue to be problematic. For humans, the solution to the rare-word problem
has long been dictionaries, but dictionaries cannot be straightforwardly
incorporated into NMT. In this paper, we describe a new method for "attaching"
dictionary definitions to rare words so that the network can learn the best way
to use them. We demonstrate improvements of up to 1.8 BLEU using bilingual
dictionaries.
| 2,022 | Computation and Language |
Gender Coreference and Bias Evaluation at WMT 2020 | Gender bias in machine translation can manifest when choosing gender
inflections based on spurious gender correlations, for example, always
translating doctors as men and nurses as women. This can be particularly
harmful as models become more popular and deployed within commercial systems.
Our work presents the largest evidence to date for this phenomenon, covering
more than 19 systems submitted to WMT across four diverse target languages:
Czech, German,
Polish, and Russian. To achieve this, we use WinoMT, a recent automatic test
suite which examines gender coreference and bias when translating from English
to languages with grammatical gender. We extend WinoMT to handle two new
languages tested in WMT: Polish and Czech. We find that all systems
consistently use spurious correlations in the data rather than meaningful
contextual information.
| 2,020 | Computation and Language |
End-to-End Synthetic Data Generation for Domain Adaptation of Question
Answering Systems | We propose an end-to-end approach for synthetic QA data generation. Our model
comprises a single transformer-based encoder-decoder network that is trained
end-to-end to generate both answers and questions. In a nutshell, we feed a
passage to the encoder and ask the decoder to generate a question and an answer
token-by-token. The likelihood produced in the generation process is used as a
filtering score, which avoids the need for a separate filtering model. Our
generator is trained by fine-tuning a pretrained LM using maximum likelihood
estimation. The experimental results indicate significant improvements in the
domain adaptation of QA models outperforming current state-of-the-art methods.
| 2,020 | Computation and Language |
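A sketch of the generate-then-filter recipe described above, using a generic
seq2seq checkpoint as a stand-in (the real model would be fine-tuned to emit
question and answer token-by-token; the checkpoint and threshold are
assumptions, and compute_transition_scores requires a recent transformers
version):

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/bart-base"  # placeholder; a QA-generation fine-tune is assumed
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

def generate_qa(passage, threshold=-1.5):
    """Generate a question-answer sequence from a passage and use the mean
    token log-likelihood of the generation as the filtering score."""
    inputs = tok(passage, return_tensors="pt", truncation=True)
    out = model.generate(**inputs, max_new_tokens=48,
                         output_scores=True, return_dict_in_generate=True)
    step_scores = model.compute_transition_scores(
        out.sequences, out.scores, normalize_logits=True)
    score = step_scores.mean().item()  # likelihood filter, no separate model
    text = tok.decode(out.sequences[0], skip_special_tokens=True)
    return (text, score) if score > threshold else (None, score)
```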
Dual-mode ASR: Unify and Improve Streaming ASR with Full-context
Modeling | Streaming automatic speech recognition (ASR) aims to emit each hypothesized
word as quickly and accurately as possible, while full-context ASR waits for
the completion of a full speech utterance before emitting completed hypotheses.
In this work, we propose a unified framework, Dual-mode ASR, to train a single
end-to-end ASR model with shared weights for both streaming and full-context
speech recognition. We show that the latency and accuracy of streaming ASR
significantly benefit from weight sharing and joint training of full-context
ASR, especially with inplace knowledge distillation during the training. The
Dual-mode ASR framework can be applied to recent state-of-the-art
convolution-based and transformer-based ASR networks. We present extensive
experiments with two state-of-the-art ASR networks, ContextNet and Conformer,
on two datasets, a widely used public dataset LibriSpeech and a large-scale
dataset MultiDomain. Experiments and ablation studies demonstrate that
Dual-mode ASR not only simplifies the workflow of training and deploying
streaming and full-context ASR models, but also significantly improves both
emission latency and recognition accuracy of streaming ASR. With Dual-mode ASR,
we achieve new state-of-the-art streaming ASR results on both LibriSpeech and
MultiDomain in terms of accuracy and latency.
| 2,021 | Computation and Language |
Measuring and Reducing Gendered Correlations in Pre-trained Models | Pre-trained models have revolutionized natural language understanding.
However, researchers have found they can encode artifacts undesired in many
applications, such as professions correlating with one gender more than
another. We explore such gendered correlations as a case study for how to
address unintended correlations in pre-trained models. We define metrics and
reveal that it is possible for models with similar accuracy to encode
correlations at very different rates. We show how measured correlations can be
reduced with general-purpose techniques, and highlight the trade offs different
strategies have. With these results, we make recommendations for training
robust models: (1) carefully evaluate unintended correlations, (2) be mindful
of seemingly innocuous configuration differences, and (3) focus on general
mitigations.
| 2,021 | Computation and Language |
Improving Self-supervised Pre-training via a Fully-Explored Masked
Language Model | The Masked Language Model (MLM) framework has been widely adopted for
self-supervised language pre-training. In this paper, we argue that randomly
sampled masks in MLM would lead to undesirably large gradient variance. Thus,
we theoretically quantify the gradient variance via correlating the gradient
covariance with the Hamming distance between two different masks (given a
certain text sequence). To reduce the variance due to the sampling of masks, we
propose a fully-explored masking strategy, where a text sequence is divided
into a certain number of non-overlapping segments. Thereafter, the tokens
within one segment are masked for training. We prove, from a theoretical
perspective, that the gradients derived from this new masking schema have a
smaller variance and can lead to more efficient self-supervised training. We
conduct extensive experiments on both continual pre-training and general
pre-training from scratch. Empirical results confirm that this new masking
strategy can consistently outperform standard random masking. Detailed
efficiency analysis and ablation studies further validate the advantages of our
fully-explored masking strategy under the MLM framework.
| 2,020 | Computation and Language |
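The fully-explored masking strategy above divides a sequence into
non-overlapping segments and masks each segment in turn, rather than sampling
masks at random. A minimal sketch; the segment count and mask token are
illustrative:

```python
def fully_explored_masks(tokens, n_segments=4, mask_token="[MASK]"):
    """Yield one training view per segment: tokens inside the segment are
    masked, tokens outside are kept, and the segments do not overlap."""
    bounds = [round(i * len(tokens) / n_segments) for i in range(n_segments + 1)]
    for lo, hi in zip(bounds, bounds[1:]):
        yield [mask_token if lo <= i < hi else t for i, t in enumerate(tokens)]

toks = "the quick brown fox jumps over the lazy dog".split()
for view in fully_explored_masks(toks, n_segments=3):
    print(" ".join(view))
```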
Towards Machine Translation for the Kurdish Language | Machine translation is the task of translating texts from one language to
another using computers. It has been one of the major tasks in natural language
processing and computational linguistics, motivated by the goal of facilitating
human communication. Kurdish, an Indo-European language, has received little
attention in this realm due to the language being less-resourced. Therefore, in
this paper, we are addressing the main issues in creating a machine translation
system for the Kurdish language, with a focus on the Sorani dialect. We
describe the available scarce parallel data suitable for training a neural
machine translation model for Sorani Kurdish-English translation. We also
discuss some of the major challenges in Kurdish language translation and
demonstrate how fundamental text processing tasks, such as tokenization, can
improve translation performance.
| 2,020 | Computation and Language |
TextHide: Tackling Data Privacy in Language Understanding Tasks | An unsolved challenge in distributed or federated learning is to effectively
mitigate privacy risks without slowing down training or reducing accuracy. In
this paper, we propose TextHide, aiming to address this challenge for natural
language understanding tasks. It requires all participants to add a simple
encryption step to prevent an eavesdropping attacker from recovering private
text data. Such an encryption step is efficient and only affects the task
performance slightly. In addition, TextHide fits well with the popular
framework of fine-tuning pre-trained language models (e.g., BERT) for any
sentence or sentence-pair task. We evaluate TextHide on the GLUE benchmark, and
our experiments show that TextHide can effectively defend attacks on shared
gradients or representations and the averaged accuracy reduction is only
1.9%. We also present an analysis of the security of TextHide using a
conjecture about the computational intractability of a mathematical problem.
Our code is available at https://github.com/Hazelsuko07/TextHide
| 2,020 | Computation and Language |
BioMegatron: Larger Biomedical Domain Language Model | There has been an influx of biomedical domain-specific language models,
showing language models pre-trained on biomedical text perform better on
biomedical domain benchmarks than those trained on general domain text corpora
such as Wikipedia and Books. Yet, most works do not deeply study the factors
affecting each domain language application. Additionally, the study of model
size
on domain-specific models has been mostly missing. We empirically study and
evaluate several factors that can affect performance on domain language
applications, such as the sub-word vocabulary set, model size, pre-training
corpus, and domain transfer. We show consistent improvements on benchmarks with
our larger BioMegatron model trained on a larger domain corpus, contributing to
our understanding of domain language model applications. We demonstrate
noticeable improvements over the previous state-of-the-art (SOTA) on standard
biomedical NLP benchmarks of named entity recognition, relation extraction, and
question answering. Model checkpoints and code are available at
[https://ngc.nvidia.com] and [https://github.com/NVIDIA/NeMo].
| 2,020 | Computation and Language |
Zero-shot Entity Linking with Efficient Long Range Sequence Modeling | This paper considers the problem of zero-shot entity linking, in which
entities seen at test time may not have been present in training. Following the
prevailing BERT-based research efforts, we find that a simple yet effective way
is to expand long-range sequence modeling. Unlike many previous methods, our
method does
not require expensive pre-training of BERT with long position embedding.
Instead, we propose an efficient position embeddings initialization method
called Embedding-repeat, which initializes larger position embeddings based on
BERT-Base. On Wikia's zero-shot EL dataset, our method improves the SOTA from
76.06% to 79.08%, and for its long data, the corresponding improvement is from
74.57% to 82.14%. Our experiments suggest the effectiveness of long-range
sequence modeling without retraining the BERT model.
| 2,022 | Computation and Language |
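Embedding-repeat, as described above, initializes a longer position-embedding
table from BERT-Base's 512 learned positions by tiling. A direct sketch;
tensor names are illustrative:

```python
import torch

def embedding_repeat(base_pos_emb, target_len):
    """Initialize a position-embedding table of length target_len by
    repeating the base table (e.g. BERT-Base's 512 x 768 matrix)."""
    n_repeats = -(-target_len // base_pos_emb.shape[0])  # ceiling division
    return base_pos_emb.repeat(n_repeats, 1)[:target_len]

base = torch.randn(512, 768)  # stands in for the pretrained table
long_table = embedding_repeat(base, 2048)
print(long_table.shape)       # torch.Size([2048, 768])
assert torch.equal(long_table[512:1024], base)
```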
Are Some Words Worth More than Others? | Current evaluation metrics for language modeling and generation rely heavily
on the accuracy of predicted (or generated) words as compared to a reference
ground truth. While important, token-level accuracy only captures one aspect of
a language model's behavior, and ignores linguistic properties of words that
may allow some mis-predicted tokens to be useful in practice. Furthermore,
statistics directly tied to prediction accuracy (including perplexity) may be
confounded by the Zipfian nature of written language, as the majority of the
prediction attempts will occur with frequently-occurring types. A model's
performance may vary greatly between high- and low-frequency words, which in
practice could lead to failure modes such as repetitive and dull generated text
being produced by a downstream consumer of a language model. To address this,
we propose two new intrinsic evaluation measures within the framework of a
simple word prediction task that are designed to give a more holistic picture
of a language model's performance. We evaluate several commonly-used large
English language models using our proposed metrics, and demonstrate that our
approach reveals functional differences in performance between the models that
are obscured by more traditional metrics.
| 2,020 | Computation and Language |
Supertagging Combinatory Categorial Grammar with Attentive Graph
Convolutional Networks | Supertagging is conventionally regarded as an important task for combinatory
categorial grammar (CCG) parsing, where effective modeling of contextual
information is highly important to this task. However, existing studies have
made limited efforts to leverage contextual features except for applying
powerful encoders (e.g., bi-LSTM). In this paper, we propose attentive graph
convolutional networks to enhance neural CCG supertagging through a novel
solution of leveraging contextual information. Specifically, we build the graph
from chunks (n-grams) extracted from a lexicon and apply attention over the
graph, so that different word pairs from the contexts within and across chunks
are weighted in the model and facilitate the supertagging accordingly. The
experiments performed on the CCGbank demonstrate that our approach outperforms
all previous studies in terms of both supertagging and parsing. Further
analyses illustrate the effectiveness of each component in our approach to
discriminatively learn from word pairs to enhance CCG supertagging.
| 2,020 | Computation and Language |
ReviewRobot: Explainable Paper Review Generation based on Knowledge
Synthesis | To assist the human review process, we build a novel ReviewRobot to automatically
assign a review score and write comments for multiple categories such as
novelty and meaningful comparison. A good review needs to be knowledgeable,
namely that the comments should be constructive and informative to help improve
the paper; and explainable by providing detailed evidence. ReviewRobot achieves
these goals via three steps: (1) We perform domain-specific Information
Extraction to construct a knowledge graph (KG) from the target paper under
review, a related work KG from the papers cited by the target paper, and a
background KG from a large collection of previous papers in the domain. (2) By
comparing these three KGs, we predict a review score and detailed structured
knowledge as evidence for each review category. (3) We carefully select and
generalize human review sentences into templates, and apply these templates to
transform the review scores and evidence into natural language comments.
Experimental results show that our review score predictor reaches 71.4%-100%
accuracy. Human assessment by domain experts shows that 41.7%-70.5% of the
comments generated by ReviewRobot are valid and constructive, and better than
human-written ones for 20% of the time. Thus, ReviewRobot can serve as an
assistant for paper reviewers, program chairs and authors.
| 2,020 | Computation and Language |