Titles | Abstracts | Years | Categories |
---|---|---|---|
Tanbih: Get To Know What You Are Reading | We introduce Tanbih, a news aggregator with intelligent analysis tools to
help readers understand what's behind a news story. Our system displays news
grouped into events and generates media profiles that show the general
factuality of reporting, the degree of propagandistic content,
hyper-partisanship, leading political ideology, general frame of reporting, and
stance with respect to various claims and topics of a news outlet. In addition,
we automatically analyse each article to detect whether it is propagandistic
and to determine its stance with respect to a number of controversial topics.
| 2,019 | Computation and Language |
Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods | For AI systems to garner widespread public acceptance, we must develop
methods capable of explaining the decisions of black-box models such as neural
networks. In this work, we identify two issues of current explanatory methods.
First, we show that two prevalent perspectives on explanations ---
feature-additivity and feature-selection --- lead to fundamentally different
instance-wise explanations. In the literature, explainers from different
perspectives are currently being directly compared, despite their distinct
explanation goals. The second issue is that current post-hoc explainers are
either validated under simplistic scenarios (on simple models such as linear
regression, or on models trained on synthetic datasets), or, when applied to
real-world neural networks, explainers are commonly validated under the
assumption that the learned models behave reasonably. However, neural networks
often rely on unreasonable correlations, even when producing correct decisions.
We introduce a verification framework for explanatory methods under the
feature-selection perspective. Our framework is based on a non-trivial neural
network architecture trained on a real-world task, and for which we are able to
provide guarantees on its inner workings. We validate the efficacy of our
evaluation by showing the failure modes of current explainers. We aim for this
framework to provide a publicly available, off-the-shelf evaluation when the
feature-selection perspective on explanations is needed.
| 2,019 | Computation and Language |
Contrastive Language Adaptation for Cross-Lingual Stance Detection | We study cross-lingual stance detection, which aims to leverage labeled data
in one language to identify the relative perspective (or stance) of a given
document with respect to a claim in a different target language. In particular,
we introduce a novel contrastive language adaptation approach applied to memory
networks, which ensures accurate alignment of stances in the source and target
languages, and can effectively deal with the challenge of limited labeled data
in the target language. The evaluation results on public benchmark datasets and
comparison against current state-of-the-art approaches demonstrate the
effectiveness of our approach.
| 2,019 | Computation and Language |
Learning from Fact-checkers: Analysis and Generation of Fact-checking
Language | In the fight against fake news, many fact-checking systems comprising
human-based fact-checking sites (e.g., snopes.com and politifact.com) and
automatic detection systems have been developed in recent years. However,
online users still keep sharing fake news even after it has been debunked. This
suggests that early fake news detection may be insufficient and that we need another
complementary approach to mitigate the spread of misinformation. In this paper,
we introduce a novel application of text generation for combating fake news. In
particular, we (1) leverage online users named \emph{fact-checkers}, who cite
fact-checking sites as credible evidence to fact-check information in public
discourse; (2) analyze linguistic characteristics of fact-checking tweets; and
(3) propose and build a deep learning framework to generate responses with
fact-checking intention to increase the fact-checkers' engagement in
fact-checking activities. Our analysis reveals that the fact-checkers tend to
refute misinformation and use formal language (e.g. few swear words and little
Internet slang). Our framework successfully generates relevant responses and
outperforms competing models, achieving up to 30\% improvements. Our
qualitative study also confirms the superiority of our generated responses
compared with responses generated by the existing models.
| 2,019 | Computation and Language |
On Dimensional Linguistic Properties of the Word Embedding Space | Word embeddings have become a staple of several natural language processing
tasks, yet much remains to be understood about their properties. In this work,
we analyze word embeddings in terms of their principal components and arrive at
a number of novel and counterintuitive observations. In particular, we
characterize the utility of variance explained by the principal components as a
proxy for downstream performance. Furthermore, through syntactic probing of the
principal embedding space, we show that the syntactic information captured by a
principal component does not correlate with the amount of variance it explains.
Consequently, we investigate the limitations of variance-based embedding
post-processing and demonstrate that such post-processing is counter-productive
in sentence classification and machine translation tasks. Finally, we offer a
few precautionary guidelines on applying variance-based embedding
post-processing and explain why non-isotropic geometry might be integral to
word embedding performance.
| 2,020 | Computation and Language |
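
The variance-based post-processing discussed in the abstract above generally amounts to removing the highest-variance principal components from the embedding matrix. A minimal NumPy sketch of that operation follows, for illustration only; the matrix `E` and the number of removed components are hypothetical, and this is not necessarily the authors' exact procedure.

```python
import numpy as np

def remove_top_components(E, d=2):
    """Variance-based post-processing: centre the embeddings and project out
    the d principal components that explain the most variance."""
    X = E - E.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    top = Vt[:d]                    # (d, dim) top principal directions
    return X - X @ top.T @ top      # remove their contribution

# Hypothetical embedding matrix: 10,000 words, 300 dimensions.
E = np.random.randn(10000, 300)
print(remove_top_components(E, d=2).shape)
```
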
A Machine Learning Analysis of the Features in Deceptive and Credible
News | Fake news is a type of pervasive propaganda that spreads misinformation
online, taking advantage of social media's extensive reach to manipulate public
perception. Over the past three years, fake news has become a focal discussion
point in the media due to its impact on the 2016 U.S. presidential election.
Fake news can have severe real-world implications: in 2016, a man walked into a
pizzeria carrying a rifle because he read that Hillary Clinton was harboring
children as sex slaves. This project presents a high-accuracy (87%) machine
learning classifier that determines the validity of news based on the word
distributions and specific linguistic and stylistic differences in the first
few sentences of an article. This can help readers identify the validity of an
article by looking for specific features in the opening lines, aiding them in
making informed decisions. Using a dataset of 2,107 articles from 30 different
websites, this project establishes an understanding of the variations between
fake and credible news by examining the model, dataset, and features. This
classifier appears to use the differences in word distribution, levels of tone
authenticity, and frequency of adverbs, adjectives, and nouns. The
differentiation in the features of these articles can be used to improve future
classifiers. This classifier can also be further applied directly to browsers
as a Google Chrome extension or as a filter for social media outlets or news
websites to reduce the spread of misinformation.
| 2,019 | Computation and Language |
On the Limits of Learning to Actively Learn Semantic Representations | One of the goals of natural language understanding is to develop models that
map sentences into meaning representations. However, training such models
requires expensive annotation of complex structures, which hinders their
adoption. Learning to actively-learn (LTAL) is a recent paradigm for reducing
the amount of labeled data by learning a policy that selects which samples
should be labeled. In this work, we examine LTAL for learning semantic
representations, such as QA-SRL. We show that even an oracle policy that is
allowed to pick examples that maximize performance on the test set (and
constitutes an upper bound on the potential of LTAL), does not substantially
improve performance compared to a random policy. We investigate factors that
could explain this finding and show that a distinguishing characteristic of
successful applications of LTAL is the interaction between optimization and the
oracle policy selection process. In successful applications of LTAL, the
examples selected by the oracle policy do not substantially depend on the
optimization procedure, while in our setup the stochastic nature of
optimization strongly affects the examples selected by the oracle. We conclude
that the current applicability of LTAL for improving data efficiency in
learning semantic meaning representations is limited.
| 2,019 | Computation and Language |
How Transformer Revitalizes Character-based Neural Machine Translation:
An Investigation on Japanese-Vietnamese Translation Systems | For translation between East Asian languages, many works have found
clear advantages in using characters as the translation unit. Unfortunately,
traditional recurrent neural machine translation systems hinder the practical
use of such character-based systems due to their architectural limitations:
they handle extremely long sequences poorly and are highly restricted in
parallelizing computations. In this paper, we demonstrate
that the new transformer architecture can perform character-based translation
better than the recurrent one. We conduct experiments on a low-resource
language pair: Japanese-Vietnamese. Our models considerably outperform the
state-of-the-art systems which employ word-based recurrent architectures.
| 2,019 | Computation and Language |
Joint Diacritization, Lemmatization, Normalization, and Fine-Grained
Morphological Tagging | Semitic languages can be highly ambiguous, having several interpretations of
the same surface forms, and morphologically rich, having many morphemes that
realize several morphological features. This is further exacerbated for
dialectal content, which is more prone to noise and lacks a standard
orthography. The morphological features can be lexicalized, like lemmas and
diacritized forms, or non-lexicalized, like gender, number, and part-of-speech
tags, among others. Joint modeling of the lexicalized and non-lexicalized
features can identify more intricate morphological patterns, which provide
better context modeling, and further disambiguate ambiguous lexical choices.
However, the different modeling granularity can make joint modeling more
difficult. Our approach models the different features jointly, whether
lexicalized (on the character-level), where we also model surface form
normalization, or non-lexicalized (on the word-level). We use Arabic as a test
case, and achieve state-of-the-art results for Modern Standard Arabic, with 20%
relative error reduction, and Egyptian Arabic (a dialectal variant of Arabic),
with 11% reduction.
| 2,019 | Computation and Language |
Mapping Natural-language Problems to Formal-language Solutions Using
Structured Neural Representations | Generating formal-language programs represented by relational tuples, such as
Lisp programs or mathematical operations, to solve problems stated in natural
language is a challenging task because it requires explicitly capturing
discrete symbolic structural information implicit in the input. However, most
general neural sequence models do not explicitly capture such structural
information, limiting their performance on these tasks. In this paper, we
propose a new encoder-decoder model based on a structured neural
representation, Tensor Product Representations (TPRs), for mapping
Natural-language problems to Formal-language solutions, called TP-N2F. The
encoder of TP-N2F employs TPR `binding' to encode natural-language symbolic
structure in vector space and the decoder uses TPR `unbinding' to generate, in
symbolic space, a sequential program represented by relational tuples, each
consisting of a relation (or operation) and a number of arguments. TP-N2F
considerably outperforms LSTM-based seq2seq models on two benchmarks and
creates new state-of-the-art results. Ablation studies show that improvements
can be attributed to the use of structured TPRs explicitly in both the encoder
and decoder. Analysis of the learned structures shows how TPRs enhance the
interpretability of TP-N2F.
| 2,020 | Computation and Language |
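
As a rough illustration of the TPR `binding' and `unbinding' operations mentioned in the TP-N2F abstract (not the actual encoder-decoder), the sketch below binds filler vectors to orthonormal role vectors via outer products and recovers a filler by unbinding with its role; all dimensions and vectors are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_filler, d_role, n_slots = 8, 4, 3

# Orthonormal role vectors make unbinding exact (the role duals equal the roles).
roles = np.linalg.qr(rng.standard_normal((d_role, n_slots)))[0].T   # (n_slots, d_role)
fillers = rng.standard_normal((n_slots, d_filler))                  # one filler per role

# Binding: sum of outer products filler_i (x) role_i -> a (d_filler, d_role) tensor.
T = sum(np.outer(f, r) for f, r in zip(fillers, roles))

# Unbinding: multiply the tensor by a role vector to recover the bound filler.
recovered = T @ roles[1]
print(np.allclose(recovered, fillers[1]))   # True, up to numerical precision
```
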
Text Level Graph Neural Network for Text Classification | Recently, researchers have explored graph neural network (GNN) techniques
for text classification, since GNNs handle complex structures well and
preserve global information. However, previous GNN-based methods mainly face
two practical problems: a fixed corpus-level graph structure that does not
support online testing, and high memory consumption. To tackle these
problems, we propose a new GNN-based model that builds a graph for each input
text with globally shared parameters, instead of a single graph for the whole
corpus. This method removes the dependence between an individual text
and the entire corpus, which supports online testing while still preserving
global information. Besides, we build graphs with much smaller windows within
the text, which not only extracts more local features but also significantly
reduces the number of edges and the memory consumption. Experiments show that
our model outperforms existing models on several text classification datasets
even while consuming less memory.
| 2,019 | Computation and Language |
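
A minimal sketch of the core idea in the abstract above: a small graph is built for each input text from a sliding word window, while edge parameters are shared globally by word-pair identity. Function and variable names are illustrative and not taken from the paper's code.

```python
from collections import defaultdict

# Edge parameters shared across all text-level graphs, keyed by word pair.
global_edge_weights = defaultdict(float)

def build_text_graph(tokens, window=3):
    """Return the edge set of one text's graph: each token is connected to
    the following (window - 1) tokens inside a small sliding window."""
    edges = set()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            pair = tuple(sorted((w, tokens[j])))
            edges.add(pair)
            global_edge_weights[pair] += 1.0   # in the real model, a learned shared weight
    return edges

print(build_text_graph("the cat sat on the mat".split(), window=3))
```
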
Multilingual Dialogue Generation with Shared-Private Memory | Most existing dialog systems are monolingual, and features shared among
different languages are rarely explored. In this paper, we introduce a novel
multilingual dialogue system. Specifically, we augment the sequence to sequence
framework with improved shared-private memory. The shared memory learns common
features among different languages and facilitates a cross-lingual transfer to
boost dialogue systems, while the private memory is owned by each separate
language to capture its unique features. Experiments conducted on Chinese and
English conversation corpora of different scales show that our proposed
architecture outperforms the individually learned model with the help of the
other language, where the improvement is particularly distinct when the
training data is limited.
| 2,019 | Computation and Language |
Named Entity Recognition -- Is there a glass ceiling? | Recent developments in Named Entity Recognition (NER) have resulted in better
and better models. However, is there a glass ceiling? Do we know which types of
errors are still hard or even impossible to correct? In this paper, we present
a detailed analysis of the types of errors in state-of-the-art machine learning
(ML) methods. Our study reveals the weak and strong points of the Stanford,
CMU, FLAIR, ELMO and BERT models, as well as their shared limitations. We also
introduce new techniques for improving annotation, for training processes and
for checking a model's quality and stability. The presented results are based on
the CoNLL 2003 data set for the English language. A new enriched semantic
annotation of errors for this data set and new diagnostic data sets are
attached in the supplementary materials.
| 2,019 | Computation and Language |
Fine-Grained Analysis of Propaganda in News Articles | Propaganda aims at influencing people's mindset with the purpose of advancing
a specific agenda. Previous work has addressed propaganda detection at the
document level, typically labelling all articles from a propagandistic news
outlet as propaganda. Such noisy gold labels inevitably affect the quality of
any learning system trained on them. A further issue with most existing systems
is the lack of explainability. To overcome these limitations, we propose a
novel task: performing fine-grained analysis of texts by detecting all
fragments that contain propaganda techniques as well as their type. In
particular, we create a corpus of news articles manually annotated at the
fragment level with eighteen propaganda techniques and we propose a suitable
evaluation measure. We further design a novel multi-granularity neural network,
and we show that it outperforms several strong BERT-based baselines.
| 2,019 | Computation and Language |
Domain Differential Adaptation for Neural Machine Translation | Neural networks are known to be data hungry and domain sensitive, but it is
nearly impossible to obtain large quantities of labeled data for every domain
we are interested in. This necessitates the use of domain adaptation
strategies. One common strategy encourages generalization by aligning the
global distribution statistics between source and target domains, but one
drawback is that the statistics of different domains or tasks are inherently
divergent, and smoothing over these differences can lead to sub-optimal
performance. In this paper, we propose the framework of {\it Domain
Differential Adaptation (DDA)}, where instead of smoothing over these
differences we embrace them, directly modeling the difference between domains
using models in a related task. We then use these learned domain differentials
to adapt models for the target task accordingly. Experimental results on domain
adaptation for neural machine translation demonstrate the effectiveness of this
strategy, achieving consistent improvements over other alternative adaptation
strategies in multiple experimental settings.
| 2,019 | Computation and Language |
Why Attention? Analyzing and Remedying BiLSTM Deficiency in Modeling
Cross-Context for NER | State-of-the-art approaches to NER have used a sequence-labeling BiLSTM as a
core module. This paper formally shows the limitation of BiLSTM in modeling
cross-context patterns. Two types of simple cross-structures -- self-attention
and Cross-BiLSTM -- are shown to effectively remedy the problem. On both
OntoNotes 5.0 and WNUT 2017, clear and consistent improvements are achieved
over bare-bone models, up to 8.7% on some of the multi-token mentions. In-depth
analyses of several aspects of the improvements, especially the
identification of multi-token mentions, are further provided.
| 2,020 | Computation and Language |
Multi-hop Question Answering via Reasoning Chains | Multi-hop question answering requires models to gather information from
different parts of a text to answer a question. Most current approaches learn
to address this task in an end-to-end way with neural networks, without
maintaining an explicit representation of the reasoning process. We propose a
method to extract a discrete reasoning chain over the text, which consists of a
series of sentences leading to the answer. We then feed the extracted chains to
a BERT-based QA model to do final answer prediction. Critically, we do not rely
on gold annotated chains or "supporting facts": at training time, we derive
pseudogold reasoning chains using heuristics based on named entity recognition
and coreference resolution. Nor do we rely on these annotations at test time,
as our model learns to extract chains from raw text alone. We test our approach
on two recently proposed large multi-hop question answering datasets: WikiHop
and HotpotQA, and achieve state-of-the-art performance on WikiHop and strong
performance on HotpotQA. Our analysis shows the properties of chains that are
crucial for high performance: in particular, modeling extraction sequentially
is important, as is dealing with each candidate sentence in a context-aware
way. Furthermore, human evaluation shows that our extracted chains allow humans
to give answers with high confidence, indicating that these are a strong
intermediate abstraction for this task.
| 2,021 | Computation and Language |
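
The following toy sketch illustrates the idea of deriving a pseudogold reasoning chain by linking sentences that share entities, starting from the question and stopping at a sentence containing the answer. Here "entities" are approximated by capitalized tokens and every name is hypothetical; the paper's heuristics additionally rely on proper named entity recognition and coreference resolution.

```python
import re
from collections import deque

def entities(text):
    """Very rough entity proxy: capitalized tokens."""
    return set(re.findall(r"\b[A-Z][a-zA-Z]+\b", text))

def extract_chain(question, sentences, answer):
    """Breadth-first search over sentences, linking sentences that share an
    'entity', from the question to a sentence containing the answer."""
    start = [i for i, s in enumerate(sentences) if entities(s) & entities(question)]
    queue = deque((i, [i]) for i in start)
    seen = set(start)
    while queue:
        i, chain = queue.popleft()
        if answer.lower() in sentences[i].lower():
            return [sentences[j] for j in chain]
        for j, s in enumerate(sentences):
            if j not in seen and entities(s) & entities(sentences[i]):
                seen.add(j)
                queue.append((j, chain + [j]))
    return []

docs = ["Marie Curie was born in Warsaw.",
        "Warsaw is the capital of Poland.",
        "The Eiffel Tower is in Paris."]
print(extract_chain("Which country was Marie Curie born in?", docs, "Poland"))
```
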
Compositional Generalization for Primitive Substitutions | Compositional generalization is a basic mechanism in human language learning,
but current neural networks lack such ability. In this paper, we conduct
fundamental research for encoding compositionality in neural networks.
Conventional methods use a single representation for the input sentence, making
it hard to apply prior knowledge of compositionality. In contrast, our approach
leverages such knowledge with two representations, one generating attention
maps, and the other mapping attended input words to output symbols. We reduce
the entropy in each representation to improve generalization. Our experiments
demonstrate significant improvements over the conventional methods in five NLP
tasks including instruction learning and machine translation. In the SCAN
domain, it boosts accuracies from 14.0% to 98.8% in the Jump task, and from 92.0%
to 99.7% in the TurnLeft task. It also beats human performance on a few-shot
learning task. We hope the proposed approach can help ease future research
towards human-level compositional language learning.
| 2,019 | Computation and Language |
BERT for Evidence Retrieval and Claim Verification | Motivated by the promising performance of pre-trained language models, we
investigate BERT in an evidence retrieval and claim verification pipeline for
the FEVER fact extraction and verification challenge. To this end, we propose
to use two BERT models, one for retrieving potential evidence sentences
supporting or rejecting claims, and another for verifying claims based on the
predicted evidence sets. To train the BERT retrieval system, we use pointwise
and pairwise loss functions, and examine the effect of hard negative mining. A
second BERT model is trained to classify the samples as supported, refuted, and
not enough information. Our system achieves a new state-of-the-art recall of
87.1 for retrieving the top five sentences out of the FEVER document collection
of 50K Wikipedia pages, and ranks second on the official leaderboard with a
FEVER score of 69.7.
| 2,019 | Computation and Language |
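
For reference, one plausible form of the pairwise loss mentioned above is a margin-based ranking loss: evidence sentences should be scored above sampled negatives by at least a margin. The NumPy sketch below shows only the loss computation; the scores and margin are invented for the example.

```python
import numpy as np

def pairwise_hinge_loss(pos_scores, neg_scores, margin=1.0):
    """Mean hinge loss over all (positive, negative) sentence pairs: penalise
    any negative scored within `margin` of a gold evidence sentence."""
    diff = margin - pos_scores[:, None] + neg_scores[None, :]
    return np.maximum(0.0, diff).mean()

pos = np.array([2.1, 1.4])        # retrieval scores of gold evidence sentences
neg = np.array([0.3, 1.2, 1.9])   # scores of sampled (possibly hard) negatives
print(pairwise_hinge_loss(pos, neg))
```
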
Controllable Sentence Simplification | Text simplification aims at making a text easier to read and understand by
simplifying grammar and structure while keeping the underlying information
identical. It is often considered an all-purpose generic task where the same
simplification is suitable for all; however, multiple audiences can benefit from
simplified text in different ways. We adapt a discrete parametrization
mechanism that provides explicit control on simplification systems based on
Sequence-to-Sequence models. As a result, users can condition the
simplifications returned by a model on attributes such as length, amount of
paraphrasing, lexical complexity and syntactic complexity. We also show that
carefully chosen values of these attributes allow out-of-the-box
Sequence-to-Sequence models to outperform their standard counterparts on
simplification benchmarks. Our model, which we call ACCESS (as shorthand for
AudienCe-CEntric Sentence Simplification), establishes the state of the art at
41.87 SARI on the WikiLarge test set, a +1.42 improvement over the best
previously reported score.
| 2,020 | Computation and Language |
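
A sketch of the discrete parametrization idea described above: special tokens encoding target attributes (length, paraphrasing amount, lexical and syntactic complexity) are prepended to the source so that a standard Sequence-to-Sequence model can be conditioned on them at inference time. The token format and attribute names below are illustrative guesses rather than the exact ones used by ACCESS.

```python
def add_control_tokens(source, length_ratio=0.8, levenshtein=0.7,
                       word_rank=0.9, tree_depth=0.8):
    """Prepend discretized control tokens (rounded to 0.05) to the source;
    a seq2seq model trained on such inputs can then be steered at test time
    simply by changing these values."""
    attrs = {"NbChars": length_ratio, "LevSim": levenshtein,
             "WordRank": word_rank, "DepDepth": tree_depth}
    tokens = [f"<{name}_{round(v * 20) / 20:.2f}>" for name, v in attrs.items()]
    return " ".join(tokens + [source])

print(add_control_tokens("The incumbent was defeated in the general election."))
```
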
Improving Relation Extraction with Knowledge-attention | While attention mechanisms have been proven to be effective in many NLP
tasks, the majority of them are data-driven. We propose a novel knowledge-attention
encoder which incorporates prior knowledge from external lexical resources into
deep neural networks for the relation extraction task. Furthermore, we present
three effective ways of integrating knowledge-attention with self-attention to
maximize the utilization of both knowledge and data. The proposed relation
extraction system is end-to-end and fully attention-based. Experimental results
show that the proposed knowledge-attention mechanism has complementary
strengths with self-attention, and our integrated models outperform existing
CNN, RNN, and self-attention based models. State-of-the-art performance is
achieved on TACRED, a complex and large-scale relation extraction dataset.
| 2,020 | Computation and Language |
MaskParse@Deskin at SemEval-2019 Task 1: Cross-lingual UCCA Semantic
Parsing using Recursive Masked Sequence Tagging | This paper describes our recursive system for SemEval-2019 \textit{ Task 1:
Cross-lingual Semantic Parsing with UCCA}. Each recursive step consists of two
parts. We first perform semantic parsing using a sequence tagger to estimate
the probabilities of the UCCA categories in the sentence. Then, we apply a
decoding policy which interprets these probabilities and builds the graph
nodes. Parsing is done recursively: we perform a first inference on the
sentence to extract the main scenes and links, and then we recursively apply our
model to the sentence using a masking feature that reflects the decisions made
in previous steps. The process continues until the terminal nodes are reached. We
use a standard neural tagger and focus on our recursive parsing
strategy and on the cross-lingual transfer problem to develop a robust model
for the French language, using only a few training samples.
| 2,019 | Computation and Language |
Adapting a FrameNet Semantic Parser for Spoken Language Understanding
Using Adversarial Learning | This paper presents a new semantic frame parsing model, based on Berkeley
FrameNet, adapted to process spoken documents in order to perform information
extraction from broadcast contents. Building upon previous work that had shown
the effectiveness of adversarial learning for domain generalization in the
context of semantic parsing of encyclopedic written documents, we propose to
extend this approach to elocutionary style generalization. The underlying
question throughout this study is whether adversarial learning can be used to
combine data from different sources and train models on a higher level of
abstraction in order to increase their robustness to lexical and stylistic
variations as well as automatic speech recognition errors. The proposed
strategy is evaluated on a French corpus of encyclopedic written documents and
a smaller corpus of radio podcast transcriptions, both annotated with a
FrameNet paradigm. We show that adversarial learning increases all models'
generalization capabilities, both on manual and automatic speech transcriptions
as well as on encyclopedic data.
| 2,019 | Computation and Language |
On Leveraging the Visual Modality for Neural Machine Translation | Leveraging the visual modality effectively for Neural Machine Translation
(NMT) remains an open problem in computational linguistics. Recently, Caglayan
et al. posit that the observed gains are limited mainly due to the very simple,
short, repetitive sentences of the Multi30k dataset (the only multimodal MT
dataset available at the time), which renders the source text sufficient for
context. In this work, we further investigate this hypothesis on a new large
scale multimodal Machine Translation (MMT) dataset, How2, which has 1.57 times
longer mean sentence length than Multi30k and no repetition. We propose and
evaluate three novel fusion techniques, each of which is designed to ensure the
utilization of visual context at different stages of the Sequence-to-Sequence
transduction pipeline, even under full linguistic context. However, we still
obtain only marginal gains under full linguistic context and posit that visual
embeddings extracted from deep vision models (ResNet for Multi30k, ResNext for
How2) do not lend themselves to increasing the discriminativeness between the
vocabulary elements at token level prediction in NMT. We demonstrate this
qualitatively by analyzing attention distribution and quantitatively through
Principal Component Analysis, arriving at the conclusion that it is the quality
of the visual embeddings, rather than the length of the sentences, that needs to
be improved in existing MMT datasets.
| 2,019 | Computation and Language |
Adversarial reconstruction for Multi-modal Machine Translation | Even with the growing interest in problems at the intersection of Computer
Vision and Natural Language, grounding (i.e. identifying) the components of a
structured description in an image still remains a challenging task. This
contribution aims to propose a model which learns grounding by reconstructing
the visual features for the Multi-modal translation task. Previous works have
partially investigated standard approaches such as regression methods to
approximate the reconstruction of a visual input. In this paper, we propose a
different and novel approach which learns grounding by adversarial feedback. To
do so, we modulate our network following the recent promising adversarial
architectures and evaluate how the adversarial response from a visual
reconstruction as an auxiliary task helps the model in its learning. We report
the highest scores in terms of BLEU and METEOR metrics on the different
datasets.
| 2,019 | Computation and Language |
Language is Power: Representing States Using Natural Language in
Reinforcement Learning | Recent advances in reinforcement learning have shown its potential to tackle
complex real-life tasks. However, as the dimensionality of the task increases,
reinforcement learning methods tend to struggle. To overcome this, we explore
methods for representing the semantic information embedded in the state. While
previous methods focused on information in its raw form (e.g., raw visual
input), we propose to represent the state using natural language. Language can
represent complex scenarios and concepts, making it a favorable candidate for
representation. Empirical evidence, within the domain of ViZDoom, suggests that
natural language based agents are more robust, converge faster and perform
better than vision based agents, showing the benefit of using natural language
representations for reinforcement learning.
| 2,020 | Computation and Language |
Parallel Iterative Edit Models for Local Sequence Transduction | We present a Parallel Iterative Edit (PIE) model for the problem of local
sequence transduction arising in tasks like Grammatical error correction (GEC).
Recent approaches are based on the popular encoder-decoder (ED) model for
sequence to sequence learning. The ED model auto-regressively captures full
dependency among output tokens but is slow due to sequential decoding. The PIE
model does parallel decoding, giving up the advantage of modelling full
dependency in the output, yet it achieves accuracy competitive with the ED
model for four reasons: 1.~predicting edits instead of tokens, 2.~labeling
sequences instead of generating sequences, 3.~iteratively refining predictions
to capture dependencies, and 4.~factorizing logits over edits and their token
argument to harness pre-trained language models like BERT. Experiments on tasks
spanning GEC, OCR correction and spell correction demonstrate that the PIE
model is an accurate and significantly faster alternative for local sequence
transduction.
| 2,020 | Computation and Language |
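
To illustrate "predicting edits instead of tokens", the sketch below derives per-token edit labels (copy, delete, replace, append) from a source/correction pair using Python's difflib; this is only a simplified stand-in for the PIE model's actual edit space and factorized logits.

```python
from difflib import SequenceMatcher

def token_edits(source, target):
    """Label each source token with an edit that moves it towards the target."""
    src, tgt = source.split(), target.split()
    labels = []
    for op, i1, i2, j1, j2 in SequenceMatcher(a=src, b=tgt).get_opcodes():
        if op == "equal":
            labels += [("copy", w) for w in src[i1:i2]]
        elif op == "delete":
            labels += [("delete", w) for w in src[i1:i2]]
        elif op == "replace":
            # unequal-length replacements are truncated in this toy version
            labels += [("replace_with:" + t, s) for s, t in zip(src[i1:i2], tgt[j1:j2])]
        elif op == "insert" and labels:
            labels[-1] = ("append:" + " ".join(tgt[j1:j2]), labels[-1][1])
    return labels

print(token_edits("He go to school yesterday", "He went to school yesterday"))
```
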
Correlations between Word Vector Sets | Similarity measures based purely on word embeddings are comfortably competing
with much more sophisticated deep learning and expert-engineered systems on
unsupervised semantic textual similarity (STS) tasks. In contrast to commonly
used geometric approaches, we treat a single word embedding as e.g. 300
observations from a scalar random variable. Using this paradigm, we first
illustrate that similarities derived from elementary pooling operations and
classic correlation coefficients yield excellent results on standard STS
benchmarks, outperforming many recently proposed methods while being much
faster and trivial to implement. Next, we demonstrate how to avoid pooling
operations altogether and compare sets of word embeddings directly via
correlation operators between reproducing kernel Hilbert spaces. Just like
cosine similarity is used to compare individual word vectors, we introduce a
novel application of the centered kernel alignment (CKA) as a natural
generalisation of squared cosine similarity for sets of word vectors. Likewise,
CKA is very easy to implement and enjoys very strong empirical results.
| 2,019 | Computation and Language |
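
As a concrete reference, linear CKA, one common instantiation of the centered kernel alignment mentioned above, can be computed in a few lines of NumPy; the random matrices below simply stand in for the word-vector sets of two sentences, following the paper's view of a d-dimensional embedding as d observations of a scalar variable.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two vector sets X (n_obs, n_words_1) and
    Y (n_obs, n_words_2): a normalised Frobenius inner product of the
    centred cross-covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 7))   # 300-dim embeddings of 7 words, one column per word
Y = rng.standard_normal((300, 5))   # embeddings of a 5-word sentence
print(linear_cka(X, Y))
```
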
Commonsense Knowledge Base Completion with Structural and Semantic
Context | Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and
ConceptNet) poses unique challenges compared to the much studied conventional
knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form
text to represent nodes, resulting in orders of magnitude more nodes compared
to conventional KBs (18x more nodes in ATOMIC compared to Freebase
(FB15K-237)). Importantly, this implies significantly sparser graph structures
- a major challenge for existing KB completion methods that assume densely
connected graphs over a relatively smaller set of nodes. In this paper, we
present novel KB completion models that can address these challenges by
exploiting the structural and semantic context of nodes. Specifically, we
investigate two key ideas: (1) learning from local graph structure, using graph
convolutional networks and automatic graph densification and (2) transfer
learning from pre-trained language models to knowledge graphs for enhanced
contextual representation of knowledge. We describe our method to incorporate
information from both these sources in a joint model and provide the first
empirical results for KB completion on ATOMIC and evaluation with ranking
metrics on ConceptNet. Our results demonstrate the effectiveness of language
model representations in boosting link prediction performance and the
advantages of learning from local graph structure (+1.5 points in MRR for
ConceptNet) when training on subgraphs for computational efficiency. Further
analysis of model predictions sheds light on the types of commonsense
knowledge that language models capture well.
| 2,019 | Computation and Language |
A Case Study on Combining ASR and Visual Features for Generating
Instructional Video Captions | Instructional videos get high traffic on video sharing platforms, and prior
work suggests that providing time-stamped, subtask annotations (e.g., "heat the
oil in the pan") improves user experiences. However, current automatic
annotation methods based on visual features alone perform only slightly better
than constant prediction. Taking cues from prior work, we show that we can
improve performance significantly by considering automatic speech recognition
(ASR) tokens as input. Furthermore, jointly modeling ASR tokens and visual
features results in higher performance compared to training individually on
either modality. We find that unstated background information is better
explained by visual features, whereas fine-grained distinctions (e.g., "add
oil" vs. "add olive oil") are disambiguated more easily via ASR tokens.
| 2,019 | Computation and Language |
Improving Neural Machine Translation Robustness via Data Augmentation:
Beyond Back Translation | Neural Machine Translation (NMT) models have proved strong when
translating clean text, but they are very sensitive to noise in the input.
Improving the robustness of NMT models can be seen as a form of "domain" adaptation to
noise. The recently created Machine Translation on Noisy Text task corpus
provides noisy-clean parallel data for a few language pairs, but this data is
very limited in size and diversity. The state-of-the-art approaches are heavily
dependent on large volumes of back-translated data. This paper has two main
contributions: Firstly, we propose new data augmentation methods to extend
limited noisy data and further improve NMT robustness to noise while keeping
the models small. Secondly, we explore the effect of utilizing noise from
external data in the form of speech transcripts and show that it could help
robustness.
| 2,019 | Computation and Language |
Gunrock: A Social Bot for Complex and Engaging Long Conversations | Gunrock is the winner of the 2018 Amazon Alexa Prize, as evaluated by
coherence and engagement from both real users and Amazon-selected expert
conversationalists. We focus on understanding complex sentences and having
in-depth conversations in open domains. In this paper, we introduce some
innovative system designs and related validation analysis. Overall, we found
that users produce longer sentences to Gunrock, which are directly related to
users' engagement (e.g., ratings, number of turns). Additionally, users'
backstory queries about Gunrock are positively correlated to user satisfaction.
Finally, we found that dialog flows which interleave facts, personal opinions, and
stories lead to better user satisfaction.
| 2,019 | Computation and Language |
Make Up Your Mind! Adversarial Generation of Inconsistent Natural
Language Explanations | To increase trust in artificial intelligence systems, a promising research
direction consists of designing neural models capable of generating natural
language explanations for their predictions. In this work, we show that such
models are nonetheless prone to generating mutually inconsistent explanations,
such as "Because there is a dog in the image" and "Because there is no dog in
the [same] image", exposing flaws in either the decision-making process of the
model or in the generation of the explanations. We introduce a simple yet
effective adversarial framework for sanity checking models against the
generation of inconsistent natural language explanations. Moreover, as part of
the framework, we address the problem of adversarial attacks with full target
sequences, a scenario that was not previously addressed in sequence-to-sequence
attacks. Finally, we apply our framework on a state-of-the-art neural natural
language inference model that provides natural language explanations for its
predictions. Our framework shows that this model is capable of generating a
significant number of inconsistent explanations.
| 2,020 | Computation and Language |
Capturing Argument Interaction in Semantic Role Labeling with Capsule
Networks | Semantic role labeling (SRL) involves extracting propositions (i.e.
predicates and their typed arguments) from natural language sentences.
State-of-the-art SRL models rely on powerful encoders (e.g., LSTMs) and do not
model non-local interaction between arguments. We propose a new approach to
modeling these interactions while maintaining efficient inference.
Specifically, we use Capsule Networks: each proposition is encoded as a tuple
of \textit{capsules}, one capsule per argument type (i.e. role). These tuples
serve as embeddings of entire propositions. In every network layer, the
capsules interact with each other and with representations of words in the
sentence. Each iteration results in updated proposition embeddings and updated
predictions about the SRL structure. Our model substantially outperforms the
non-refinement baseline model on all 7 CoNLL-2009 languages and achieves
state-of-the-art results on 5 languages (including English) for dependency SRL.
We analyze the types of mistakes corrected by the refinement procedure. For
example, each role is typically (but not always) filled with at most one
argument. Whereas enforcing this approximate constraint is not useful with a
modern SRL system, the iterative procedure corrects such mistakes by capturing this
intuition in a flexible and context-sensitive way.
| 2,019 | Computation and Language |
SesameBERT: Attention for Anywhere | Fine-tuning with pre-trained models has achieved exceptional results for many
language tasks. In this study, we focused on one such self-attention network
model, namely BERT, which has performed well in terms of stacking layers across
diverse language-understanding benchmarks. However, in many downstream tasks,
information between layers is ignored by BERT for fine-tuning. In addition,
although self-attention networks are well-known for their ability to capture
global dependencies, room for improvement remains in terms of emphasizing the
importance of local contexts. In light of these advantages and disadvantages,
this paper proposes SesameBERT, a generalized fine-tuning method that (1)
enables the extraction of global information among all layers through Squeeze
and Excitation and (2) enriches local information by capturing neighboring
contexts via Gaussian blurring. Furthermore, we demonstrated the effectiveness
of our approach on the HANS dataset, which is used to determine whether models
have adopted shallow heuristics instead of learning underlying generalizations.
The experiments revealed that SesameBERT outperformed BERT on the GLUE
benchmark and the HANS evaluation set.
| 2,019 | Computation and Language |
Riposte! A Large Corpus of Counter-Arguments | Constructive feedback is an effective method for improving critical thinking
skills. Counter-arguments (CAs), one form of constructive feedback, have been
proven to be useful for critical thinking skills. However, little work has been
done on constructing a large-scale corpus of CAs that could drive research on the
automatic generation of CAs for fallacious micro-level arguments (i.e. a single
claim and premise pair). In this work, we cast providing constructive feedback
as a natural language processing task and create Riposte!, a corpus of CAs,
towards this goal. Produced by crowdworkers, Riposte! contains over 18k CAs. We
instruct workers to first identify common fallacy types and produce a CA which
identifies the fallacy. We analyze how workers create CAs and construct a
baseline model based on our analysis.
| 2,019 | Computation and Language |
CONAN -- COunter NArratives through Nichesourcing: a Multilingual
Dataset of Responses to Fight Online Hate Speech | Although there is an unprecedented effort to provide adequate responses in
terms of laws and policies to hate content on social media platforms, dealing
with hatred online is still a tough problem. Tackling hate speech in the
standard way of content deletion or user suspension may be charged with
censorship and overblocking. One alternate strategy, that has received little
attention so far by the research community, is to actually oppose hate content
with counter-narratives (i.e. informed textual responses). In this paper, we
describe the creation of the first large-scale, multilingual, expert-based
dataset of hate speech/counter-narrative pairs. This dataset has been built
with the effort of more than 100 operators from three different NGOs that
applied their training and expertise to the task. Together with the collected
data we also provide additional annotations about expert demographics, hate and
response type, and data augmentation through translation and paraphrasing.
Finally, we provide initial experiments to assess the quality of our data.
| 2,019 | Computation and Language |
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task | In this paper, we propose a new approach to learn multimodal multilingual
embeddings for matching images and their relevant captions in two languages. We
combine two existing objective functions to make images and captions close in a
joint embedding space while adapting the alignment of word embeddings between
existing languages in our model. We show that our approach enables better
generalization, achieving state-of-the-art performance on the text-to-image and
image-to-text retrieval tasks, and on the caption-caption similarity task. Two
multimodal multilingual datasets are used for evaluation: Multi30k with German
and English captions and Microsoft-COCO with English and Japanese captions.
| 2,020 | Computation and Language |
One-To-Many Multilingual End-to-end Speech Translation | Nowadays, training end-to-end neural models for spoken language translation
(SLT) still has to confront extreme data scarcity conditions. The existing
SLT parallel corpora are indeed orders of magnitude smaller than those
available for the closely related tasks of automatic speech recognition (ASR)
and machine translation (MT), which usually comprise tens of millions of
instances. To cope with data paucity, in this paper we explore the
effectiveness of transfer learning in end-to-end SLT by presenting a
multilingual approach to the task. Multilingual solutions are widely studied in
MT and usually rely on ``\textit{target forcing}'', in which multilingual
parallel data are combined to train a single model by prepending to the input
sequences a language token that specifies the target language. However, when
tested in speech translation, our experiments show that MT-like \textit{target
forcing}, used as is, is not effective in discriminating among the target
languages. Thus, we propose a variant that uses target-language embeddings to
shift the input representations in different portions of the space according to
the language, so as to better support the production of output in the desired
target language. Our experiments on end-to-end SLT from English into six
languages show important improvements when translating into similar languages,
especially when these are supported by scarce data. Further improvements are
obtained when using English ASR data as an additional language (up to $+2.5$
BLEU points).
| 2,019 | Computation and Language |
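
A minimal sketch of the "shift the input representations" variant described above: instead of prepending a target-language token, a per-language embedding (learned in the real system) is added to the encoder inputs to steer the output language. The shapes and the simple additive form are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_langs, seq_len = 16, 6, 20

lang_embeddings = rng.standard_normal((n_langs, d_model)) * 0.1   # learned in practice

def shift_inputs(encoder_inputs, target_lang_id):
    """Shift every position of the input representation by the embedding of
    the desired target language."""
    return encoder_inputs + lang_embeddings[target_lang_id]

speech_features = rng.standard_normal((seq_len, d_model))   # e.g. encoded audio frames
print(shift_inputs(speech_features, target_lang_id=3).shape)
```
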
An Interactive Machine Translation Framework for Modernizing Historical
Documents | Due to the nature of human language, historical documents are hard for
contemporary people to comprehend. This limits their accessibility to scholars
specialized in the time period in which the documents were written.
Modernization aims at breaking this language barrier by generating a new
version of a historical document, written in the modern version of the
document's original language. However, while modernization makes the document
easier to comprehend, it is still far from producing an
error-free version. In this work, we propose a collaborative framework in which
a scholar can work together with the machine to generate the new version. We
tested our approach in a simulated environment, achieving significant
reductions of the human effort needed to produce the modernized version of the
document.
| 2,019 | Computation and Language |
In Search for Linear Relations in Sentence Embedding Spaces | We present an introductory investigation into continuous-space vector
representations of sentences. We acquire pairs of very similar sentences
differing only by small alterations (such as the change of a noun, or the addition
of an adjective, noun or punctuation mark) from datasets for natural language inference
using a simple pattern method. We look into how such a small change within the
sentence text affects its representation in the continuous space and how such
alterations are reflected by some of the popular sentence embedding models. We
found that vector differences of some embeddings actually reflect small changes
within a sentence.
| 2,019 | Computation and Language |
Linguistically Informed Relation Extraction and Neural Architectures for
Nested Named Entity Recognition in BioNLP-OST 2019 | Named Entity Recognition (NER) and Relation Extraction (RE) are essential
tools in distilling knowledge from biomedical literature. This paper presents
our findings from participating in BioNLP Shared Tasks 2019. We addressed Named
Entity Recognition including nested entities extraction, Entity Normalization
and Relation Extraction. Our proposed approach of Named Entities can be
generalized to different languages and we have shown it's effectiveness for
English and Spanish text. We investigated linguistic features, hybrid loss
including ranking and Conditional Random Fields (CRF), multi-task objective and
token-level ensembling strategy to improve NER. We employed dictionary-based
fuzzy and semantic search to perform Entity Normalization. Finally, our RE
system employed Support Vector Machine (SVM) with linguistic features.
Our NER submission (team:MIC-CIS) ranked first in BB-2019 norm+NER task with
standard error rate (SER) of 0.7159 and showed competitive performance on
PharmaCo NER task with F1-score of 0.8662. Our RE system ranked first in the
SeeDev-binary Relation Extraction Task with F1-score of 0.3738.
| 2,019 | Computation and Language |
When Specialization Helps: Using Pooled Contextualized Embeddings to
Detect Chemical and Biomedical Entities in Spanish | The recognition of pharmacological substances, compounds and proteins is an
essential preliminary work for the recognition of relations between chemicals
and other biomedically relevant units. In this paper, we describe an approach
to Task 1 of the PharmaCoNER Challenge, which involves the recognition of
mentions of chemicals and drugs in Spanish medical texts. We train a
state-of-the-art BiLSTM-CRF sequence tagger with stacked Pooled Contextualized
Embeddings, word and sub-word embeddings using the open-source framework FLAIR.
We present a new corpus composed of articles and papers from Spanish health
science journals, termed the Spanish Health Corpus, and use it to train
domain-specific embeddings which we incorporate in our model training. We
achieve a result of 89.76% F1-score using pre-trained embeddings and are able
to improve these results to 90.52% F1-score using specialized embeddings.
| 2,019 | Computation and Language |
Generating Highly Relevant Questions | Neural seq2seq-based question generation (QG) is prone to generating
generic and undiversified questions that are poorly relevant to the given
passage and target answer. In this paper, we propose two methods to address the
issue. (1) By a partial copy mechanism, we prioritize words that are
morphologically close to words in the input passage when generating questions;
(2) By a QA-based reranker, from the n-best list of question candidates, we
select questions that are preferred by both the QA and QG model. Experiments
and analyses demonstrate that the proposed two methods substantially improve
the relevance of generated questions to passages and answers.
| 2,019 | Computation and Language |
Federated Learning of N-gram Language Models | We propose algorithms to train production-quality n-gram language models
using federated learning. Federated learning is a distributed computation
platform that can be used to train global models for portable devices such as
smart phones. Federated learning is especially relevant for applications
handling privacy-sensitive data, such as virtual keyboards, because training is
performed without the users' data ever leaving their devices. While the
principles of federated learning are fairly generic, its methodology assumes
that the underlying models are neural networks. However, virtual keyboards are
typically powered by n-gram language models for latency reasons.
We propose to train a recurrent neural network language model using the
decentralized FederatedAveraging algorithm and to approximate this federated
model server-side with an n-gram model that can be deployed to devices for fast
inference. Our technical contributions include ways of handling large
vocabularies, algorithms to correct capitalization errors in user data, and
efficient finite state transducer algorithms to convert word language models to
word-piece language models and vice versa. The n-gram language models trained
with federated learning are compared to n-grams trained with traditional
server-based algorithms using A/B tests on tens of millions of users of a virtual
keyboard. Results are presented for two languages, American English and
Brazilian Portuguese. This work demonstrates that high-quality n-gram language
models can be trained directly on client mobile devices without sensitive
training data ever leaving the devices.
| 2,019 | Computation and Language |
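
For background, the FederatedAveraging step referenced above combines client updates into a new global model by weighting each client's parameters by its number of local training examples. The sketch below shows only that aggregation step, with made-up parameter shapes and client sizes.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One FederatedAveraging round: the new global parameters are the
    example-count-weighted mean of the clients' parameters."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[k] for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Three hypothetical clients, each holding the same two parameter tensors.
rng = np.random.default_rng(0)
clients = [[rng.standard_normal((4, 3)), rng.standard_normal(3)] for _ in range(3)]
sizes = [120, 80, 200]   # number of local training examples per client
print([p.shape for p in federated_average(clients, sizes)])
```
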
Overcoming the Rare Word Problem for Low-Resource Language Pairs in
Neural Machine Translation | Among the six challenges of neural machine translation (NMT) coined by Koehn
and Knowles (2017), the rare-word problem is considered the most severe one,
especially in translation of low-resource languages. In this paper, we propose
three solutions to address the rare words in neural machine translation
systems. First, we enhance source context to predict the target words by
connecting the source embeddings directly to the output of the attention
component in NMT. Second, we propose an algorithm to learn the morphology of
unknown words for English in a supervised way in order to minimize the adverse
effect of the rare-word problem. Finally, we exploit synonymy relations from
WordNet to overcome the out-of-vocabulary (OOV) problem of NMT. We evaluate our
approaches on two low-resource language pairs: English-Vietnamese and
Japanese-Vietnamese. In our experiments, we have achieved significant
improvements of up to roughly +1.0 BLEU points in both language pairs.
| 2,019 | Computation and Language |
Fine-grained Sentiment Classification using BERT | Sentiment classification is an important process in understanding people's
perception towards a product, service, or topic. Many natural language
processing models have been proposed to solve the sentiment classification
problem. However, most of them have focused on binary sentiment classification.
In this paper, we use a promising deep learning model called BERT to solve the
fine-grained sentiment classification task. Experiments show that our model
outperforms other popular models for this task without sophisticated
architecture. We also demonstrate the effectiveness of transfer learning in
natural language processing in the process.
| 2,019 | Computation and Language |
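
A hedged sketch of fine-tuning BERT for five-class sentiment with the Hugging Face transformers library (assuming transformers and PyTorch are installed); the model name, label scheme, and single training step shown here are illustrative and not necessarily the paper's exact setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5)    # five fine-grained sentiment classes

texts = ["A thoroughly enjoyable film.", "Flat, lifeless and far too long."]
labels = torch.tensor([4, 0])             # SST-5 style: 0 = very negative ... 4 = very positive

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)   # returns both loss and logits

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs.loss.backward()                   # one illustrative training step
optimizer.step()
print(outputs.logits.argmax(dim=-1))
```
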
Named Entity Recognition System for Sindhi Language | A Named Entity Recognition (NER) system aims to extract
information into categories such as Person Name, Organization,
Location, Date and Time, Term, Designation and Short Forms. NER is now
considered an important component of many natural language processing (NLP)
tasks such as information retrieval, machine translation,
information extraction and question answering. Even at a surface level,
understanding the named entities involved in a document gives a richer
analytical framework and enables cross-referencing. NER has been developed for
several Arabic script-based languages such as Arabic, Persian and Urdu, but not
yet for Sindhi. This paper explains the problem of NER in the
framework of the Sindhi language and provides a relevant solution. The system is
developed to tag ten different named entity types using a rule-based
approach. For training and testing, 936
words were used, and the system achieved an accuracy of 98.71%.
| 2,019 | Computation and Language |
Classification As Decoder: Trading Flexibility For Control In Neural
Dialogue | Generative seq2seq dialogue systems are trained to predict the next word in
dialogues that have already occurred. They can learn from large unlabeled
conversation datasets, build a deep understanding of conversational context,
and generate a wide variety of responses. This flexibility comes at the cost of
control. Undesirable responses in the training data will be reproduced by the
model at inference time, and longer generations often don't make sense. Instead
of generating responses one word at a time, we train a classifier to choose
from a predefined list of full responses. The classifier is trained on
(conversation context, response class) pairs, where each response class is a
noisily labeled group of interchangeable responses. At inference, we generate
the exemplar response associated with the predicted response class. Experts can
edit and improve these exemplar responses over time without retraining the
classifier or invalidating old training data. Human evaluation of 775 unseen
doctor/patient conversations shows that this tradeoff improves responses. Only
12% of our discriminative approach's responses are worse than the doctor's
response in the same conversational context, compared to 18% for the generative
model. A discriminative model trained without any manual labeling of response
classes achieves equal performance to the generative model.
| 2,019 | Computation and Language |
Semi-Supervised Neural Text Generation by Joint Learning of Natural
Language Generation and Natural Language Understanding Models | In Natural Language Generation (NLG), End-to-End (E2E) systems trained
through deep learning have recently attracted strong interest. Such deep models
need a large amount of carefully annotated data to reach satisfactory
performance. However, acquiring such datasets for every new NLG application is
a tedious and time-consuming task. In this paper, we propose a semi-supervised
deep learning scheme that can learn from non-annotated data and annotated data
when available. It uses NLG and Natural Language Understanding (NLU)
sequence-to-sequence models which are learned jointly to compensate for the
lack of annotation. Experiments on two benchmark datasets show that, with a
limited amount of annotated data, the method can achieve very competitive
results while not using any pre-processing or re-scoring tricks. These findings
open the way to the exploitation of non-annotated datasets which is the current
bottleneck for the E2E NLG system development to new applications.
| 2,019 | Computation and Language |
Controlled Text Generation for Data Augmentation in Intelligent
Artificial Agents | Data availability is a bottleneck during early stages of development of new
capabilities for intelligent artificial agents. We investigate the use of text
generation techniques to augment the training data of a popular commercial
artificial agent across categories of functionality, with the goal of faster
development of new functionality. We explore a variety of encoder-decoder
generative models for synthetic training data generation and propose using
conditional variational auto-encoders. Our approach requires only direct
optimization, works well with limited data and significantly outperforms the
previous controlled text generation techniques. Further, the generated data are
used as additional training samples in an extrinsic intent classification task,
leading to improved performance by up to 5% absolute F-score in low-resource
cases, validating the usefulness of our approach.
| 2,019 | Computation and Language |
Neural Language Priors | The choice of sentence encoder architecture reflects assumptions about how a
sentence's meaning is composed from its constituent words. We examine the
contribution of these architectures by holding them randomly initialised and
fixed, effectively treating them as hand-crafted language priors, and
evaluating the resulting sentence encoders on downstream language tasks. We
find that even when encoders are presented with additional information that can
be used to solve tasks, the corresponding priors do not leverage this
information, except in an isolated case. We also find that apparently
uninformative priors are just as good as seemingly informative priors on almost
all tasks, indicating that learning is a necessary component to leverage
information provided by architecture choice.
| 2,019 | Computation and Language |
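A minimal sketch of the setup described above: a sentence encoder is left at its random initialisation and frozen, and only a small classifier on top is trained. The LSTM encoder and dimensions are illustrative assumptions, not the specific architectures evaluated in the paper.

```python
# Treat a randomly initialised, frozen encoder as a fixed "language prior";
# only the linear classifier on top would receive gradient updates.
import torch
import torch.nn as nn

encoder = nn.LSTM(input_size=300, hidden_size=256, batch_first=True)
for param in encoder.parameters():
    param.requires_grad = False          # keep the prior fixed at initialisation

classifier = nn.Linear(256, 2)           # only this part is learned

embeddings = torch.randn(8, 20, 300)     # a toy batch of word embeddings
_, (hidden, _) = encoder(embeddings)
logits = classifier(hidden[-1])
print(logits.shape)                      # torch.Size([8, 2])
```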
Fake news detection using Deep Learning | The evolution of information and communication technologies has
dramatically increased the number of people with access to the Internet, which
has changed the way information is consumed. As a consequence, fake news has
become a major concern because of its potential to destabilize governments,
which makes it a danger to modern society. An example of this can be found in
the US electoral campaign, where the term "fake news" gained great notoriety
due to the influence of hoaxes on the final result. In this work, we study the
feasibility of applying deep learning techniques to discriminate fake news on
the Internet using only its text. To accomplish this, three different neural network
architectures are proposed, one of them based on BERT, a modern language model
created by Google which achieves state-of-the-art results.
| 2,019 | Computation and Language |
SentiCite: An Approach for Publication Sentiment Analysis | With the rapid growth in the number of scientific publications, year after
year, it is becoming increasingly difficult to identify quality authoritative
work on a single topic. Though scientometric measures are available that
promise to offer a solution to this problem, these measures are mostly
quantitative and rely, for instance, only on the number of times an article is
cited. Such an approach does not distinguish whether an article is
cited 10 times in a positive, negative or neutral way. In this context, it is
quite important to study the qualitative aspect of a citation to understand its
significance. This paper presents a novel system for sentiment analysis of
citations in scientific documents (SentiCite), which is also capable of detecting
the nature of citations by targeting the motivation behind a citation, e.g.,
reference to a dataset, reading reference. Furthermore, the paper also presents
two datasets (SentiCiteDB and IntentCiteDB) containing about 2,600 citations
with their ground truth for sentiment and nature of citation. SentiCite, along
with other state-of-the-art methods for sentiment analysis, is evaluated on the
presented datasets. Evaluation results reveal that SentiCite outperforms
state-of-the-art methods for sentiment analysis in scientific publications,
achieving an F1-measure of 0.71.
| 2,019 | Computation and Language |
Investigating the Effectiveness of Representations Based on
Word-Embeddings in Active Learning for Labelling Text Datasets | Manually labelling large collections of text data is a time-consuming,
expensive, and laborious task, but one that is necessary to support machine
learning based on text datasets. Active learning has been shown to be an
effective way to alleviate some of the effort required in utilising large
collections of unlabelled data for machine learning tasks without needing to
fully label them. The representation mechanism used to represent text documents
when performing active learning, however, has a significant influence on how
effective the process will be. While simple vector representations such as bag
of words have been shown to be an effective way to represent documents during
active learning, the emergence of representation mechanisms based on the word
embeddings prevalent in neural network research (e.g. word2vec and
transformer-based models like BERT) offer a promising, and as yet not fully
explored, alternative. This paper describes a large-scale evaluation of the
effectiveness of different text representation mechanisms for active learning
across 8 datasets from varied domains. This evaluation shows that using
representations based on modern word embeddings (especially BERT), which
have not yet been widely used in active learning, achieves a significant
improvement over more commonly used vector-based methods like bag of words.
| 2,019 | Computation and Language |
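A minimal sketch of pool-based active learning with least-confidence (uncertainty) sampling, the kind of loop whose document representation the study above varies. TF-IDF stands in here for the representation; in the paper this could equally be a bag-of-words or BERT-based embedding. The tiny corpus and oracle labels are invented.

```python
# Uncertainty-sampling active learning loop over a small pool of documents.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pool = ["great movie", "terrible plot", "loved the acting", "boring and slow",
        "a masterpiece", "waste of time", "wonderful soundtrack", "awful script"]
labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])    # hidden "oracle" labels

features = TfidfVectorizer().fit_transform(pool)
labelled = [0, 1]                               # start with two labelled examples
unlabelled = [i for i in range(len(pool)) if i not in labelled]

for _ in range(3):                              # three query rounds
    model = LogisticRegression().fit(features[labelled], labels[labelled])
    probs = model.predict_proba(features[unlabelled])
    uncertainty = 1 - probs.max(axis=1)         # least-confident sampling
    pick = unlabelled[int(np.argmax(uncertainty))]
    labelled.append(pick)                       # "query the oracle" for this label
    unlabelled.remove(pick)

print("queried examples:", [pool[i] for i in labelled[2:]])
```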
Towards Controllable and Personalized Review Generation | In this paper, we propose a novel model RevGAN that automatically generates
controllable and personalized user reviews based on the arbitrarily given
sentimental and stylistic information. RevGAN utilizes the combination of three
novel components, including self-attentive recursive autoencoders, conditional
discriminators, and personalized decoders. We test its performance on several
real-world datasets, where our model significantly outperforms
state-of-the-art generation models in terms of sentence quality, coherence,
personalization and human evaluations. We also empirically show that the
generated reviews could not be easily distinguished from the organically
produced reviews and that they follow the same statistical linguistics laws.
| 2,020 | Computation and Language |
Find or Classify? Dual Strategy for Slot-Value Predictions on
Multi-Domain Dialog State Tracking | Dialog state tracking (DST) is a core component in task-oriented dialog
systems. Existing approaches for DST mainly fall into one of two categories,
namely, ontology-based and ontology-free methods. An ontology-based method
selects a value from a candidate-value list for each target slot, while an
ontology-free method extracts spans from dialog contexts. Recent work
introduced a BERT-based model to strike a balance between the two methods by
pre-defining categorical and non-categorical slots. However, it is not clear
enough which slots are better handled by either of the two slot types, and the
way to use the pre-trained model has not been well investigated. In this paper,
we propose a simple yet effective dual-strategy model for DST, by adapting a
single BERT-style reading comprehension model to jointly handle both the
categorical and non-categorical slots. Our experiments on the MultiWOZ datasets
show that our method significantly outperforms the BERT-based counterpart,
finding that the key is a deep interaction between the domain-slot and context
information. When evaluated on noisy (MultiWOZ 2.0) and cleaner (MultiWOZ 2.1)
settings, our method performs competitively and robustly across the two
different settings. Our method sets the new state of the art in the noisy
setting, while performing more robustly than the best model in the cleaner
setting. We also conduct a comprehensive error analysis on the dataset,
including the effects of the dual strategy for each slot, to facilitate future
research.
| 2,020 | Computation and Language |
Executing Instructions in Situated Collaborative Interactions | We study a collaborative scenario where a user not only instructs a system to
complete tasks, but also acts alongside it. This allows the user to adapt to
the system abilities by changing their language or deciding to simply
accomplish some tasks themselves, and requires the system to effectively
recover from errors as the user strategically assigns it new goals. We build a
game environment to study this scenario, and learn to map user instructions to
system actions. We introduce a learning approach focused on recovery from
cascading errors between instructions, and modeling methods to explicitly
reason about instructions with multiple goals. We evaluate with a new
evaluation protocol using recorded interactions and online games with human
users, and observe how users adapt to the system abilities.
| 2,022 | Computation and Language |
Unfolding the Structure of a Document using Deep Learning | Understanding and extracting information from large documents, such as
business opportunities, academic articles, medical documents and technical
reports, poses challenges not present in short documents. Such large documents
may be multi-themed, complex, noisy and cover diverse topics. We describe a
framework that can analyze large documents and help people and computer systems
locate desired information in them. We aim to automatically identify and
classify different sections of documents and understand their purpose within
the document. A key contribution of our research is modeling and extracting the
logical and semantic structure of electronic documents using deep learning
techniques. We evaluate the effectiveness and robustness of our framework
through extensive experiments on two collections: more than one million
scholarly articles from arXiv and a collection of requests for proposal
documents from government sources.
| 2,019 | Computation and Language |
Do People Prefer "Natural" code? | Natural code is known to be very repetitive (much more so than natural
language corpora); furthermore, this repetitiveness persists, even after
accounting for the simpler syntax of code. However, programming languages are
very expressive, allowing a great many different ways (all clear and
unambiguous) to express even very simple computations. So why is natural code
repetitive? We hypothesize that the reasons for this lie in the fact that code is
bimodal: it is executed by machines, but also read by humans. This bimodality,
we argue, leads developers to write code in certain preferred ways that would
be familiar to code readers. To test this theory, we 1) model familiarity using
a language model estimated over a large training corpus and 2) run an
experiment applying several meaning preserving transformations to Java and
Python expressions in a distinct test corpus to see if forms more familiar to
readers (as predicted by the language models) are in fact the ones actually
written. We find that these transformations generally produce program
structures that are less common in practice, supporting the theory that the
high repetitiveness in code is a matter of deliberate preference. Finally, 3)
we use a human subject study to show alignment between language model score and
human preference for the first time in code, providing support for using this
measure to improve code.
| 2,019 | Computation and Language |
Knowledge Distillation from Internal Representations | Knowledge distillation is typically conducted by training a small model (the
student) to mimic a large and cumbersome model (the teacher). The idea is to
compress the knowledge from the teacher by using its output probabilities as
soft-labels to optimize the student. However, when the teacher is considerably
large, there is no guarantee that the internal knowledge of the teacher will be
transferred into the student; even if the student closely matches the
soft-labels, its internal representations may be considerably different. This
internal mismatch can undermine the generalization capabilities originally
intended to be transferred from the teacher to the student. In this paper, we
propose to distill the internal representations of a large model such as BERT
into a simplified version of it. We formulate two ways to distill such
representations and various algorithms to conduct the distillation. We
experiment with datasets from the GLUE benchmark and consistently show that
adding knowledge distillation from internal representations is a more powerful
method than only using soft-label distillation.
| 2,020 | Computation and Language |
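A small PyTorch sketch of combining soft-label distillation with a loss on matched internal representations, in the spirit of the abstract above. The temperature, layer matching, and loss weighting are illustrative assumptions rather than the paper's exact recipe.

```python
# Soft-label distillation (KL at temperature T) plus an MSE term that pulls
# the student's hidden states toward matched teacher hidden states.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      temperature=2.0, alpha=0.5):
    """student_hidden / teacher_hidden: lists of matched hidden-state tensors."""
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    internal_loss = sum(
        F.mse_loss(s, t) for s, t in zip(student_hidden, teacher_hidden)
    ) / len(student_hidden)

    return alpha * soft_loss + (1 - alpha) * internal_loss


# Toy shapes: batch of 4, 3 classes, two matched layers of width 8.
loss = distillation_loss(torch.randn(4, 3), torch.randn(4, 3),
                         [torch.randn(4, 8)] * 2, [torch.randn(4, 8)] * 2)
print(loss.item())
```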
Towards De-identification of Legal Texts | In many countries, personal information that can be published or shared
between organizations is regulated and, therefore, documents must undergo a
process of de-identification to eliminate or obfuscate confidential data. Our
work focuses on the de-identification of legal texts, where the goal is to hide
the names of the actors involved in a lawsuit without losing the sense of the
story. We present a first evaluation on our corpus of NLP tools in tasks such
as segmentation, tokenization and recognition of named entities, and we analyze
several evaluation measures for our de-identification task. Results are meager:
84% of the documents have at least one name not covered by NER tools, something
that might lead to the re-identification of the individuals involved. We conclude that
tools must be strongly adapted for processing texts of this particular domain.
| 2,019 | Computation and Language |
The Daunting Task of Real-World Textual Style Transfer Auto-Evaluation | The difficulty of textual style transfer lies in the lack of parallel
corpora. Numerous advances have been proposed for unsupervised generation.
However, significant problems remain with the auto-evaluation of style transfer
tasks. Based on the summary of Pang and Gimpel (2018) and Mir et al. (2019),
style transfer evaluations rely on three criteria: style accuracy of
transferred sentences, content similarity between original and transferred
sentences, and fluency of transferred sentences. We elucidate the problematic
current state of style transfer research. Given that current tasks do not
represent real use cases of style transfer, the current auto-evaluation approach is
flawed. This discussion aims to bring researchers to think about the future of
style transfer and style transfer evaluation research.
| 2,019 | Computation and Language |
Alternating Recurrent Dialog Model with Large-scale Pre-trained Language
Models | Existing dialog system models require extensive human annotations and are
difficult to generalize to different tasks. The recent success of large
pre-trained language models such as BERT and GPT-2 (Devlin et al., 2019;
Radford et al., 2019) has suggested the effectiveness of incorporating
language priors in down-stream NLP tasks. However, how much pre-trained
language models can help dialog response generation is still under exploration.
In this paper, we propose a simple, general, and effective framework:
Alternating Roles Dialog Model (ARDM). ARDM models each speaker separately and
takes advantage of the large pre-trained language model. It requires no
supervision from human annotations such as belief states or dialog acts to
achieve effective conversations. ARDM outperforms or is on par with
state-of-the-art methods on two popular task-oriented dialog datasets:
CamRest676 and MultiWOZ. Moreover, we can generalize ARDM to more challenging,
non-collaborative tasks such as persuasion. In persuasion tasks, ARDM is
capable of generating human-like responses to persuade people to donate to a
charity.
| 2,021 | Computation and Language |
HuggingFace's Transformers: State-of-the-art Natural Language Processing | Recent progress in natural language processing has been driven by advances in
both model architecture and model pretraining. Transformer architectures have
facilitated building higher-capacity models and pretraining has made it
possible to effectively utilize this capacity for a wide variety of tasks.
Transformers is an open-source library with the goal of opening up
these advances to the wider machine learning community. The library consists of
carefully engineered state-of-the-art Transformer architectures under a unified
API. Backing this library is a curated collection of pretrained models made by
and available for the community. Transformers is designed to be
extensible by researchers, simple for practitioners, and fast and robust in
industrial deployments. The library is available at
https://github.com/huggingface/transformers.
| 2,020 | Computation and Language |
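A short usage sketch of the library described above: a pretrained checkpoint loaded behind the unified pipeline API. The checkpoint name is one of the publicly distributed ones; any compatible model could be substituted.

```python
# Load a pretrained sentiment classifier through the high-level pipeline API.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Transformers makes state-of-the-art NLP easy to use."))
```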
Is Multilingual BERT Fluent in Language Generation? | The multilingual BERT model is trained on 104 languages and meant to serve as
a universal language model and tool for encoding sentences. We explore how well
the model performs on several languages across several tasks: a diagnostic
classification probing the embeddings for a particular syntactic property, a
cloze task testing the language modelling ability to fill in gaps in a
sentence, and a natural language generation task testing for the ability to
produce coherent text fitting a given context. We find that the currently
available multilingual BERT model is clearly inferior to the monolingual
counterparts, and cannot in many cases serve as a substitute for a well-trained
monolingual model. We find that the English and German models perform well at
generation, whereas the multilingual model is lacking, in particular, for
Nordic languages.
| 2,019 | Computation and Language |
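A sketch of the cloze-style probing described above, using a fill-mask pipeline with the public multilingual BERT checkpoint. The probe sentence is illustrative and not drawn from the paper's evaluation data.

```python
# Cloze-style probe: ask multilingual BERT to fill a masked token and inspect
# the top predictions and their scores.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")
for prediction in fill("Helsinki is the capital of [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```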
Word Embedding Visualization Via Dictionary Learning | Co-occurrence statistics based word embedding techniques have proved to be
very useful in extracting the semantic and syntactic representation of words as
low dimensional continuous vectors. In this work, we discovered that dictionary
learning can open up these word vectors as a linear combination of more
elementary word factors. We demonstrate many of the learned factors have
surprisingly strong semantic or syntactic meaning corresponding to the factors
previously identified manually by human inspection. Thus dictionary learning
provides a powerful visualization tool for understanding word embedding
representations. Furthermore, we show that the word factors can help in
identifying key semantic and syntactic differences in word analogy tasks and
improve upon the state-of-the-art word embedding techniques in these tasks by a
large margin.
| 2,021 | Computation and Language |
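A minimal sketch of decomposing word vectors into sparse combinations of elementary factors with dictionary learning, as described above. A random matrix stands in for a real embedding table (e.g. GloVe or word2vec vectors), and the number of factors is an arbitrary choice.

```python
# Sparse-code "word vectors" with dictionary learning: each row is expressed
# as a sparse combination of learned elementary factors.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
word_vectors = rng.standard_normal((200, 50))   # 200 "words", 50-dim vectors

learner = DictionaryLearning(n_components=20,
                             transform_algorithm="lasso_lars",
                             transform_alpha=0.1,
                             random_state=0)
codes = learner.fit_transform(word_vectors)      # sparse factor loadings per word
factors = learner.components_                    # 20 elementary word factors

print(codes.shape, factors.shape)                # (200, 20) (20, 50)
```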
Novel Applications of Factored Neural Machine Translation | In this work, we explore the usefulness of target factors in neural machine
translation (NMT) beyond their original purpose of predicting word lemmas and
their inflections, as proposed by García-Martínez et al., 2016. For this,
we introduce three novel applications of the factored output architecture: In
the first one, we use a factor to explicitly predict the word case separately
from the target word itself. This allows for information to be shared between
different casing variants of a word. In a second task, we use a factor to
predict when two consecutive subwords have to be joined, eliminating the need
for target subword joining markers. The third task is the prediction of special
tokens of the operation sequence NMT model (OSNMT) of Stahlberg et al., 2018.
Automatic evaluation on English-to-German and English-to-Turkish tasks showed
that integration of such auxiliary prediction tasks into NMT is at least as
good as the standard NMT approach. For the OSNMT, we observed a significant
improvement in BLEU over the baseline OSNMT implementation due to a reduced
output sequence length that resulted from the introduction of the target
factors.
| 2,019 | Computation and Language |
Measuring Sentences Similarity: A Survey | This study reviews the approaches used for measuring sentence
similarity. Measuring similarity between natural language sentences is a
crucial task for many Natural Language Processing applications such as text
classification, information retrieval, question answering, and plagiarism
detection. This survey classifies approaches for calculating sentence
similarity into three categories based on the adopted methodology:
word-to-word based, structure based, and vector based, which are the most
widely used approaches for finding sentence similarity. Each approach measures
relatedness between short texts based on a specific perspective. In addition,
datasets that are mostly used as benchmarks for evaluating techniques in this
field are introduced to provide a complete view of this issue. Approaches that
combine more than one perspective give better results. Moreover, structure-based
similarity, which measures similarity between sentence structures, needs more
investigation.
| 2,019 | Computation and Language |
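A tiny example of the vector-based family of approaches covered by the survey above: sentences are mapped to vectors and compared with cosine similarity. TF-IDF vectors are used here only to keep the sketch self-contained; word-embedding or sentence-embedding vectors would be the more typical choice.

```python
# Vector-based sentence similarity via cosine similarity over sentence vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
    "The stock market fell sharply today.",
]
vectors = TfidfVectorizer().fit_transform(sentences)
print(cosine_similarity(vectors[0], vectors[1])[0, 0])  # related pair
print(cosine_similarity(vectors[0], vectors[2])[0, 0])  # unrelated pair
```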
Assessing the Efficacy of Clinical Sentiment Analysis and Topic
Extraction in Psychiatric Readmission Risk Prediction | Predicting which patients are more likely to be readmitted to a hospital
within 30 days after discharge is a valuable piece of information in clinical
decision-making. Building a successful readmission risk classifier based on the
content of Electronic Health Records (EHRs) has proved, however, to be a
challenging task. Previously explored features include mainly structured
information, such as sociodemographic data, comorbidity codes and physiological
variables. In this paper we assess incorporating additional clinically
interpretable NLP-based features such as topic extraction and clinical
sentiment analysis to predict early readmission risk in psychiatry patients.
| 2,019 | Computation and Language |
BHAAV- A Text Corpus for Emotion Analysis from Hindi Stories | In this paper, we introduce the first and largest Hindi text corpus, named
BHAAV, which means emotions in Hindi, for analyzing emotions that a writer
expresses through his characters in a story, as perceived by a narrator/reader.
The corpus consists of 20,304 sentences collected from 230 different short
stories spanning across 18 genres such as Inspirational and Mystery. Each
sentence has been annotated into one of the five emotion categories - anger,
joy, suspense, sad, and neutral, by three native Hindi speakers with at least
ten years of formal education in Hindi. We also discuss challenges in the
annotation of low resource languages such as Hindi, and discuss the scope of
the proposed corpus along with its possible uses. We also provide a detailed
analysis of the dataset and train strong baseline classifiers reporting their
performances.
| 2,019 | Computation and Language |
A Closer Look At Feature Space Data Augmentation For Few-Shot Intent
Classification | New conversation topics and functionalities are constantly being added to
conversational AI agents like Amazon Alexa and Apple Siri. As data collection
and annotation is not scalable and is often costly, only a handful of examples
for the new functionalities are available, which results in poor generalization
performance. We formulate it as a Few-Shot Integration (FSI) problem where a
few examples are used to introduce a new intent. In this paper, we study six
feature space data augmentation methods to improve classification performance
in the FSI setting in combination with both supervised and unsupervised
representation learning methods such as BERT. Through realistic experiments on
two public conversational datasets, SNIPS, and the Facebook Dialog corpus, we
show that data augmentation in feature space provides an effective way to
improve intent classification performance in the few-shot setting beyond
traditional transfer learning approaches. In particular, we show that (a)
upsampling in latent space is a competitive baseline for feature space
augmentation, and (b) adding the difference between two examples to a new example is
a simple yet effective data augmentation method.
| 2,019 | Computation and Language |
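A sketch of the simple augmentation highlighted in point (b) above: the difference between two examples of a class is added to a third example to synthesize a new feature vector. Shapes and data below are illustrative, with random vectors standing in for sentence representations such as BERT embeddings.

```python
# Feature-space extrapolation: new feature vectors from within-class deltas.
import numpy as np


def extrapolate(features, rng, n_new=10):
    """features: (n, d) array of feature vectors from a single intent class."""
    synthetic = []
    for _ in range(n_new):
        i, j, k = rng.choice(len(features), size=3, replace=True)
        delta = features[i] - features[j]       # difference between two examples
        synthetic.append(features[k] + delta)   # applied to a third example
    return np.stack(synthetic)


rng = np.random.default_rng(0)
class_features = rng.standard_normal((5, 768))  # e.g. sentence embeddings
print(extrapolate(class_features, rng).shape)   # (10, 768)
```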
Efficient Semi-Supervised Learning for Natural Language Understanding by
Optimizing Diversity | Expanding new functionalities efficiently is an ongoing challenge for
single-turn task-oriented dialogue systems. In this work, we explore
functionality-specific semi-supervised learning via self-training. We consider
methods that augment training data automatically from unlabeled data sets in a
functionality-targeted manner. In addition, we examine multiple techniques for
efficient selection of augmented utterances to reduce training time and
increase diversity. First, we consider paraphrase detection methods that
attempt to find utterance variants of labeled training data with good coverage.
Second, we explore sub-modular optimization based on n-grams features for
utterance selection. Experiments show that functionality-specific self-training
is very effective for improving system performance. In addition, methods
optimizing diversity can reduce training data in many cases to 50% with little
impact on performance.
| 2,019 | Computation and Language |
Perturbation Sensitivity Analysis to Detect Unintended Model Biases | Data-driven statistical Natural Language Processing (NLP) techniques leverage
large amounts of language data to build models that can understand language.
However, most language data reflect the public discourse at the time the data
was produced, and hence NLP models are susceptible to learning incidental
associations around named referents at a particular point in time, in addition
to general linguistic meaning. An NLP system designed to model notions such as
sentiment and toxicity should ideally produce scores that are independent of
the identity of such entities mentioned in text and their social associations.
For example, in a general purpose sentiment analysis system, a phrase such as I
hate Katy Perry should be interpreted as having the same sentiment as I hate
Taylor Swift. Based on this idea, we propose a generic evaluation framework,
Perturbation Sensitivity Analysis, which detects unintended model biases
related to named entities, and requires no new annotations or corpora. We
demonstrate the utility of this analysis by employing it on two different NLP
models --- a sentiment model and a toxicity model --- applied on online
comments in English from four different genres.
| 2,019 | Computation and Language |
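A minimal sketch of the perturbation-based probing described above: a template is filled with different named entities and the spread of the model's scores is inspected. The template, name list, and off-the-shelf sentiment checkpoint are illustrative stand-ins for the setup in the paper.

```python
# Fill a template with different named entities and measure how much the
# model's sentiment score shifts across the substitutions.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

template = "I hate {}."
names = ["Katy Perry", "Taylor Swift", "Paris", "London"]

scores = {}
for name in names:
    result = sentiment(template.format(name))[0]
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    scores[name] = signed

spread = max(scores.values()) - min(scores.values())
print(scores)
print("perturbation sensitivity (score range):", round(spread, 4))
```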
Spoken Language Identification using ConvNets | Language Identification (LI) is an important first step in several speech
processing systems. With a growing number of voice-based assistants, speech LI
has emerged as a widely researched field. To approach the problem of
identifying languages, we can either adopt an implicit approach where only the
speech for a language is present or an explicit one where text is available
with its corresponding transcript. This paper focuses on an implicit approach
due to the absence of transcriptive data. This paper benchmarks existing models
and proposes a new attention based model for language identification which uses
log-Mel spectrogram images as input. We also present the effectiveness of raw
waveforms as features to neural network models for LI tasks. For training and
evaluation of models, we classified six languages (English, French, German,
Spanish, Russian and Italian) with an accuracy of 95.4% and four languages
(English, French, German, Spanish) with an accuracy of 96.3% on data obtained
from the VoxForge dataset. This approach can further be scaled to incorporate more
languages.
| 2,019 | Computation and Language |
Learning to Contextually Aggregate Multi-Source Supervision for Sequence
Labeling | Sequence labeling is a fundamental framework for various natural language
processing problems. Its performance is largely influenced by the annotation
quality and quantity in supervised learning scenarios, and obtaining ground
truth labels is often costly. In many cases, ground truth labels do not exist,
but noisy annotations or annotations from different domains are accessible. In
this paper, we propose a novel framework Consensus Network (ConNet) that can be
trained on annotations from multiple sources (e.g., crowd annotation,
cross-domain data...). It learns individual representation for every source and
dynamically aggregates source-specific knowledge by a context-aware attention
module. Finally, it leads to a model reflecting the agreement (consensus) among
multiple sources. We evaluate the proposed framework in two practical settings
of multi-source learning: learning with crowd annotations and unsupervised
cross-domain model adaptation. Extensive experimental results show that our
model achieves significant improvements over existing methods in both settings.
We also demonstrate that the method can apply to various tasks and cope with
different encoders.
| 2,020 | Computation and Language |
FUSE: Multi-Faceted Set Expansion by Coherent Clustering of Skip-grams | Set expansion aims to expand a small set of seed entities into a complete set
of relevant entities. Most existing approaches assume the input seed set is
unambiguous and completely ignore the multi-faceted semantics of seed entities.
As a result, given the seed set {"Canon", "Sony", "Nikon"}, previous models
return one mixed set of entities that are either Camera Brands or Japanese
Companies. In this paper, we study the task of multi-faceted set expansion,
which aims to capture all semantic facets in the seed set and return multiple
sets of entities, one for each semantic facet. We propose an unsupervised
framework, FUSE, which consists of three major components: (1) facet discovery
module: identifies all semantic facets of each seed entity by extracting and
clustering its skip-grams; (2) facet fusion module: discovers shared
semantic facets of the entire seed set by an optimization formulation; and (3)
entity expansion module: expands each semantic facet by utilizing a masked
language model with pre-trained BERT models. Extensive experiments demonstrate
that FUSE can accurately identify multiple semantic facets of the seed set and
generate quality entities for each facet.
| 2,020 | Computation and Language |
Learning Only from Relevant Keywords and Unlabeled Documents | We consider a document classification problem where document labels are
absent but only relevant keywords of a target class and unlabeled documents are
given. Although heuristic methods based on pseudo-labeling have been
considered, theoretical understanding of this problem has still been limited.
Moreover, previous methods cannot easily incorporate well-developed techniques
in supervised text classification. In this paper, we propose a theoretically
guaranteed learning framework that is simple to implement and has flexible
choices of models, e.g., linear models or neural networks. We demonstrate how
to optimize the area under the receiver operating characteristic curve (AUC)
effectively and also discuss how to adjust it to optimize other well-known
evaluation metrics such as the accuracy and F1-measure. Finally, we show the
effectiveness of our framework using benchmark datasets.
| 2,019 | Computation and Language |
Controllable Sentence Simplification: Employing Syntactic and Lexical
Constraints | Sentence simplification aims to make sentences easier to read and understand.
Recent approaches have shown promising results with sequence-to-sequence models
which have been developed assuming homogeneous target audiences. In this paper
we argue that different users have different simplification needs (e.g.
dyslexics vs. non-native speakers), and propose CROSS, a ContROllable Sentence
Simplification model, which allows control of both the level of simplicity and
the type of the simplification. We achieve this by enriching a
Transformer-based architecture with syntactic and lexical constraints (which
can be set or learned from data). Empirical results on two benchmark datasets
show that constraints are key to successful simplification, offering flexible
generation output.
| 2,019 | Computation and Language |
Language Transfer for Early Warning of Epidemics from Social Media | Statements on social media can be analysed to identify individuals who are
experiencing red flag medical symptoms, allowing early detection of the spread
of disease such as influenza. Since disease does not respect cultural borders
and may spread between populations speaking different languages, we would like
to build multilingual models. However, the data required to train models for
every language may be difficult, expensive and time-consuming to obtain,
particularly for low-resource languages. Taking Japanese as our target
language, we explore methods by which data in one language might be used to
build models for a different language. We evaluate strategies of training on
machine translated data and of zero-shot transfer through the use of
multilingual models. We find that the choice of source language impacts the
performance, with Chinese-Japanese being a better language pair than
English-Japanese. Training on machine translated data shows promise, especially
when used in conjunction with a small amount of target language data.
| 2,019 | Computation and Language |
R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for
the Right Reason | Recent studies have revealed that reading comprehension (RC) systems learn to
exploit annotation artifacts and other biases in current datasets. This
prevents the community from reliably measuring the progress of RC systems. To
address this issue, we introduce R4C, a new task for evaluating RC systems'
internal reasoning. R4C requires giving not only answers but also derivations:
explanations that justify predicted answers. We present a reliable,
crowdsourced framework for scalably annotating RC datasets with derivations. We
create and publicly release the R4C dataset, the first, quality-assured dataset
consisting of 4.6k questions, each of which is annotated with 3 reference
derivations (i.e. 13.8k derivations). Experiments show that our automatic
evaluation metrics using multiple reference derivations are reliable, and that
R4C assesses different skills from an existing benchmark.
| 2,020 | Computation and Language |
Multi-label Categorization of Accounts of Sexism using a Neural
Framework | Sexism, an injustice that subjects women and girls to enormous suffering,
manifests in blatant as well as subtle ways. In the wake of growing
documentation of experiences of sexism on the web, the automatic categorization
of accounts of sexism has the potential to assist social scientists and policy
makers in studying and countering sexism better. The existing work on sexism
classification, which is different from sexism detection, has certain
limitations in terms of the categories of sexism used and/or whether they can
co-occur. To the best of our knowledge, this is the first work on the
multi-label classification of sexism of any kind(s), and we contribute the
largest dataset for sexism categorization. We develop a neural solution for
this multi-label classification that can combine sentence representations
obtained using models such as BERT with distributional and linguistic word
embeddings using a flexible, hierarchical architecture involving recurrent
components and optional convolutional ones. Further, we leverage unlabeled
accounts of sexism to infuse domain-specific elements into our framework. The
best proposed method outperforms several deep learning as well as traditional
machine learning baselines by an appreciable margin.
| 2,019 | Computation and Language |
Universal Adversarial Perturbation for Text Classification | Given a state-of-the-art deep neural network text classifier, we show the
existence of a universal and very small perturbation vector (in the embedding
space) that causes natural text to be misclassified with high probability.
Unlike images on which a single fixed-size adversarial perturbation can be
found, text is of variable length, so we define the "universality" as
"token-agnostic", where a single perturbation is applied to each token,
resulting in different perturbations of flexible sizes at the sequence level.
We propose an algorithm to compute universal adversarial perturbations, and
show that the state-of-the-art deep neural networks are highly vulnerable to
them, even though they keep the neighborhood of tokens mostly preserved. We
also show how to use these adversarial perturbations to generate adversarial
text samples. The surprising existence of universal "token-agnostic"
adversarial perturbations may reveal important properties of a text classifier.
| 2,019 | Computation and Language |
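A sketch of how a single token-agnostic perturbation is applied in the embedding space, as described above: the same small vector is added to every token embedding of the input. Computing the perturbation itself (the optimization proposed in the paper) is not shown, and the dimensions are illustrative.

```python
# Apply one shared perturbation vector to every token embedding of a batch.
import torch


def perturb_embeddings(token_embeddings, perturbation, epsilon=0.01):
    """token_embeddings: (batch, seq_len, dim); perturbation: (dim,)."""
    direction = perturbation / perturbation.norm()   # fixed small norm
    return token_embeddings + epsilon * direction    # broadcast over all tokens


embeddings = torch.randn(2, 7, 300)     # a toy batch of token embeddings
v = torch.randn(300)                    # the universal perturbation vector
print(perturb_embeddings(embeddings, v).shape)       # torch.Size([2, 7, 300])
```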
Multilingual Question Answering from Formatted Text applied to
Conversational Agents | Recent advances with language models (e.g. BERT, XLNet, ...), have allowed
surpassing human performance on complex NLP tasks such as Reading
Comprehension. However, labeled datasets for training are available mostly in
English which makes it difficult to acknowledge progress in other languages.
Fortunately, models are now pre-trained on unlabeled data from hundreds of
languages and exhibit interesting transfer abilities from one language to
another. In this paper, we show that multilingual BERT is naturally capable of
zero-shot transfer for an extractive Question Answering task (eQA) from English
to other languages. More specifically, it outperforms the best previously known
baseline for transfer to Japanese and French. Moreover, using a recently
published large eQA French dataset, we are able to further show that (1)
zero-shot transfer provides results very close to direct training on the
target language and (2) combination of transfer and training on target is the
best option overall. We finally present a practical application: a multilingual
conversational agent called Kate, which answers HR-related questions in
several languages directly from the content of intranet pages.
| 2,021 | Computation and Language |
Cross-lingual Alignment vs Joint Training: A Comparative Study and A
Simple Unified Framework | Learning multilingual representations of text has proven a successful method
for many cross-lingual transfer learning tasks. There are two main paradigms
for learning such representations: (1) alignment, which maps different
independently trained monolingual representations into a shared space, and (2)
joint training, which directly learns unified multilingual representations
using monolingual and cross-lingual objectives jointly. In this paper, we first
conduct direct comparisons of representations learned using both of these
methods across diverse cross-lingual tasks. Our empirical results reveal a set
of pros and cons for both methods, and show that the relative performance of
alignment versus joint training is task-dependent. Stemming from this analysis,
we propose a simple and novel framework that combines these two previously
mutually-exclusive approaches. Extensive experiments demonstrate that our
proposed framework alleviates limitations of both approaches, and outperforms
existing methods on the MUSE bilingual lexicon induction (BLI) benchmark. We
further show that this framework can generalize to contextualized
representations such as Multilingual BERT, and produces state-of-the-art
results on the CoNLL cross-lingual NER benchmark.
| 2,020 | Computation and Language |
Automatic Quality Estimation for Natural Language Generation: Ranting
(Jointly Rating and Ranking) | We present a recurrent neural network based system for automatic quality
estimation of natural language generation (NLG) outputs, which jointly learns
to assign numerical ratings to individual outputs and to provide pairwise
rankings of two different outputs. The latter is trained using pairwise hinge
loss over scores from two copies of the rating network.
We use learning to rank and synthetic data to improve the quality of ratings
assigned by our system: we synthesise training pairs of distorted system
outputs and train the system to rank the less distorted one higher. This leads
to a 12% increase in correlation with human ratings over the previous
benchmark. We also establish the state of the art on the dataset of relative
rankings from the E2E NLG Challenge (Dušek et al., 2019), where synthetic
data lead to a 4% accuracy increase over the base model.
| 2,019 | Computation and Language |
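A small PyTorch sketch of the pairwise ranking objective described above: two copies of the same rating network (i.e. shared weights) score a pair of NLG outputs, and a hinge loss pushes the less distorted output to score higher. The tiny feed-forward scorer and feature dimension are placeholders for the recurrent rating network in the paper.

```python
# Pairwise hinge loss over scores from two copies (shared weights) of a scorer.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))


def pairwise_hinge_loss(better_features, worse_features, margin=1.0):
    """Both inputs: (batch, 128) feature vectors for a pair of NLG outputs."""
    better_score = scorer(better_features)   # same weights for both outputs
    worse_score = scorer(worse_features)
    return torch.clamp(margin - (better_score - worse_score), min=0).mean()


better = torch.randn(4, 128)
worse = torch.randn(4, 128)
print(pairwise_hinge_loss(better, worse).item())
```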
Structured Pruning of Large Language Models | Large language models have recently achieved state of the art performance
across a wide variety of natural language tasks. Meanwhile, the size of these
models and their latency have significantly increased, which makes their usage
costly, and raises an interesting question: do language models need to be
large? We study this question through the lens of model compression. We present
a generic, structured pruning approach by parameterizing each weight matrix
using its low-rank factorization, and adaptively removing rank-1 components
during training. On language modeling tasks, our structured approach
outperforms other unstructured and block-structured pruning baselines at
various compression levels, while achieving significant speedups during both
training and inference. We also demonstrate that our method can be applied to
pruning adaptive word embeddings in large language models, and to pruning the
BERT model on several downstream fine-tuning classification benchmarks.
| 2,021 | Computation and Language |
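A sketch of the low-rank parameterization behind the pruning approach above: a weight matrix is factored as P diag(g) Q, and rank-1 components whose gate falls below a threshold are dropped. The hard-threshold rule here is a simplification of the adaptive removal used during training in the paper.

```python
# Low-rank factorized linear layer with per-component gates and pruning.
import torch
import torch.nn as nn


class FactorizedLinear(nn.Module):
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.p = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.q = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.gate = nn.Parameter(torch.ones(rank))   # one gate per rank-1 component

    def forward(self, x):
        weight = self.p @ torch.diag(self.gate) @ self.q
        return x @ weight.t()

    def prune(self, threshold=1e-2):
        keep = self.gate.abs() > threshold           # drop weak components
        with torch.no_grad():
            self.p = nn.Parameter(self.p[:, keep])
            self.q = nn.Parameter(self.q[keep, :])
            self.gate = nn.Parameter(self.gate[keep])


layer = FactorizedLinear(768, 768, rank=64)
print(layer(torch.randn(2, 768)).shape)              # torch.Size([2, 768])
```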
Conversational Transfer Learning for Emotion Recognition | Recognizing emotions in conversations is a challenging task due to the
presence of contextual dependencies governed by self- and inter-personal
influences. Recent approaches have focused on modeling these dependencies
primarily via supervised learning. However, purely supervised strategies demand
large amounts of annotated data, which is lacking in most of the available
corpora in this task. To tackle this challenge, we look at transfer learning
approaches as a viable alternative. Given the large amount of available
conversational data, we investigate whether generative conversational models
can be leveraged to transfer affective knowledge for detecting emotions in
context. We propose an approach, TL-ERC, where we pre-train a hierarchical
dialogue model on multi-turn conversations (source) and then transfer its
parameters to a conversational emotion classifier (target). In addition to the
popular practice of using pre-trained sentence encoders, our approach also
incorporates recurrent parameters that model inter-sentential context across
the whole conversation. Based on this idea, we perform several experiments
across multiple datasets and find improvement in performance and robustness
against limited training data. TL-ERC also achieves better validation
performances in significantly fewer epochs. Overall, we infer that knowledge
acquired from dialogue generators can indeed help recognize emotions in
conversations.
| 2,020 | Computation and Language |
Automatic segmentation of texts into units of meaning for reading
assistance | The emergence of the digital book is a major step forward in providing access
to reading, and therefore often to the common culture and the labour market. By
allowing the enrichment of texts with cognitive crutches, EPub 3 compatible
accessibility formats such as FROG have proven their effectiveness in
alleviating and even reducing dyslexic disorders. In this paper, we show how
Artificial Intelligence and particularly Transfer Learning with Google BERT can
automate the division into units of meaning, and thus facilitate the creation
of enriched digital books at a moderate cost.
| 2,019 | Computation and Language |
Group, Extract and Aggregate: Summarizing a Large Amount of Finance News
for Forex Movement Prediction | Incorporating related text information has proven successful in stock market
prediction. However, it is a huge challenge to utilize texts in the enormous
forex (foreign currency exchange) market because the associated texts are too
redundant. In this work, we propose a BERT-based Hierarchical Aggregation Model
to summarize a large amount of finance news to predict forex movement. We
first group news by different aspects: time, topic, and category. Then we
extract the most crucial news in each group using a state-of-the-art extractive
summarization method. Finally, we model the interaction between the news and the
trade data with attention to predict the forex movement. The experimental
results show that the category based method performs best among three grouping
methods and outperforms all the baselines. Besides, we study the influence of
essential news attributes (category and region) by statistical analysis and
summarize the influence patterns for different currency pairs.
| 2,019 | Computation and Language |
BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual
Reading Comprehension on Novels | This paper presents BiPaR, a bilingual parallel novel-style machine reading
comprehension (MRC) dataset, developed to support multilingual and
cross-lingual reading comprehension. The biggest difference between BiPaR and
existing reading comprehension datasets is that each triple (Passage, Question,
Answer) in BiPaR is written parallelly in two languages. We collect 3,667
bilingual parallel paragraphs from Chinese and English novels, from which we
construct 14,668 parallel question-answer pairs via crowdsourced workers
following a strict quality control procedure. We analyze BiPaR in depth and
find that BiPaR offers good diversification in prefixes of questions, answer
types and relationships between questions and passages. We also observe that
answering questions of novels requires reading comprehension skills of
coreference resolution, multi-sentence reasoning, and understanding of implicit
causality, etc. With BiPaR, we build monolingual, multilingual, and
cross-lingual MRC baseline models. Even for the relatively simple monolingual
MRC on this dataset, experiments show that a strong BERT baseline is over 30
points behind human performance in terms of both EM and F1 score, indicating that BiPaR
provides a challenging testbed for monolingual, multilingual and cross-lingual
MRC on novels. The dataset is available at https://multinlp.github.io/BiPaR/.
| 2,019 | Computation and Language |
Keyphrase Generation: A Multi-Aspect Survey | Extractive keyphrase generation research has been around since the nineties,
but the more advanced abstractive approach based on the encoder-decoder
framework and sequence-to-sequence learning has been explored only recently. In
fact, more than a dozen of abstractive methods have been proposed in the last
three years, producing meaningful keyphrases and achieving state-of-the-art
scores. In this survey, we examine various aspects of the extractive keyphrase
generation methods and focus mostly on the more recent abstractive methods that
are based on neural networks. We pay particular attention to the mechanisms
that have driven the perfection of the latter. A huge collection of scientific
article metadata and the corresponding keyphrases is created and released for
the research community. We also present various keyphrase generation and text
summarization research patterns and trends of the last two decades.
| 2,020 | Computation and Language |
Multi-Task Learning for Conversational Question Answering over a
Large-Scale Knowledge Base | We consider the problem of conversational question answering over a
large-scale knowledge base. To handle the huge entity vocabulary of a large-scale
knowledge base, recent neural semantic parsing based approaches usually
decompose the task into several subtasks and then solve them sequentially,
which leads to the following issues: 1) errors in earlier subtasks will be
propagated and negatively affect downstream ones; and 2) each subtask cannot
naturally share supervision signals with others. To tackle these issues, we
propose an innovative multi-task learning framework where a pointer-equipped
semantic parsing model is designed to resolve coreference in conversations, and
naturally empower joint learning with a novel type-aware entity detection
model. The proposed framework thus enables shared supervisions and alleviates
the effect of error propagation. Experiments on a large-scale conversational
question answering dataset containing 1.6M question answering pairs over 12.8M
entities show that the proposed framework improves overall F1 score from 67% to
79% compared with previous state-of-the-art work.
| 2,019 | Computation and Language |
How Does Language Influence Documentation Workflow? Unsupervised Word
Discovery Using Translations in Multiple Languages | For language documentation initiatives, transcription is an expensive
resource: one minute of audio is estimated to take one hour and a half on
average of a linguist's work (Austin and Sallabank, 2013). Recently, collecting
aligned translations in well-resourced languages became a popular solution for
ensuring posterior interpretability of the recordings (Adda et al. 2016). In
this paper we investigate language-related impact in automatic approaches for
computational language documentation. We translate the bilingual Mboshi-French
parallel corpus (Godard et al. 2017) into four other languages, and we perform
bilingual-rooted unsupervised word discovery. Our results hint towards an
impact of the well-resourced language on the quality of the output. However, by
combining the information learned by different bilingual models, we are only
able to marginally increase the quality of the segmentation.
| 2,019 | Computation and Language |
exBERT: A Visual Analysis Tool to Explore Learned Representations in
Transformers Models | Large language models can produce powerful contextual representations that
lead to improvements across many NLP tasks. Since these models are typically
guided by a sequence of learned self-attention mechanisms and may comprise
undesired inductive biases, it is paramount to be able to explore what the
attention has learned. While static analyses of these models lead to targeted
insights, interactive tools are more dynamic and can help humans better gain an
intuition for the model-internal reasoning process. We present exBERT, an
interactive tool named after the popular BERT language model, that provides
insights into the meaning of the contextual representations by matching a
human-specified input to similar contexts in a large annotated dataset. By
aggregating the annotations of the matching similar contexts, exBERT helps
intuitively explain what each attention-head has learned.
| 2,019 | Computation and Language |
The Emergence of Compositional Languages for Numeric Concepts Through
Iterated Learning in Neural Agents | Since it was first introduced, computer simulation has been an increasingly
important tool in evolutionary linguistics. Recently, with the development of
deep learning techniques, research in grounded language learning has also
started to focus on facilitating the emergence of compositional languages
without pre-defined elementary linguistic knowledge. In this work, we explore
the emergence of compositional languages for numeric concepts in multi-agent
communication systems. We demonstrate that compositional language for encoding
numeric concepts can emerge through iterated learning in populations of deep
neural network agents. However, language properties greatly depend on the input
representations given to agents. We found that compositional languages only
emerge if they require fewer iterations to be fully learnt than other
non-degenerate languages for agents on a given input representation.
| 2,019 | Computation and Language |
Neural Generation for Czech: Data and Baselines | We present the first dataset targeted at end-to-end NLG in Czech in the
restaurant domain, along with several strong baseline models using the
sequence-to-sequence approach. While non-English NLG is under-explored in
general, Czech, as a morphologically rich language, makes the task even harder:
Since Czech requires inflecting named entities, delexicalization or copy
mechanisms do not work out-of-the-box and lexicalizing the generated outputs is
non-trivial.
In our experiments, we present two different approaches to this problem:
(1) using a neural language model to select the correct inflected form while
lexicalizing, (2) a two-step generation setup: our sequence-to-sequence model
generates an interleaved sequence of lemmas and morphological tags, which are
then inflected by a morphological generator.
| 2,019 | Computation and Language |
Learning Analogy-Preserving Sentence Embeddings for Answer Selection | Answer selection aims at identifying the correct answer for a given question
from a set of potentially correct answers. Contrary to previous works, which
typically focus on the semantic similarity between a question and its answer,
our hypothesis is that question-answer pairs are often in analogical relation
to each other. Using analogical inference as our use case, we propose a
framework and a neural network architecture for learning dedicated sentence
embeddings that preserve analogical properties in the semantic space. We
evaluate the proposed method on benchmark datasets for answer selection and
demonstrate that our sentence embeddings indeed capture analogical properties
better than conventional embeddings, and that analogy-based question answering
outperforms a comparable similarity-based technique.
| 2,019 | Computation and Language |
Model-based Interactive Semantic Parsing: A Unified Framework and A
Text-to-SQL Case Study | As a promising paradigm, interactive semantic parsing has been shown to improve
both semantic parsing accuracy and user confidence in the results. In this
paper, we propose a new, unified formulation of the interactive semantic
parsing problem, where the goal is to design a model-based intelligent agent.
The agent maintains its own state as the current predicted semantic parse,
decides whether and where human intervention is needed, and generates a
clarification question in natural language. A key part of the agent is a world
model: it takes a percept (either an initial question or subsequent feedback
from the user) and transitions to a new state. We then propose a simple yet
remarkably effective instantiation of our framework, demonstrated on two
text-to-SQL datasets (WikiSQL and Spider) with different state-of-the-art base
semantic parsers. Compared to an existing interactive semantic parsing approach
that treats the base parser as a black box, our approach solicits less user
feedback but yields higher run-time accuracy.
| 2,019 | Computation and Language |
vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations | We propose vq-wav2vec to learn discrete representations of audio segments
through a wav2vec-style self-supervised context prediction task. The algorithm
uses either a gumbel softmax or online k-means clustering to quantize the dense
representations. Discretization enables the direct application of algorithms
from the NLP community which require discrete inputs. Experiments show that
BERT pre-training achieves a new state of the art on TIMIT phoneme
classification and WSJ speech recognition.
| 2,020 | Computation and Language |
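A sketch of the gumbel-softmax variant of the quantization step described above: dense frame representations are mapped to a differentiable one-hot choice over a codebook, yielding both quantized vectors and discrete ids that downstream BERT-style training can consume. The dimensions and codebook size are illustrative, and the grouped codebooks of the actual model are omitted.

```python
# Gumbel-softmax codebook quantization of dense frame representations.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GumbelQuantizer(nn.Module):
    def __init__(self, dim=512, num_codes=320):
        super().__init__()
        self.to_logits = nn.Linear(dim, num_codes)
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, dense, tau=1.0):
        """dense: (batch, time, dim) continuous features from the encoder."""
        logits = self.to_logits(dense)
        # Differentiable (straight-through) one-hot selection over codebook entries.
        one_hot = F.gumbel_softmax(logits, tau=tau, hard=True)
        quantized = one_hot @ self.codebook.weight
        indices = one_hot.argmax(dim=-1)     # discrete ids for BERT-style training
        return quantized, indices


quantizer = GumbelQuantizer()
q, ids = quantizer(torch.randn(2, 100, 512))
print(q.shape, ids.shape)                    # (2, 100, 512) and (2, 100)
```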