Titles | Abstracts | Years | Categories
---|---|---|---|
From the Token to the Review: A Hierarchical Multimodal approach to
Opinion Mining | The task of predicting fine grained user opinion based on spontaneous spoken
language is a key problem arising in the development of Computational Agents as
well as in the development of social network based opinion miners.
Unfortunately, gathering reliable data on which a model can be trained is
notoriously difficult and existing works rely only on coarsely labeled
opinions. In this work we aim at bridging the gap separating fine grained
opinion models already developed for written language and coarse grained models
developed for spontaneous multimodal opinion mining. We take advantage of the
implicit hierarchical structure of opinions to build a joint fine and coarse
grained opinion model that exploits different views of the opinion expression.
The resulting model shares some properties with attention-based models and is
shown to provide competitive results on a recently released multimodal fine
grained annotated corpus.
| 2019 | Computation and Language |
FAMULUS: Interactive Annotation and Feedback Generation for Teaching
Diagnostic Reasoning | Our proposed system FAMULUS helps students learn to diagnose based on
automatic feedback in virtual patient simulations, and it supports instructors
in labeling training data.
Diagnosing is an exceptionally difficult skill to acquire but vital for many
different professions (e.g., medical doctors, teachers).
Previous case simulation systems are limited to multiple-choice questions and
thus cannot give constructive individualized feedback on a student's diagnostic
reasoning process.
Given initially only limited data, we leverage a (replaceable) NLP model both to
support experts in their further data annotation with automatic suggestions and
to provide automatic feedback for students.
We argue that because the central model consistently improves, our
interactive approach encourages both students and instructors to use the tool
recurrently, thus accelerating data creation and annotation.
We show results from two user studies on diagnostic reasoning in medicine and
teacher education and outline how our system can be extended to further use
cases.
| 2019 | Computation and Language |
Grounded Agreement Games: Emphasizing Conversational Grounding in Visual
Dialogue Settings | Where early work on dialogue in Computational Linguistics put much emphasis
on dialogue structure and its relation to the mental states of the dialogue
participants (e.g., Allen 1979, Grosz & Sidner 1986), current work mostly
reduces dialogue to the task of producing at any one time a next utterance;
e.g. in neural chatbot or Visual Dialogue settings. As a methodological
decision, this is sound: Even the longest journey is a sequence of steps. It
becomes detrimental, however, when the tasks and datasets from which dialogue
behaviour is to be learned are tailored too much to this framing of the
problem. In this short note, we describe a family of settings which still keep
dialogues simple, but add a constraint that makes participants care
about reaching mutual understanding. In such agreement games, there is a
secondary, but explicit goal besides the task level goal, and that is to reach
mutual understanding about whether the task level goal has been reached. As we
argue, this naturally triggers meta-semantic interaction and mutual engagement,
and hence leads to richer data from which to induce models.
| 2019 | Computation and Language |
HARE: a Flexible Highlighting Annotator for Ranking and Exploration | Exploration and analysis of potential data sources is a significant challenge
in the application of NLP techniques to novel information domains. We describe
HARE, a system for highlighting relevant information in document collections to
support ranking and triage, which provides tools for post-processing and
qualitative analysis for model development and tuning. We apply HARE to the use
case of narrative descriptions of mobility information in clinical data, and
demonstrate its utility in comparing candidate embedding features. We provide a
web-based interface for annotation visualization and document ranking, with a
modular backend to support interoperability with existing annotation tools. Our
system is available online at https://github.com/OSU-slatelab/HARE.
| 2019 | Computation and Language |
Memorizing All for Implicit Discourse Relation Recognition | Implicit discourse relation recognition is a challenging task due to the
absence of the informative clues provided by explicit connectives. Predicting
the relations requires a deep understanding of the semantic meaning of sentence
pairs. Because an implicit discourse relation recognizer must carefully model
the semantic similarity of the given sentence pairs while also facing severe
data sparsity, it stands to benefit from mastering the entire training data.
In this paper, we therefore propose a novel memory mechanism to tackle these
challenges and further improve performance. The memory mechanism memorizes
information by pairing the representations and discourse relations of all
training instances, directly addressing the data-hungry nature of current
implicit discourse relation recognizers. Our experiments show that our full
model, which memorizes the entire training set, reaches a new state of the art
against strong baselines and, for the first time, exceeds the milestone of 60%
accuracy on the 4-way task.
| 2019 | Computation and Language |
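The memory design is only described at a high level above; the following is a minimal sketch, assuming a retrieval-style reading of "pairing representations and discourse relations of all training instances" (the class name and the k-nearest-neighbour read-out are illustrative, not the paper's architecture):

```python
import numpy as np

class InstanceMemory:
    """Toy memory pairing training-instance representations with relation labels."""
    def __init__(self, reps, labels):
        # reps: (n, d) sentence-pair representations; labels: (n,) relation ids
        self.reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
        self.labels = labels

    def read(self, query, k=5):
        """Return a soft relation distribution from the k most similar stored instances."""
        q = query / np.linalg.norm(query)
        sims = self.reps @ q
        top = np.argsort(-sims)[:k]
        weights = np.exp(sims[top]) / np.exp(sims[top]).sum()
        dist = np.zeros(self.labels.max() + 1)
        for w, lab in zip(weights, self.labels[top]):
            dist[lab] += w
        return dist

# usage: memory built over the whole training set, queried with a new pair's encoding
mem = InstanceMemory(np.random.randn(1000, 64), np.random.randint(0, 4, size=1000))
print(mem.read(np.random.randn(64)))
```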
Translate and Label! An Encoder-Decoder Approach for Cross-lingual
Semantic Role Labeling | We propose a Cross-lingual Encoder-Decoder model that simultaneously
translates and generates sentences with Semantic Role Labeling annotations in a
resource-poor target language. Unlike annotation projection techniques, our
model does not need parallel data during inference time. Our approach can be
applied in monolingual, multilingual and cross-lingual settings and is able to
produce dependency-based and span-based SRL annotations. We benchmark the
labeling performance of our model in different monolingual and multilingual
settings using well-known SRL datasets. We then train our model in a
cross-lingual setting to generate new SRL labeled data. Finally, we measure the
effectiveness of our method by using the generated data to augment the training
basis for resource-poor languages and perform manual evaluation to show that it
produces high-quality sentences and assigns accurate semantic role annotations.
Our proposed architecture offers a flexible method for leveraging SRL data in
multiple languages.
| 2019 | Computation and Language |
CCKS 2019 Shared Task on Inter-Personal Relationship Extraction | The CCKS2019 shared task was devoted to inter-personal relationship
extraction. Given two person entities and at least one sentence containing
these two entities, participating teams were asked to predict the relationship
between the entities according to a given relation list. This year, 358 teams
from various universities and organizations participated in this task. In this
paper, we present the task definition, the description of data and the
evaluation methodology used during this shared task. We also present a brief
overview of the various methods adopted by the participating teams. Finally, we
present the evaluation results.
| 2019 | Computation and Language |
Human-grounded Evaluations of Explanation Methods for Text
Classification | Due to the black-box nature of deep learning models, methods for explaining
the models' results are crucial to gain trust from humans and support
collaboration between AIs and humans. In this paper, we consider several
model-agnostic and model-specific explanation methods for CNNs for text
classification and conduct three human-grounded evaluations, focusing on
different purposes of explanations: (1) revealing model behavior, (2)
justifying model predictions, and (3) helping humans investigate uncertain
predictions. The results highlight dissimilar qualities of the various
explanation methods we consider and show the degree to which these methods
could serve for each purpose.
| 2019 | Computation and Language |
Improving Deep Transformer with Depth-Scaled Initialization and Merged
Attention | The general trend in NLP is towards increasing model capacity and performance
via deeper neural networks. However, simply stacking more layers of the popular
Transformer architecture for machine translation results in poor convergence
and high computational overhead. Our empirical analysis suggests that
convergence is poor due to gradient vanishing caused by the interaction between
residual connections and layer normalization. We propose depth-scaled
initialization (DS-Init), which decreases parameter variance at the
initialization stage, and reduces output variance of residual connections so as
to ease gradient back-propagation through normalization layers. To address
computational cost, we propose a merged attention sublayer (MAtt) which
combines a simplified average-based self-attention sublayer and the
encoder-decoder attention sublayer on the decoder side. Results on WMT and IWSLT
translation tasks with five translation directions show that deep Transformers
with DS-Init and MAtt can substantially outperform their base counterpart in
terms of BLEU (+1.1 BLEU on average for 12-layer models), while matching the
decoding speed of the baseline model thanks to the efficiency improvements of
MAtt.
| 2019 | Computation and Language |
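The abstract states that DS-Init "decreases parameter variance at the initialization stage" without giving the formula; a plausible minimal sketch is to shrink a sublayer's initialization standard deviation by the square root of its depth. The PyTorch snippet below illustrates that idea only; the function name and the exact scaling are assumptions, not the published scheme.

```python
import math
import torch.nn as nn

def ds_init_(linear: nn.Linear, depth: int):
    """Sketch of depth-scaled initialization: shrink the usual Xavier-style std
    by sqrt(depth), so deeper residual branches start with smaller output variance."""
    std = math.sqrt(2.0 / (linear.in_features + linear.out_features)) / math.sqrt(depth)
    nn.init.normal_(linear.weight, mean=0.0, std=std)
    if linear.bias is not None:
        nn.init.zeros_(linear.bias)

# e.g. apply to the feed-forward sublayers of a 12-layer encoder (depth = layer index)
layers = [nn.Linear(512, 2048) for _ in range(12)]
for depth, linear in enumerate(layers, start=1):
    ds_init_(linear, depth)
```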
Learning Latent Parameters without Human Response Patterns: Item
Response Theory with Artificial Crowds | Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable
information about model performance and behavior. Traditionally, IRT models are
learned using human response pattern (RP) data, presenting a significant
bottleneck for large data sets like those required for training deep neural
networks (DNNs). In this work we propose learning IRT models using RPs
generated from artificial crowds of DNN models. We demonstrate the
effectiveness of learning IRT models using DNN-generated data through
quantitative and qualitative analyses for two NLP tasks. Parameters learned
from human and machine RPs for natural language inference and sentiment
analysis exhibit medium to large positive correlations. We demonstrate a
use-case for latent difficulty item parameters, namely training set filtering,
and show that using difficulty to sample training data outperforms baseline
methods. Finally, we highlight cases where human expectation about item
difficulty does not match difficulty as estimated from the machine RPs.
| 2019 | Computation and Language |
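For readers unfamiliar with IRT, below is a compact sketch of fitting a two-parameter-logistic model to a binary response-pattern matrix by gradient ascent; the rows could equally be DNN "annotators" as in the paper. This is illustrative numpy code under standard 2PL assumptions, not the authors' estimator.

```python
import numpy as np

def fit_2pl(R, n_steps=2000, lr=0.05):
    """R: (n_subjects, n_items) binary responses (1 = item answered correctly).
    Returns subject ability theta, item discrimination a, item difficulty b."""
    n_subj, n_items = R.shape
    theta, a, b = np.zeros(n_subj), np.ones(n_items), np.zeros(n_items)
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # P(correct response)
        err = R - p                                            # gradient of the log-likelihood
        theta += lr * (err * a).mean(axis=1)
        b     -= lr * (err * a).mean(axis=0)
        a     += lr * (err * (theta[:, None] - b)).mean(axis=0)
    return theta, a, b

# response patterns could come from an ensemble of DNNs instead of human annotators
rng = np.random.default_rng(0)
R = (rng.random((50, 20)) < 0.6).astype(float)
theta, a, b = fit_2pl(R)
```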
Cross-lingual topic prediction for speech using translations | Given a large amount of unannotated speech in a low-resource language, can we
classify the speech utterances by topic? We consider this question in the
setting where a small amount of speech in the low-resource language is paired
with text translations in a high-resource language. We develop an effective
cross-lingual topic classifier by training on just 20 hours of translated
speech, using a recent model for direct speech-to-text translation. While the
translations are poor, they are still good enough to correctly classify the
topic of 1-minute speech segments over 70% of the time - a 20% improvement over
a majority-class baseline. Such a system could be useful for humanitarian
applications like crisis response, where incoming speech in a foreign
low-resource language must be quickly assessed for further action.
| 2020 | Computation and Language |
Feature2Vec: Distributional semantic modelling of human property
knowledge | Feature norm datasets of human conceptual knowledge, collected in surveys of
human volunteers, yield highly interpretable models of word meaning and play an
important role in neurolinguistic research on semantic cognition. However,
these datasets are limited in size due to practical obstacles associated with
exhaustively listing properties for a large number of words. In contrast, the
development of distributional modelling techniques and the availability of vast
text corpora have allowed researchers to construct effective vector space
models of word meaning over large lexicons. However, this comes at the cost of
interpretable, human-like information about word meaning. We propose a method
for mapping human property knowledge onto a distributional semantic space,
which adapts the word2vec architecture to the task of modelling concept
features. Our approach gives a measure of concept and feature affinity in a
single semantic space, which makes for easy and efficient ranking of candidate
human-derived semantic properties for arbitrary words. We compare our model
with a previous approach, and show that it performs better on several
evaluation tasks. Finally, we discuss how our method could be used to develop
efficient sampling techniques to extend existing feature norm datasets in a
reliable way.
| 2019 | Computation and Language |
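The abstract does not spell out the mapping; as a rough illustration of ranking "candidate human-derived semantic properties for arbitrary words" in a single space, the sketch below embeds each feature as the mean vector of the concepts that exhibit it and ranks features by cosine affinity. Toy random vectors stand in for word2vec embeddings; this is not the authors' training procedure.

```python
import numpy as np

def build_feature_vectors(concept_vecs, norms):
    """concept_vecs: dict word -> vector; norms: dict feature -> concepts having it.
    Embed each human-elicited feature as the mean vector of its concepts."""
    return {feat: np.mean([concept_vecs[c] for c in concepts], axis=0)
            for feat, concepts in norms.items()}

def rank_features(word_vec, feature_vecs, top_k=3):
    """Rank candidate properties for an arbitrary word by cosine affinity."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    scored = sorted(((cos(word_vec, fv), f) for f, fv in feature_vecs.items()), reverse=True)
    return scored[:top_k]

# toy example with random "embeddings" standing in for word2vec vectors
rng = np.random.default_rng(1)
vecs = {w: rng.normal(size=50) for w in ["dog", "cat", "car", "truck"]}
norms = {"has_fur": ["dog", "cat"], "has_wheels": ["car", "truck"]}
print(rank_features(vecs["dog"], build_feature_vectors(vecs, norms)))
```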
NarrativeTime: Dense Temporal Annotation on a Timeline | For the past decade, temporal annotation has been sparse: only a small
portion of event pairs in a text was annotated. We present NarrativeTime, the
first timeline-based annotation framework that achieves full coverage of all
possible TLinks. To compare with the previous SOTA in dense temporal
annotation, we perform full re-annotation of TimeBankDense corpus, which shows
comparable agreement with a significant increase in density. We contribute
TimeBankNT corpus (with each text fully annotated by two expert annotators),
extensive annotation guidelines, open-source tools for annotation and
conversion to TimeML format, baseline results, as well as quantitative and
qualitative analysis of inter-annotator agreement.
| 2022 | Computation and Language |
Detecting and Reducing Bias in a High Stakes Domain | Gang-involved youth in cities such as Chicago sometimes post on social media
to express their aggression towards rival gangs and previous research has
demonstrated that a deep learning approach can predict aggression and loss in
posts. To address the possibility of bias in this sensitive application, we
developed an approach to systematically interpret the state-of-the-art model.
We found, surprisingly, that it frequently bases its predictions on stop words
such as "a" or "on", an approach that could harm social media users who have no
aggressive intentions. To tackle this bias, domain experts annotated the
rationales, highlighting words that explain why a tweet is labeled as
"aggression". These new annotations enable us to quantitatively measure how
justified the model predictions are, and build models that drastically reduce
bias. Our study shows that in high-stakes scenarios, accuracy alone cannot
guarantee a good system, and new evaluation methods are needed.
| 2019 | Computation and Language |
Dialog Intent Induction with Deep Multi-View Clustering | We introduce the dialog intent induction task and present a novel deep
multi-view clustering approach to tackle the problem. Dialog intent induction
aims at discovering user intents from user query utterances in human-human
conversations such as dialogs between customer support agents and customers.
Motivated by the intuition that a dialog intent is not only expressed in the
user query utterance but also captured in the rest of the dialog, we split a
conversation into two independent views and exploit multi-view clustering
techniques for inducing the dialog intent. In particular, we propose
alternating-view k-means (AV-KMEANS) for joint multi-view representation
learning and clustering analysis. The key innovation is that the instance-view
representations are updated iteratively by predicting the cluster assignment
obtained from the alternative view, so that the multi-view representations of
the instances lead to similar cluster assignments. Experiments on two public
datasets show that AV-KMEANS can induce better dialog intent clusters than
state-of-the-art unsupervised representation learning methods and standard
multi-view clustering approaches.
| 2020 | Computation and Language |
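AV-KMEANS jointly learns the view representations; the sketch below keeps only the alternating-assignment idea on fixed features: k-means runs on one view and its assignments seed the centroids of the other view, back and forth. It is a simplification for illustration, not the published algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def alternating_view_kmeans(view_a, view_b, k, n_rounds=5, seed=0):
    """Alternate k-means between two views, passing cluster assignments across views.
    Assumes every cluster stays non-empty."""
    assign = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(view_a)
    for _ in range(n_rounds):
        # seed view-B centroids with the means induced by view-A's assignment
        centers_b = np.stack([view_b[assign == c].mean(axis=0) for c in range(k)])
        assign = KMeans(n_clusters=k, init=centers_b, n_init=1).fit_predict(view_b)
        centers_a = np.stack([view_a[assign == c].mean(axis=0) for c in range(k)])
        assign = KMeans(n_clusters=k, init=centers_a, n_init=1).fit_predict(view_a)
    return assign

# view_a: encodings of the query utterance; view_b: encodings of the rest of the dialog
rng = np.random.default_rng(0)
labels = alternating_view_kmeans(rng.normal(size=(200, 16)), rng.normal(size=(200, 16)), k=5)
```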
DCMN+: Dual Co-Matching Network for Multi-choice Reading Comprehension | Multi-choice reading comprehension is the challenging task of selecting an
answer from a set of candidate options given a passage and a question. Previous
approaches usually only calculate a question-aware passage representation and
ignore the passage-aware question representation when modeling the relationship
between passage and question, and therefore cannot make full use of the
information in both. In this work, we propose the dual co-matching network
(DCMN), which models the relationship among passage, question and answer
options bidirectionally. In addition, inspired by how humans solve
multi-choice questions, we integrate two reading strategies into our
model: (i) passage sentence selection that finds the most salient supporting
sentences to answer the question, (ii) answer option interaction that encodes
the comparison information between answer options. DCMN integrated with the two
strategies (DCMN+) obtains state-of-the-art results on five multi-choice
reading comprehension datasets which are from different domains: RACE,
SemEval-2018 Task 11, ROCStories, COIN, MCTest.
| 2020 | Computation and Language |
Charge-Based Prison Term Prediction with Deep Gating Network | Judgment prediction for legal cases has attracted much research effort because
of its practical use, the ultimate goal of which is prison term prediction. While
existing work merely predicts the total prison term, in reality a defendant is
often charged with multiple crimes. In this paper, we argue that charge-based
prison term prediction (CPTP) not only better fits realistic needs, but also
makes the total prison term prediction more accurate and interpretable. We
collect the first large-scale structured data for CPTP and evaluate several
competitive baselines. Based on the observation that fine-grained feature
selection is the key to achieving good performance, we propose the Deep Gating
Network (DGN) for charge-specific feature selection and aggregation.
Experiments show that DGN achieves the state-of-the-art performance.
| 2019 | Computation and Language |
Parsing All: Syntax and Semantics, Dependencies and Spans | Both syntactic and semantic structures are key linguistic contextual clues,
and parsing the latter has been shown to benefit from parsing the former.
However, few works have attempted to let semantic parsing help syntactic
parsing. As linguistic representation formalisms, both syntax and semantics
may be represented in either span (constituent/phrase) or dependency form, and
joint learning over both has also seldom been explored. In this paper, we
propose a novel joint model of syntactic and semantic parsing on both span and
dependency representations, which incorporates syntactic information
effectively in the encoder of the neural network and benefits from the two
representation formalisms in a uniform way. The experiments show that semantics
and syntax can benefit each other by optimizing joint objectives. Our single
model achieves new state-of-the-art or competitive results on both span and
dependency semantic parsing on Propbank benchmarks and both dependency and
constituent syntactic parsing on Penn Treebank.
| 2020 | Computation and Language |
Learning to Infer Entities, Properties and their Relations from Clinical
Conversations | Recently we proposed the Span Attribute Tagging (SAT) Model (Du et al., 2019)
to infer clinical entities (e.g., symptoms) and their properties (e.g.,
duration). It tackles the challenge of large label space and limited training
data using a hierarchical two-stage approach that identifies the span of
interest in a tagging step and assigns labels to the span in a classification
step.
We extend the SAT model to jointly infer not only entities and their
properties but also relations between them. Most relation extraction models
restrict inferring relations between tokens within a few neighboring sentences,
mainly to avoid high computational complexity. In contrast, our proposed
Relation-SAT (R-SAT) model is computationally efficient and can infer relations
over the entire conversation, spanning an average duration of 10 minutes.
We evaluate our model on a corpus of clinical conversations. When the
entities are given, the R-SAT outperforms baselines in identifying relations
between symptoms and their properties by about 32% (0.82 vs 0.62 F-score) and
by about 50% (0.60 vs 0.41 F-score) on medications and their properties. On the
more difficult task of jointly inferring entities and relations, the R-SAT
model achieves a performance of 0.34 and 0.45 for symptoms and medications
respectively, which is significantly better than 0.18 and 0.35 for the baseline
model. The contributions of different components of the model are quantified
using ablation analysis.
| 2019 | Computation and Language |
DialogueGCN: A Graph Convolutional Neural Network for Emotion
Recognition in Conversation | Emotion recognition in conversation (ERC) has lately received much attention
from researchers due to its potential widespread applications in
diverse areas, such as health-care, education, and human resources. In this
paper, we present Dialogue Graph Convolutional Network (DialogueGCN), a graph
neural network based approach to ERC. We leverage self and inter-speaker
dependency of the interlocutors to model conversational context for emotion
recognition. Through the graph network, DialogueGCN addresses context
propagation issues present in the current RNN-based methods. We empirically
show that this method alleviates such issues, while outperforming the current
state of the art on a number of benchmark emotion classification datasets.
| 2019 | Computation and Language |
Modeling Multi-Action Policy for Task-Oriented Dialogues | Dialogue management (DM) plays a key role in the quality of the interaction
with the user in a task-oriented dialogue system. In most existing approaches,
the agent predicts only one DM policy action per turn. This significantly
limits the expressive power of the conversational agent and introduces unwanted
turns of interactions that may challenge users' patience. Longer conversations
also lead to more errors and the system needs to be more robust to handle them.
In this paper, we compare the performance of several models on the task of
predicting multiple acts for each turn. A novel policy model is proposed based
on a recurrent cell called gated Continue-Act-Slots (gCAS) that overcomes the
limitations of the existing models. Experimental results show that gCAS
outperforms other approaches. The code is available at
https://leishu02.github.io/
| 2019 | Computation and Language |
Detect Camouflaged Spam Content via StoneSkipping: Graph and Text Joint
Embedding for Chinese Character Variation Representation | The task of Chinese text spam detection is very challenging due to both glyph
and phonetic variations of Chinese characters. This paper proposes a novel
framework to jointly model Chinese variational, semantic, and contextualized
representations for the Chinese text spam detection task. In particular, a
Variation Family-enhanced Graph Embedding (VFGE) algorithm is designed based on
a Chinese character variation graph. The VFGE can learn both the graph
embeddings of the Chinese characters (local) and the latent variation families
(global). Furthermore, an enhanced bidirectional language model, with a
combination gate function and an aggregation learning function, is proposed to
integrate the graph and text information while capturing the sequential
information. Extensive experiments have been conducted on both SMS and review
datasets, showing that the proposed method outperforms a series of state-of-the-art
models for Chinese spam detection.
| 2019 | Computation and Language |
Hierarchical Pointer Net Parsing | Transition-based top-down parsing with pointer networks has achieved
state-of-the-art results in multiple parsing tasks, while having a linear time
complexity. However, the decoder of these parsers has a sequential structure,
which does not yield the most appropriate inductive bias for deriving tree
structures. In this paper, we propose hierarchical pointer network parsers, and
apply them to dependency and sentence-level discourse parsing tasks. Our
results on standard benchmark datasets demonstrate the effectiveness of our
approach, outperforming existing methods and setting a new state-of-the-art.
| 2022 | Computation and Language |
On Laughter and Speech-Laugh, Based on Observations of Child-Robot
Interaction | In this article, we study laughter found in child-robot interaction where it
had not been prompted intentionally. Different types of laughter and
speech-laugh are annotated and processed. In a descriptive part, we report on
the position of laughter and speech-laugh in syntax and dialogue structure, and
on communicative functions. In a second part, we report on automatic
classification performance and on acoustic characteristics, based on extensive
feature selection procedures.
| 2019 | Computation and Language |
Online influence, offline violence: Language Use on YouTube surrounding
the 'Unite the Right' rally | The media frequently describes the 2017 Charlottesville 'Unite the Right'
rally as a turning point for the alt-right and white supremacist movements.
Social movement theory suggests that the media attention and public discourse
concerning the rally may have influenced the alt-right, but this has yet to be
empirically tested. The current study investigates whether there are
differences in language use between 7,142 alt-right and progressive YouTube
channels, in addition to measuring possible changes as a result of the rally.
To do so, we create structural topic models and measure bigram proportions in
video transcripts, spanning eight weeks before to eight weeks after the rally.
We observe differences in topics between the two groups, with the 'alternative
influencers' for example discussing topics related to race and free speech to
an increasing and larger extent than progressive channels. We also observe
structural breakpoints in the use of bigrams at the time of the rally,
suggesting there are changes in language use within the two groups as a result
of the rally. While most changes relate to mentions of the rally itself, the
alternative group also shows an increase in promotion of their YouTube
channels. Results are discussed in light of social movement theory, followed by
a discussion of potential implications for understanding the alt-right and
their language use on YouTube.
| 2020 | Computation and Language |
Cross-domain Aspect Category Transfer and Detection via Traceable
Heterogeneous Graph Representation Learning | Aspect category detection is an essential task for sentiment analysis and
opinion mining. However, the cost of categorical data labeling, e.g., labeling
the review aspect information for a large number of product domains, can be
inevitable yet unaffordable. In this study, we propose a novel problem,
cross-domain aspect category transfer and detection, which faces three
challenges: various feature spaces, different data distributions, and diverse
output spaces. To address these problems, we propose an innovative solution,
Traceable Heterogeneous Graph Representation Learning (THGRL). Unlike prior
text-based aspect detection works, THGRL explores latent domain aspect category
connections via massive user behavior information on a heterogeneous graph.
Moreover, an innovative latent variable "Walker Tracer" is introduced to
characterize the global semantic/aspect dependencies and capture the
informative vertexes on the random walk paths. By using THGRL, we project
different domains' feature spaces into a common one, while allowing the data
distributions and output spaces to remain different. Experimental results show that
the proposed method outperforms a series of state-of-the-art baseline models.
| 2019 | Computation and Language |
Exploring Domain Shift in Extractive Text Summarization | Although domain shift has been well explored in many NLP applications, it
has still received little attention in extractive text summarization. As a
result, models under-utilize the training data by ignoring differences in the
distribution of training sets, and they show poor generalization on unseen
domains.
With the above limitation in mind, in this paper, we first extend the
conventional definition of the domain from categories into data sources for the
text summarization task. Then we re-purpose a multi-domain summarization
dataset and verify how the gap between different domains influences the
performance of neural summarization models.
Furthermore, we investigate four learning strategies and examine their
abilities to deal with the domain shift problem.
Experimental results on three different settings show their different
characteristics in our new testbed.
Our source code including \textit{BERT-based}, \textit{meta-learning} methods
for multi-domain summarization learning and the re-purposed dataset
\textsc{Multi-SUM} will be available on our project:
\url{http://pfliu.com/TransferSum/}.
| 2019 | Computation and Language |
Fact-Checking Meets Fauxtography: Verifying Claims About Images | The recent explosion of false claims in social media and on the Web in
general has given rise to a lot of manual fact-checking initiatives.
Unfortunately, the number of claims that need to be fact-checked is several
orders of magnitude larger than what humans can handle manually. Thus, there
has been a lot of research aiming at automating the process. Interestingly,
previous work has largely ignored the growing number of claims about images.
This is despite the fact that visual imagery is more influential than text and
naturally appears alongside fake news. Here we aim at bridging this gap. In
particular, we create a new dataset for this problem, and we explore a variety
of features modeling the claim, the image, and the relationship between the
claim and the image. The evaluation results show sizable improvements over the
baseline. We release our dataset, hoping to enable further research on
fact-checking claims about images.
| 2019 | Computation and Language |
Earlier Isn't Always Better: Sub-aspect Analysis on Corpus and System
Biases in Summarization | Despite recent developments in neural summarization systems, the
underlying logic behind their improvements and its corpus-dependency remain
largely unexplored. The position of sentences in the original text, for
example, is a well-known bias for news summarization. Following the claim
that summarization is a combination of sub-functions, we define three
sub-aspects of summarization: position, importance, and diversity, and
conduct an extensive analysis of the biases of
each sub-aspect with respect to the domain of nine different summarization
corpora (e.g., news, academic papers, meeting minutes, movie script, books,
posts). We find that while position exhibits substantial bias in news articles,
this is not the case, for example, with academic papers and meeting minutes.
Furthermore, our empirical study shows that different types of summarization
systems (e.g., neural-based) are composed of different degrees of the
sub-aspects. Our study provides useful lessons regarding consideration of
underlying sub-aspects when collecting a new summarization dataset or
developing a new system.
| 2019 | Computation and Language |
Encoders Help You Disambiguate Word Senses in Neural Machine Translation | Neural machine translation (NMT) has achieved new state-of-the-art
performance in translating ambiguous words. However, it is still unclear which
component dominates the process of disambiguation. In this paper, we explore
the ability of NMT encoders and decoders to disambiguate word senses by
evaluating hidden states and investigating the distributions of self-attention.
We train a classifier to predict whether a translation is correct given the
representation of an ambiguous noun. We find that encoder hidden states
outperform word embeddings significantly, which indicates that encoders
adequately encode relevant information for disambiguation into hidden states.
Decoders could provide further relevant information for disambiguation.
Moreover, the attention weights and attention entropy show that self-attention
can detect ambiguous nouns and distribute more attention to the context. Note
that this is a revised version. The content related to decoder hidden states
has been updated.
| 2020 | Computation and Language |
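A minimal sketch of the attention-entropy diagnostic mentioned above, for a single self-attention head (illustrative code, not the paper's evaluation setup):

```python
import numpy as np

def attention_entropy(weights, eps=1e-12):
    """weights: (query_len, key_len) attention rows summing to 1.
    Low entropy = the head focuses on few tokens; high = attention is spread out."""
    w = np.clip(weights, eps, 1.0)
    return -(w * np.log(w)).sum(axis=-1)

# toy attention matrix for a 4-token sentence
att = np.array([[0.97, 0.01, 0.01, 0.01],
                [0.25, 0.25, 0.25, 0.25],
                [0.10, 0.70, 0.10, 0.10],
                [0.40, 0.40, 0.10, 0.10]])
print(attention_entropy(att))  # first row is sharply focused, second is maximally spread
```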
Answering Conversational Questions on Structured Data without Logical
Forms | We present a novel approach to answering sequential questions based on
structured objects such as knowledge bases or tables without using a logical
form as an intermediate representation. We encode tables as graphs using a
graph neural network model based on the Transformer architecture. The answers
are then selected from the encoded graph using a pointer network. This model is
appropriate for processing conversations around structured data, where the
attention mechanism that selects the answers to a question can also be used to
resolve conversational references. We demonstrate the validity of this approach
with competitive results on the Sequential Question Answering (SQA) task (Iyyer
et al., 2017).
| 2019 | Computation and Language |
Linguistic Versus Latent Relations for Modeling Coherent Flow in
Paragraphs | Generating a long, coherent text such as a paragraph requires a high-level
control of different levels of relations between sentences (e.g., tense,
coreference). We call such a logical connection between sentences a
(paragraph) flow. In order to produce a coherent flow of text, we explore two
forms of intersentential relations in a paragraph: one is a human-created
linguistic relation that forms a structure (e.g., a discourse tree) and the
other is a relation from latent representation learned from the sentences
themselves. Our two proposed models incorporate each form of relations into
document-level language models: the former is a supervised model that jointly
learns a language model as well as discourse relation prediction, and the
latter is an unsupervised model that is hierarchically conditioned by a
recurrent neural network (RNN) over the latent information. Our proposed models
with both forms of relations outperform the baselines on the partially
conditioned paragraph generation task. Our code and data are publicly available.
| 2019 | Computation and Language |
Multi-Task Learning with Language Modeling for Question Generation | This paper explores the task of answer-aware question generation. Based on
the attention-based pointer generator model, we propose to incorporate an
auxiliary task of language modeling to help question generation in a
hierarchical multi-task learning structure. Our joint-learning model enables
the encoder to learn a better representation of the input sequence, which will
guide the decoder to generate more coherent and fluent questions. On both SQuAD
and MARCO datasets, our multi-task learning model boosts the performance,
achieving state-of-the-art results. Moreover, human evaluation further proves
the high quality of our generated questions.
| 2019 | Computation and Language |
PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase
Identification | Most existing work on adversarial data generation focuses on English. For
example, PAWS (Paraphrase Adversaries from Word Scrambling) consists of
challenging English paraphrase identification pairs from Wikipedia and Quora.
We remedy this gap with PAWS-X, a new dataset of 23,659 human-translated PAWS
evaluation pairs in six typologically distinct languages: French, Spanish,
German, Chinese, Japanese, and Korean. We provide baseline numbers for three
models with different capacity to capture non-local context and sentence
structure, and using different multilingual training and evaluation regimes.
Multilingual BERT fine-tuned on PAWS English plus machine-translated data
performs the best, with a range of 83.1-90.8 accuracy across the non-English
languages and an average accuracy gain of 23% over the next best model. PAWS-X
shows the effectiveness of deep, multilingual pre-training while also leaving
considerable headroom as a new challenge to drive multilingual research that
better captures structure and contextual information.
| 2019 | Computation and Language |
CodeSwitch-Reddit: Exploration of Written Multilingual Discourse in
Online Discussion Forums | In contrast to many decades of research on oral code-switching, the study of
written multilingual productions has only recently enjoyed a surge of interest.
Many open questions remain regarding the sociolinguistic underpinnings of
written code-switching, and progress has been limited by a lack of suitable
resources. We introduce a novel, large, and diverse dataset of written
code-switched productions, curated from topical threads of multiple bilingual
communities on the Reddit discussion platform, and explore questions that were
mainly addressed in the context of spoken language thus far. We investigate
whether findings in oral code-switching concerning content and style, as well
as speaker proficiency, are carried over into written code-switching in
discussion forums. The released dataset can further facilitate a range of
research and practical activities.
| 2019 | Computation and Language |
Adapt or Get Left Behind: Domain Adaptation through BERT Language Model
Finetuning for Aspect-Target Sentiment Classification | Aspect-Target Sentiment Classification (ATSC) is a subtask of Aspect-Based
Sentiment Analysis (ABSA), which has many applications e.g. in e-commerce,
where data and insights from reviews can be leveraged to create value for
businesses and customers. Recently, deep transfer-learning methods have been
applied successfully to a myriad of Natural Language Processing (NLP) tasks,
including ATSC. Building on top of the prominent BERT language model, we
approach ATSC using a two-step procedure: self-supervised domain-specific BERT
language model finetuning, followed by supervised task-specific finetuning. Our
findings on how to best exploit domain-specific language model finetuning
enable us to produce new state-of-the-art performance on the SemEval 2014 Task
4 restaurants dataset. In addition, to explore the real-world robustness of our
models, we perform cross-domain evaluation. We show that a cross-domain adapted
BERT language model performs significantly better than strong baseline models
like vanilla BERT-base and XLNet-base. Finally, we conduct a case study to
interpret model prediction errors.
| 2019 | Computation and Language |
Adaptively Sparse Transformers | Attention mechanisms have become ubiquitous in NLP. Recent architectures,
notably the Transformer, learn powerful context-aware word representations
through layered, multi-headed attention. The multiple heads learn diverse types
of word relationships. However, with standard softmax attention, all attention
heads are dense, assigning a non-zero weight to all context words. In this
work, we introduce the adaptively sparse Transformer, wherein attention heads
have flexible, context-dependent sparsity patterns. This sparsity is
accomplished by replacing softmax with $\alpha$-entmax: a differentiable
generalization of softmax that allows low-scoring words to receive precisely
zero weight. Moreover, we derive a method to automatically learn the $\alpha$
parameter -- which controls the shape and sparsity of $\alpha$-entmax --
allowing attention heads to choose between focused or spread-out behavior. Our
adaptively sparse Transformer improves interpretability and head diversity when
compared to softmax Transformers on machine translation datasets. Findings of
the quantitative and qualitative analysis of our approach include that heads in
different layers learn different sparsity preferences and tend to be more
diverse in their attention distributions than softmax Transformers.
Furthermore, at no cost in accuracy, sparsity in attention heads helps to
uncover different head specializations.
| 2019 | Computation and Language |
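$\alpha$-entmax has a closed form only for special values of $\alpha$; a common way to evaluate it is bisection on the threshold. The numpy sketch below uses a fixed $\alpha$ rather than the learned one described above:

```python
import numpy as np

def entmax(scores, alpha=1.5, n_iter=50):
    """alpha-entmax by bisection: p_i = [(alpha-1)*z_i - tau]_+^(1/(alpha-1)), sum(p) = 1.
    alpha -> 1 recovers softmax; alpha = 2 is sparsemax; requires alpha > 1."""
    z = (alpha - 1.0) * np.asarray(scores, dtype=float)
    tau_hi, tau_lo = z.max(), z.max() - 1.0        # the threshold lies in this interval
    for _ in range(n_iter):
        tau = 0.5 * (tau_lo + tau_hi)
        p = np.clip(z - tau, 0.0, None) ** (1.0 / (alpha - 1.0))
        if p.sum() < 1.0:
            tau_hi = tau        # too little mass: lower the threshold
        else:
            tau_lo = tau
    p = np.clip(z - 0.5 * (tau_lo + tau_hi), 0.0, None) ** (1.0 / (alpha - 1.0))
    return p / p.sum()          # tiny renormalisation for numerical safety

print(entmax(np.array([2.0, 1.0, 0.5, -1.0])))  # low-scoring words get exactly zero weight
```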
Handling Syntactic Divergence in Low-resource Machine Translation | Despite impressive empirical successes of neural machine translation (NMT) on
standard benchmarks, limited parallel data impedes the application of NMT
models to many language pairs. Data augmentation methods such as
back-translation make it possible to use monolingual data to help alleviate
these issues, but back-translation itself fails in extreme low-resource
scenarios, especially for syntactically divergent languages. In this paper, we
propose a simple yet effective solution, whereby target-language sentences are
re-ordered to match the order of the source and used as an additional source of
training-time supervision. Experiments with simulated low-resource
Japanese-to-English, and real low-resource Uyghur-to-English scenarios find
significant improvements over other semi-supervised alternatives.
| 2019 | Computation and Language |
Sequential Learning of Convolutional Features for Effective Text
Classification | Text classification has been one of the major problems in natural language
processing. With the advent of deep learning, convolutional neural network
(CNN) has been a popular solution to this task. However, CNNs, which were first
proposed for images, face many crucial challenges in the context of text
processing, namely in their elementary blocks: convolution filters and max
pooling. These challenges have largely been overlooked by most existing CNN
models proposed for text classification. In this paper, we present an
experimental study on the fundamental blocks of CNNs in text categorization.
Based on this critique, we propose Sequential Convolutional Attentive Recurrent
Network (SCARN). The proposed SCARN model utilizes both the advantages of
recurrent and convolutional structures efficiently in comparison to previously
proposed recurrent convolutional models. We test our model on different text
classification datasets across tasks like sentiment analysis and question
classification. Extensive experiments establish that SCARN outperforms other
recurrent convolutional architectures with significantly fewer parameters.
Furthermore, SCARN achieves better performance than various equally large
deep CNN and LSTM architectures.
| 2019 | Computation and Language |
Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic
Text Exchange | In this paper, we present a novel method for measurably adjusting the
semantics of text while preserving its sentiment and fluency, a task we call
semantic text exchange. This is useful for text data augmentation and the
semantic correction of text generated by chatbots and virtual assistants. We
introduce a pipeline called SMERTI that combines entity replacement, similarity
masking, and text infilling. We measure our pipeline's success by its Semantic
Text Exchange Score (STES): the ability to preserve the original text's
sentiment and fluency while adjusting semantic content. We propose to use
masking (replacement) rate threshold as an adjustable parameter to control the
amount of semantic change in the text. Our experiments demonstrate that SMERTI
can outperform baseline models on Yelp reviews, Amazon reviews, and news
headlines.
| 2020 | Computation and Language |
Automatically Inferring Gender Associations from Language | In this paper, we pose the question: do people talk about women and men in
different ways? We introduce two datasets and a novel integration of approaches
for automatically inferring gender associations from language, discovering
coherent word clusters, and labeling the clusters for the semantic concepts
they represent. The datasets allow us to compare how people write about women
and men in two different settings - one set draws from celebrity news and the
other from student reviews of computer science professors. We demonstrate that
there are large-scale differences in the ways that people talk about women and
men and that these differences vary across domains. Human evaluations show that
our methods significantly outperform strong baselines.
| 2019 | Computation and Language |
(Male, Bachelor) and (Female, Ph.D) have different connotations:
Parallelly Annotated Stylistic Language Dataset with Multiple Personas | Stylistic variation in text needs to be studied with respect to different
aspects, including the writer's personal traits, interpersonal relations,
rhetoric, and more. Despite recent attempts at computational modeling of this
variation, the lack of parallel corpora of stylistic language makes it
difficult to systematically control the stylistic change as well as to
evaluate such models. We release PASTEL, a parallel and annotated stylistic
language dataset, that contains ~41K parallel sentences (8.3K parallel
stories) annotated across different personas. Each persona combines several
style dimensions: gender, age, country, political view, education, ethnicity,
and time-of-writing. The dataset is collected from human annotators with solid
control of input denotation: not only preserving the original meaning across
parallel text, but also encouraging stylistic diversity from annotators. We
test the dataset on two interesting applications of style language, where
PASTEL helps design appropriate experiments and evaluation. First, in
predicting a target style (e.g., male or female for gender) given a text, the
multiple style dimensions of PASTEL allow other external style variables to be
controlled (or fixed), which makes for a more accurate experimental design.
Second, a simple supervised model with our parallel text outperforms the
unsupervised models using nonparallel text in style transfer. Our dataset is
publicly available.
| 2019 | Computation and Language |
Small and Practical BERT Models for Sequence Labeling | We propose a practical scheme to train a single multilingual sequence
labeling model that yields state of the art results and is small and fast
enough to run on a single CPU. Starting from a public multilingual BERT
checkpoint, our final model is 6x smaller and 27x faster, and has higher
accuracy than a state-of-the-art multilingual baseline. We show that our model
especially outperforms the baseline on low-resource languages, and works on codemixed input
text without being explicitly trained on codemixed examples. We showcase the
effectiveness of our method by reporting on part-of-speech tagging and
morphological prediction on 70 treebanks and 48 languages.
| 2019 | Computation and Language |
Knowledge Enhanced Attention for Robust Natural Language Inference | Neural network models have been very successful at achieving high accuracy on
natural language inference (NLI) tasks. However, as demonstrated in recent
literature, when tested on some simple adversarial examples, most of the models
suffer a significant drop in performance. This raises the concern about the
robustness of NLI models. In this paper, we propose to make NLI models robust
by incorporating external knowledge into the attention mechanism using a simple
transformation. We apply the new attention to two popular types of NLI models:
one is a Transformer encoder, and the other is a decomposable model, and show
that our method can significantly improve their robustness. Moreover, when
combined with BERT pretraining, our method achieves the human-level performance
on the adversarial SNLI data set.
| 2019 | Computation and Language |
Generating Personalized Recipes from Historical User Preferences | Existing approaches to recipe generation are unable to create recipes for
users with culinary preferences but incomplete knowledge of ingredients in
specific dishes. We propose a new task of personalized recipe generation to
help these users: expanding a name and incomplete ingredient details into
complete natural-text instructions aligned with the user's historical
preferences. We attend on technique- and recipe-level representations of a
user's previously consumed recipes, fusing these 'user-aware' representations
in an attention fusion layer to control recipe text generation. Experiments on
a new dataset of 180K recipes and 700K interactions show our model's ability to
generate plausible and personalized recipes compared to non-personalized
baselines.
| 2019 | Computation and Language |
Behavior Gated Language Models | Most current language modeling techniques only exploit co-occurrence,
semantic and syntactic information from the sequence of words. However, a range
of information such as the state of the speaker and dynamics of the interaction
might be useful. In this work we derive motivation from psycholinguistics and
propose the addition of behavioral information into the context of language
modeling. We propose the augmentation of language models with an additional
module which analyzes the behavioral state of the current context. This
behavioral information is used to gate the outputs of the language model before
the final word prediction output. We show that the addition of behavioral
context in language models achieves lower perplexities on behavior-rich
datasets. We also confirm the validity of the proposed models on a variety of
model architectures and improve on previous state-of-the-art models with
the generic-domain Penn Treebank Corpus.
| 2019 | Computation and Language |
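A minimal sketch of the gating idea, assuming a behavior vector that produces a sigmoid gate over the language model's hidden state before the output projection (module and dimension names are hypothetical, not the paper's architecture):

```python
import torch
import torch.nn as nn

class BehaviorGate(nn.Module):
    """Gate LM hidden states with a behavior embedding before word prediction."""
    def __init__(self, hidden_dim, behavior_dim, vocab_size):
        super().__init__()
        self.gate = nn.Linear(behavior_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, lm_hidden, behavior):
        # lm_hidden: (batch, seq, hidden); behavior: (batch, behavior_dim)
        g = torch.sigmoid(self.gate(behavior)).unsqueeze(1)   # (batch, 1, hidden)
        return self.out(lm_hidden * g)                        # gated next-word logits

logits = BehaviorGate(256, 8, 10000)(torch.randn(2, 5, 256), torch.randn(2, 8))
```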
Giving BERT a Calculator: Finding Operations and Arguments with Reading
Comprehension | Reading comprehension models have been successfully applied to extractive
text answers, but it is unclear how best to generalize these models to
abstractive numerical answers. We enable a BERT-based reading comprehension
model to perform lightweight numerical reasoning. We augment the model with a
predefined set of executable 'programs' which encompass simple arithmetic as
well as extraction. Rather than having to learn to manipulate numbers directly,
the model can pick a program and execute it. On the recent Discrete Reasoning
Over Passages (DROP) dataset, designed to challenge reading comprehension
models, we show a 33% absolute improvement by adding shallow programs. The
model can learn to predict new operations when appropriate in a math word
problem setting (Roy and Roth, 2015) with very few training examples.
| 2019 | Computation and Language |
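The abstract does not enumerate the "programs"; the sketch below only illustrates the general pattern of a predefined executable program set applied to numbers extracted from the passage (the operation names are hypothetical):

```python
# Hypothetical program set: the model only has to choose an operation and its
# arguments; the arithmetic itself is executed outside the network.
PROGRAMS = {
    "span": lambda args: args[0],                 # return an extracted span as-is
    "add": lambda args: args[0] + args[1],
    "diff": lambda args: abs(args[0] - args[1]),
    "count": lambda args: len(args),
}

def execute(op, args):
    """Run the predicted program on the predicted arguments."""
    return PROGRAMS[op](args)

# e.g. "How many more yards was the first field goal than the second?"
print(execute("diff", [42, 27]))  # -> 15
```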
Quantity doesn't buy quality syntax with neural language models | Recurrent neural networks can learn to predict upcoming words remarkably well
on average; in syntactically complex contexts, however, they often assign
unexpectedly high probabilities to ungrammatical words. We investigate to what
extent these shortcomings can be mitigated by increasing the size of the
network and the corpus on which it is trained. We find that gains from
increasing network size are minimal beyond a certain point. Likewise, expanding
the training corpus yields diminishing returns; we estimate that the training
corpus would need to be unrealistically large for the models to match human
performance. A comparison to GPT and BERT, Transformer-based models trained on
billions of words, reveals that these models perform even more poorly than our
LSTMs in some constructions. Our results make the case for more data-efficient
architectures.
| 2019 | Computation and Language |
Learning with Noisy Labels for Sentence-level Sentiment Classification | Deep neural networks (DNNs) can fit (or even over-fit) the training data very
well. If a DNN model is trained using data with noisy labels and tested on data
with clean labels, the model may perform poorly. This paper studies the problem
of learning with noisy labels for sentence-level sentiment classification. We
propose a novel DNN model called NetAb (as shorthand for convolutional neural
Networks with Ab-networks) to handle noisy labels during training. NetAb
consists of two convolutional neural networks, one with a noise transition
layer for dealing with the input noisy labels and the other for predicting
'clean' labels. We train the two networks using their respective loss functions
in a mutual reinforcement manner. Experimental results demonstrate the
effectiveness of the proposed model.
| 2019 | Computation and Language |
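A noise transition layer is commonly realized as a learned label-transition matrix applied to the clean-label distribution; the sketch below shows that generic construction (an illustration of the technique, not NetAb's exact architecture):

```python
import torch
import torch.nn as nn

class NoiseTransition(nn.Module):
    """Map P(clean label | x) to P(noisy label | x) via a learned transition matrix."""
    def __init__(self, n_classes):
        super().__init__()
        self.logits = nn.Parameter(torch.eye(n_classes) * 3.0)  # start near identity

    def forward(self, clean_probs):
        T = torch.softmax(self.logits, dim=1)     # rows: P(noisy = j | clean = i)
        return clean_probs @ T                    # distribution over noisy labels

# train the noisy head against observed (noisy) labels; keep the clean head for inference
clean = torch.softmax(torch.randn(4, 2), dim=1)
noisy = NoiseTransition(2)(clean)
```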
Evaluating Pronominal Anaphora in Machine Translation: An Evaluation
Measure and a Test Suite | The ongoing neural revolution in machine translation has made it easier to
model larger contexts beyond the sentence-level, which can potentially help
resolve some discourse-level ambiguities such as pronominal anaphora, thus
enabling better translations. Unfortunately, even when the resulting
improvements are seen as substantial by humans, they remain virtually unnoticed
by traditional automatic evaluation measures like BLEU, as only a few words end
up being affected. Thus, specialized evaluation measures are needed. With this
aim in mind, we contribute an extensive, targeted dataset that can be used as a
test suite for pronoun translation, covering multiple source languages and
different pronoun errors drawn from real system translations, for English. We
further propose an evaluation measure to differentiate good and bad pronoun
translations. We also conduct a user study to report correlations with human
judgments.
| 2019 | Computation and Language |
Modeling Graph Structure in Transformer for Better AMR-to-Text
Generation | Recent studies on AMR-to-text generation often formalize the task as a
sequence-to-sequence (seq2seq) learning problem by converting an Abstract
Meaning Representation (AMR) graph into a word sequence. Graph structures are
further modeled into the seq2seq framework in order to utilize the structural
information in the AMR graphs. However, previous approaches only consider the
relations between directly connected concepts while ignoring the rich structure
in AMR graphs. In this paper we eliminate such a strong limitation and propose
a novel structure-aware self-attention approach to better modeling the
relations between indirectly connected concepts in the state-of-the-art seq2seq
model, i.e., the Transformer. In particular, a few different methods are
explored to learn structural representations between two concepts. Experimental
results on English AMR benchmark datasets show that our approach significantly
outperforms the state of the art with 29.66 and 31.82 BLEU scores on LDC2015E86
and LDC2017T10, respectively. To the best of our knowledge, these are the best
results achieved so far by supervised models on the benchmarks.
| 2019 | Computation and Language |
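The abstract does not give the scoring function; the sketch below follows the common relation-aware self-attention pattern, adding an embedding of the graph path between two concepts to the key before the dot product. It is an assumed illustration of "structure-aware self-attention", not necessarily the paper's exact formulation.

```python
import numpy as np

def relation_aware_attention(Q, K, V, R):
    """Q, K, V: (n, d) node queries/keys/values; R: (n, n, d) embeddings of the
    graph path from node j to node i, so indirectly connected concepts interact."""
    d = Q.shape[-1]
    scores = np.einsum('id,ijd->ij', Q, K[None, :, :] + R) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

n, d = 5, 8
rng = np.random.default_rng(0)
out = relation_aware_attention(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                               rng.normal(size=(n, d)), rng.normal(size=(n, n, d)))
```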
EntEval: A Holistic Evaluation Benchmark for Entity Representations | Rich entity representations are useful for a wide class of problems involving
entities. Despite their importance, there is no standardized benchmark that
evaluates the overall quality of entity representations. In this work, we
propose EntEval: a test suite of diverse tasks that require nontrivial
understanding of entities including entity typing, entity similarity, entity
relation prediction, and entity disambiguation. In addition, we develop
training techniques for learning better entity representations by using natural
hyperlink annotations in Wikipedia. We identify effective objectives for
incorporating the contextual information in hyperlinks into state-of-the-art
pretrained language models and show that they improve strong baselines on
multiple EntEval tasks.
| 2019 | Computation and Language |
Question-type Driven Question Generation | Question generation is a challenging task which aims to ask a question based
on an answer and relevant context. Existing works suffer from a mismatch
between question type and answer, i.e., generating a question of type $how$
while the answer is a personal name. We propose to automatically
predict the question type based on the input answer and context. Then, the
question type is fused into a seq2seq model to guide the question generation,
so as to deal with the mismatch problem. We achieve significant improvement
on the accuracy of question type prediction and finally obtain state-of-the-art
results for question generation on both SQuAD and MARCO datasets.
| 2019 | Computation and Language |
Deep Reinforcement Learning with Distributional Semantic Rewards for
Abstractive Summarization | Deep reinforcement learning (RL) has been a commonly-used strategy for the
abstractive summarization task to address both the exposure bias and
non-differentiable task issues. However, the conventional reward Rouge-L simply
looks for exact n-grams matches between candidates and annotated references,
which inevitably makes the generated sentences repetitive and incoherent. In
this paper, instead of Rouge-L, we explore the practicability of utilizing the
distributional semantics to measure the matching degrees. With distributional
semantics, sentence-level evaluation can be obtained, and semantically-correct
phrases can also be generated without being limited to the surface form of the
reference sentences. Human judgments on Gigaword and CNN/Daily Mail datasets
show that our proposed distributional semantics reward (DSR) has distinct
superiority in capturing the lexical and compositional diversity of natural
language.
| 2,019 | Computation and Language |
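The abstract above replaces the ROUGE-L reward with a reward based on distributional semantics. The sketch below shows only the general idea under simplifying assumptions: score a candidate summary by cosine similarity to the reference in an embedding space rather than by exact n-gram overlap. The averaging scheme and toy embeddings are hypothetical; the authors' DSR may be computed differently.

```python
import numpy as np

def sentence_vector(tokens, emb):
    """Average word embeddings; out-of-vocabulary words are skipped (toy choice)."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(next(iter(emb.values())).shape)

def dsr_reward(candidate, reference, emb):
    """Distributional-semantics reward: cosine similarity in embedding space,
    so paraphrases can score well even without exact n-gram matches."""
    c, r = sentence_vector(candidate, emb), sentence_vector(reference, emb)
    denom = np.linalg.norm(c) * np.linalg.norm(r)
    return float(c @ r / denom) if denom > 0 else 0.0

# Toy embeddings; in practice pretrained embeddings place synonyms close together.
rng = np.random.default_rng(1)
vocab = ["officials", "authorities", "announce", "declare", "ban"]
emb = {w: rng.normal(size=16) for w in vocab}
print(dsr_reward(["officials", "announce", "ban"],
                 ["authorities", "declare", "ban"], emb))
```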
Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence
Representations | Prior work on pretrained sentence embeddings and benchmarks focus on the
capabilities of stand-alone sentences. We propose DiscoEval, a test suite of
tasks to evaluate whether sentence representations include broader context
information. We also propose a variety of training objectives that makes use of
natural annotations from Wikipedia to build sentence encoders capable of
modeling discourse. We benchmark sentence encoders pretrained with our proposed
training objectives, as well as other popular pretrained sentence encoders on
DiscoEval and other sentence evaluation tasks. Empirically, we show that these
training objectives help to encode different aspects of information in document
structures. Moreover, BERT and ELMo demonstrate strong performances over
DiscoEval with individual hidden layers showing different characteristics.
| 2,019 | Computation and Language |
Adversarial Learning with Contextual Embeddings for Zero-resource
Cross-lingual Classification and NER | Contextual word embeddings (e.g., GPT, BERT, ELMo) have demonstrated
state-of-the-art performance on various NLP tasks. Recent work with the
multilingual version of BERT has shown that the model performs very well in
zero-shot and zero-resource cross-lingual settings, where only labeled English
data is used to finetune the model. We improve upon multilingual BERT's
zero-resource cross-lingual performance via adversarial learning. We report the
magnitude of the improvement on the multilingual MLDoc text classification and
CoNLL 2002/2003 named entity recognition tasks. Furthermore, we show that
language-adversarial training encourages BERT to align the embeddings of
English documents and their translations, which may be the cause of the
observed performance gains.
| 2,020 | Computation and Language |
NCLS: Neural Cross-Lingual Summarization | Cross-lingual summarization (CLS) is the task of producing a summary in one
particular language for a source document in a different language. Existing
methods simply divide this task into two steps, summarization and translation,
leading to error propagation. To address this, we present the first end-to-end
CLS framework, which we refer to as Neural Cross-Lingual Summarization (NCLS).
Moreover, we propose to further
improve NCLS by incorporating two related tasks, monolingual summarization and
machine translation, into the training process of CLS under multi-task
learning. Due to the lack of supervised CLS data, we propose a round-trip
translation strategy to acquire two high-quality large-scale CLS datasets based
on existing monolingual summarization datasets. Experimental results have shown
that our NCLS achieves remarkable improvement over traditional pipeline methods
on both English-to-Chinese and Chinese-to-English CLS human-corrected test
sets. In addition, NCLS with multi-task learning can further significantly
improve the quality of generated summaries. We make our dataset and code
publicly available here: http://www.nlpr.ia.ac.cn/cip/dataset.htm.
| 2,019 | Computation and Language |
Improving Back-Translation with Uncertainty-based Confidence Estimation | While back-translation is simple and effective in exploiting abundant
monolingual corpora to improve low-resource neural machine translation (NMT),
the synthetic bilingual corpora generated by NMT models trained on limited
authentic bilingual data are inevitably noisy. In this work, we propose to
quantify the confidence of NMT model predictions based on model uncertainty.
With word- and sentence-level confidence measures based on uncertainty, it is
possible for back-translation to better cope with noise in synthetic bilingual
corpora. Experiments on Chinese-English and English-German translation tasks
show that uncertainty-based confidence estimation significantly improves the
performance of back-translation.
| 2,019 | Computation and Language |
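The abstract above quantifies the confidence of NMT predictions via model uncertainty and uses it to cope with noise in synthetic back-translated data. Below is a hedged NumPy sketch assuming Monte Carlo dropout: word confidence is the mean token probability across stochastic passes, a sentence score aggregates those, and synthetic pairs are weighted accordingly; the exact measures used in the paper may differ.

```python
import numpy as np

def word_confidence(token_probs):
    """token_probs: (K, T) probabilities of the emitted target tokens under K
    stochastic (Monte Carlo dropout) forward passes of the NMT model.
    Confidence = mean probability; uncertainty = variance across passes."""
    return token_probs.mean(axis=0), token_probs.var(axis=0)

def sentence_confidence(token_probs):
    """A simple sentence-level score: geometric mean of word confidences."""
    mean, _ = word_confidence(token_probs)
    return float(np.exp(np.log(np.clip(mean, 1e-9, 1.0)).mean()))

def weighted_nll(nll_per_sentence, confidences):
    """Down-weight noisy synthetic pairs when training on back-translations."""
    w = np.asarray(confidences)
    return float((w * np.asarray(nll_per_sentence)).sum() / w.sum())

# Toy example: 2 synthetic sentences, 5 MC-dropout passes, 4 target tokens each.
rng = np.random.default_rng(2)
clean = rng.uniform(0.7, 0.95, size=(5, 4))   # consistently high probability
noisy = rng.uniform(0.1, 0.90, size=(5, 4))   # high variance across passes
confs = [sentence_confidence(clean), sentence_confidence(noisy)]
print(confs, weighted_nll([1.2, 3.4], confs))
```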
Incorporating Domain Knowledge into Medical NLI using Knowledge Graphs | Recently, biomedical versions of embeddings obtained from language models
such as BioELMo have shown state-of-the-art results for the textual inference task
in the medical domain. In this paper, we explore how to incorporate structured
domain knowledge, available in the form of a knowledge graph (UMLS), for the
Medical NLI task. Specifically, we experiment with fusing embeddings obtained
from the knowledge graph with state-of-the-art approaches for the NLI task (the ESIM
model). We also experiment with fusing the domain-specific sentiment
information for the task. Experiments conducted on MedNLI dataset clearly show
that this strategy improves the baseline BioELMo architecture for the Medical
NLI task.
| 2,021 | Computation and Language |
Benchmarking Zero-shot Text Classification: Datasets, Evaluation and
Entailment Approach | Zero-shot text classification (0Shot-TC) is a challenging NLU problem to
which little attention has been paid by the research community. 0Shot-TC aims
to associate an appropriate label with a piece of text, irrespective of the
text domain and the aspect (e.g., topic, emotion, event, etc.) described by the
label. Only a few articles study 0Shot-TC, and all focus solely on topical
categorization, which, we argue, is just the tip of the iceberg in 0Shot-TC.
In addition, experimental setups in the literature are inconsistent and do not
permit uniform comparison, which obscures progress.
This work benchmarks the 0Shot-TC problem by providing unified datasets,
standardized evaluations, and state-of-the-art baselines. Our contributions
include: i) The datasets we provide facilitate studying 0Shot-TC relative to
conceptually different and diverse aspects: the ``topic'' aspect includes
``sports'' and ``politics'' as labels; the ``emotion'' aspect includes ``joy''
and ``anger''; the ``situation'' aspect includes ``medical assistance'' and
``water shortage''. ii) We extend the existing evaluation setup
(label-partially-unseen) -- given a dataset, train on some labels, test on all
labels -- to include a more challenging yet realistic evaluation
label-fully-unseen 0Shot-TC (Chang et al., 2008), aiming at classifying text
snippets without seeing task specific training data at all. iii) We unify the
0Shot-TC of diverse aspects within a textual entailment formulation and study
it this way.
Code & Data: https://github.com/yinwenpeng/BenchmarkingZeroShot
| 2,019 | Computation and Language |
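The abstract above casts zero-shot text classification as textual entailment. A minimal sketch of that formulation follows: each candidate label is verbalized into a hypothesis and scored against the text by an NLI model. The `toy_entail_prob` scorer and the hypothesis template are stand-ins for a real entailment model, not part of the released code.

```python
from typing import Callable, Dict, List

def zero_shot_classify(text: str,
                       labels: List[str],
                       entail_prob: Callable[[str, str], float],
                       template: str = "This text is about {}.") -> Dict[str, float]:
    """Cast 0Shot-TC as entailment: the input text is the premise, each label
    is verbalized into a hypothesis, and the most-entailed hypothesis wins."""
    return {label: entail_prob(text, template.format(label)) for label in labels}

# Toy stand-in for a real NLI model (hypothetical scorer, not a real API):
def toy_entail_prob(premise: str, hypothesis: str) -> float:
    overlap = len(set(premise.lower().split()) & set(hypothesis.lower().split()))
    return overlap / (len(hypothesis.split()) or 1)

scores = zero_shot_classify("The government pledged new medical assistance "
                            "for flood victims.",
                            ["medical assistance", "water shortage", "sports"],
                            toy_entail_prob)
print(max(scores, key=scores.get), scores)
```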
Open Named Entity Modeling from Embedding Distribution | In this paper, we report our discovery about the distribution of named
entities in a general word embedding space, which supports an open,
multilingual definition of named entities rather than the previous closed and
constrained definition based on a named entity dictionary, which is usually
built through human labor and relies on scheduled updates. Our initial
visualization of monolingual word embeddings indicates that named entities
tend to cluster together regardless of entity type and language, which enables
us to model all named entities using a specific geometric structure inside the
embedding space, namely the named entity hypersphere. For monolingual cases,
the proposed named entity model gives an open description of diverse named
entity types and different languages. For cross-lingual cases, mapping the
proposed named entity model provides a novel way to build named entity
datasets for resource-poor languages. Finally, the proposed named entity model
can serve as a handy signal for enhancing state-of-the-art named entity
recognition systems in general.
| 2,021 | Computation and Language |
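The abstract above models named entities as a hypersphere in embedding space. The sketch below shows one simple way such a model could be fit and applied (mean centre, quantile-based radius, distance test); the concrete fitting procedure here is an assumption for illustration, not the paper's exact method.

```python
import numpy as np

def fit_hypersphere(entity_vecs, coverage=0.95):
    """Model named entities as a hypersphere in embedding space:
    centre = mean of known entity vectors, radius = the distance covering
    `coverage` of them (a simple quantile-based choice)."""
    center = entity_vecs.mean(axis=0)
    dists = np.linalg.norm(entity_vecs - center, axis=1)
    return center, np.quantile(dists, coverage)

def is_named_entity(vec, center, radius):
    return np.linalg.norm(vec - center) <= radius

# Toy data: entity vectors clustered in one region, an ordinary word elsewhere.
rng = np.random.default_rng(3)
entities = rng.normal(loc=2.0, scale=0.3, size=(200, 50))
center, radius = fit_hypersphere(entities)
print(is_named_entity(rng.normal(loc=2.0, scale=0.3, size=50), center, radius))  # likely True
print(is_named_entity(rng.normal(loc=0.0, scale=0.3, size=50), center, radius))  # False
```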
Joint Detection and Location of English Puns | A pun is a form of wordplay for an intended humorous or rhetorical effect,
where a word suggests two or more meanings by exploiting polysemy (homographic
pun) or phonological similarity to another word (heterographic pun). This paper
presents an approach that addresses pun detection and pun location jointly from
a sequence labeling perspective. We employ a new tagging scheme such that the
model is capable of performing such a joint task, where useful structural
information can be properly captured. We show that our proposed model is
effective in handling both homographic and heterographic puns. Empirical
results on the benchmark datasets demonstrate that our approach can achieve new
state-of-the-art results.
| 2,019 | Computation and Language |
Quantity Tagger: A Latent-Variable Sequence Labeling Approach to Solving
Addition-Subtraction Word Problems | An arithmetic word problem typically includes a textual description
containing several constant quantities. The key to solving the problem is to
reveal the underlying mathematical relations (such as addition and subtraction)
among quantities, and then generate equations to find solutions. This work
presents a novel approach, Quantity Tagger, that automatically discovers such
hidden relations by tagging each quantity with a sign corresponding to one type
of mathematical operation. For each quantity, we assume there exists a latent,
variable-sized quantity span surrounding the quantity token in the text, which
conveys information useful for determining its sign. Empirical results show
that our method achieves 5 and 8 points of accuracy gains on two datasets
respectively, compared to prior approaches.
| 2,019 | Computation and Language |
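The abstract above tags each quantity with a sign and derives an equation from the tags. The following toy snippet shows only the final step, turning predicted sign tags into an answer via a signed sum; the latent-span sequence labeller that predicts the tags is not shown.

```python
SIGN = {"+": 1, "-": -1, "0": 0}  # "0" marks an irrelevant quantity

def solve_from_tags(quantities, tags):
    """Once each quantity carries a sign tag, the unknown is the signed sum
    (a simplification of the equation generation described in the abstract)."""
    assert len(quantities) == len(tags)
    return sum(SIGN[t] * q for q, t in zip(quantities, tags))

# "Tom had 5 apples, bought 3 more, and gave away 2. How many are left?"
print(solve_from_tags([5, 3, 2], ["+", "+", "-"]))  # 6
```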
Explicit Cross-lingual Pre-training for Unsupervised Machine Translation | Pre-training has proven to be effective in unsupervised machine translation
due to its ability to model deep context information in cross-lingual
scenarios. However, the cross-lingual information obtained from shared BPE
spaces is inexplicit and limited. In this paper, we propose a novel
cross-lingual pre-training method for unsupervised machine translation by
incorporating explicit cross-lingual training signals. Specifically, we first
calculate cross-lingual n-gram embeddings and infer an n-gram translation table
from them. With those n-gram translation pairs, we propose a new pre-training
model called Cross-lingual Masked Language Model (CMLM), which randomly chooses
source n-grams in the input text stream and predicts their translation
candidates at each time step. Experiments show that our method can incorporate
beneficial cross-lingual information into pre-trained models. Taking
pre-trained CMLM models as the encoder and decoder, we significantly improve
the performance of unsupervised machine translation.
| 2,019 | Computation and Language |
Deep Ordinal Regression for Pledge Specificity Prediction | Many pledges are made in the course of an election campaign, forming
important corpora for political analysis of campaign strategy and governmental
accountability. At present, there are no publicly available annotated datasets
of pledges, and most political analyses rely on manual analysis. In this paper
we collate a novel dataset of manifestos from eleven Australian federal
election cycles, with over 12,000 sentences annotated with specificity (e.g.,
rhetorical vs.\ detailed pledge) on a fine-grained scale. We propose deep
ordinal regression approaches for specificity prediction, under both supervised
and semi-supervised settings, and provide empirical results demonstrating the
effectiveness of the proposed techniques over several baseline approaches. We
analyze the utility of pledge specificity modeling across a spectrum of policy
issues in performing ideology prediction, and further provide qualitative
analysis in terms of capturing party-specific issue salience across election
cycles.
| 2,019 | Computation and Language |
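The abstract above uses deep ordinal regression for specificity prediction on a fine-grained scale. Below is a small sketch of one standard ordinal formulation (cumulative binary targets with threshold decoding); it may not match the authors' exact architecture or loss.

```python
import numpy as np

def ordinal_targets(y, num_levels):
    """Encode an ordinal label y in {0..K-1} as K-1 cumulative binary targets:
    t_k = 1 iff y > k. A network with K-1 sigmoid outputs can then be trained
    with binary cross-entropy on these targets."""
    return (y[:, None] > np.arange(num_levels - 1)[None, :]).astype(float)

def ordinal_predict(probs):
    """Decode: predicted level = number of thresholds with p(y > k) > 0.5."""
    return (probs > 0.5).sum(axis=1)

y = np.array([0, 2, 4])                 # specificity levels on a 5-point scale
print(ordinal_targets(y, 5))
print(ordinal_predict(np.array([[0.9, 0.8, 0.3, 0.1],
                                [0.9, 0.2, 0.1, 0.0]])))  # -> [2, 1]
```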
Improving Multi-Head Attention with Capsule Networks | Multi-head attention advances neural machine translation by computing
multiple versions of attention in different subspaces, but the neglect of
semantic overlap between subspaces increases the difficulty of translation and
consequently hinders further improvement of translation performance. In this
paper, we employ capsule networks to comb the information from the multiple
attention heads so that similar information can be clustered and unique
information can be preserved. To this end, we adopt two routing mechanisms,
Dynamic Routing and EM Routing, to perform the clustering and separation. We
conducted experiments on Chinese-to-English and English-to-German translation
tasks and obtained consistent improvements over the strong Transformer
baseline.
| 2,019 | Computation and Language |
NEZHA: Neural Contextualized Representation for Chinese Language
Understanding | Pre-trained language models have achieved great success in various
natural language understanding (NLU) tasks due to their capacity to capture the
deep contextualized information in text by pre-training on large-scale corpora.
In this technical report, we present our practice of pre-training language
models named NEZHA (NEural contextualiZed representation for CHinese lAnguage
understanding) on Chinese corpora and finetuning for the Chinese NLU tasks. The
current version of NEZHA is based on BERT with a collection of proven
improvements, which include Functional Relative Positional Encoding as an
effective positional encoding scheme, Whole Word Masking strategy, Mixed
Precision Training and the LAMB Optimizer in training the models. The
experimental results show that NEZHA achieves the state-of-the-art performances
when finetuned on several representative Chinese tasks, including named entity
recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment
classification (ChnSenti) and natural language inference (XNLI).
| 2,021 | Computation and Language |
QAInfomax: Learning Robust Question Answering System by Mutual
Information Maximization | Standard accuracy metrics indicate that modern reading comprehension systems
have achieved strong performance in many question answering datasets. However,
the extent these systems truly understand language remains unknown, and
existing systems are not good at distinguishing distractor sentences, which
look related but do not actually answer the question. To address this problem,
we propose QAInfomax as a regularizer in reading comprehension systems by
maximizing mutual information among passages, a question, and its answer.
QAInfomax helps regularize the model to not simply learn the superficial
correlation for answering questions. The experiments show that our proposed
QAInfomax achieves the state-of-the-art performance on the benchmark
Adversarial-SQuAD dataset.
| 2,019 | Computation and Language |
Connecting the Dots: Document-level Neural Relation Extraction with
Edge-oriented Graphs | Document-level relation extraction is a complex human process that requires
logical inference to extract relationships between named entities in text.
Existing approaches use graph-based neural models with words as nodes and edges
as relations between them, to encode relations across sentences. These models
are node-based, i.e., they form pair representations based solely on the two
target node representations. However, entity relations can be better expressed
through unique edge representations formed as paths between nodes. We thus
propose an edge-oriented graph neural model for document-level relation
extraction. The model utilises different types of nodes and edges to create a
document-level graph. An inference mechanism on the graph edges enables the
model to learn intra- and inter-sentence relations using multi-instance learning
internally. Experiments on two document-level biomedical datasets for
chemical-disease and gene-disease associations show the usefulness of the
proposed edge-oriented approach.
| 2,019 | Computation and Language |
Humor Detection: A Transformer Gets the Last Laugh | Much previous work has been done in attempting to identify humor in text. In
this paper we extend that capability by proposing a new task: assessing whether
or not a joke is humorous. We present a novel way of approaching this problem
by building a model that learns to identify humorous jokes based on ratings
gleaned from Reddit pages, consisting of almost 16,000 labeled instances. Using
these ratings to determine the level of humor, we then employ a Transformer
architecture for its advantages in learning from sentence context. We
demonstrate the effectiveness of this approach and show results that are
comparable to human performance. We further demonstrate our model's increased
capabilities on humor identification problems, such as the previously created
datasets for short jokes and puns. These experiments show that this method
outperforms all previous work done on these tasks, with an F-measure of 93.1%
for the Puns dataset and 98.6% on the Short Jokes dataset.
| 2,019 | Computation and Language |
Cosmos QA: Machine Reading Comprehension with Contextual Commonsense
Reasoning | Understanding narratives requires reading between the lines, which in turn,
requires interpreting the likely causes and effects of events, even when they
are not mentioned explicitly. In this paper, we introduce Cosmos QA, a
large-scale dataset of 35,600 problems that require commonsense-based reading
comprehension, formulated as multiple-choice questions. In stark contrast to
most existing reading comprehension datasets where the questions focus on
factual and literal understanding of the context paragraph, our dataset focuses
on reading between the lines over a diverse collection of people's everyday
narratives, asking such questions as "what might be the possible reason of
...?", or "what would have happened if ..." that require reasoning beyond the
exact text spans in the context. To establish baseline performances on Cosmos
QA, we experiment with several state-of-the-art neural architectures for
reading comprehension, and also propose a new architecture that improves over
the competitive baselines. Experimental results demonstrate a significant gap
between machine (68.4%) and human performance (94%), pointing to avenues for
future research on commonsense machine comprehension. The dataset, code, and
leaderboard are publicly available at https://wilburone.github.io/cosmos.
| 2,019 | Computation and Language |
Generating Classical Chinese Poems from Vernacular Chinese | Classical Chinese poetry is a jewel in the treasure house of Chinese culture.
Previous poem generation models only allow users to employ keywords to
influence the meaning of generated poems, leaving control of the generation to
the model. In this paper, we propose a novel task of generating classical
Chinese poems from vernacular Chinese, which allows users to have more control
over the semantics of generated poems. We adapt the approach of unsupervised
machine translation (UMT) to our task. We use segmentation-based padding and
reinforcement learning to address under-translation and over-translation,
respectively. Experiments show that our approach significantly improves
perplexity and BLEU compared with typical UMT models. Furthermore, we explore
guidelines on how to write the input vernacular to generate better poems.
Human evaluation shows that our approach can generate high-quality poems
comparable to amateur poems.
| 2,019 | Computation and Language |
Phrase Grounding by Soft-Label Chain Conditional Random Field | The phrase grounding task aims to ground each entity mention in a given
caption of an image to a corresponding region in that image. Although there are
clear dependencies between how different mentions of the same caption should be
grounded, previous structured prediction methods that aim to capture such
dependencies need to resort to approximate inference or non-differentiable
losses. In this paper, we formulate phrase grounding as a sequence labeling
task where we treat candidate regions as potential labels, and use neural chain
Conditional Random Fields (CRFs) to model dependencies among regions for
adjacent mentions. In contrast to standard sequence labeling tasks, the phrase
grounding task is defined such that there may be multiple correct candidate
regions. To address this multiplicity of gold labels, we define so-called
Soft-Label Chain CRFs, and present an algorithm that enables convenient
end-to-end training. Our method establishes a new state-of-the-art on phrase
grounding on the Flickr30k Entities dataset. Analysis shows that our model
benefits both from the entity dependencies captured by the CRF and from the
soft-label training regime. Our code is available at
\url{github.com/liujch1998/SoftLabelCCRF}
| 2,019 | Computation and Language |
Higher-order Comparisons of Sentence Encoder Representations | Representational Similarity Analysis (RSA) is a technique developed by
neuroscientists for comparing activity patterns of different measurement
modalities (e.g., fMRI, electrophysiology, behavior). As a framework, RSA has
several advantages over existing approaches to interpretation of language
encoders based on probing or diagnostic classification: namely, it does not
require large training samples, is not prone to overfitting, and it enables a
more transparent comparison between the representational geometries of
different models and modalities. We demonstrate the utility of RSA by
establishing a previously unknown correspondence between widely-employed
pretrained language encoders and human processing difficulty via eye-tracking
data, showcasing its potential in the interpretability toolbox for neural
models.
| 2,019 | Computation and Language |
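The abstract above applies Representational Similarity Analysis to sentence encoders. The sketch below illustrates the core computation: build a representational dissimilarity matrix (RDM) per encoder over the same stimuli, then compare the two RDMs with a rank correlation. The distance metric and toy data are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(representations, metric="correlation"):
    """Representational dissimilarity matrix: pairwise distances between the
    representations of the same stimuli, returned in condensed form."""
    return pdist(representations, metric=metric)

def rsa(rep_a, rep_b):
    """Second-order comparison: Spearman correlation between the two RDMs.
    rep_a, rep_b: (n_stimuli, dim_a) and (n_stimuli, dim_b); dims may differ."""
    rho, _ = spearmanr(rdm(rep_a), rdm(rep_b))
    return rho

# Toy data: 20 "sentences" encoded by two encoders of different dimensionality.
rng = np.random.default_rng(4)
base = rng.normal(size=(20, 10))
encoder_a = base @ rng.normal(size=(10, 64))                  # shares geometry with base
encoder_b = base @ rng.normal(size=(10, 32)) + 0.1 * rng.normal(size=(20, 32))
print(rsa(encoder_a, encoder_b))                  # typically clearly positive
print(rsa(encoder_a, rng.normal(size=(20, 32))))  # typically near zero
```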
Syntax-aware Multilingual Semantic Role Labeling | Recently, semantic role labeling (SRL) has achieved a series of successes
with ever higher performance, which can be mainly attributed to syntactic
integration and enhanced word representations. However, most of these efforts
focus on English, while SRL for languages other than English has received
relatively little attention and remains underdeveloped. This paper therefore
aims to fill the gap in multilingual SRL, with a special focus on the impact
of syntax and contextualized word representations. Unlike existing work, we
propose a novel method guided by syntactic rules to prune arguments, which
enables us to integrate syntax into a multilingual SRL model simply and
effectively. We present a unified SRL model designed for multiple languages
together with the proposed uniform syntax enhancement. Our model achieves new
state-of-the-art results on the CoNLL-2009 benchmarks for all seven languages.
In addition, we discuss the role of syntax across different languages and
verify the effectiveness of deep enhanced representations for multilingual
SRL.
| 2,019 | Computation and Language |
A Novel Aspect-Guided Deep Transition Model for Aspect Based Sentiment
Analysis | Aspect based sentiment analysis (ABSA) aims to identify the sentiment
polarity towards the given aspect in a sentence, while previous models
typically exploit an aspect-independent (weakly associative) encoder for
sentence representation generation. In this paper, we propose a novel
Aspect-Guided Deep Transition model, named AGDT, which utilizes the given
aspect to guide the sentence encoding from scratch with the specially-designed
deep transition architecture. Furthermore, an aspect-oriented objective is
designed to enforce AGDT to reconstruct the given aspect with the generated
sentence representation. In doing so, our AGDT can accurately generate
aspect-specific sentence representation, and thus conduct more accurate
sentiment predictions. Experimental results on multiple SemEval datasets
demonstrate the effectiveness of our proposed approach, which significantly
outperforms the best reported results with the same setting.
| 2,019 | Computation and Language |
Repurposing Decoder-Transformer Language Models for Abstractive
Summarization | Neural network models have shown excellent fluency and performance when
applied to abstractive summarization. Many approaches to neural abstractive
summarization involve the introduction of significant inductive bias,
exemplified through the use of components such as pointer-generator
architectures, coverage, and partially extractive procedures, designed to mimic
the process by which humans summarize documents. We show that it is possible to
attain competitive performance by instead directly viewing summarization as a
language modeling problem and effectively leveraging transfer learning. We
introduce a simple procedure built upon decoder-transformers to obtain highly
competitive ROUGE scores for summarization performance using a language
modeling loss alone, with no beam-search or other decoding-time optimization,
and instead relying on efficient nucleus sampling and greedy decoding.
| 2,019 | Computation and Language |
Towards Understanding Neural Machine Translation with Word Importance | Although neural machine translation (NMT) has advanced the state-of-the-art
on various language pairs, the interpretability of NMT remains unsatisfactory.
In this work, we propose to address this gap by focusing on understanding the
input-output behavior of NMT models. Specifically, we measure the word
importance by attributing the NMT output to every input word through a
gradient-based method. We validate the approach on a couple of perturbation
operations, language pairs, and model architectures, demonstrating its
superiority on identifying input words with higher influence on translation
performance. Encouragingly, the calculated importance can serve as indicators
of input words that are under-translated by NMT models. Furthermore, our
analysis reveals that words of certain syntactic categories have higher
importance while the categories vary across language pairs, which can inspire
better design principles of NMT architectures for multi-lingual translation.
| 2,019 | Computation and Language |
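The abstract above attributes NMT output to input words with a gradient-based method. The PyTorch sketch below shows a generic gradient saliency over input embeddings for a toy scalar output; the toy model and the norm-based aggregation are assumptions for illustration, not the paper's exact attribution scheme.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, dim = 100, 16
embedding = nn.Embedding(vocab_size, dim)
model = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

def word_importance(token_ids):
    """Attribute a scalar model output to each input word via gradients:
    importance_i = || d output / d embedding_i || (gradient-based saliency)."""
    embs = embedding(token_ids).detach().requires_grad_(True)
    # Toy "translation score": mean-pooled sentence representation -> scalar.
    output = model(embs.mean(dim=0)).sum()
    output.backward()
    return embs.grad.norm(dim=-1)          # one importance score per word

token_ids = torch.tensor([3, 17, 42, 7])
print(word_importance(token_ids))
```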
QuASE: Question-Answer Driven Sentence Encoding | Question-answering (QA) data often encodes essential information in many
facets. This paper studies a natural question: Can we get supervision from QA
data for other tasks (typically, non-QA ones)? For example, {\em can we use
QAMR (Michael et al., 2017) to improve named entity recognition?} We suggest
that simply further pre-training BERT is often not the best option, and propose
the {\em question-answer driven sentence encoding (QuASE)} framework. QuASE
learns representations from QA data, using BERT or other state-of-the-art
contextual language models. In particular, we observe the need to distinguish
between two types of sentence encodings, depending on whether the target task
is a single- or multi-sentence input; in both cases, the resulting encoding is
shown to be an easy-to-use plugin for many downstream tasks. This work may
point out an alternative way to supervise NLP tasks.
| 2,020 | Computation and Language |
Monitoring stance towards vaccination in Twitter messages | We developed a system to automatically classify stance towards vaccination in
Twitter messages, with a focus on messages with a negative stance. Such a
system makes it possible to monitor the ongoing stream of messages on social
media, offering actionable insights into public hesitance with respect to
vaccination. For Dutch Twitter messages that mention vaccination-related key
terms, we annotated their stance and feeling in relation to vaccination
(provided that they referred to this topic). Subsequently, we used these coded
data to train and test different machine learning set-ups. With the aim of
best identifying messages with a negative stance towards vaccination, we compared
set-ups at an increasing dataset size and decreasing reliability, at an
increasing number of categories to distinguish, and with different
classification algorithms. We found that Support Vector Machines trained on a
combination of strictly and laxly labeled data with a more fine-grained
labeling yielded the best result, at an F1-score of 0.36 and an Area under the
ROC curve of 0.66, outperforming a rule-based sentiment analysis baseline that
yielded an F1-score of 0.25 and an Area under the ROC curve of 0.57. The
outcomes of our study indicate that stance prediction by a computerized system
only is a challenging task. Our analysis of the data and behavior of our system
suggests that an approach is needed in which the use of a larger training
dataset is combined with a setting in which a human-in-the-loop provides the
system with feedback on its predictions.
| 2,019 | Computation and Language |
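The abstract above trains Support Vector Machines on labeled tweets to detect stance towards vaccination. A minimal scikit-learn sketch of such a set-up follows; the example texts and TF-IDF feature choices are invented for illustration and are not drawn from the study's corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny illustrative examples (hypothetical tweets, not from the actual corpus).
texts = [
    "Vaccines cause more harm than good, I will never vaccinate my kids",
    "Just got my flu shot, grateful for modern medicine",
    "Not sure about the new vaccine, does anyone have reliable information?",
    "Vaccination saves lives, please protect your children",
]
labels = ["negative", "positive", "neutral", "positive"]

# A word n-gram TF-IDF representation fed into a linear SVM is a common set-up
# for this kind of stance task (the study's exact features differ).
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LinearSVC(),
)
clf.fit(texts, labels)
print(clf.predict(["I refuse to vaccinate, it is dangerous"]))
```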
A Unified Neural Coherence Model | Recently, neural approaches to coherence modeling have achieved
state-of-the-art results in several evaluation tasks. However, we show that
most of these models often fail on harder tasks with more realistic application
scenarios. In particular, the existing models underperform on tasks that
require the model to be sensitive to local contexts such as candidate ranking
in conversational dialogue and in machine translation. In this paper, we
propose a unified coherence model that incorporates sentence grammar,
inter-sentence coherence relations, and global coherence patterns into a common
neural framework. With extensive experiments on local and global discrimination
tasks, we demonstrate that our proposed model outperforms existing models by a
good margin, and establish a new state-of-the-art.
| 2,019 | Computation and Language |
Enhancing AMR-to-Text Generation with Dual Graph Representations | Generating text from graph-based data, such as Abstract Meaning
Representation (AMR), is a challenging task due to the inherent difficulty in
how to properly encode the structure of a graph with labeled edges. To address
this difficulty, we propose a novel graph-to-sequence model that encodes
different but complementary perspectives of the structural information
contained in the AMR graph. The model learns parallel top-down and bottom-up
representations of nodes capturing contrasting views of the graph. We also
investigate the use of different node message passing strategies, employing
different state-of-the-art graph encoders to compute node representations based
on incoming and outgoing perspectives. In our experiments, we demonstrate that
the dual graph representation leads to improvements in AMR-to-text generation,
achieving state-of-the-art results on two AMR datasets.
| 2,019 | Computation and Language |
Cross-Lingual Machine Reading Comprehension | Though the community has made great progress on Machine Reading Comprehension
(MRC) task, most of the previous works are solving English-based MRC problems,
and there are few efforts on other languages mainly due to the lack of
large-scale training data. In this paper, we propose Cross-Lingual Machine
Reading Comprehension (CLMRC) task for the languages other than English.
First, we present several back-translation approaches for the CLMRC task,
which are straightforward to adopt. However, accurately aligning the answer
into another language is difficult and can introduce additional noise. In this
context, we propose a novel model called Dual BERT, which takes advantage of
the large-scale training data provided by a rich-resource language (such as
English), learns the semantic relations between the passage and question in a
bilingual context, and then utilizes the learned knowledge to improve reading
comprehension performance in the low-resource language. We conduct experiments on
two Chinese machine reading comprehension datasets CMRC 2018 and DRCD. The
results show consistent and significant improvements over various
state-of-the-art systems by a large margin, which demonstrates the potential of
the CLMRC task. Resources available: https://github.com/ymcui/Cross-Lingual-MRC
| 2,019 | Computation and Language |
One Model to Learn Both: Zero Pronoun Prediction and Translation | Zero pronouns (ZPs) are frequently omitted in pro-drop languages, but should
be recalled in non-pro-drop languages. This discourse phenomenon poses a
significant challenge for machine translation (MT) when translating texts from
pro-drop to non-pro-drop languages. In this paper, we propose a unified and
discourse-aware ZP translation approach for neural MT models. Specifically, we
jointly learn to predict and translate ZPs in an end-to-end manner, allowing
both components to interact with each other. In addition, we employ
hierarchical neural networks to exploit discourse-level context, which is
beneficial for ZP prediction and thus translation. Experimental results on both
Chinese-English and Japanese-English data show that our approach significantly
and accumulatively improves both translation performance and ZP prediction
accuracy over not only the baseline but also previous works using external ZP
prediction models. Extensive analyses confirm that the performance improvement
comes from the alleviation of different kinds of errors especially caused by
subjective ZPs.
| 2,019 | Computation and Language |
Self-Attention with Structural Position Representations | Although self-attention networks (SANs) have advanced the state-of-the-art on
various NLP tasks, one criticism of SANs concerns their ability to encode the
positions of input words (Shaw et al., 2018). In this work, we propose to augment SANs
with structural position representations to model the latent structure of the
input sentence, which is complementary to the standard sequential positional
representations. Specifically, we use dependency tree to represent the
grammatical structure of a sentence, and propose two strategies to encode the
positional relationships among words in the dependency tree. Experimental
results on NIST Chinese-to-English and WMT14 English-to-German translation
tasks show that the proposed approach consistently boosts performance over both
the absolute and relative sequential position representations.
| 2,019 | Computation and Language |
A Dataset of General-Purpose Rebuttal | In Natural Language Understanding, the task of response generation is usually
focused on responses to short texts, such as tweets or a turn in a dialog. Here
we present a novel task of producing a critical response to a long
argumentative text, and suggest a method based on general rebuttal arguments to
address it. We do this in the context of the recently-suggested task of
listening comprehension over argumentative content: given a speech on some
specified topic, and a list of relevant arguments, the goal is to determine
which of the arguments appear in the speech. The general rebuttals we describe
here (written in English) overcome the need for topic-specific arguments to be
provided, by proving to be applicable for a large set of topics. This allows
creating responses beyond the scope of topics for which specific arguments are
available. All data collected during this work is freely available for
research.
| 2,019 | Computation and Language |
You Shall Know a User by the Company It Keeps: Dynamic Representations
for Social Media Users in NLP | Information about individuals can help to better understand what they say,
particularly in social media where texts are short. Current approaches to
modelling social media users pay attention to their social connections, but
exploit this information in a static way, treating all connections uniformly.
This ignores the fact, well known in sociolinguistics, that an individual may
be part of several communities which are not equally relevant in all
communicative situations. We present a model based on Graph Attention Networks
that captures this observation. It dynamically explores the social graph of a
user, computes a user representation given the most relevant connections for a
target task, and combines it with linguistic information to make a prediction.
We apply our model to three different tasks, evaluate it against alternative
models, and analyse the results extensively, showing that it significantly
outperforms other current methods.
| 2,019 | Computation and Language |
What You See is What You Get: Visual Pronoun Coreference Resolution in
Dialogues | Grounding a pronoun to a visual object it refers to requires complex
reasoning from various information sources, especially in conversational
scenarios. For example, when people in a conversation talk about something all
speakers can see, they often directly use pronouns (e.g., it) to refer to it
without previous introduction. This fact brings a huge challenge for modern
natural language understanding systems, particularly conventional context-based
pronoun coreference models. To tackle this challenge, in this paper, we
formally define the task of visual-aware pronoun coreference resolution (PCR)
and introduce VisPro, a large-scale dialogue PCR dataset, to investigate
whether and how the visual information can help resolve pronouns in dialogues.
We then propose a novel visual-aware PCR model, VisCoref, for this task and
conduct comprehensive experiments and case studies on our dataset. Results
demonstrate the importance of the visual information in this PCR case and show
the effectiveness of the proposed model.
| 2,019 | Computation and Language |
Global Entity Disambiguation with BERT | We propose a global entity disambiguation (ED) model based on BERT. To
capture global contextual information for ED, our model treats not only words
but also entities as input tokens, and solves the task by sequentially
resolving mentions to their referent entities and using resolved entities as
inputs at each step. We train the model using a large entity-annotated corpus
obtained from Wikipedia. We achieve new state-of-the-art results on five
standard ED datasets: AIDA-CoNLL, MSNBC, AQUAINT, ACE2004, and WNED-WIKI. The
source code and model checkpoint are available at
https://github.com/studio-ousia/luke.
| 2,022 | Computation and Language |
An Improved Neural Baseline for Temporal Relation Extraction | Determining temporal relations (e.g., before or after) between events has
been a challenging natural language understanding task, partly due to the
difficulty of generating large amounts of high-quality training data.
Consequently, neural approaches have not been widely used for it, or have
shown only moderate improvements. This paper proposes a new neural system that achieves
about 10% absolute improvement in accuracy over the previous best system (25%
error reduction) on two benchmark datasets. The proposed system is trained on
the state-of-the-art MATRES dataset and applies contextualized word embeddings,
a Siamese encoder of a temporal common sense knowledge base, and global
inference via integer linear programming (ILP). We suggest that the new
approach could serve as a strong baseline for future research in this area.
| 2,019 | Computation and Language |
Evaluating the Cross-Lingual Effectiveness of Massively Multilingual
Neural Machine Translation | The recently proposed massively multilingual neural machine translation (NMT)
system has been shown to be capable of translating over 100 languages to and
from English within a single model. Its improved translation performance on low
resource languages hints at potential cross-lingual transfer capability for
downstream tasks. In this paper, we evaluate the cross-lingual effectiveness of
representations from the encoder of a massively multilingual NMT model on 5
downstream classification and sequence labeling tasks covering a diverse set of
over 50 languages. We compare against a strong baseline, multilingual BERT
(mBERT), in different cross-lingual transfer learning scenarios and show gains
in zero-shot transfer in 4 out of these 5 tasks.
| 2,020 | Computation and Language |
A Discriminative Neural Model for Cross-Lingual Word Alignment | We introduce a novel discriminative word alignment model, which we integrate
into a Transformer-based machine translation model. In experiments based on a
small number of labeled examples (~1.7K-5K sentences) we evaluate its
performance intrinsically on both English-Chinese and English-Arabic alignment,
where we achieve major improvements over unsupervised baselines (11-27 F1). We
evaluate the model extrinsically on data projection for Chinese NER, showing
that our alignments lead to higher performance when used to project NER tags
from English to Chinese. Finally, we perform an ablation analysis and an
annotation experiment that jointly support the utility and feasibility of
future manual alignment elicitation.
| 2,019 | Computation and Language |
An Empirical Study of Incorporating Pseudo Data into Grammatical Error
Correction | The incorporation of pseudo data in the training of grammatical error
correction models has been one of the main factors in improving the performance
of such models. However, consensus is lacking on experimental configurations,
namely, choosing how the pseudo data should be generated or used. In this
study, these choices are investigated through extensive experiments, and
state-of-the-art performance is achieved on the CoNLL-2014 test set
($F_{0.5}=65.0$) and the official test set of the BEA-2019 shared task
($F_{0.5}=70.2$) without making any modifications to the model architecture.
| 2,019 | Computation and Language |
Rotate King to get Queen: Word Relationships as Orthogonal
Transformations in Embedding Space | A notable property of word embeddings is that word relationships can exist as
linear substructures in the embedding space. For example, $\textit{gender}$
corresponds to $\vec{\textit{woman}} - \vec{\textit{man}}$ and
$\vec{\textit{queen}} - \vec{\textit{king}}$. This, in turn, allows word
analogies to be solved arithmetically: $\vec{\textit{king}} -
\vec{\textit{man}} + \vec{\textit{woman}} \approx \vec{\textit{queen}}$. This
property is notable because it suggests that models trained on word embeddings
can easily learn such relationships as geometric translations. However, there
is no evidence that models $\textit{exclusively}$ represent relationships in
this manner. We document an alternative way in which downstream models might
learn these relationships: orthogonal and linear transformations. For example,
given a translation vector for $\textit{gender}$, we can find an orthogonal
matrix $R$, representing a rotation and reflection, such that
$R(\vec{\textit{king}}) \approx \vec{\textit{queen}}$ and
$R(\vec{\textit{man}}) \approx \vec{\textit{woman}}$. Analogical reasoning
using orthogonal transformations is almost as accurate as using vector
arithmetic; using linear transformations is more accurate than both. Our
findings suggest that these transformations can be as good a representation of
word relationships as translation vectors.
| 2,019 | Computation and Language |
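The abstract above represents word relationships as orthogonal transformations. The NumPy sketch below solves the orthogonal Procrustes problem with an SVD, which is the standard way to find such a rotation/reflection; the toy data simply checks that a known rotation is recovered, rather than using real word embeddings.

```python
import numpy as np

def orthogonal_map(X, Y):
    """Solve the orthogonal Procrustes problem: find orthogonal R minimizing
    ||X R - Y||_F, via the SVD of X^T Y. Rows of X are source vectors (e.g.,
    king, man) and rows of Y their relation counterparts (queen, woman)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy check: if Y really is a rotation of X, the learned R recovers it exactly.
rng = np.random.default_rng(5)
d = 8
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))    # a random "true" rotation
X = rng.normal(size=(20, d))                    # 20 word-pair sources
Y = X @ Q
R = orthogonal_map(X, Y)
print(np.allclose(X @ R, Y))                    # True: R reproduces the relation
```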
Commonsense Knowledge Mining from Pretrained Models | Inferring commonsense knowledge is a key challenge in natural language
processing, but due to the sparsity of training data, previous work has shown
that supervised methods for commonsense knowledge mining underperform when
evaluated on novel data. In this work, we develop a method for generating
commonsense knowledge using a large, pre-trained bidirectional language model.
By transforming relational triples into masked sentences, we can use this model
to rank a triple's validity by the estimated pointwise mutual information
between the two entities. Since we do not update the weights of the
bidirectional model, our approach is not biased by the coverage of any one
commonsense knowledge base. Though this method performs worse on a test set
than models explicitly trained on a corresponding training set, it outperforms
these methods when mining commonsense knowledge from new sources, suggesting
that unsupervised techniques may generalize better than current supervised
approaches.
| 2,019 | Computation and Language |
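The abstract above ranks relational triples by pointwise mutual information estimated with a masked language model. The sketch below shows only the scaffolding (verbalize the triple, combine two log-probabilities into a PMI-style score); the templates and both scoring functions are hypothetical placeholders for real masked-LM scoring, not the paper's exact estimator.

```python
import math
from typing import Callable

def triple_to_sentence(head: str, relation: str, tail: str) -> str:
    """Verbalize a relational triple with a hand-written template
    (in practice, one template per relation type)."""
    templates = {
        "UsedFor": "{h} is used for {t}.",
        "AtLocation": "You are likely to find {h} in {t}.",
    }
    template = templates.get(relation, "{h} " + relation + " {t}.")
    return template.format(h=head, t=tail)

def triple_pmi_score(head: str, relation: str, tail: str,
                     log_p_tail_given_head: Callable[[str], float],
                     log_p_tail: Callable[[str], float]) -> float:
    """PMI-style score for a triple: log p(tail | head, relation) minus
    log p(tail | relation), each estimated by masking different parts of the
    verbalized sentence with a bidirectional LM (scorers are placeholders)."""
    sentence = triple_to_sentence(head, relation, tail)
    return log_p_tail_given_head(sentence) - log_p_tail(sentence)

# Hypothetical numbers standing in for real masked-LM probabilities.
score = triple_pmi_score("knife", "UsedFor", "cutting",
                         log_p_tail_given_head=lambda s: math.log(0.20),
                         log_p_tail=lambda s: math.log(0.02))
print(round(score, 3))  # positive -> the triple looks plausible
```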
How Contextual are Contextualized Word Representations? Comparing the
Geometry of BERT, ELMo, and GPT-2 Embeddings | Replacing static word embeddings with contextualized word representations has
yielded significant improvements on many NLP tasks. However, just how
contextual are the contextualized representations produced by models such as
ELMo and BERT? Are there infinitely many context-specific representations for
each word, or are words essentially assigned one of a finite number of
word-sense representations? For one, we find that the contextualized
representations of all words are not isotropic in any layer of the
contextualizing model. While representations of the same word in different
contexts still have a greater cosine similarity than those of two different
words, this self-similarity is much lower in upper layers. This suggests that
upper layers of contextualizing models produce more context-specific
representations, much like how upper layers of LSTMs produce more task-specific
representations. In all layers of ELMo, BERT, and GPT-2, on average, less than
5% of the variance in a word's contextualized representations can be explained
by a static embedding for that word, providing some justification for the
success of contextualized representations.
| 2,019 | Computation and Language |
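The abstract above measures how contextual contextualized representations are. Below is a simplified NumPy sketch of two such measures: the average pairwise cosine self-similarity of one word's occurrence vectors, and the share of their variance captured by a single direction (first principal component). The exact definitions in the paper, including anisotropy corrections, are more involved.

```python
import numpy as np

def self_similarity(context_vecs):
    """Average pairwise cosine similarity between a word's contextualized
    representations across contexts; lower values indicate a layer produces
    more context-specific vectors."""
    X = context_vecs / np.linalg.norm(context_vecs, axis=1, keepdims=True)
    sims = X @ X.T
    n = len(X)
    return (sims.sum() - n) / (n * (n - 1))    # mean of off-diagonal entries

def variance_explained_by_static(context_vecs):
    """Share of variance in a word's contextualized vectors explained by a
    single static direction: first principal component over total variance
    (a simplified version of the paper's measure)."""
    X = context_vecs - context_vecs.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    return float(s[0] ** 2 / (s ** 2).sum())

rng = np.random.default_rng(6)
occurrences = rng.normal(size=(50, 768)) + rng.normal(size=768)  # one word, 50 contexts
print(self_similarity(occurrences), variance_explained_by_static(occurrences))
```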
Beyond The Wall Street Journal: Anchoring and Comparing Discourse
Signals across Genres | Recent research on discourse relations has found that they are cued not only
by discourse markers (DMs) but also by other textual signals and that signaling
information is indicative of genres. While several corpora exist with discourse
relation signaling information such as the Penn Discourse Treebank (PDTB,
Prasad et al. 2008) and the Rhetorical Structure Theory Signalling Corpus
(RST-SC, Das and Taboada 2018), they both annotate the Wall Street Journal
(WSJ) section of the Penn Treebank (PTB, Marcus et al. 1993), which is limited
to the news domain. Thus, this paper adapts the signal identification and
anchoring scheme (Liu and Zeldes, 2019) to three more genres, examines the
distribution of signaling devices across relations and genres, and provides a
taxonomy of indicative signals found in this dataset.
| 2,019 | Computation and Language |
All Roads Lead to UD: Converting Stanford and Penn Parses to English
Universal Dependencies with Multilayer Annotations | We describe and evaluate different approaches to the conversion of gold
standard corpus data from Stanford Typed Dependencies (SD) and Penn-style
constituent trees to the latest English Universal Dependencies representation
(UD 2.2). Our results indicate that pure SD to UD conversion is highly accurate
across multiple genres, resulting in around 1.5% errors, but can be improved
further to fewer than 0.5% errors given access to annotations beyond the pure
syntax tree, such as entity types and coreference resolution, which are
necessary for correct generation of several UD relations. We show that
constituent-based conversion using CoreNLP (with automatic NER) performs
substantially worse in all genres, including when using gold constituent trees,
primarily due to underspecification of phrasal grammatical functions.
| 2,019 | Computation and Language |
Improving Context-aware Neural Machine Translation with Target-side
Context | In recent years, several studies on neural machine translation (NMT) have
attempted to use document-level context by using a multi-encoder and two
attention mechanisms to read the current and previous sentences to incorporate
the context of the previous sentences. These studies concluded that the
target-side context is less useful than the source-side context. However, we
hypothesize that the reason the target-side context appears less useful lies in
the architecture used to model these contexts.
Therefore, in this study, we investigate how the target-side context can
improve context-aware neural machine translation. We propose a weight sharing
method wherein NMT saves decoder states and calculates an attention vector
using the saved states when translating a current sentence. Our experiments
show that the target-side context is also useful if we plug it into NMT as the
decoder state when translating a previous sentence.
| 2,019 | Computation and Language |
Classification Betters Regression in Query-based Multi-document
Summarisation Techniques for Question Answering: Macquarie University at
BioASQ7b | Task B Phase B of the 2019 BioASQ challenge focuses on biomedical question
answering. Macquarie University's participation applies query-based
multi-document extractive summarisation techniques to generate a multi-sentence
answer given the question and the set of relevant snippets. In past
participation we explored the use of regression approaches using deep learning
architectures and a simple policy gradient architecture. For the 2019 challenge
we experiment with the use of classification approaches with and without
reinforcement learning. In addition, we conduct a correlation analysis between
various ROUGE metrics and the BioASQ human evaluation scores.
| 2,020 | Computation and Language |