Titles | Abstracts | Years | Categories |
---|---|---|---|
Relation Embedding with Dihedral Group in Knowledge Graph | Link prediction is critical for the application of incomplete knowledge graph
(KG) in the downstream tasks. As a family of effective approaches for link
predictions, embedding methods try to learn low-rank representations for both
entities and relations such that the bilinear form defined therein is a
well-behaved scoring function. Despite their successful performance,
existing bilinear forms overlook the modeling of relation compositions,
resulting in a lack of interpretability for reasoning on KG. To fill this
gap, we propose a new model called DihEdral, named after the dihedral symmetry
group. This new model learns knowledge graph embeddings that can capture
relation compositions by nature. Furthermore, our approach models the relation
embeddings parametrized by discrete values, thereby decreasing the solution space
drastically. Our experiments show that DihEdral is able to capture all desired
properties such as (skew-) symmetry, inversion and (non-) Abelian composition,
and outperforms existing bilinear-form-based approaches, and is comparable to or
better than deep learning models such as ConvE.
| 2019 | Computation and Language |
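The DihEdral entry above hinges on representing each relation as a block-diagonal matrix of dihedral-group elements inside a bilinear score. The following is a minimal numpy sketch of that idea with 2x2 rotation/reflection blocks; the group order K, block layout, and toy embeddings are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dihedral_element(K, m, reflection=False):
    """2x2 rotation (or reflection) matrix from the dihedral group D_K."""
    theta = 2.0 * np.pi * m / K
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [s, -c]]) if reflection else np.array([[c, -s], [s, c]])

def relation_matrix(K, elements):
    """Block-diagonal relation matrix built from (rotation index, is_reflection) pairs."""
    R = np.zeros((2 * len(elements), 2 * len(elements)))
    for i, (m, refl) in enumerate(elements):
        R[2 * i:2 * i + 2, 2 * i:2 * i + 2] = dihedral_element(K, m, refl)
    return R

def score(h, R, t):
    """Bilinear scoring function h^T R t over entity embeddings h and t."""
    return float(h @ R @ t)

rng = np.random.default_rng(0)
h, t = rng.normal(size=4), rng.normal(size=4)          # toy 4-dimensional entity embeddings
R_sym = relation_matrix(4, [(0, True), (2, True)])     # reflection blocks -> symmetric relation
R_rot = relation_matrix(4, [(1, False), (3, False)])   # rotation blocks -> generally asymmetric
print(round(score(h, R_sym, t), 6), round(score(t, R_sym, h), 6))  # equal: symmetry
print(round(score(h, R_rot, t), 6), round(score(t, R_rot, h), 6))  # differ: direction matters
```

Because group elements compose by matrix multiplication, composing two relations amounts to multiplying their block matrices, which is the property the abstract refers to as capturing relation compositions by nature.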
Gender-preserving Debiasing for Pre-trained Word Embeddings | Word embeddings learnt from massive text collections have demonstrated
significant levels of discriminative biases such as gender, racial or ethnic
biases, which in turn bias the down-stream NLP applications that use those word
embeddings. Taking gender-bias as a working example, we propose a debiasing
method that preserves non-discriminative gender-related information, while
removing stereotypical discriminative gender biases from pre-trained word
embeddings. Specifically, we consider four types of information:
\emph{feminine}, \emph{masculine}, \emph{gender-neutral} and
\emph{stereotypical}, which represent the relationship between gender vs. bias,
and propose a debiasing method that (a) preserves the gender-related
information in feminine and masculine words, (b) preserves the neutrality in
gender-neutral words, and (c) removes the biases from stereotypical words.
Experimental results on several previously proposed benchmark datasets show
that our proposed method can debias pre-trained word embeddings better than
existing SoTA methods proposed for debiasing word embeddings while preserving
gender-related but non-discriminative information.
| 2019 | Computation and Language |
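As a rough illustration of the four-way split described above, the sketch below removes a gender-direction component from stereotypical words only, leaving feminine, masculine, and gender-neutral words untouched. This is a simplified projection-style operation, not the paper's actual method; the word list, vectors, and gender direction are hypothetical.

```python
import numpy as np

def selective_debias(vectors, gender_direction, stereotypical):
    """Zero out the gender-direction component for stereotypical words only."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return {w: (v - (v @ g) * g if w in stereotypical else v) for w, v in vectors.items()}

rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=50) for w in ["he", "she", "mother", "nurse", "engineer"]}
g = vecs["he"] - vecs["she"]                       # crude stand-in for a gender direction
out = selective_debias(vecs, g, stereotypical={"nurse", "engineer"})
g_unit = g / np.linalg.norm(g)
print(round(float(out["nurse"] @ g_unit), 6))      # ~0: bias component removed
print(round(float(out["mother"] @ g_unit), 6))     # unchanged: gender information preserved
```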
Multi-task Pairwise Neural Ranking for Hashtag Segmentation | Hashtags are often employed on social media and beyond to add metadata to a
textual utterance with the goal of increasing discoverability, aiding search,
or providing additional semantics. However, the semantic content of hashtags is
not straightforward to infer as these represent ad-hoc conventions which
frequently include multiple words joined together and can include abbreviations
and unorthodox spellings. We build a dataset of 12,594 hashtags split into
individual segments and propose a set of approaches for hashtag segmentation by
framing it as a pairwise ranking problem between candidate segmentations. Our
novel neural approaches demonstrate 24.6% error reduction in hashtag
segmentation accuracy compared to the current state-of-the-art method. Finally,
we demonstrate that a deeper understanding of hashtag semantics obtained
through segmentation is useful for downstream applications such as sentiment
analysis, for which we achieved a 2.6% increase in average recall on the
SemEval 2017 sentiment analysis dataset.
| 2019 | Computation and Language |
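To make the pairwise-ranking framing above concrete, here is a toy sketch: enumerate candidate segmentations of a hashtag, score each one, and pick the candidate that wins the most pairwise comparisons. The unigram-count scorer and the counts themselves are hypothetical stand-ins for the neural rankers described in the paper.

```python
from itertools import combinations
import math

# hypothetical unigram counts standing in for a learned scorer
COUNTS = {"we": 1000, "love": 800, "nlp": 50}
TOTAL = sum(COUNTS.values())

def candidate_segmentations(tag, max_words=4):
    """Enumerate ways to split a hashtag into at most max_words contiguous segments."""
    n, results = len(tag), []
    for k in range(0, min(max_words, n)):
        for cuts in combinations(range(1, n), k):
            bounds = (0,) + cuts + (n,)
            results.append([tag[a:b] for a, b in zip(bounds, bounds[1:])])
    return results

def score(segmentation):
    """Add-one smoothed unigram log-probability of a segmentation."""
    return sum(math.log((COUNTS.get(w, 0) + 1) / (TOTAL + 1)) for w in segmentation)

def rank_pairwise(candidates):
    """Return the candidate that wins the most pairwise comparisons."""
    wins = [sum(score(a) > score(b) for b in candidates) for a in candidates]
    return candidates[wins.index(max(wins))]

print(rank_pairwise(candidate_segmentations("welovenlp")))  # ['we', 'love', 'nlp']
```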
Gendered Ambiguous Pronouns Shared Task: Boosting Model Confidence by
Evidence Pooling | This paper presents a strong set of results for resolving gendered ambiguous
pronouns on the Gendered Ambiguous Pronouns shared task. The model presented
here draws upon the strengths of state-of-the-art language and coreference
resolution models, and introduces a novel evidence-based deep learning
architecture. Injecting evidence from the coreference models complements the
base architecture, and analysis shows that the model is not hindered by their
weaknesses, specifically gender bias. The modularity and simplicity of the
architecture make it very easy to extend for further improvement and applicable
to other NLP problems. Evaluation on GAP test data results in a
state-of-the-art performance at 92.5% F1 (gender bias of 0.97), edging closer
to the human performance of 96.6%. The end-to-end solution presented here
placed 1st in the Kaggle competition, winning by a significant lead. The code
is available at https://github.com/sattree/gap.
| 2019 | Computation and Language |
From Words to Sentences: A Progressive Learning Approach for
Zero-resource Machine Translation with Visual Pivots | The neural machine translation model has suffered from the lack of
large-scale parallel corpora. In contrast, we humans can learn multi-lingual
translations even without parallel texts by referring our languages to the
external world. To mimic such human learning behavior, we employ images as
pivots to enable zero-resource translation learning. However, a picture tells a
thousand words, which makes multi-lingual sentences pivoted by the same image
noisy as mutual translations and thus hinders the translation model learning.
In this work, we propose a progressive learning approach for image-pivoted
zero-resource machine translation. Since words are less diverse when grounded
in the image, we first learn word-level translation with image pivots, and then
progress to learn the sentence-level translation by utilizing the learned word
translation to suppress noises in image-pivoted multi-lingual sentences.
Experimental results on two widely used image-pivot translation datasets,
IAPR-TC12 and Multi30k, show that the proposed approach significantly
outperforms other state-of-the-art methods.
| 2019 | Computation and Language |
Phase-based Minimalist Parsing and complexity in non-local dependencies | A cognitively plausible parsing algorithm should perform like the human
parser in critical contexts. Here I propose an adaptation of Earley's parsing
algorithm, suitable for Phase-based Minimalist Grammars (PMG, Chesi 2012), that
is able to predict complexity effects in performance. Focusing on self-paced
reading experiments on object cleft sentences (Warren & Gibson 2005), I
associate a complexity metric with parsing, based on the cued features to be retrieved
at the verb segment (Feature Retrieval & Encoding Cost, FREC). FREC is
crucially based on the usage of memory predicted by the discussed parsing
algorithm, and it correctly fits the observed reading times.
| 2017 | Computation and Language |
TACAM: Topic And Context Aware Argument Mining | In this work we address the problem of argument search. The purpose of
argument search is the distillation of pro and contra arguments for requested
topics from large text corpora. In previous works, the usual approach is to use
a standard search engine to extract text parts which are relevant to the given
topic and subsequently use an argument recognition algorithm to select
arguments from them. The main challenge in the argument recognition task, which
is also known as argument mining, is that often sentences containing arguments
are structurally similar to purely informative sentences without any stance
about the topic. In fact, they only differ semantically. Most approaches use
topic or search term information only for the first search step and therefore
assume that arguments can be classified independently of a topic. We argue that
topic information is crucial for argument mining, since the topic defines the
semantic context of an argument. More precisely, we propose different models for the
classification of arguments, which take information about a topic of an
argument into account. Moreover, to enrich the context of a topic and to let
models understand the context of the potential argument better, we integrate
information from different external sources such as Knowledge Graphs or
pre-trained NLP models. Our evaluation shows that considering topic
information, especially in connection with external information, provides a
significant performance boost for the argument mining task.
| 2019 | Computation and Language |
A computational linguistic study of personal recovery in bipolar
disorder | Mental health research can benefit increasingly from computational
linguistics methods, given the abundant availability of language data on the
internet and advances in computational tools. This interdisciplinary project
will collect and analyse social media data of individuals diagnosed with
bipolar disorder with regard to their recovery experiences. Personal recovery -
living a satisfying and contributing life alongside symptoms of severe mental
health issues - so far has only been investigated qualitatively with structured
interviews and quantitatively with standardised questionnaires with mainly
English-speaking participants in Western countries. Complementary to this
evidence, computational linguistic methods allow us to analyse first-person
accounts shared online in large quantities, representing unstructured settings
and a more heterogeneous, multilingual population, to draw a more complete
picture of the aspects and mechanisms of personal recovery in bipolar disorder.
| 2019 | Computation and Language |
Better Character Language Modeling Through Morphology | We incorporate morphological supervision into character language models
(CLMs) via multitasking and show that this addition improves bits-per-character
(BPC) performance across 24 languages, even when the morphology data and
language modeling data are disjoint. Analyzing the CLMs shows that inflected
words benefit more from explicitly modeling morphology than uninflected words,
and that morphological supervision improves performance even as the amount of
language modeling data grows. We then transfer morphological supervision across
languages to improve language modeling performance in the low-resource setting.
| 2019 | Computation and Language |
Transforming Complex Sentences into a Semantic Hierarchy | We present an approach for recursively splitting and rephrasing complex
English sentences into a novel semantic hierarchy of simplified sentences, with
each of them presenting a more regular structure that may facilitate a wide
variety of artificial intelligence tasks, such as machine translation (MT) or
information extraction (IE). Using a set of hand-crafted transformation rules,
input sentences are recursively transformed into a two-layered hierarchical
representation in the form of core sentences and accompanying contexts that are
linked via rhetorical relations. In this way, the semantic relationship of the
decomposed constituents is preserved in the output, maintaining its
interpretability for downstream applications. Both a thorough manual analysis
and automatic evaluation across three datasets from two different domains
demonstrate that the proposed syntactic simplification approach outperforms the
state of the art in structural text simplification. Moreover, an extrinsic
evaluation shows that when applying our framework as a preprocessing step the
performance of state-of-the-art Open IE systems can be improved by up to 346%
in precision and 52% in recall. To enable reproducible research, all code is
provided online.
| 2019 | Computation and Language |
Handling Divergent Reference Texts when Evaluating Table-to-Text
Generation | Automatically constructed datasets for generating text from semi-structured
data (tables), such as WikiBio, often contain reference texts that diverge from
the information in the corresponding semi-structured data. We show that metrics
which rely solely on the reference texts, such as BLEU and ROUGE, show poor
correlation with human judgments when those references diverge. We propose a
new metric, PARENT, which aligns n-grams from the reference and generated texts
to the semi-structured data before computing their precision and recall.
Through a large scale human evaluation study of table-to-text models for
WikiBio, we show that PARENT correlates with human judgments better than
existing text generation metrics. We also adapt and evaluate the information
extraction based evaluation proposed by Wiseman et al (2017), and show that
PARENT has comparable correlation to it, while being easier to use. We show
that PARENT is also applicable when the reference texts are elicited from
humans using the data from the WebNLG challenge.
| 2019 | Computation and Language |
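The core idea above, crediting a generated n-gram if it is supported by either the reference or the table, can be sketched as follows. This is a heavily simplified illustration: the word-overlap check stands in for PARENT's entailment probabilities, the real metric also computes a table-aware recall and combines the two, and the example table and sentences are made up.

```python
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def supported_by_table(gram, table_tokens):
    """Crude lexical-overlap stand-in for PARENT's entailment model."""
    return all(w in table_tokens for w in gram)

def parent_like_precision(generated, reference, table_tokens, max_n=2):
    """Count a generated n-gram as correct if it occurs in the reference
    or is supported by the table."""
    hits = total = 0
    for n in range(1, max_n + 1):
        ref = set(ngrams(reference, n))
        for g in ngrams(generated, n):
            total += 1
            hits += g in ref or supported_by_table(g, table_tokens)
    return hits / max(total, 1)

table_tokens = set("frederick parker-rhodes linguist mycologist".split())
generated = "frederick parker-rhodes was an english linguist".split()
reference = "frederick parker-rhodes was a linguist and mycologist".split()
print(round(parent_like_precision(generated, reference, table_tokens), 3))
```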
Training Neural Machine Translation To Apply Terminology Constraints | This paper proposes a novel method to inject custom terminology into neural
machine translation at run time. Previous works have mainly proposed
modifications to the decoding algorithm in order to constrain the output to
include run-time-provided target terms. While effective, these constrained
decoding methods add significant computational overhead
to the inference step, and, as we show in this paper, can be brittle when
tested in realistic conditions. In this paper we approach the problem by
training a neural MT system to learn how to use custom terminology when
provided with the input. Comparative experiments show that our method is not
only more effective than a state-of-the-art implementation of constrained
decoding, but is also as fast as constraint-free decoding.
| 2019 | Computation and Language |
Dynamically Composing Domain-Data Selection with Clean-Data Selection by
"Co-Curricular Learning" for Neural Machine Translation | Noise and domain are important aspects of data quality for neural machine
translation. Existing research focuses separately on domain-data selection,
clean-data selection, or their static combination, leaving the dynamic
interaction across them not explicitly examined. This paper introduces a
"co-curricular learning" method to compose dynamic domain-data selection with
dynamic clean-data selection, for transfer learning across both capabilities.
We apply an EM-style optimization procedure to further refine the
"co-curriculum". Experiment results and analysis with two domains demonstrate
the effectiveness of the method and the properties of data scheduled by the
co-curriculum.
| 2019 | Computation and Language |
Simultaneous Translation with Flexible Policy via Restricted Imitation
Learning | Simultaneous translation is widely useful but remains one of the most
difficult tasks in NLP. Previous work either uses fixed-latency policies, or
trains a complicated two-stage model using reinforcement learning. We propose a
much simpler single model that adds a `delay' token to the target vocabulary,
and design a restricted dynamic oracle to greatly simplify training.
Experiments on Chinese<->English simultaneous translation show that our work
leads to flexible policies that achieve better BLEU scores and lower latencies
compared to both fixed and RL-learned policies.
| 2019 | Computation and Language |
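The delay-token mechanism described above can be pictured as a simple read/write loop: whenever the model emits `<delay>`, one more source token is read; any other token is committed to the output. The sketch below uses a dummy wait-one policy in place of the trained model; all names are illustrative.

```python
def simultaneous_decode(source, policy, max_len=20):
    """Toy read/write loop: '<delay>' reads one more source token,
    '<eos>' stops, and anything else is committed as a target token."""
    read, output = 0, []
    while len(output) < max_len:
        action = policy(source[:read], output, len(source))
        if action == "<delay>" and read < len(source):
            read += 1
        elif action == "<eos>":
            break
        else:
            output.append(action)
    return output

def dummy_wait1_policy(prefix, output, src_len):
    """Hypothetical stand-in for the trained model: keep one source token of
    lead over the output, and 'translate' by copying with a marker."""
    if len(output) >= src_len:
        return "<eos>"
    if len(prefix) <= len(output):
        return "<delay>"
    return prefix[len(output)] + "_en"

print(simultaneous_decode("wo men ai nlp".split(), dummy_wait1_policy))
```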
System Demo for Transfer Learning across Vision and Text using Domain
Specific CNN Accelerator for On-Device NLP Applications | Power-efficient CNN Domain Specific Accelerator (CNN-DSA) chips are currently
available for wide use in mobile devices. These chips are mainly used in
computer vision applications. However, the recent work of Super Characters
method for text classification and sentiment analysis tasks using
two-dimensional CNN models has also achieved state-of-the-art results through
the method of transfer learning from vision to text. In this paper, we
implement the text classification and sentiment analysis applications on
mobile devices using CNN-DSA chips. Compact network representations using
one-bit and three-bit precision for coefficients and five-bit precision for
activations are used in the CNN-DSA chip, with a power consumption of less than 300 mW. For edge
devices under memory and compute constraints, the network is further compressed
by approximating the external Fully Connected (FC) layers within the CNN-DSA
chip. At the workshop, we have two system demonstrations for NLP tasks. The
first demo classifies the input English Wikipedia sentence into one of the 14
ontologies. The second demo classifies the Chinese online-shopping review into
positive or negative.
| 2019 | Computation and Language |
Improving Long Distance Slot Carryover in Spoken Dialogue Systems | Tracking the state of the conversation is a central component in
task-oriented spoken dialogue systems. One such approach for tracking the
dialogue state is slot carryover, where a model makes a binary decision if a
slot from the context is relevant to the current turn. Previous work on the
slot carryover task used models that made independent decisions for each slot.
A close analysis of the results shows that this approach results in poor
performance over longer context dialogues. In this paper, we propose to jointly
model the slots. We propose two neural network architectures, one based on
pointer networks that incorporate slot ordering information, and the other
based on transformer networks that use a self-attention mechanism to model the
slot interdependencies. Our experiments on an internal dialogue benchmark
dataset and on the public DSTC2 dataset demonstrate that our proposed models
are able to resolve longer distance slot references and are able to achieve
competitive performance.
| 2019 | Computation and Language |
Detecting Local Insights from Global Labels: Supervised & Zero-Shot
Sequence Labeling via a Convolutional Decomposition | We propose a new, more actionable view of neural network interpretability and
data analysis by leveraging the remarkable matching effectiveness of
representations derived from deep networks, guided by an approach for
class-conditional feature detection. The decomposition of the filter-ngram
interactions of a convolutional neural network and a linear layer over a
pre-trained deep network yields a strong binary sequence labeler, with
flexibility in producing predictions at -- and defining loss functions for --
varying label granularities, from the fully-supervised sequence labeling
setting to the challenging zero-shot sequence labeling setting, in which we
seek token-level predictions but only have document-level labels for training.
From this sequence-labeling layer we derive dense representations of the input
that can then be matched to instances from training, or a support set with
known labels. Such introspection with inference-time decision rules provides a
means, in some settings, of making local updates to the model by altering the
labels or instances in the support set without re-training the full model.
Finally, we construct a particular K-nearest neighbors (K-NN) model from
matched exemplar representations that approximates the original model's
predictions and is at least as effective a predictor with respect to the
ground-truth labels. This additionally yields interpretable heuristics at the
token level for determining when predictions are less likely to be reliable,
and for screening input dissimilar to the support set. In effect, we show that
we can transform the deep network into a simple weighting over exemplars and
associated labels, yielding an introspectable -- and modestly updatable --
version of the original model.
| 2021 | Computation and Language |
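A minimal numpy sketch of the decomposition idea above: a small convolution-plus-max-pool document classifier is opened up by crediting each token with the contribution of the pooled filter windows it participated in, turning a document-level score into token-level scores. Shapes, the equal split across a window, and the random parameters are illustrative assumptions, not the paper's exact decomposition.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D, F, W = 8, 16, 4, 3              # tokens, embedding dim, filters, filter width
X = rng.normal(size=(T, D))           # token embeddings for one document
filters = rng.normal(size=(F, W, D))  # convolutional filters
w_out = rng.normal(size=F)            # linear layer producing the document score

# each filter scores every n-gram window of the document
conv = np.array([[np.sum(filters[f] * X[i:i + W]) for i in range(T - W + 1)]
                 for f in range(F)])                  # shape (F, T-W+1)
doc_score = float(w_out @ conv.max(axis=1))           # max-pool, then linear layer

# decomposition: push the document score back onto the tokens that sat inside
# each filter's winning window, weighted by the linear layer
token_scores = np.zeros(T)
for f in range(F):
    i = int(conv[f].argmax())
    token_scores[i:i + W] += w_out[f] * conv[f, i] / W

print(round(doc_score, 3), token_scores.round(2))
```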
ShEMO -- A Large-Scale Validated Database for Persian Speech Emotion
Detection | This paper introduces a large-scale, validated database for Persian called
Sharif Emotional Speech Database (ShEMO). The database includes 3000
semi-natural utterances, equivalent to 3 hours and 25 minutes of speech data
extracted from online radio plays. The ShEMO covers speech samples of 87
native-Persian speakers for five basic emotions including anger, fear,
happiness, sadness and surprise, as well as a neutral state. Twelve annotators
label the underlying emotional state of utterances and majority voting is used
to decide on the final labels. According to the kappa measure, the
inter-annotator agreement is 64% which is interpreted as "substantial
agreement". We also present benchmark results based on common classification
methods for the speech emotion detection task. According to the experiments, a support
vector machine achieves the best results for both gender-independent (58.2%)
and gender-dependent models (female=59.4%, male=57.6%). The ShEMO is available
for academic purposes free of charge to provide a baseline for further research
on Persian emotional speech.
| 2019 | Computation and Language |
A Review of Automated Speech and Language Features for Assessment of
Cognitive and Thought Disorders | It is widely accepted that information derived from analyzing speech (the
acoustic signal) and language production (words and sentences) serves as a
useful window into the health of an individual's cognitive ability. In fact,
most neuropsychological testing batteries have a component related to speech
and language where clinicians elicit speech from patients for subjective
evaluation across a broad set of dimensions. With advances in speech signal
processing and natural language processing, there has been recent interest in
developing tools to detect more subtle changes in cognitive-linguistic
function. This work relies on extracting a set of features from recorded and
transcribed speech for objective assessments of speech and language, early
diagnosis of neurological disease, and tracking of disease after diagnosis.
With an emphasis on cognitive and thought disorders, in this paper we provide a
review of existing speech and language features used in this domain, discuss
their clinical application, and highlight their advantages and disadvantages.
Broadly speaking, the review is split into two categories: language features
based on natural language processing and speech features based on speech signal
processing. Within each category, we consider features that aim to measure
complementary dimensions of cognitive-linguistics, including language
diversity, syntactic complexity, semantic coherence, and timing. We conclude
the review with a proposal of new research directions to further advance the
field.
| 2019 | Computation and Language |
Resolving Gendered Ambiguous Pronouns with BERT | Pronoun resolution is part of coreference resolution, the task of pairing an
expression to its referring entity. This is an important task for natural
language understanding and a necessary component of machine translation
systems, chat bots and assistants. Neural machine learning systems perform far
from ideally in this task, reaching as low as 73% F1 scores on modern benchmark
datasets. Moreover, they tend to perform better for masculine pronouns than for
feminine ones. Thus, the problem is both challenging and important for NLP
researchers and practitioners. In this project, we describe our BERT-based
approach to solving the problem of gender-balanced pronoun resolution. We are
able to reach 92% F1 score and a much lower gender bias on the benchmark
dataset shared by Google AI Language team.
| 2019 | Computation and Language |
Improved Zero-shot Neural Machine Translation via Ignoring Spurious
Correlations | Zero-shot translation, translating between language pairs on which a Neural
Machine Translation (NMT) system has never been trained, is an emergent
property when training the system in multilingual settings. However, naive
training for zero-shot NMT easily fails, and is sensitive to hyper-parameter
settings. The performance typically lags far behind the more conventional
pivot-based approach which translates twice using a third language as a pivot.
In this work, we address the degeneracy problem due to capturing spurious
correlations by quantitatively analyzing the mutual information between
language IDs of the source and decoded sentences. Inspired by this analysis, we
propose to use two simple but effective approaches: (1) decoder pre-training;
(2) back-translation. These methods show significant improvement (4~22 BLEU
points) over the vanilla zero-shot translation on three challenging
multilingual datasets, and achieve similar or better results than the
pivot-based approach.
| 2019 | Computation and Language |
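The diagnostic mentioned above, mutual information between the source language ID and the language actually produced by the decoder, is straightforward to estimate from counts. The pair counts below are made up; a high value indicates the kind of spurious correlation the paper aims to remove.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X; Y) from a list of (x, y) observations."""
    n = len(pairs)
    pxy, px = Counter(pairs), Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# hypothetical counts of (source language ID, language actually decoded)
pairs = ([("de", "en")] * 55 + [("de", "fr")] * 5 +
         [("ru", "en")] * 10 + [("ru", "fr")] * 50)
print(round(mutual_information(pairs), 4))  # > 0: decoded language leaks the source language
```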
Converse Attention Knowledge Transfer for Low-Resource Named Entity
Recognition | In recent years, great success has been achieved in many tasks of natural
language processing (NLP), e.g., named entity recognition (NER), especially in
the high-resource language, i.e., English, thanks in part to the considerable
amount of labeled resources. However, most low-resource languages do not have
such an abundance of labeled data as high-resource English, leading to poor
performance of NER in these low-resource languages. Inspired by knowledge
transfer, we propose Converse Attention Network, or CAN in short, to improve
the performance of NER in low-resource languages by leveraging the knowledge
learned in pretrained high-resource English models. CAN first translates
low-resource languages into high-resource English using an attention based
translation module. In the process of translation, CAN obtains the attention
matrices that align the two languages. Furthermore, CAN uses the attention
matrices to align the high-resource semantic features from a pretrained
high-resource English model with the low-resource semantic features. As a
result, CAN obtains aligned high-resource semantic features to enrich the
representations of low-resource languages. Experiments on four low-resource NER
datasets show that CAN achieves consistent and significant performance
improvements, which indicates the effectiveness of CAN.
| 2023 | Computation and Language |
Joint Effects of Context and User History for Predicting Online
Conversation Re-entries | As the online world continues its exponential growth, interpersonal
communication has come to play an increasingly central role in opinion
formation and change. In order to help users better engage with each other
online, we study a challenging problem of re-entry prediction foreseeing
whether a user will come back to a conversation they once participated in. We
hypothesize that both the context of the ongoing conversations and the users'
previous chatting history will affect their continued interests in future
engagement. Specifically, we propose a neural framework with three main layers,
each modeling context, user history, and interactions between them, to explore
how the conversation context and user chatting history jointly result in their
re-entry behavior. We experiment with two large-scale datasets collected from
Twitter and Reddit. Results show that our proposed framework with bi-attention
achieves an F1 score of 61.1 on Twitter conversations, outperforming the
state-of-the-art methods from previous work.
| 2019 | Computation and Language |
Exploring Phoneme-Level Speech Representations for End-to-End Speech
Translation | Previous work on end-to-end translation from speech has primarily used
frame-level features as speech representations, which creates longer, sparser
sequences than text. We show that a naive method to create compressed
phoneme-like speech representations is far more effective and efficient for
translation than traditional frame-level speech features. Specifically, we
generate phoneme labels for speech frames and average consecutive frames with
the same label to create shorter, higher-level source sequences for
translation. We see improvements of up to 5 BLEU on both our high and low
resource language pairs, with a reduction in training time of 60%. Our
improvements hold across multiple data sizes and two language pairs.
| 2019 | Computation and Language |
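The frame-averaging step described above is simple enough to show directly: consecutive frames with the same phoneme label are collapsed into one averaged vector, shortening the source sequence the translation model has to consume. The frame dimensionality and the label sequence are made up.

```python
import numpy as np
from itertools import groupby

def average_by_phoneme(frames, labels):
    """Collapse runs of consecutive frames sharing a phoneme label into the
    mean feature vector of the run."""
    out, i = [], 0
    for _, group in groupby(labels):
        n = len(list(group))
        out.append(frames[i:i + n].mean(axis=0))
        i += n
    return np.stack(out)

frames = np.random.randn(7, 13)                    # 7 frames of 13-dim features
labels = ["h", "h", "eh", "eh", "eh", "l", "ow"]   # hypothetical frame-level phoneme labels
print(average_by_phoneme(frames, labels).shape)    # (4, 13): 7 frames -> 4 phoneme segments
```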
Progressive Self-Supervised Attention Learning for Aspect-Level
Sentiment Analysis | In aspect-level sentiment classification (ASC), it is prevalent to equip
dominant neural models with attention mechanisms, for the sake of acquiring the
importance of each context word on the given aspect. However, such a mechanism
tends to excessively focus on a few frequent words with sentiment polarities,
while ignoring infrequent ones. In this paper, we propose a progressive
self-supervised attention learning approach for neural ASC models, which
automatically mines useful attention supervision information from a training
corpus to refine attention mechanisms. Specifically, we iteratively conduct
sentiment predictions on all training instances. Particularly, at each
iteration, the context word with the maximum attention weight is extracted as
the one with active/misleading influence on the correct/incorrect prediction of
every instance, and then the word itself is masked for subsequent iterations.
Finally, we augment the conventional training objective with a regularization
term, which enables ASC models to continue equally focusing on the extracted
active context words while decreasing weights of those misleading ones.
Experimental results on multiple datasets show that our proposed approach
yields better attention mechanisms, leading to substantial improvements over
the two state-of-the-art neural ASC models. Source code and trained models are
available at https://github.com/DeepLearnXMU/PSSAttention.
| 2019 | Computation and Language |
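The mining loop described above, which takes the highest-attention context word, records it as an active or misleading cue depending on whether the current prediction is right, masks it, and repeats, looks roughly like the sketch below. The attention and prediction functions here are random stand-ins for the trained ASC model, and all names are illustrative.

```python
import random

def mine_attention_supervision(words, attention_fn, predict_fn, gold, iterations=3):
    """Iteratively extract the max-attention word as an 'active' cue when the
    prediction is correct (else 'misleading'), masking it for the next round."""
    active, misleading, masked = [], [], set()
    for _ in range(iterations):
        attn = attention_fn(words, masked)
        i = max((j for j in range(len(words)) if j not in masked), key=lambda j: attn[j])
        correct = predict_fn(words, masked) == gold
        (active if correct else misleading).append(words[i])
        masked.add(i)
    return active, misleading

random.seed(0)
attention_fn = lambda words, masked: [0.0 if i in masked else random.random()
                                      for i in range(len(words))]
predict_fn = lambda words, masked: "positive"   # dummy classifier, always 'positive'
sentence = "the pizza was great but the service was slow".split()
print(mine_attention_supervision(sentence, attention_fn, predict_fn, gold="positive"))
```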
From Independent Prediction to Re-ordered Prediction: Integrating
Relative Position and Global Label Information to Emotion Cause
Identification | Emotion cause identification aims at identifying the potential causes that
lead to a certain emotion expression in text. Several techniques including rule
based methods and traditional machine learning methods have been proposed to
address this problem based on manually designed rules and features. More
recently, some deep learning methods have also been applied to this task, with
the attempt to automatically capture the causal relationship of emotion and its
causes embodied in the text. In this work, we find that in addition to the
content of the text, there are two other kinds of information, namely
relative position and global labels, that are also very important for emotion
cause identification. To integrate such information, we propose a model based
on the neural network architecture to encode the three elements ($i.e.$, text
content, relative position and global label), in a unified and end-to-end
fashion. We introduce a relative position augmented embedding learning
algorithm, and transform the task from an independent prediction problem to a
reordered prediction problem, where the dynamic global label information is
incorporated. Experimental results on a benchmark emotion cause dataset show
that our model achieves new state-of-the-art performance and performs
significantly better than a number of competitive baselines. Further analysis
shows the effectiveness of the relative position augmented embedding learning
algorithm and the reordered prediction mechanism with dynamic global labels.
| 2019 | Computation and Language |
Coherent Comment Generation for Chinese Articles with a
Graph-to-Sequence Model | Automatic article commenting is helpful in encouraging user engagement and
interaction on online news platforms. However, the news documents are usually
too long for traditional encoder-decoder based models, which often results in
general and irrelevant comments. In this paper, we propose to generate comments
with a graph-to-sequence model that models the input news as a topic
interaction graph. By organizing the article into graph structure, our model
can better understand the internal structure of the article and the connection
between topics, which makes it better able to understand the story. We collect
and release a large scale news-comment corpus from a popular Chinese online
news platform Tencent Kuaibao. Extensive experiment results show that our model
can generate much more coherent and informative comments compared with several
strong baseline models.
| 2019 | Computation and Language |
Transcoding compositionally: using attention to find more generalizable
solutions | While sequence-to-sequence models have shown remarkable generalization power
across several natural language tasks, the solutions they construct are argued
to be less compositional than human-like generalization. In this paper, we
present seq2attn, a new architecture that is specifically designed to exploit
attention to find compositional patterns in the input. In seq2attn, the two
standard components of an encoder-decoder model are connected via a transcoder
that modulates the information flow between them. We show that seq2attn can
successfully generalize, without requiring any additional supervision, on two
tasks which are specifically constructed to challenge the compositional skills
of neural networks. The solutions found by the model are highly interpretable,
allowing easy analysis of both the types of solutions that are found and
potential causes for mistakes. We exploit this opportunity to introduce a new
paradigm to test compositionality that studies the extent to which a model
overgeneralizes when confronted with exceptions. We show that seq2attn exhibits
such overgeneralization to a larger degree than a standard sequence-to-sequence
model.
| 2019 | Computation and Language |
RTHN: A RNN-Transformer Hierarchical Network for Emotion Cause
Extraction | The emotion cause extraction (ECE) task aims at discovering the potential
causes behind a certain emotion expression in a document. Techniques including
rule-based methods, traditional machine learning methods and deep neural
networks have been proposed to solve this task. However, most of the previous
work considered ECE as a set of independent clause classification problems and
ignored the relations between multiple clauses in a document. In this work, we
propose a joint emotion cause extraction framework, named RNN-Transformer
Hierarchical Network (RTHN), to encode and classify multiple clauses
synchronously. RTHN is composed of a lower word-level encoder based on RNNs to
encode multiple words in each clause, and an upper clause-level encoder based
on Transformer to learn the correlation between multiple clauses in a document.
We furthermore propose ways to encode the relative position and global
predication information into Transformer that can capture the causality between
clauses and make RTHN more efficient. We finally achieve the best performance
among 12 compared systems and improve the F1 score of the state-of-the-art from
72.69\% to 76.77\%.
| 2019 | Computation and Language |
Multi-Task Semantic Dependency Parsing with Policy Gradient for Learning
Easy-First Strategies | In Semantic Dependency Parsing (SDP), semantic relations form directed
acyclic graphs, rather than trees. We propose a new iterative predicate
selection (IPS) algorithm for SDP. Our IPS algorithm combines the graph-based
and transition-based parsing approaches in order to handle multiple semantic
head words. We train the IPS model using a combination of multi-task learning
and task-specific policy gradient training. Trained this way, IPS achieves a
new state of the art on the SemEval 2015 Task 18 datasets. Furthermore, we
observe that policy gradient training learns an easy-first strategy.
| 2019 | Computation and Language |
Learning to Explain: Answering Why-Questions via Rephrasing | Providing plausible responses to why questions is a challenging but critical
goal for language based human-machine interaction. Explanations are challenging
in that they require many different forms of abstract knowledge and reasoning.
Previous work has either relied on human-curated structured knowledge bases or
detailed domain representation to generate satisfactory explanations. They are
also often limited to ranking pre-existing explanation choices. In our work, we
contribute to the under-explored area of generating natural language
explanations for general phenomena. We automatically collect large datasets of
explanation-phenomenon pairs which allow us to train sequence-to-sequence
models to generate natural language explanations. We compare different training
strategies and evaluate their performance using both automatic scores and human
ratings. We demonstrate that our strategy is sufficient to generate highly
plausible explanations for general open-domain phenomena compared to other
models trained on different datasets.
| 2019 | Computation and Language |
Boosting Entity Linking Performance by Leveraging Unlabeled Documents | Modern entity linking systems rely on large collections of documents
specifically annotated for the task (e.g., AIDA CoNLL). In contrast, we propose
an approach which exploits only naturally occurring information: unlabeled
documents and Wikipedia. Our approach consists of two stages. First, we
construct a high recall list of candidate entities for each mention in an
unlabeled document. Second, we use the candidate lists as weak supervision to
constrain our document-level entity linking model. The model treats entities as
latent variables and, when estimated on a collection of unlabelled texts,
learns to choose entities relying both on local context of each mention and on
coherence with other entities in the document. The resulting approach rivals
fully-supervised state-of-the-art systems on standard test sets. It also
approaches their performance in the very challenging setting: when tested on a
test set sampled from the data used to estimate the supervised systems. By
comparing to Wikipedia-only training of our model, we demonstrate that modeling
unlabeled documents is beneficial.
| 2019 | Computation and Language |
ChID: A Large-scale Chinese IDiom Dataset for Cloze Test | Cloze-style reading comprehension in Chinese is still limited due to the lack
of various corpora. In this paper we propose a large-scale Chinese cloze test
dataset ChID, which studies the comprehension of idioms, a unique language
phenomenon in Chinese. In this corpus, the idioms in a passage are replaced by
blank symbols and the correct answer needs to be chosen from well-designed
candidate idioms. We carefully study how the design of candidate idioms and the
representation of idioms affect the performance of state-of-the-art models.
Results show that the machine accuracy is substantially worse than that of
humans, indicating a large space for further research.
| 2020 | Computation and Language |
Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts | Emotion cause extraction (ECE), the task aimed at extracting the potential
causes behind certain emotions in text, has gained much attention in recent
years due to its wide applications. However, it suffers from two shortcomings:
1) the emotion must be annotated before cause extraction in ECE, which greatly
limits its applications in real-world scenarios; 2) the way to first annotate
emotion and then extract the cause ignores the fact that they are mutually
indicative. In this work, we propose a new task: emotion-cause pair extraction
(ECPE), which aims to extract the potential pairs of emotions and corresponding
causes in a document. We propose a 2-step approach to address this new ECPE
task, which first performs individual emotion extraction and cause extraction
via multi-task learning, and then conducts emotion-cause pairing and filtering.
The experimental results on a benchmark emotion cause corpus prove the
feasibility of the ECPE task as well as the effectiveness of our approach.
| 2019 | Computation and Language |
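The 2-step approach above can be summarised as: step 1 scores every clause for being an emotion clause and for being a cause clause, and step 2 forms the Cartesian product of the candidates and filters the pairs. A minimal sketch, with a made-up adjacency filter standing in for the learned pair classifier:

```python
def extract_emotion_cause_pairs(clauses, emotion_probs, cause_probs, pair_score, threshold=0.5):
    """Pair every candidate emotion clause with every candidate cause clause,
    keeping the pairs the filter scores at or above the threshold."""
    emotions = [i for i, p in enumerate(emotion_probs) if p >= threshold]
    causes = [j for j, p in enumerate(cause_probs) if p >= threshold]
    return [(i, j) for i in emotions for j in causes if pair_score(i, j) >= threshold]

clauses = ["I lost my keys", "so I was really upset", "the weather was fine"]
emotion_probs, cause_probs = [0.1, 0.9, 0.2], [0.8, 0.3, 0.1]    # hypothetical step-1 outputs
adjacency_filter = lambda i, j: 1.0 if abs(i - j) <= 1 else 0.0  # toy stand-in for the learned filter
print(extract_emotion_cause_pairs(clauses, emotion_probs, cause_probs, adjacency_filter))  # [(1, 0)]
```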
Exploiting Sentential Context for Neural Machine Translation | In this work, we present novel approaches to exploit sentential context for
neural machine translation (NMT). Specifically, we first show that a shallow
sentential context extracted only from the top encoder layer can improve
translation performance via contextualizing the encoding representations of
individual words. Next, we introduce a deep sentential context, which
aggregates the sentential context representations from all the internal layers
of the encoder to form a more comprehensive context representation.
Experimental results on the WMT14 English-to-German and English-to-French
benchmarks show that our model consistently improves performance over the
strong TRANSFORMER model (Vaswani et al., 2017), demonstrating the necessity
and effectiveness of exploiting sentential context for NMT.
| 2019 | Computation and Language |
Are we there yet? Encoder-decoder neural networks as cognitive models of
English past tense inflection | The cognitive mechanisms needed to account for the English past tense have
long been a subject of debate in linguistics and cognitive science. Neural
network models were proposed early on, but were shown to have clear flaws.
Recently, however, Kirov and Cotterell (2018) showed that modern
encoder-decoder (ED) models overcome many of these flaws. They also presented
evidence that ED models demonstrate humanlike performance in a nonce-word task.
Here, we look more closely at the behaviour of their model in this task. We
find that (1) the model exhibits instability across multiple simulations in
terms of its correlation with human data, and (2) even when results are
aggregated across simulations (treating each simulation as an individual human
participant), the fit to the human data is not strong---worse than an older
rule-based model. These findings hold up through several alternative training
regimes and evaluation measures. Although other neural architectures might do
better, we conclude that there is still insufficient evidence to claim that
neural nets are a good cognitive model for this task.
| 2019 | Computation and Language |
Lattice-Based Transformer Encoder for Neural Machine Translation | Neural machine translation (NMT) takes deterministic sequences for source
representations. However, word-level or subword-level segmentations offer
multiple ways to split a source sequence, depending on the word segmentor or
subword vocabulary size used. We hypothesize that the diversity in
segmentations may affect the NMT performance. To integrate different
segmentations with the state-of-the-art NMT model, Transformer, we propose
lattice-based encoders to explore effective word or subword representation in
an automatic way during training. We propose two methods: 1) lattice positional
encoding and 2) lattice-aware self-attention. These two methods can be used
together and are complementary to each other, further improving translation
performance. Experimental results show the superiority of lattice-based encoders in
word-level and subword-level representations over conventional Transformer
encoder.
| 2019 | Computation and Language |
How Large Are Lions? Inducing Distributions over Quantitative Attributes | Most current NLP systems have little knowledge about quantitative attributes
of objects and events. We propose an unsupervised method for collecting
quantitative information from large amounts of web data, and use it to create a
new, very large resource consisting of distributions over physical quantities
associated with objects, adjectives, and verbs which we call Distributions over
Quantities (DoQ). This contrasts with recent work in this area which has
focused on making only relative comparisons such as "Is a lion bigger than a
wolf?". Our evaluation shows that DoQ compares favorably with state of the art
results on existing datasets for relative comparisons of nouns and adjectives,
and on a new dataset we introduce.
| 2019 | Computation and Language |
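Conceptually, the resource described above maps an object (or adjective/verb) to an empirical distribution over measured values, from which both absolute and relative questions can be answered. A toy sketch with fabricated measurements:

```python
import statistics
from collections import defaultdict

def build_distributions(measurements):
    """Group scraped (object, value) pairs into per-object value distributions."""
    dist = defaultdict(list)
    for obj, value in measurements:
        dist[obj].append(value)
    return dist

# hypothetical scraped masses in kilograms
scraped = [("lion", 190), ("lion", 175), ("lion", 220),
           ("wolf", 40), ("wolf", 35), ("wolf", 55)]
doq = build_distributions(scraped)
print({k: statistics.median(v) for k, v in doq.items()})                # absolute estimates
print(statistics.median(doq["lion"]) > statistics.median(doq["wolf"]))  # relative comparison
```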
Curate and Generate: A Corpus and Method for Joint Control of Semantics
and Style in Neural NLG | Neural natural language generation (NNLG) from structured meaning
representations has become increasingly popular in recent years. While we have
seen progress with generating syntactically correct utterances that preserve
semantics, various shortcomings of NNLG systems are clear: new tasks require
new training data which is not available or straightforward to acquire, and
model outputs are simple and may be dull and repetitive. This paper addresses
these two critical challenges in NNLG by: (1) scalably (and at no cost)
creating training datasets of parallel meaning representations and reference
texts with rich style markup by using data from freely available and naturally
descriptive user reviews, and (2) systematically exploring how the style markup
enables joint control of semantic and stylistic aspects of neural model output.
We present YelpNLG, a corpus of 300,000 rich, parallel meaning representations
and highly stylistically varied reference texts spanning different restaurant
attributes, and describe a novel methodology that can be scalably reused to
generate NLG datasets for other domains. The experiments show that the models
control important aspects, including lexical choice of adjectives, output
length, and sentiment, allowing the models to successfully hit multiple style
targets without sacrificing semantics.
| 2019 | Computation and Language |
A Cross-Sentence Latent Variable Model for Semi-Supervised Text Sequence
Matching | We present a latent variable model for predicting the relationship between a
pair of text sequences. Unlike previous auto-encoding--based approaches that
consider each sequence separately, our proposed framework utilizes both
sequences within a single model by generating a sequence that has a given
relationship with a source sequence. We further extend the cross-sentence
generating framework to facilitate semi-supervised training. We also define
novel semantic constraints that lead the decoder network to generate
semantically plausible and diverse sequences. We demonstrate the effectiveness
of the proposed model from quantitative and qualitative experiments, while
achieving state-of-the-art results on semi-supervised natural language
inference and paraphrase identification.
| 2019 | Computation and Language |
TalkSumm: A Dataset and Scalable Annotation Method for Scientific Paper
Summarization Based on Conference Talks | Currently, no large-scale training data is available for the task of
scientific paper summarization. In this paper, we propose a novel method that
automatically generates summaries for scientific papers, by utilizing videos of
talks at scientific conferences. We hypothesize that such talks constitute a
coherent and concise description of the papers' content, and can form the basis
for good summaries. We collected 1716 papers and their corresponding videos,
and created a dataset of paper summaries. A model trained on this dataset
achieves performance similar to that of models trained on a dataset of summaries
created manually. In addition, we validated the quality of our summaries by
human experts.
| 2019 | Computation and Language |
NNE: A Dataset for Nested Named Entity Recognition in English Newswire | Named entity recognition (NER) is widely used in natural language processing
applications and downstream tasks. However, most NER tools target flat
annotation from popular datasets, eschewing the semantic information available
in nested entity mentions. We describe NNE---a fine-grained, nested named
entity dataset over the full Wall Street Journal portion of the Penn Treebank
(PTB). Our annotation comprises 279,795 mentions of 114 entity types with up to
6 layers of nesting. We hope the public release of this large dataset for
English newswire will encourage development of new techniques for nested NER.
| 2019 | Computation and Language |
HighRES: Highlight-based Reference-less Evaluation of Summarization | There has been substantial progress in summarization research enabled by the
availability of novel, often large-scale, datasets and recent advances on
neural network-based approaches. However, manual evaluation of the system
generated summaries is inconsistent due to the difficulty the task poses to
human non-expert readers. To address this issue, we propose a novel approach
for manual evaluation, Highlight-based Reference-less Evaluation of
Summarization (HighRES), in which summaries are assessed by multiple annotators
against the source document via manually highlighted salient content in the
latter. Thus summary assessment on the source document by human judges is
facilitated, while the highlights can be used for evaluating multiple systems.
To validate our approach, we employ crowd-workers to augment a recently
proposed dataset with highlights and compare two state-of-the-art systems. We
demonstrate that HighRES improves inter-annotator agreement in comparison to
using the source document directly, while the highlights help emphasize differences among
systems that would be ignored under other evaluation approaches.
| 2019 | Computation and Language |
Relational Word Embeddings | While word embeddings have been shown to implicitly encode various forms of
attributional knowledge, the extent to which they capture relational
information is far more limited. In previous work, this limitation has been
addressed by incorporating relational knowledge from external knowledge bases
when learning the word embedding. Such strategies may not be optimal, however,
as they are limited by the coverage of available resources and conflate
similarity with other forms of relatedness. As an alternative, in this paper we
propose to encode relational knowledge in a separate word embedding, which is
aimed to be complementary to a given standard word embedding. This relational
word embedding is still learned from co-occurrence statistics, and can thus be
used even when no external knowledge base is available. Our analysis shows that
relational word vectors do indeed capture information that is complementary to
what is encoded in standard word embeddings.
| 2019 | Computation and Language |
Distantly Supervised Named Entity Recognition using Positive-Unlabeled
Learning | In this work, we explore the way to perform named entity recognition (NER)
using only unlabeled data and named entity dictionaries. To this end, we
formulate the task as a positive-unlabeled (PU) learning problem and
accordingly propose a novel PU learning algorithm to perform the task. We prove
that the proposed algorithm can unbiasedly and consistently estimate the task
loss as if there is fully labeled data. A key feature of the proposed method is
that it does not require the dictionaries to label every entity within a
sentence, and it even does not require the dictionaries to label all of the
words constituting an entity. This greatly reduces the requirement on the
quality of the dictionaries and makes our method generalize well with quite
simple dictionaries. Empirical studies on four public NER datasets demonstrate
the effectiveness of our proposed method. We have published the source code at
\url{https://github.com/v-mipeng/LexiconNER}.
| 2019 | Computation and Language |
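For intuition about the estimator mentioned above, here is a generic non-negative PU risk in the style of Kiryo et al. (2017): dictionary-matched tokens act as the positive sample, everything else as unlabeled, and the class prior reweights the terms. This is a standard PU formulation offered for illustration, not the paper's exact loss; the scores and the prior are made up.

```python
import numpy as np

def logistic_loss(scores, y):
    """Elementwise logistic loss for labels y in {+1, -1}."""
    return np.log1p(np.exp(-y * scores))

def non_negative_pu_risk(scores_pos, scores_unl, prior):
    """pi * R_P(+1) + max(0, R_U(-1) - pi * R_P(-1))."""
    r_pos = logistic_loss(scores_pos, +1).mean()
    r_pos_as_neg = logistic_loss(scores_pos, -1).mean()
    r_unl_as_neg = logistic_loss(scores_unl, -1).mean()
    return prior * r_pos + max(0.0, r_unl_as_neg - prior * r_pos_as_neg)

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=200)     # classifier scores on dictionary-matched tokens
unl = rng.normal(-0.5, 1.0, size=2000)   # scores on all remaining (unlabeled) tokens
print(round(non_negative_pu_risk(pos, unl, prior=0.1), 4))
```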
Recognising Agreement and Disagreement between Stances with Reason
Comparing Networks | We identify agreement and disagreement between utterances that express
stances towards a topic of discussion. Existing methods focus mainly on
conversational settings, where dialogic features are used for (dis)agreement
inference. We extend this scope and seek to detect stance (dis)agreement in a
broader setting, where independent stance-bearing utterances, which prevail in
many stance corpora and real-world scenarios, are compared. To cope with such
non-dialogic utterances, we find that the reasons uttered to back up a specific
stance can help predict stance (dis)agreements. We propose a reason comparing
network (RCN) to leverage reason information for stance comparison. Empirical
results on a well-known stance corpus show that our method can discover useful
reason information, enabling it to outperform several baselines in stance
(dis)agreement detection.
| 2019 | Computation and Language |
SherLIiC: A Typed Event-Focused Lexical Inference Benchmark for
Evaluating Natural Language Inference | We present SherLIiC, a testbed for lexical inference in context (LIiC),
consisting of 3985 manually annotated inference rule candidates (InfCands),
accompanied by (i) ~960k unlabeled InfCands, and (ii) ~190k typed textual
relations between Freebase entities extracted from the large entity-linked
corpus ClueWeb09. Each InfCand consists of one of these relations, expressed as
a lemmatized dependency path, and two argument placeholders, each linked to one
or more Freebase types. Due to our candidate selection process based on strong
distributional evidence, SherLIiC is much harder than existing testbeds because
distributional evidence is of little utility in the classification of InfCands.
We also show that, due to its construction, many of SherLIiC's correct InfCands
are novel and missing from existing rule bases. We evaluate a number of strong
baselines on SherLIiC, ranging from semantic vector space models to state of
the art neural models of natural language inference (NLI). We show that
SherLIiC poses a tough challenge to existing NLI systems.
| 2019 | Computation and Language |
Tracing Antisemitic Language Through Diachronic Embedding Projections:
France 1789-1914 | We investigate some aspects of the history of antisemitism in France, one of
the cradles of modern antisemitism, using diachronic word embeddings. We
constructed a large corpus of French books and periodical issues that contain
a keyword related to Jews and performed a diachronic word embedding over the
1789-1914 period. We studied the changes over time in the semantic spaces of 4
target words and performed embedding projections over 6 streams of antisemitic
discourse. This allowed us to track the evolution of antisemitic bias in the
religious, economic, socio-political, racial, ethic and conspiratorial domains.
Projections show a trend of growing antisemitism, especially in the years
starting in the mid-1880s and culminating in the Dreyfus affair. Our analysis
also allows us to highlight the peculiar adverse bias towards Judaism in the
broader context of other religions.
| 2019 | Computation and Language |
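An embedding projection of the kind used above can be reduced to a cosine between a target word and the centroid of a stream's seed words, computed once per time slice. The vectors below are random placeholders; in practice, one trained embedding space per period would be used.

```python
import numpy as np

def stream_projection(word_vec, seed_vecs):
    """Cosine similarity between a target word and the centroid of a
    discourse stream's seed-word vectors."""
    axis = np.mean(seed_vecs, axis=0)
    return float(word_vec @ axis / (np.linalg.norm(word_vec) * np.linalg.norm(axis)))

rng = np.random.default_rng(0)
for decade in (1840, 1870, 1900):          # one embedding space per time slice
    seeds = rng.normal(size=(5, 100))      # e.g. seed words of one discourse stream
    target = rng.normal(size=100)          # the target word in that slice's space
    print(decade, round(stream_projection(target, seeds), 3))
```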
Evaluating Discourse in Structured Text Representations | Discourse structure is integral to understanding a text and is helpful in
many NLP tasks. Learning latent representations of discourse is an attractive
alternative to acquiring expensive labeled discourse data. Liu and Lapata
(2018) propose a structured attention mechanism for text classification that
derives a tree over a text, akin to an RST discourse tree. We examine this
model in detail, and evaluate on additional discourse-relevant tasks and
datasets, in order to assess whether the structured attention improves
performance on the end task and whether it captures a text's discourse
structure. We find the learned latent trees have little to no structure and
instead focus on lexical cues; even after obtaining more structured trees with
proposed model modifications, the trees are still far from capturing discourse
structure when compared to discourse dependency trees from an existing
discourse parser. Finally, ablation studies show the structured attention
provides little benefit, sometimes even hurting performance.
| 2019 | Computation and Language |
Regularization Advantages of Multilingual Neural Language Models for Low
Resource Domains | Neural language modeling (LM) has led to significant improvements in several
applications, including Automatic Speech Recognition. However, neural LMs typically
require large amounts of training data, which are not available for many domains
and languages. In this study, we propose a multilingual neural language model
architecture, trained jointly on the domain-specific data of several
low-resource languages. The proposed multilingual LM consists of language
specific word embeddings in the encoder and decoder, and one language specific
LSTM layer, plus two LSTM layers with shared parameters across the languages.
This multilingual LM model facilitates transfer learning across the languages,
acting as an extra regularizer in very low-resource scenarios. We integrate our
proposed multilingual approach with a state-of-the-art highly-regularized
neural LM, and evaluate on the conversational data domain for four languages
over a range of training data sizes. Compared to monolingual LMs, the results
show significant improvements of our proposed multilingual LM when the amount
of available training data is limited, indicating the advantages of
cross-lingual parameter sharing in very low-resource language modeling.
| 2019 | Computation and Language |
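The parameter-sharing scheme described above (language-specific embeddings and one language-specific LSTM layer, followed by two LSTM layers shared across languages) might be sketched in PyTorch roughly as follows. The vocabulary sizes, dimensions, language names, and per-language output projection are placeholder assumptions.

```python
import torch
import torch.nn as nn

class MultilingualLM(nn.Module):
    """Language-specific embedding and one private LSTM layer per language,
    followed by two LSTM layers whose parameters are shared across languages."""
    def __init__(self, vocab_sizes, emb_dim=256, hidden=512):
        super().__init__()
        self.embed = nn.ModuleDict({lang: nn.Embedding(v, emb_dim)
                                    for lang, v in vocab_sizes.items()})
        self.private = nn.ModuleDict({lang: nn.LSTM(emb_dim, hidden, batch_first=True)
                                      for lang in vocab_sizes})
        self.shared = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.out = nn.ModuleDict({lang: nn.Linear(hidden, v)
                                  for lang, v in vocab_sizes.items()})

    def forward(self, tokens, lang):
        x = self.embed[lang](tokens)
        x, _ = self.private[lang](x)
        x, _ = self.shared(x)           # cross-lingual regularization comes from these shared layers
        return self.out[lang](x)

model = MultilingualLM({"lang_a": 5000, "lang_b": 6000})
logits = model(torch.randint(0, 5000, (2, 10)), "lang_a")
print(logits.shape)  # torch.Size([2, 10, 5000])
```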
Multimodal Ensemble Approach to Incorporate Various Types of Clinical
Notes for Predicting Readmission | Electronic Health Records (EHRs) have been heavily used to predict various
downstream clinical tasks such as readmission or mortality. One of the
modalities in EHRs, clinical notes, has not been fully explored for these tasks
due to its unstructured and inexplicable nature. Although recent advances in
deep learning (DL) enable models to extract interpretable features from
unstructured data, they often require a large amount of training data. However,
many tasks in medical domains inherently consist of small sample data with
lengthy documents; for kidney transplants, for example, data from only a few
thousand patients are available and each patient's document consists of a
couple of million words in major hospitals. Thus, complex DL methods cannot
be applied to these kinds of domains. In this paper, we present a comprehensive
ensemble model using vector space modeling and topic modeling. Our proposed
model is evaluated on the readmission task of kidney transplant patients and
improves the c-statistic by 0.0211 over the previous state-of-the-art
approach using structured data, while typical DL methods fail to beat this
approach. The proposed architecture provides the interpretable score for each
feature from both modalities, structured and unstructured data, which is shown
to be meaningful through a physician's evaluation.
| 2,019 | Computation and Language |
How multilingual is Multilingual BERT? | In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et
al. (2018) as a single language model pre-trained from monolingual corpora in
104 languages, is surprisingly good at zero-shot cross-lingual model transfer,
in which task-specific annotations in one language are used to fine-tune the
model for evaluation in another language. To understand why, we present a large
number of probing experiments, showing that transfer is possible even to
languages in different scripts, that transfer works best between typologically
similar languages, that monolingual corpora can train models for
code-switching, and that the model can find translation pairs. From these
results, we can conclude that M-BERT does create multilingual representations,
but that these representations exhibit systematic deficiencies affecting
certain language pairs.
| 2,019 | Computation and Language |
LeafNATS: An Open-Source Toolkit and Live Demo System for Neural
Abstractive Text Summarization | Neural abstractive text summarization (NATS) has received a lot of attention
in the past few years from both industry and academia. In this paper, we
introduce an open-source toolkit, namely LeafNATS, for training and evaluation
of different sequence-to-sequence based models for the NATS task, and for
deploying the pre-trained models to real-world applications. The toolkit is
modularized and extensible in addition to maintaining competitive performance
in the NATS task. A live news blogging system has also been implemented to
demonstrate how these models can aid blog/news editors by providing them
suggestions of headlines and summaries of their articles.
| 2,019 | Computation and Language |
Adaptive Region Embedding for Text Classification | Deep learning models such as convolutional neural networks and recurrent
networks are widely applied in text classification. In spite of their great
success, most deep learning models neglect the importance of modeling context
information, which is crucial to understanding texts. In this work, we propose
the Adaptive Region Embedding to learn context representation to improve text
classification. Specifically, a metanetwork is learned to generate a context
matrix for each region, and each word interacts with its corresponding context
matrix to produce the regional representation for further classification.
Compared to previous models that are designed to capture context information,
our model contains fewer parameters and is more flexible. We extensively
evaluate our method on 8 benchmark datasets for text classification. The
experimental results show that our method achieves state-of-the-art
performance and effectively avoids word ambiguity.
| 2,019 | Computation and Language |
TMLab SRPOL at SemEval-2019 Task 8: Fact Checking in Community Question
Answering Forums | The article describes our submission to SemEval 2019 Task 8 on Fact-Checking
in Community Forums. The systems under discussion participated in Subtask A:
decide whether a question asks for factual information, opinion/advice or is
just socializing. Our primary submission was ranked second among all
participants in the official evaluation phase. The article presents our primary
solution: Deeply Regularized Residual Neural Network (DRR NN) with Universal
Sentence Encoder embeddings. This is followed by a description of two
contrastive solutions based on ensemble methods.
| 2,019 | Computation and Language |
The PhotoBook Dataset: Building Common Ground through Visually-Grounded
Dialogue | This paper introduces the PhotoBook dataset, a large-scale collection of
visually-grounded, task-oriented dialogues in English designed to investigate
shared dialogue history accumulating during conversation. Taking inspiration
from seminal work on dialogue analysis, we propose a data-collection task
formulated as a collaborative game prompting two online participants to refer
to images utilising both their visual context as well as previously established
referring expressions. We provide a detailed description of the task setup and
a thorough analysis of the 2,500 dialogues collected. To further illustrate the
novel features of the dataset, we propose a baseline model for reference
resolution which uses a simple method to take into account shared information
accumulated in a reference chain. Our results show that this information is
particularly important to resolve later descriptions and underline the need to
develop more sophisticated models of common ground in dialogue interaction.
| 2,019 | Computation and Language |
Training Neural Response Selection for Task-Oriented Dialogue Systems | Despite their popularity in the chatbot literature, retrieval-based models
have had modest impact on task-oriented dialogue systems, with the main
obstacle to their application being the low-data regime of most task-oriented
dialogue tasks. Inspired by the recent success of pretraining in language
modelling, we propose an effective method for deploying response selection in
task-oriented dialogue. To train response selection models for task-oriented
dialogue tasks, we propose a novel method which: 1) pretrains the response
selection model on large general-domain conversational corpora; and then 2)
fine-tunes the pretrained model for the target dialogue domain, relying only on
the small in-domain dataset to capture the nuances of the given dialogue
domain. Our evaluation on six diverse application domains, ranging from
e-commerce to banking, demonstrates the effectiveness of the proposed training
method.
| 2,019 | Computation and Language |
Optimal coding and the origins of Zipfian laws | The problem of compression in standard information theory consists of
assigning codes as short as possible to numbers. Here we consider the problem
of optimal coding -- under an arbitrary coding scheme -- and show that it
predicts Zipf's law of abbreviation, namely a tendency in natural languages for
more frequent words to be shorter. We apply this result to investigate optimal
coding also under so-called non-singular coding, a scheme where unique
segmentation is not guaranteed but each code still stands for a distinct
number. Optimal
non-singular coding predicts that the length of a word should grow
approximately as the logarithm of its frequency rank, which is again consistent
with Zipf's law of abbreviation. Optimal non-singular coding in combination
with the maximum entropy principle also predicts Zipf's rank-frequency
distribution. Furthermore, our findings on optimal non-singular coding
challenge common beliefs about random typing. It turns out that random typing
is in fact an optimal coding process, in stark contrast with the common
assumption that it is detached from cost-cutting considerations. Finally, we
discuss the implications of optimal coding for the construction of a compact
theory of Zipfian laws and other linguistic laws.
| 2,020 | Computation and Language |
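The claim above that optimal non-singular coding makes word length grow roughly like the logarithm of the frequency rank can be checked directly: hand the shortest available strings over an alphabet of size A to words in decreasing order of frequency and inspect the resulting lengths. This is an illustrative sketch, not the paper's derivation.

```python
# Optimal non-singular coding, illustrated: the r-th most frequent word gets the
# r-th shortest string over an alphabet of size A, so its code length grows
# roughly like log_A(r) -- the connection to Zipf's law of abbreviation.
import math
from itertools import count

def optimal_nonsingular_lengths(n_words, alphabet_size=2):
    """Code length assigned to the r-th most frequent word, for r = 1..n_words."""
    lengths, assigned = [], 0
    for L in count(1):                       # strings of length 1, 2, 3, ...
        for _ in range(alphabet_size ** L):  # A**L distinct strings of length L
            assigned += 1
            lengths.append(L)
            if assigned == n_words:
                return lengths

lengths = optimal_nonsingular_lengths(10000, alphabet_size=26)
for rank in (1, 10, 100, 1000, 10000):
    print(rank, lengths[rank - 1], round(math.log(rank + 1, 26), 2))
```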
Task-Guided Pair Embedding in Heterogeneous Network | Many real-world tasks solved by heterogeneous network embedding methods can
be cast as modeling the likelihood of pairwise relationship between two nodes.
For example, the goal of author identification task is to model the likelihood
of a paper being written by an author (paper-author pairwise relationship).
Existing task-guided embedding methods are node-centric in that they simply
measure the similarity between the node embeddings to compute the likelihood of
a pairwise relationship between two nodes. However, we claim that for
task-guided embeddings, it is crucial to focus on directly modeling the
pairwise relationship. In this paper, we propose a novel task-guided pair
embedding framework in heterogeneous network, called TaPEm, that directly
models the relationship between a pair of nodes that are related to a specific
task (e.g., paper-author relationship in author identification). To this end,
we 1) propose to learn a pair embedding under the guidance of its associated
context path, i.e., a sequence of nodes between the pair, and 2) devise the
pair validity classifier to distinguish whether the pair is valid with respect
to the specific task at hand. By introducing pair embeddings that capture the
semantics behind the pairwise relationships, we are able to learn the
fine-grained pairwise relationship between two nodes, which is paramount for
task-guided embedding methods. Extensive experiments on author identification
task demonstrate that TaPEm outperforms the state-of-the-art methods,
especially for authors with few publication records.
| 2,019 | Computation and Language |
Sequence Tagging with Contextual and Non-Contextual Subword
Representations: A Multilingual Evaluation | Pretrained contextual and non-contextual subword embeddings have become
available in over 250 languages, allowing massively multilingual NLP. However,
while there is no dearth of pretrained embeddings, the distinct lack of
systematic evaluations makes it difficult for practitioners to choose between
them. In this work, we conduct an extensive evaluation comparing non-contextual
subword embeddings, namely FastText and BPEmb, and a contextual representation
method, namely BERT, on multilingual named entity recognition and
part-of-speech tagging. We find that overall, a combination of BERT, BPEmb, and
character representations works best across languages and tasks. A more
detailed analysis reveals different strengths and weaknesses: Multilingual BERT
performs well in medium- to high-resource languages, but is outperformed by
non-contextual subword embeddings in a low-resource setting.
| 2,019 | Computation and Language |
A Study of Feature Extraction techniques for Sentiment Analysis | Sentiment Analysis refers to the study of systematically extracting the
meaning of subjective text. When analysing sentiment in subjective text with
Machine Learning techniques, feature extraction becomes a significant part. We
perform a study on the performance of the feature extraction techniques TF-IDF
(Term Frequency-Inverse Document Frequency) and Doc2vec (Document to Vector)
using the Cornell movie review dataset, the UCI sentiment labelled datasets and
the Stanford movie review dataset, classifying the text into positive and
negative polarities. We use pre-processing methods such as stop-word removal
and tokenization, which improve the performance of sentiment analysis in terms
of accuracy and of time taken by the classifier. The features obtained after
applying the feature extraction techniques to the text sentences are used to
train and test the classifiers Logistic Regression, Support Vector Machines,
K-Nearest Neighbours, Decision Tree and Bernoulli Naive Bayes.
| 2,019 | Computation and Language |
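A minimal sketch of the kind of pipeline the study describes: TF-IDF features with stop-word removal fed into a few of the listed classifiers. scikit-learn is assumed, and the toy texts and labels stand in for the movie-review datasets used in the study.

```python
# TF-IDF features with stop-word removal, evaluated with several of the
# classifiers named in the abstract. Toy data replaces the real review corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["a wonderful, moving film", "dull plot and wooden acting",
         "great performances throughout", "a complete waste of time"]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

for clf in (LogisticRegression(max_iter=1000), LinearSVC(), BernoulliNB()):
    model = make_pipeline(TfidfVectorizer(stop_words="english"), clf)
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["moving performances", "wooden and dull"]))
```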
Pitfalls in the Evaluation of Sentence Embeddings | Deep learning models continuously break new records across different NLP
tasks. At the same time, their success exposes weaknesses of model evaluation.
Here, we compile several key pitfalls of evaluation of sentence embeddings, a
currently very popular NLP paradigm. These pitfalls include the comparison of
embeddings of different sizes, normalization of embeddings, and the low (and
diverging) correlations between transfer and probing tasks. Our motivation is
to challenge the current evaluation of sentence embeddings and to provide an
easy-to-access reference for future research. Based on our insights, we also
recommend better practices for better future evaluations of sentence
embeddings.
| 2,019 | Computation and Language |
Finding Syntactic Representations in Neural Stacks | Neural network architectures have been augmented with differentiable stacks
in order to introduce a bias toward learning hierarchy-sensitive regularities.
It has, however, proven difficult to assess the degree to which such a bias is
effective, as the operation of the differentiable stack is not always
interpretable. In this paper, we attempt to detect the presence of latent
representations of hierarchical structure through an exploration of the
unsupervised learning of constituency structure. Using a technique due to Shen
et al. (2018a,b), we extract syntactic trees from the pushing behavior of stack
RNNs trained on language modeling and classification objectives. We find that
our models produce parses that reflect natural language syntactic
constituencies, demonstrating that stack RNNs do indeed infer linguistically
relevant hierarchical structure.
| 2,019 | Computation and Language |
Do Neural Dialog Systems Use the Conversation History Effectively? An
Empirical Study | Neural generative models have become increasingly popular for building
conversational agents. They offer flexibility, can be easily adapted to new
domains, and require minimal domain engineering. A common criticism of these
systems is that they seldom understand or use the available dialog history
effectively. In this paper, we take an empirical approach to understanding how
these models use the available dialog history by studying the sensitivity of
the models to artificially introduced unnatural changes or perturbations to
their context at test time. We experiment with 10 different types of
perturbations on 4 multi-turn dialog datasets and find that commonly used
neural dialog architectures like recurrent and transformer-based seq2seq models
are rarely sensitive to most perturbations such as missing or reordering
utterances, shuffling words, etc. Also, by open-sourcing our code, we believe
that it will serve as a useful diagnostic tool for evaluating dialog systems in
the future.
| 2,019 | Computation and Language |
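Two of the perturbation types mentioned above are easy to reproduce; the sketch below shows simple versions of word shuffling, utterance reordering and utterance dropping. The paper's exact perturbation definitions may differ in detail.

```python
# Simple context perturbations in the spirit of the study: shuffle words inside
# an utterance, reorder whole utterances, or drop the earliest utterances.
import random

def shuffle_words(utterance, rng=random):
    words = utterance.split()
    rng.shuffle(words)
    return " ".join(words)

def reorder_utterances(history, rng=random):
    history = list(history)
    rng.shuffle(history)
    return history

def drop_first_utterances(history, k=1):
    return history[k:]

history = ["where can i find a cheap hotel ?",
           "there are three options in the centre .",
           "does any of them have free parking ?"]
print([shuffle_words(u) for u in history])
print(reorder_utterances(history))
print(drop_first_utterances(history, k=1))
```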
KERMIT: Generative Insertion-Based Modeling for Sequences | We present KERMIT, a simple insertion-based approach to generative modeling
for sequences and sequence pairs. KERMIT models the joint distribution and its
decompositions (i.e., marginals and conditionals) using a single neural network
and, unlike much prior work, does not rely on a prespecified factorization of
the data distribution. During training, one can feed KERMIT paired data $(x,
y)$ to learn the joint distribution $p(x, y)$, and optionally mix in unpaired
data $x$ or $y$ to refine the marginals $p(x)$ or $p(y)$. During inference, we
have access to the conditionals $p(x \mid y)$ and $p(y \mid x)$ in both
directions. We can also sample from the joint distribution or the marginals.
The model supports both serial fully autoregressive decoding and parallel
partially autoregressive decoding, with the latter exhibiting an empirically
logarithmic runtime. We demonstrate through experiments in machine translation,
representation learning, and zero-shot cloze question answering that our
unified approach is capable of matching or exceeding the performance of
dedicated state-of-the-art systems across a wide range of tasks without the
need for problem-specific architectural adaptation.
| 2,019 | Computation and Language |
Transferable Neural Projection Representations | Neural word representations are at the core of many state-of-the-art natural
language processing models. A widely used approach is to pre-train, store and
look up word or character embedding matrices. While useful, such
representations occupy a large amount of memory, making them hard to deploy
on-device, and often do not generalize to unknown words due to vocabulary
pruning.
In this paper, we propose a skip-gram based architecture coupled with
Locality-Sensitive Hashing (LSH) projections to learn efficient dynamically
computable representations. Our model does not need to store lookup tables, as
representations are computed on the fly and have a low memory footprint. The
representations can be trained in an unsupervised fashion and can be easily
transferred to other NLP tasks. For qualitative evaluation, we analyze the
nearest neighbors of the word representations and discover semantically similar
words even with misspellings. For quantitative evaluation, we plug our
transferable projections into a simple LSTM and run it on multiple NLP tasks
and show how our transferable projections achieve better performance compared
to prior work.
| 2,019 | Computation and Language |
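A hedged sketch of an LSH-projection representation in the spirit described above: hash character n-gram counts of a word through fixed random hyperplanes into a short bit vector, so no embedding lookup table is stored and representations are computed on the fly. The hashing trick, the dimensions and the plain sign projection are illustrative assumptions, not the paper's exact configuration.

```python
# LSH-style projection representation: character n-gram counts hashed through
# fixed random hyperplanes into bits, computed on the fly with no lookup table.
import zlib
import numpy as np

N_FEATURES, N_BITS = 4096, 128
rng = np.random.default_rng(0)
hyperplanes = rng.standard_normal((N_BITS, N_FEATURES))  # fixed random projections

def char_ngram_features(word, n=3):
    """Hash character n-grams of a word into a dense count vector."""
    x = np.zeros(N_FEATURES)
    padded = f"#{word}#"
    for i in range(len(padded) - n + 1):
        x[zlib.crc32(padded[i:i + n].encode("utf-8")) % N_FEATURES] += 1.0
    return x

def projection_bits(word):
    """On-the-fly binary representation: sign of each random projection."""
    return (hyperplanes @ char_ngram_features(word) > 0).astype(np.int8)

# misspelled variants share most character n-grams, hence most bits agree
print((projection_bits("misspelling") == projection_bits("misspeling")).mean())
```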
Sequential Neural Networks as Automata | This work attempts to explain the types of computation that neural networks
can perform by relating them to automata. We first define what it means for a
real-time network with bounded precision to accept a language. A measure of
network memory follows from this definition. We then characterize the classes
of languages acceptable by various recurrent networks, attention, and
convolutional networks. We find that LSTMs function like counter machines and
relate convolutional networks to the subregular hierarchy. Overall, this work
attempts to increase our understanding and ability to interpret neural networks
through the lens of theory. These theoretical insights help explain neural
computation, as well as the relationship between neural networks and natural
language grammar.
| 2,021 | Computation and Language |
Self-Attentional Models for Lattice Inputs | Lattices are an efficient and effective method to encode ambiguity of
upstream systems in natural language processing tasks, for example to compactly
capture multiple speech recognition hypotheses, or to represent multiple
linguistic analyses. Previous work has extended recurrent neural networks to
model lattice inputs and achieved improvements in various tasks, but these
models suffer from very slow computation speeds. This paper extends the
recently proposed paradigm of self-attention to handle lattice inputs.
Self-attention is a sequence modeling technique that relates inputs to one
another by computing pairwise similarities and has gained popularity for both
its strong results and its computational efficiency. To extend such models to
handle lattices, we introduce probabilistic reachability masks that incorporate
lattice structure into the model and support lattice scores if available. We
also propose a method for adapting positional embeddings to lattice structures.
We apply the proposed model to a speech translation task and find that it
outperforms all examined baselines while being much faster to compute than
previous neural lattice models during both training and inference.
| 2,019 | Computation and Language |
Are Girls Neko or Sh\=ojo? Cross-Lingual Alignment of Non-Isomorphic
Embeddings with Iterative Normalization | Cross-lingual word embeddings (CLWE) underlie many multilingual natural
language processing systems, often through orthogonal transformations of
pre-trained monolingual embeddings. However, orthogonal mapping only works on
language pairs whose embeddings are naturally isomorphic. For non-isomorphic
pairs, our method (Iterative Normalization) transforms monolingual embeddings
to make orthogonal alignment easier by simultaneously enforcing that (1)
individual word vectors are unit length, and (2) each language's average vector
is zero. Iterative Normalization consistently improves word translation
accuracy of three CLWE methods, with the largest improvement observed on
English-Japanese (from 2% to 44% test accuracy).
| 2,019 | Computation and Language |
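Iterative Normalization as described above is short enough to sketch directly: repeat (1) rescale every word vector to unit length and (2) recentre the embedding matrix so the language's mean vector is zero, until both conditions approximately hold. The iteration count and random data below are illustrative choices, not the paper's settings.

```python
# Iterative Normalization sketch: alternate unit-length scaling of word vectors
# with mean-centring of the language's embedding matrix.
import numpy as np

def iterative_normalization(emb, n_iters=10):
    """emb: (n_words, dim) monolingual embedding matrix for one language."""
    emb = np.asarray(emb, dtype=np.float64).copy()
    for _ in range(n_iters):
        emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit-length vectors
        emb -= emb.mean(axis=0, keepdims=True)             # zero mean vector
    return emb

X = np.random.default_rng(0).standard_normal((1000, 300))
Z = iterative_normalization(X)
print(np.abs(np.linalg.norm(Z, axis=1) - 1).max())  # ~0: approximately unit length
print(np.abs(Z.mean(axis=0)).max())                 # ~0: approximately centred
```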
On the Realization of Compositionality in Neural Networks | We present a detailed comparison of two types of sequence to sequence models
trained to conduct a compositional task. The models are architecturally
identical at inference time, but differ in the way that they are trained: our
baseline model is trained with a task-success signal only, while the other
model receives additional supervision on its attention mechanism (Attentive
Guidance), which has been shown to be an effective method for encouraging more
compositional solutions (Hupkes et al.,2019). We first confirm that the models
with attentive guidance indeed infer more compositional solutions than the
baseline, by training them on the lookup table task presented by Li\v{s}ka et
al. (2019). We then do an in-depth analysis of the structural differences
between the two model types, focusing in particular on the organisation of the
parameter space and the hidden layer activations, and find noticeable
differences in both these aspects. Guided networks focus more on the components
of the input rather than the sequence as a whole and develop small functional
groups of neurons with specific purposes that use their gates more selectively.
Results from parameter heat maps, component swapping and graph analysis also
indicate that guided networks exhibit a more modular structure with a small
number of specialized, strongly connected neurons.
| 2,019 | Computation and Language |
Detecting Ghostwriters in High Schools | Students hiring ghostwriters to write their assignments is an increasing
problem in educational institutions all over the world, with companies selling
these services as a product. In this work, we develop automatic techniques with
special focus on detecting such ghostwriting in high school assignments. This
is done by training deep neural networks on an unprecedented large amount of
data supplied by the Danish company MaCom, which covers 90% of Danish high
schools. We achieve an accuracy of 0.875 and an AUC score of 0.947 on an evenly
split data set.
| 2,019 | Computation and Language |
Towards Lossless Encoding of Sentences | A lot of work has been done in the field of image compression via machine
learning, but not much attention has been given to the compression of natural
language. Compressing text into lossless representations while making features
easily retrievable is not a trivial task, yet has huge benefits. Most methods
designed to produce feature rich sentence embeddings focus solely on performing
well on downstream tasks and are unable to properly reconstruct the original
sequence from the learned embedding. In this work, we propose a near lossless
method for encoding long sequences of texts as well as all of their
sub-sequences into feature rich representations. We test our method on
sentiment analysis and show good performance across all sub-sentence and
sentence embeddings.
| 2,019 | Computation and Language |
Detecting Syntactic Change Using a Neural Part-of-Speech Tagger | We train a diachronic long short-term memory (LSTM) part-of-speech tagger on
a large corpus of American English from the 19th, 20th, and 21st centuries. We
analyze the tagger's ability to implicitly learn temporal structure between
years, and the extent to which this knowledge can be transferred to date new
sentences. The learned year embeddings show a strong linear correlation between
their first principal component and time. We show that temporal information
encoded in the model can be used to predict novel sentences' years of
composition relatively well. Comparisons to a feedforward baseline suggest that
the temporal change learned by the LSTM is syntactic rather than purely
lexical. Thus, our results suggest that our tagger is implicitly learning to
model syntactic change in American English over the course of the 19th, 20th,
and early 21st centuries.
| 2,019 | Computation and Language |
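The year-embedding analysis above amounts to a PCA plus a correlation; the sketch below shows the computation on placeholder data (with the trained embeddings, the paper reports a strong linear relationship between the first principal component and time).

```python
# Check how strongly the first principal component of learned year embeddings
# correlates with the year itself. The embedding matrix here is random
# stand-in data, not the tagger's trained vectors.
import numpy as np
from sklearn.decomposition import PCA

years = np.arange(1810, 2011, 10)
year_embeddings = np.random.default_rng(0).standard_normal((len(years), 64))  # placeholder

pc1 = PCA(n_components=1).fit_transform(year_embeddings).ravel()
r = np.corrcoef(pc1, years)[0, 1]
print(f"correlation of first principal component with year: {r:.2f}")
```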
Post-editing Productivity with Neural Machine Translation: An Empirical
Assessment of Speed and Quality in the Banking and Finance Domain | Neural machine translation (NMT) has set new quality standards in automatic
translation, yet its effect on post-editing productivity is still pending
thorough investigation. We empirically test how the inclusion of NMT, in
addition to domain-specific translation memories and termbases, impacts speed
and quality in professional translation of financial texts. We find that even
with language pairs that have received little attention in research settings
and small amounts of in-domain data for system adaptation, NMT post-editing
allows for substantial time savings and leads to equal or slightly better
quality.
| 2,019 | Computation and Language |
Time-Out: Temporal Referencing for Robust Modeling of Lexical Semantic
Change | State-of-the-art models of lexical semantic change detection suffer from
noise stemming from vector space alignment. We have empirically tested the
Temporal Referencing method for lexical semantic change and show that, by
avoiding alignment, it is less affected by this noise. We show that, trained on
a diachronic corpus, the skip-gram with negative sampling architecture with
temporal referencing outperforms alignment models on a synthetic task as well
as a manual testset. We introduce a principled way to simulate lexical semantic
change and systematically control for possible biases.
| 2,020 | Computation and Language |
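A hedged sketch of the Temporal Referencing preprocessing step as commonly described: occurrences of the target words are replaced by time-tagged tokens while context words stay untouched, so a single skip-gram-with-negative-sampling model trained over the whole diachronic corpus yields one already-aligned vector per target word and time bin. The tag format, toy corpus and gensim (version 4 API) call below are illustrative assumptions, not the paper's exact setup.

```python
# Temporal Referencing sketch: tag only the target words with their time bin,
# then train one SGNS model over the whole corpus -- no vector space alignment.
from gensim.models import Word2Vec

targets = {"gay", "broadcast"}

def temporal_reference(sentence, period, targets):
    return [f"{w}_{period}" if w in targets else w for w in sentence]

corpus = [
    (["the", "gay", "company", "sang"], "1850s"),
    (["they", "broadcast", "the", "seed"], "1850s"),
    (["the", "gay", "rights", "march"], "1990s"),
    (["networks", "broadcast", "the", "match"], "1990s"),
]
tagged = [temporal_reference(s, period, targets) for s, period in corpus]

model = Word2Vec(sentences=tagged, vector_size=50, sg=1, negative=5, min_count=1)
print(model.wv.similarity("gay_1850s", "gay_1990s"))  # low similarity = change signal
```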
Open Sesame: Getting Inside BERT's Linguistic Knowledge | How and to what extent does BERT encode syntactically-sensitive hierarchical
information or positionally-sensitive linear information? Recent work has shown
that contextual representations like BERT perform well on tasks that require
sensitivity to linguistic structure. We present here two studies which aim to
provide a better understanding of the nature of BERT's representations. The
first of these focuses on the identification of structurally-defined elements
using diagnostic classifiers, while the second explores BERT's representation
of subject-verb agreement and anaphor-antecedent dependencies through a
quantitative assessment of self-attention vectors. In both cases, we find that
BERT encodes positional information about word tokens well on its lower layers,
but switches to a hierarchically-oriented encoding on higher layers. We
conclude then that BERT's representations do indeed model linguistically
relevant aspects of hierarchical structure, though they do not appear to show
the sharp sensitivity to hierarchical structure that is found in human
processing of reflexive anaphora.
| 2,019 | Computation and Language |
Improving Neural Language Models by Segmenting, Attending, and
Predicting the Future | Common language models typically predict the next word given the context. In
this work, we propose a method that improves language modeling by learning to
align the given context and the following phrase. The model does not require
any linguistic annotation of phrase segmentation. Instead, we define syntactic
heights and phrase segmentation rules, enabling the model to automatically
induce phrases, recognize their task-specific heads, and generate phrase
embeddings in an unsupervised learning manner. Our method can easily be applied
to language models with different network architectures since an independent
module is used for phrase induction and context-phrase alignment, and no change
is required in the underlying language modeling network. Experiments have shown
that our model outperformed several strong baseline models on different data
sets. We achieved a new state-of-the-art performance of 17.4 perplexity on the
Wikitext-103 dataset. Additionally, visualizing the outputs of the phrase
induction module showed that our model is able to learn approximate
phrase-level structural knowledge without any annotation.
| 2,019 | Computation and Language |
An Introduction to a New Text Classification and Visualization for
Natural Language Processing Using Topological Data Analysis | Topological Data Analysis (TDA) is a novel, fast-growing field of data
science providing a set of new topological and geometric tools to derive
relevant features out of complex high-dimensional data. In this paper we apply
two of the best-known methods in topological data analysis, "Persistent
Homology" and "Mapper", in order to classify Persian poems composed by two of
the best-known Iranian poets, namely "Ferdowsi" and "Hafez". This article has
two main parts: in the first part we explain the mathematics behind these two
methods in a way that is easy to understand for a general audience, and in the
second part we describe our models and the results of applying TDA tools to
NLP.
| 2,019 | Computation and Language |
SemEval-2019 Task 8: Fact Checking in Community Question Answering
Forums | We present SemEval-2019 Task 8 on Fact Checking in Community Question
Answering Forums, which features two subtasks. Subtask A is about deciding
whether a question asks for factual information vs. an opinion/advice vs. just
socializing. Subtask B asks to predict whether an answer to a factual question
is true, false or not a proper answer. We received 17 official submissions for
subtask A and 11 official submissions for Subtask B. For subtask A, all systems
improved over the majority class baseline. For Subtask B, all systems were
below a majority class baseline, but several systems were very close to it. The
leaderboard and the data from the competition can be found at
http://competitions.codalab.org/competitions/20022
| 2,019 | Computation and Language |
The Unreasonable Effectiveness of Transformer Language Models in
Grammatical Error Correction | Recent work on Grammatical Error Correction (GEC) has highlighted the
importance of language modeling in that it is certainly possible to achieve
good performance by comparing the probabilities of the proposed edits. At the
same time, advancements in language modeling have managed to generate
linguistic output, which is almost indistinguishable from that of
human-generated text. In this paper, we up the ante by exploring the potential
of more sophisticated language models in GEC and offer some key insights on
their strengths and weaknesses. We show that, in line with recent results in
other NLP tasks, Transformer architectures achieve consistently high
performance and provide a competitive baseline for future machine learning
models.
| 2,019 | Computation and Language |
Multi-News: a Large-Scale Multi-Document Summarization Dataset and
Abstractive Hierarchical Model | Automatic generation of summaries from multiple news articles is a valuable
tool as the number of online publications grows rapidly. Single document
summarization (SDS) systems have benefited from advances in neural
encoder-decoder models thanks to the availability of large datasets. However,
multi-document summarization (MDS) of news articles has been limited to
datasets of a couple of hundred examples. In this paper, we introduce
Multi-News, the first large-scale MDS news dataset. Additionally, we propose an
end-to-end model which incorporates a traditional extractive summarization
model with a standard SDS model and achieves competitive results on MDS
datasets. We benchmark several methods on Multi-News and release our data and
code in hope that this work will promote advances in summarization in the
multi-document setting.
| 2,019 | Computation and Language |
Revisiting Joint Modeling of Cross-document Entity and Event Coreference
Resolution | Recognizing coreferring events and entities across multiple texts is crucial
for many NLP applications. Despite the task's importance, research focus was
given mostly to within-document entity coreference, with rather little
attention to the other variants. We propose a neural architecture for
cross-document coreference resolution. Inspired by Lee et al (2012), we jointly
model entity and event coreference. We represent an event (entity) mention
using its lexical span, surrounding context, and relation to entity (event)
mentions via predicate-argument structures. Our model outperforms the previous
state-of-the-art event coreference model on ECB+, while providing the first
entity coreference results on this corpus. Our analysis confirms that all our
representation elements, including the mention span itself, its context, and
the relation to other mentions contribute to the model's success.
| 2,019 | Computation and Language |
Entity-Centric Contextual Affective Analysis | While contextualized word representations have improved state-of-the-art
benchmarks in many NLP tasks, their potential usefulness for social-oriented
tasks remains largely unexplored. We show how contextualized word embeddings
can be used to capture affect dimensions in portrayals of people. We evaluate
our methodology quantitatively, on held-out affect lexicons, and qualitatively,
through case examples. We find that contextualized word representations do
encode meaningful affect information, but they are heavily biased towards their
training data, which limits their usefulness to in-domain analyses. We
ultimately use our method to examine differences in portrayals of men and
women.
| 2,019 | Computation and Language |
Visual Story Post-Editing | We introduce the first dataset for human edits of machine-generated visual
stories and explore how these collected edits may be used for the visual story
post-editing task. The dataset, VIST-Edit, includes 14,905 human-edited
versions of 2,981 machine-generated visual stories. The stories were generated
by two state-of-the-art visual storytelling models, each aligned to 5
human-edited versions. We establish baselines for the task, showing how a
relatively small set of human edits can be leveraged to boost the performance
of large visual storytelling models. We also discuss the weak correlation
between automatic evaluation scores and human ratings, motivating the need for
new automatic metrics.
| 2,019 | Computation and Language |
Generating Multiple Diverse Responses with Multi-Mapping and Posterior
Mapping Selection | In human conversation an input post is open to multiple potential responses,
which is typically regarded as a one-to-many problem. Promising approaches
mainly incorporate multiple latent mechanisms to build the one-to-many
relationship. However, without accurate selection of the latent mechanism
corresponding to the target response during training, these methods suffer from
a rough optimization of latent mechanisms. In this paper, we propose a
multi-mapping mechanism to better capture the one-to-many relationship, where
multiple mapping modules are employed as latent mechanisms to model the
semantic mappings from an input post to its diverse responses. For accurate
optimization of latent mechanisms, a posterior mapping selection module is
designed to select the corresponding mapping module according to the target
response for further optimization. We also introduce an auxiliary matching loss
to facilitate the optimization of posterior mapping selection. Empirical
results demonstrate the superiority of our model in generating multiple diverse
and informative responses over the state-of-the-art methods.
| 2,019 | Computation and Language |
Learning Deep Transformer Models for Machine Translation | Transformer is the state-of-the-art model in recent machine translation
evaluations. Two strands of research are promising to improve models of this
kind: the first uses wide networks (a.k.a. Transformer-Big) and has been the de
facto standard for the development of the Transformer system, and the other
uses deeper language representation but faces the difficulty arising from
learning deep networks. Here, we continue the line of research on the latter.
We claim that a truly deep Transformer model can surpass the Transformer-Big
counterpart by 1) proper use of layer normalization and 2) a novel way of
passing the combination of previous layers to the next. On WMT'16 English-
German, NIST OpenMT'12 Chinese-English and larger WMT'18 Chinese-English tasks,
our deep system (30/25-layer encoder) outperforms the shallow
Transformer-Big/Base baseline (6-layer encoder) by 0.4-2.4 BLEU points. As
another bonus, the deep model is 1.6X smaller in size and 3X faster in training
than Transformer-Big.
| 2,019 | Computation and Language |
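One common reading of the two ingredients named above is (1) pre-norm residual blocks, with layer normalization applied before each sublayer, and (2) feeding each new layer a learned linear combination of the outputs of all previous layers. The PyTorch sketch below illustrates that reading; it is an assumption-laden illustration, not the paper's exact architecture or hyper-parameters.

```python
# Illustrative sketch: pre-norm residual blocks plus a learned linear
# combination of all previous layers' outputs as the input to each new block.
import torch
import torch.nn as nn

class PreNormBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))

    def forward(self, x):
        h = self.norm1(x)                  # normalise *before* the sublayer
        x = x + self.attn(h, h, h)[0]
        return x + self.ff(self.norm2(x))

class DeepEncoder(nn.Module):
    def __init__(self, n_layers=30, d_model=512):
        super().__init__()
        self.blocks = nn.ModuleList(PreNormBlock(d_model) for _ in range(n_layers))
        # one learnable weight per previous layer, for each block
        self.mix = nn.ParameterList(
            nn.Parameter(torch.ones(i + 1) / (i + 1)) for i in range(n_layers))

    def forward(self, x):
        outputs = [x]
        for block, w in zip(self.blocks, self.mix):
            stacked = torch.stack(outputs, dim=0)           # (n_prev, B, T, D)
            mixed = (w.view(-1, 1, 1, 1) * stacked).sum(0)  # combine previous layers
            outputs.append(block(mixed))
        return outputs[-1]

enc = DeepEncoder(n_layers=6)               # small demo depth
print(enc(torch.randn(2, 7, 512)).shape)    # (2, 7, 512)
```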
Memory Consolidation for Contextual Spoken Language Understanding with
Dialogue Logistic Inference | Dialogue contexts are proven helpful in the spoken language understanding
(SLU) system and they are typically encoded with explicit memory
representations. However, most of the previous models learn the context memory
with only one objective, maximizing the SLU performance, leaving the context
memory under-exploited. In this paper, we propose a new dialogue logistic
inference (DLI) task to consolidate the context memory jointly with SLU in the
multi-task framework. DLI is defined as sorting a shuffled dialogue session
into its original logical order and shares the same memory encoder and
retrieval mechanism as the SLU model. Our experimental results show that
various popular contextual SLU models can benefit from our approach, and
improvements are quite impressive, especially in slot filling.
| 2,019 | Computation and Language |
DOER: Dual Cross-Shared RNN for Aspect Term-Polarity Co-Extraction | This paper focuses on two related subtasks of aspect-based sentiment
analysis, namely aspect term extraction and aspect sentiment classification,
which we call aspect term-polarity co-extraction. The former task is to extract
aspects of a product or service from an opinion document, and the latter is to
identify the polarity expressed in the document about these extracted aspects.
Most existing algorithms address them as two separate tasks and solve them one
by one, or only perform one task, which can be complicated for real
applications. In this paper, we treat these two tasks as two sequence labeling
problems and propose a novel Dual crOss-sharEd RNN framework (DOER) to generate
all aspect term-polarity pairs of the input sentence simultaneously.
Specifically, DOER involves a dual recurrent neural network to extract the
respective representation of each task, and a cross-shared unit to consider the
relationship between them. Experimental results demonstrate that the proposed
framework outperforms state-of-the-art baselines on three benchmark datasets.
| 2,019 | Computation and Language |
Towards Multimodal Sarcasm Detection (An _Obviously_ Perfect Paper) | Sarcasm is often expressed through several verbal and non-verbal cues, e.g.,
a change of tone, overemphasis in a word, a drawn-out syllable, or a straight
looking face. Most of the recent work in sarcasm detection has been carried out
on textual data. In this paper, we argue that incorporating multimodal cues can
improve the automatic classification of sarcasm. As a first step towards
enabling the development of multimodal approaches for sarcasm detection, we
propose a new sarcasm dataset, Multimodal Sarcasm Detection Dataset (MUStARD),
compiled from popular TV shows. MUStARD consists of audiovisual utterances
annotated with sarcasm labels. Each utterance is accompanied by its context of
historical utterances in the dialogue, which provides additional information on
the scenario where the utterance occurs. Our initial results show that the use
of multimodal information can reduce the relative error rate of sarcasm
detection by up to 12.9% in F-score when compared to the use of individual
modalities. The full dataset is publicly available for use at
https://github.com/soujanyaporia/MUStARD
| 2,019 | Computation and Language |
ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in
Arabic Levantine Tweets | Sentiment analysis is a highly subjective and challenging task. Its
complexity further increases when applied to the Arabic language, mainly
because of the large variety of dialects that are unstandardized and widely
used in the Web, especially in social media. While many datasets have been
released to train sentiment classifiers in Arabic, most of these datasets
contain shallow annotation, only marking the sentiment of the text unit, as a
word, a sentence or a document. In this paper, we present the Arabic Sentiment
Twitter Dataset for the Levantine dialect (ArSenTD-LEV). Based on findings from
analyzing tweets from the Levant region, we created a dataset of 4,000 tweets
with the following annotations: the overall sentiment of the tweet, the target
to which the sentiment was expressed, how the sentiment was expressed, and the
topic of the tweet. Results confirm the importance of these annotations at
improving the performance of a baseline sentiment classifier. They also confirm
the performance gap that arises when training in one domain and testing in
another.
| 2,019 | Computation and Language |
A Hierarchical Reinforced Sequence Operation Method for Unsupervised
Text Style Transfer | Unsupervised text style transfer aims to alter text styles while preserving
the content, without aligned data for supervision. Existing seq2seq methods
face three challenges: 1) the transfer is weakly interpretable, 2) generated
outputs struggle in content preservation, and 3) the trade-off between content
and style is intractable. To address these challenges, we propose a
hierarchical reinforced sequence operation method, named Point-Then-Operate
(PTO), which consists of a high-level agent that proposes operation positions
and a low-level agent that alters the sentence. We provide comprehensive
training objectives to control the fluency, style, and content of the outputs
and a mask-based inference algorithm that allows for multi-step revision based
on the single-step trained agents. Experimental results on two text style
transfer datasets show that our method significantly outperforms recent methods
and effectively addresses the aforementioned challenges.
| 2,019 | Computation and Language |
Automatic Generation of High Quality CCGbanks for Parser Domain
Adaptation | We propose a new domain adaptation method for Combinatory Categorial Grammar
(CCG) parsing, based on the idea of automatic generation of CCG corpora
exploiting cheaper resources of dependency trees. Our solution is conceptually
simple and does not rely on a specific parser architecture, making it applicable
to the current best-performing parsers. We conduct extensive parsing
experiments with detailed discussion; on top of existing benchmark datasets on
(1) biomedical texts and (2) question sentences, we create experimental
datasets of (3) speech conversation and (4) math problems. When combined with
the proposed method, an off-the-shelf CCG parser shows significant performance
gains, improving from 90.7% to 96.6% on speech conversation, and from 88.5% to
96.8% on math problems.
| 2,019 | Computation and Language |
Improving Textual Network Embedding with Global Attention via Optimal
Transport | Constituting highly informative network embeddings is an important tool for
network analysis. It encodes network topology, along with other useful side
information, into low-dimensional node-based feature representations that can
be exploited by statistical modeling. This work focuses on learning
context-aware network embeddings augmented with text data. We reformulate the
network-embedding problem, and present two novel strategies to improve over
traditional attention mechanisms: ($i$) a content-aware sparse attention module
based on optimal transport, and ($ii$) a high-level attention parsing module.
Our approach yields naturally sparse and self-normalized relational inference.
It can capture long-term interactions between sequences, thus addressing the
challenges faced by existing textual network embedding schemes. Extensive
experiments are conducted to demonstrate our model can consistently outperform
alternative state-of-the-art methods.
| 2,019 | Computation and Language |
Terminology-based Text Embedding for Computing Document Similarities on
Technical Content | We propose in this paper a new, hybrid document embedding approach in order
to address the problem of document similarities with respect to the technical
content. To do so, we employ state-of-the-art graph techniques to first
extract the keyphrases (composite keywords) of documents and, then, use them to
score the sentences. Using the ranked sentences, we propose two approaches to
embed documents and show their performances with respect to two baselines. With
domain expert annotations, we illustrate that the proposed methods can find
more relevant documents and outperform the baselines by up to 27% in terms of
NDCG.
| 2,019 | Computation and Language |
A Resource-Free Evaluation Metric for Cross-Lingual Word Embeddings
Based on Graph Modularity | Cross-lingual word embeddings encode the meaning of words from different
languages into a shared low-dimensional space. An important requirement for
many downstream tasks is that word similarity should be independent of language
- i.e., word vectors within one language should not be more similar to each
other than to words in another language. We measure this characteristic using
modularity, a network measure of the strength of clusters in a graph.
Modularity has a moderate to strong correlation with three downstream
tasks, even though modularity is based only on the structure of embeddings and
does not require any external resources. We show through experiments that
modularity can serve as an intrinsic validation metric to improve unsupervised
cross-lingual word embeddings, particularly on distant language pairs in
low-resource settings.
| 2,022 | Computation and Language |
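A sketch of the modularity diagnostic described above: build a k-nearest-neighbour graph over a bilingual embedding space, treat the two languages as communities, and compute Newman modularity by hand. A value close to 1 means vectors cluster by language rather than by meaning. The random embeddings are placeholder data, and the k-NN construction is an assumption about how the graph is built.

```python
# Modularity of a k-NN graph over cross-lingual embeddings, with languages as
# the community assignment. High modularity = embeddings cluster by language.
import numpy as np

def knn_modularity(emb, languages, k=10):
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = emb @ emb.T
    np.fill_diagonal(sims, -np.inf)
    n = len(emb)
    A = np.zeros((n, n))
    for i in range(n):                              # undirected k-NN adjacency
        for j in np.argpartition(-sims[i], k)[:k]:
            A[i, j] = A[j, i] = 1.0
    m = A.sum() / 2.0
    deg = A.sum(axis=1)
    same = np.asarray(languages)[:, None] == np.asarray(languages)[None, :]
    # Newman modularity: Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m] * delta(c_i, c_j)
    return ((A - np.outer(deg, deg) / (2 * m)) * same).sum() / (2 * m)

rng = np.random.default_rng(0)
emb = rng.standard_normal((200, 50))               # placeholder bilingual space
langs = ["en"] * 100 + ["ja"] * 100
print(knn_modularity(emb, langs, k=10))
```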
Learning Bilingual Sentence Embeddings via Autoencoding and Computing
Similarities with a Multilayer Perceptron | We propose a novel model architecture and training algorithm to learn
bilingual sentence embeddings from a combination of parallel and monolingual
data. Our method connects autoencoding and neural machine translation to force
the source and target sentence embeddings to share the same space without the
help of a pivot language or an additional transformation. We train a multilayer
perceptron on top of the sentence embeddings to extract good bilingual sentence
pairs from nonparallel or noisy parallel data. Our approach shows promising
performance on sentence alignment recovery and the WMT 2018 parallel corpus
filtering tasks with only a single model.
| 2,019 | Computation and Language |
Automated Speech Generation from UN General Assembly Statements: Mapping
Risks in AI Generated Texts | Automated text generation has been applied broadly in many domains such as
marketing and robotics, and used to create chatbots, product reviews and write
poetry. The ability to synthesize text, however, presents many potential risks,
while access to the technology required to build generative models is becoming
increasingly easy. This work is aligned with the efforts of the United Nations
and other civil society organisations to highlight potential political and
societal risks arising through the malicious use of text generation software,
and their potential impact on human rights. As a case study, we present the
findings of an experiment to generate remarks in the style of political leaders
by fine-tuning a pretrained AWD-LSTM model on a dataset of speeches made at
the UN General Assembly. This work highlights the ease with which this can be
accomplished, as well as the threats of combining these techniques with other
technologies.
| 2,019 | Computation and Language |
From Balustrades to Pierre Vinken: Looking for Syntax in Transformer
Self-Attentions | We inspect the multi-head self-attention in Transformer NMT encoders for
three source languages, looking for patterns that could have a syntactic
interpretation. In many of the attention heads, we frequently find sequences of
consecutive states attending to the same position, which resemble syntactic
phrases. We propose a transparent deterministic method of quantifying the
amount of syntactic information present in the self-attentions, based on
automatically building and evaluating phrase-structure trees from the
phrase-like sequences. We compare the resulting trees to existing constituency
treebanks, both manually and by computing precision and recall.
| 2,019 | Computation and Language |
Triple-to-Text: Converting RDF Triples into High-Quality Natural
Languages via Optimizing an Inverse KL Divergence | Knowledge base is one of the main forms to represent information in a
structured way. A knowledge base typically consists of Resource Description
Framework (RDF) triples, which describe the entities and their relations.
Generating natural language description of the knowledge base is an important
task in NLP, which has been formulated as a conditional language generation
task and tackled using the sequence-to-sequence framework. Current works mostly
train the language models by maximum likelihood estimation, which tends to
generate lousy sentences. In this paper, we argue that such a problem of
maximum likelihood estimation is intrinsic and generally cannot be remedied by
changing network structures. Accordingly, we propose a novel Triple-to-Text
(T2T) framework, which approximately optimizes the inverse Kullback-Leibler
(KL) divergence between the distributions of the real and generated sentences.
Due to the nature that inverse KL imposes large penalty on fake-looking
samples, the proposed method can significantly reduce the probability of
generating low-quality sentences. Our experiments on three real-world datasets
demonstrate that T2T can generate higher-quality sentences and outperform
baseline models in several evaluation metrics.
| 2,019 | Computation and Language |
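The penalty asymmetry invoked above can be made precise with the standard definitions; the formulas below are textbook identities, not notation taken from the paper. Writing $p$ for the data distribution and $q_\theta$ for the generator,

$$D_{\mathrm{KL}}(p \,\|\, q_\theta) = \mathbb{E}_{x \sim p}\left[\log \frac{p(x)}{q_\theta(x)}\right], \qquad D_{\mathrm{KL}}(q_\theta \,\|\, p) = \mathbb{E}_{x \sim q_\theta}\left[\log \frac{q_\theta(x)}{p(x)}\right].$$

Maximum likelihood minimises the first (forward) direction, which only asks the model to put mass on every real sentence; the second (inverse) direction is an expectation under the model's own samples, so a generated sentence $x$ with $p(x) \approx 0$ (a fake-looking sample) contributes a large $\log \frac{q_\theta(x)}{p(x)}$ term, which is why optimising the inverse direction suppresses low-quality generations.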
A Hierarchical Decoder with Three-level Hierarchical Attention to
Generate Abstractive Summaries of Interleaved Texts | Interleaved texts, where posts belonging to different threads occur in one
sequence, are a common occurrence, e.g., online chat conversations. To quickly
obtain an overview of such texts, existing systems first disentangle the posts
by threads and then extract summaries from those threads. The major issues with
such systems are error propagation and non-fluent summary. To address those, we
propose an end-to-end trainable hierarchical encoder-decoder system. We also
introduce a novel hierarchical attention mechanism which combines three levels
of information from an interleaved text, i.e., posts, phrases and words, and
implicitly disentangles the threads. We evaluated the proposed system on
multiple interleaved text datasets, and it out-performs a SOTA two-step system
by 20-40%.
| 2,020 | Computation and Language |