Titles | Abstracts | Years | Categories |
---|---|---|---|
The Computational Structure of Unintentional Meaning | Speech-acts can have literal meaning as well as pragmatic meaning, but these
both involve consequences typically intended by a speaker. Speech-acts can also
have unintentional meaning, in which what is conveyed goes above and beyond
what was intended. Here, we present a Bayesian analysis of how, to a listener,
the meaning of an utterance can significantly differ from a speaker's intended
meaning. Our model emphasizes how comprehending the intentional and
unintentional meaning of speech-acts requires listeners to engage in
sophisticated model-based perspective-taking and reasoning about the history of
the state of the world, each other's actions, and each other's observations. To
test our model, we have human participants make judgments about vignettes where
speakers make utterances that could be interpreted as intentional insults or
unintentional faux pas. In elucidating the mechanics of speech-acts with
unintentional meanings, our account provides insight into how communication
both functions and malfunctions.
| 2019 | Computation and Language |
Every child should have parents: a taxonomy refinement algorithm based
on hyperbolic term embeddings | We introduce the use of Poincar\'e embeddings to improve existing
state-of-the-art approaches to domain-specific taxonomy induction from text as
a signal for both relocating wrong hyponym terms within a (pre-induced)
taxonomy as well as for attaching disconnected terms in a taxonomy. This method
substantially improves previous state-of-the-art results on the SemEval-2016
Task 13 on taxonomy extraction. We demonstrate the superiority of Poincar\'e
embeddings over distributional semantic representations, supporting the
hypothesis that they can better capture hierarchical lexical-semantic
relationships than embeddings in the Euclidean space.
| 2019 | Computation and Language |
Imitation Learning for Non-Autoregressive Neural Machine Translation | Non-autoregressive translation models (NAT) have achieved impressive
inference speedup. A potential issue of the existing NAT algorithms, however,
is that the decoding is conducted in parallel, without directly considering
previous context. In this paper, we propose an imitation learning framework for
non-autoregressive machine translation, which retains fast translation speed
while achieving translation performance comparable to its autoregressive
counterpart. We conduct experiments on the IWSLT16, WMT14 and
WMT16 datasets. Our proposed model achieves a significant speedup over the
autoregressive models, while keeping the translation quality comparable to the
autoregressive models. By sampling sentence length in parallel at inference
time, we achieve the performance of 31.85 BLEU on WMT16 Ro$\rightarrow$En and
30.68 BLEU on IWSLT16 En$\rightarrow$De.
| 2019 | Computation and Language |
The FRENK Datasets of Socially Unacceptable Discourse in Slovene and
English | In this paper we present datasets of Facebook comment threads to mainstream
media posts in Slovene and English developed inside the Slovene national
project FRENK which cover two topics, migrants and LGBT, and are manually
annotated for different types of socially unacceptable discourse (SUD). The
main advantages of these datasets compared to the existing ones are identical
sampling procedures, producing comparable data across languages and an
annotation schema that takes into account six types of SUD and five targets at
which SUD is directed. We describe the sampling and annotation procedures, and
analyze the annotation distributions and inter-annotator agreements. We
consider this dataset to be an important milestone in understanding and
combating SUD for both languages.
| 2019 | Computation and Language |
KAS-term: Extracting Slovene Terms from Doctoral Theses via Supervised
Machine Learning | This paper presents a dataset and supervised learning experiments for term
extraction from Slovene academic texts. Term candidates in the dataset were
extracted via morphosyntactic patterns and annotated for their termness by four
annotators. Experiments on the dataset show that most co-occurrence statistics,
applied after morphosyntactic patterns and a frequency threshold, perform close
to random and that the results can be significantly improved by combining, with
supervised machine learning, all the seven statistic measures included in the
dataset. On multi-word terms, the model using all statistics obtains an AUC of
0.736 while the best single statistic produces an AUC of only 0.590. Among the
many additional candidate features, only adding multi-word morphosyntactic
pattern information and the length of single-word term candidates achieves
further improvements.
| 2019 | Computation and Language |
Neural Legal Judgment Prediction in English | Legal judgment prediction is the task of automatically predicting the outcome
of a court case, given a text describing the case's facts. Previous work on
using neural models for this task has focused on Chinese; only feature-based
models (e.g., using bags of words and topics) have been considered in English.
We release a new English legal judgment prediction dataset, containing cases
from the European Court of Human Rights. We evaluate a broad variety of neural
models on the new dataset, establishing strong baselines that surpass previous
feature-based models in three tasks: (1) binary violation classification; (2)
multi-label classification; (3) case importance prediction. We also explore if
models are biased towards demographic information via data anonymization. As a
side-product, we propose a hierarchical version of BERT, which bypasses BERT's
length limitation.
| 2019 | Computation and Language |
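The hierarchical BERT idea mentioned in this abstract can be sketched roughly as follows: split the case text into segments, encode each segment with BERT, and aggregate the segment vectors. This is a minimal illustrative sketch, not the authors' implementation; the segment length, BiGRU aggregator, and label count are assumptions.

```python
# Illustrative sketch (not the paper's implementation): encode segments of a
# long case with BERT, then aggregate segment [CLS] vectors with a BiGRU.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class HierarchicalBertClassifier(nn.Module):
    def __init__(self, num_labels=2, max_segments=16):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.max_segments = max_segments                      # assumed cap
        self.gru = nn.GRU(self.bert.config.hidden_size, 256,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(512, num_labels)

    def forward(self, input_ids, attention_mask):
        # input_ids: (num_segments, seq_len) for a single document
        input_ids = input_ids[: self.max_segments]
        attention_mask = attention_mask[: self.max_segments]
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0, :]
        _, h = self.gru(cls.unsqueeze(0))                     # aggregate segments
        doc = torch.cat([h[0], h[1]], dim=-1)                 # (1, 512)
        return self.classifier(doc)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
segments = ["The applicant complained that ...", "The Court holds that ..."]
enc = tokenizer(segments, padding=True, truncation=True,
                max_length=128, return_tensors="pt")
model = HierarchicalBertClassifier(num_labels=2)
print(model(enc["input_ids"], enc["attention_mask"]).shape)  # torch.Size([1, 2])
```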
Learning to Rank for Plausible Plausibility | Researchers illustrate improvements in contextual encoding strategies via
resultant performance on a battery of shared Natural Language Understanding
(NLU) tasks. Many of these tasks are of a categorical prediction variety: given
a conditioning context (e.g., an NLI premise), provide a label based on an
associated prompt (e.g., an NLI hypothesis). The categorical nature of these
tasks has led to common use of a cross entropy log-loss objective during
training. We suggest this loss is intuitively wrong when applied to
plausibility tasks, where the prompt by design is neither categorically
entailed nor contradictory given the context. Log-loss naturally drives models
to assign scores near 0.0 or 1.0, in contrast to our proposed use of a
margin-based loss. Following a discussion of our intuition, we describe a
confirmation study based on an extreme, synthetically curated task derived from
MultiNLI. We find that a margin-based loss leads to a more plausible model of
plausibility. Finally, we illustrate improvements on the Choice Of Plausible
Alternative (COPA) task through this change in loss.
| 2019 | Computation and Language |
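The contrast between a log-loss and a margin-based objective described in this abstract can be made concrete with a small PyTorch sketch; the margin value and the toy scores are assumptions for illustration only.

```python
# Toy contrast between a cross-entropy (log-loss) objective and a
# margin-based ranking objective for plausibility scores (illustrative only).
import torch
import torch.nn.functional as F

score_plausible   = torch.tensor([1.2, 0.4], requires_grad=True)  # model scores
score_implausible = torch.tensor([0.9, 0.6], requires_grad=True)

# Log-loss treats plausibility as a hard 1/0 label and pushes the
# sigmoid of each score towards the extremes.
log_loss = F.binary_cross_entropy_with_logits(
    score_plausible, torch.ones_like(score_plausible))

# A margin ranking loss only asks the plausible option to outrank the
# implausible one by a margin (0.5 here is an assumed value).
margin_loss = F.margin_ranking_loss(
    score_plausible, score_implausible,
    target=torch.ones_like(score_plausible), margin=0.5)

print(log_loss.item(), margin_loss.item())
```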
Classifying Norm Conflicts using Learned Semantic Representations | While most social norms are informal, they are often formalized by companies
in contracts to regulate trades of goods and services. When poorly written,
contracts may contain normative conflicts resulting from opposing deontic
meanings or contradictory specifications. As contracts tend to be long and
contain many norms, manually identifying such conflicts requires human effort,
which is time-consuming and error-prone. Automating this task benefits contract
makers by increasing productivity and making conflict identification more
reliable. To address this problem, we introduce an approach to detect and
classify norm conflicts in contracts by converting them into latent
representations that preserve both syntactic and semantic information and
training a model to classify norm conflicts into four conflict types. Our
results set a new state of the art when compared to a previous approach.
| 2019 | Computation and Language |
SP-10K: A Large-scale Evaluation Set for Selectional Preference
Acquisition | Selectional Preference (SP) is a commonly observed language phenomenon that has
proved useful in many natural language processing tasks. To provide a
better evaluation method for SP models, we introduce SP-10K, a large-scale
evaluation set that provides human ratings for the plausibility of 10,000 SP
pairs over five SP relations, covering the 2,500 most frequent verbs, nouns, and
adjectives in American English. Three representative SP acquisition methods
based on pseudo-disambiguation are evaluated with SP-10K. To demonstrate the
importance of our dataset, we investigate the relationship between SP-10K and
the commonsense knowledge in ConceptNet5 and show the potential of using SP to
represent the commonsense knowledge. We also use the Winograd Schema Challenge
to prove that the proposed new SP relations are essential for the hard pronoun
coreference resolution problem.
| 2019 | Computation and Language |
PatentBERT: Patent Classification with Fine-Tuning a pre-trained BERT
Model | In this work we focus on fine-tuning a pre-trained BERT model and applying it
to patent classification. When applied to large datasets of over two million
patents, our approach outperforms the previous state of the art, an approach
using a CNN with word embeddings. In addition, we focus on patent claims alone,
ignoring other parts of the patent documents. Our contributions include: (1) a
new state-of-the-art method based on fine-tuning a pre-trained BERT model for
patent classification, (2) a large dataset, USPTO-3M, at the CPC subclass level
with SQL statements that can be used by future researchers, and (3) showing
that patent claims alone are sufficient for the classification task, in
contrast to conventional wisdom.
| 2019 | Computation and Language |
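A hedged sketch of the general fine-tuning recipe using the HuggingFace `transformers` library follows; the model name, label count, and the single gold label below are assumptions, not the paper's exact setup.

```python
# Minimal fine-tuning sketch: BERT over patent claim text with a
# multi-label classification head. Label count, model name, and the
# gold label index below are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

NUM_LABELS = 656  # e.g. number of CPC subclasses (assumed)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification")

claims = ["1. A method for transmitting data over a wireless channel ..."]
labels = torch.zeros((1, NUM_LABELS))
labels[0, 42] = 1.0  # hypothetical gold subclass index

enc = tokenizer(claims, truncation=True, padding=True,
                max_length=512, return_tensors="pt")
out = model(**enc, labels=labels)
out.loss.backward()            # one gradient step (optimizer omitted for brevity)
print(out.logits.shape)        # torch.Size([1, 656])
```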
Strong and Simple Baselines for Multimodal Utterance Embeddings | Human language is a rich multimodal signal consisting of spoken words, facial
expressions, body gestures, and vocal intonations. Learning representations for
these spoken utterances is a complex research problem due to the presence of
multiple heterogeneous sources of information. Recent advances in multimodal
learning have followed the general trend of building more complex models that
utilize various attention, memory and recurrent components. In this paper, we
propose two simple but strong baselines to learn embeddings of multimodal
utterances. The first baseline assumes a conditional factorization of the
utterance into unimodal factors. Each unimodal factor is modeled using the
simple form of a likelihood function obtained via a linear transformation of
the embedding. We show that the optimal embedding can be derived in closed form
by taking a weighted average of the unimodal features. In order to capture
richer representations, our second baseline extends the first by factorizing
into unimodal, bimodal, and trimodal factors, while retaining simplicity and
efficiency during learning and inference. From a set of experiments across two
tasks, we show strong performance on both supervised and semi-supervised
multimodal prediction, as well as significant (10 times) speedups over neural
models during inference. Overall, we believe that our strong baseline models
offer new benchmarking options for future research in multimodal learning.
| 2020 | Computation and Language |
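The first baseline's key step, an utterance embedding obtained as a weighted average of unimodal features, can be illustrated with a toy sketch; the weights here are placeholders, whereas the paper derives them in closed form from its likelihood model.

```python
# Toy sketch: utterance embedding as a weighted average of unimodal
# features. The weights are placeholders; the paper derives them in
# closed form from its factorized likelihood model.
import numpy as np

d = 8                                   # embedding dimension (assumed)
text, audio, visual = (np.random.randn(d) for _ in range(3))

weights = np.array([0.5, 0.3, 0.2])     # placeholder modality weights
features = np.stack([text, audio, visual])                 # (3, d)
embedding = (weights[:, None] * features).sum(axis=0) / weights.sum()
print(embedding.shape)                  # (8,)
```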
Extractive Summarization via Weighted Dissimilarity and Importance
Aligned Key Iterative Algorithm | We present an importance-aligned key iterative algorithm for extractive
summarization that is faster than conventional algorithms while preserving
accuracy. The computational complexity of our algorithm is O($SN\log N$) for
summarizing $N$ original sentences into a final $S$ sentences. Our algorithm
maximizes the weighted dissimilarity, defined as the product of importance and
cosine dissimilarity, so that the summary represents the document while the
sentences of the summary remain dissimilar to each other. The weighted
dissimilarity is heuristically maximized by iterative greedy search and binary
search over the sentences ordered by importance. Finally, we report a benchmark
score based on summarization of customer product reviews, which shows that the
quality of our algorithm is comparable to humans and existing algorithms. We
provide the source code of our algorithm on GitHub at
https://github.com/qhapaq-49/imakita .
| 2019 | Computation and Language |
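A simplified sketch of the greedy selection idea follows, choosing sentences that maximize importance times cosine dissimilarity to the already-selected summary; the paper's actual procedure additionally uses binary search over the importance-ordered sentences, and the released code at the repository above is authoritative.

```python
# Greedy sketch: pick S sentences maximizing importance * cosine
# dissimilarity to the already-selected summary. Simplified relative to
# the paper's iterative greedy + binary search procedure.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def summarize(sent_vecs, importance, S):
    selected = []
    for _ in range(S):
        best, best_score = None, -np.inf
        for i, v in enumerate(sent_vecs):
            if i in selected:
                continue
            # dissimilarity to the closest already-selected sentence
            dissim = min((1 - cosine(v, sent_vecs[j]) for j in selected),
                         default=1.0)
            score = importance[i] * dissim
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

vecs = np.random.randn(6, 16)                 # 6 toy sentence vectors
imp = np.array([0.9, 0.2, 0.7, 0.4, 0.8, 0.1])
print(summarize(vecs, imp, S=3))
```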
An Approach for Process Model Extraction By Multi-Grained Text
Classification | Process model extraction (PME) is a recently emerged interdisciplinary task between
natural language processing (NLP) and business process management (BPM), which
aims to extract process models from textual descriptions. Previous process
extractors heavily depend on manual features and ignore the potential relations
between clues of different text granularities. In this paper, we formalize the
PME task as a multi-grained text classification problem and propose a
hierarchical neural network to effectively model and extract multi-grained
information without manually defined procedural features. Under this structure,
we accordingly propose a coarse-to-fine (grained) learning mechanism, training
multi-grained tasks in coarse-to-fine order so that high-level knowledge is
shared with the low-level tasks. To evaluate our approach, we
construct two multi-grained datasets from two different domains and conduct
extensive experiments from different dimensions. The experimental results
demonstrate that our approach outperforms the state-of-the-art methods with
statistical significance and further investigations demonstrate its
effectiveness.
| 2020 | Computation and Language |
Recovering Dropped Pronouns in Chinese Conversations via Modeling Their
Referents | Pronouns are often dropped in Chinese sentences, and this happens more
frequently in conversational genres as their referents can be easily understood
from context. Recovering dropped pronouns is essential to applications such as
Information Extraction where the referents of these dropped pronouns need to be
resolved, or Machine Translation when Chinese is the source language. In this
work, we present a novel end-to-end neural network model to recover dropped
pronouns in conversational data. Our model is based on a structured attention
mechanism that models the referents of dropped pronouns utilizing both
sentence-level and word-level information. Results on three different
conversational genres show that our approach achieves a significant improvement
over the current state of the art.
| 2019 | Computation and Language |
Ex-Twit: Explainable Twitter Mining on Health Data | Since most machine learning models provide no explanations for their
predictions, their predictions are obscure to humans. The ability to
explain a model's prediction has become a necessity in many applications
including Twitter mining. In this work, we propose a method called Explainable
Twitter Mining (Ex-Twit) combining Topic Modeling and Local Interpretable
Model-agnostic Explanation (LIME) to predict the topic and explain the model
predictions. We demonstrate the effectiveness of Ex-Twit on Twitter
health-related data.
| 2020 | Computation and Language |
Theme-aware generation model for Chinese lyrics | With the rapid development of neural networks, deep learning has been extended
to various natural language generation fields, such as machine translation,
dialogue generation and even literature creation. In this paper, we propose a
theme-aware language generation model for Chinese music lyrics, which greatly
improves the theme-connectivity and coherence of the generated paragraphs. A
multi-channel sequence-to-sequence (seq2seq) model encodes themes and previous
sentences as global and local contextual information. Moreover, an attention
mechanism is incorporated for sequence decoding, enabling the model to fuse
context into the predicted text. To prepare an appropriate training corpus, LDA
(Latent Dirichlet Allocation) is applied for theme extraction. The generated
lyrics are grammatically correct and semantically coherent with the selected
themes, which offers a valuable modelling method for other fields including
multi-turn chatbots and long paragraph generation.
| 2019 | Computation and Language |
Deep learning based mood tagging for Chinese song lyrics | Nowadays, listening to music is an indispensable part of our daily life. In
recent years, sentiment analysis of music has been widely used in information
retrieval systems, personalized recommendation systems and so on. Building on
advances in deep learning, this paper seeks an effective approach for mood
tagging of Chinese song lyrics. To achieve this goal, both machine-learning and
deep-learning models have been studied and compared. Eventually, a CNN-based
model with pre-trained word embeddings is shown to effectively extract the
distribution of emotional features of Chinese lyrics, scoring at least 15
percentage points higher than traditional machine-learning methods (i.e.
TF-IDF+SVM and LIWC+SVM), and 7 percentage points higher than other
deep-learning models (i.e. RNN, LSTM). In this paper, a corpus of more than
160,000 lyrics is leveraged to pre-train the word embeddings used for mood
tagging.
| 2019 | Computation and Language |
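A minimal sketch of a CNN-based lyric mood tagger over pre-trained word embeddings of the kind this abstract describes; the vocabulary size, filter widths, and number of mood labels are assumptions.

```python
# Sketch of a TextCNN mood tagger over pre-trained word embeddings.
# Vocabulary size, filter widths and mood label count are assumptions.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab=50000, emb=300, num_moods=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)          # load pre-trained vectors here
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb, 100, k) for k in (3, 4, 5)])
        self.out = nn.Linear(300, num_moods)

    def forward(self, ids):                          # ids: (batch, seq)
        x = self.emb(ids).transpose(1, 2)            # (batch, emb, seq)
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.out(torch.cat(feats, dim=1))     # (batch, num_moods) logits

model = TextCNN()
print(model(torch.randint(0, 50000, (2, 40))).shape)  # torch.Size([2, 4])
```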
LMF Reloaded | Lexical Markup Framework (LMF) or ISO 24613 [1] is a de jure standard that
provides a framework for modelling and encoding lexical information in
retrodigitised print dictionaries and NLP lexical databases. An in-depth review
is currently underway within the standardisation subcommittee,
ISO-TC37/SC4/WG4, to find a more modular, flexible and durable follow-up to the
original LMF standard published in 2008. In this paper we will present some of
the major improvements which have so far been implemented in the new version of
LMF.
| 2019 | Computation and Language |
Large-Scale Multi-Label Text Classification on EU Legislation | We consider Large-Scale Multi-Label Text Classification (LMTC) in the legal
domain. We release a new dataset of 57k legislative documents from EURLEX,
annotated with ~4.3k EUROVOC labels, which is suitable for LMTC, few- and
zero-shot learning. Experimenting with several neural classifiers, we show that
BIGRUs with label-wise attention perform better than other current
state-of-the-art methods. Domain-specific WORD2VEC and context-sensitive ELMO
embeddings
further improve performance. We also find that considering only particular
zones of the documents is sufficient. This allows us to bypass BERT's maximum
text length limit and fine-tune BERT, obtaining the best results in all but
zero-shot learning cases.
| 2019 | Computation and Language |
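One way a BIGRU with label-wise attention can be wired up is sketched below; the dimensions and label count are illustrative assumptions rather than the paper's configuration.

```python
# Sketch: BiGRU encoder with one attention query per label ("label-wise
# attention"), producing one document vector per label. Sizes are assumed.
import torch
import torch.nn as nn

class LabelwiseAttentionBiGRU(nn.Module):
    def __init__(self, vocab=10000, emb=100, hidden=128, num_labels=4300):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.label_queries = nn.Parameter(torch.randn(num_labels, 2 * hidden))
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, token_ids):                              # (batch, seq)
        h, _ = self.gru(self.emb(token_ids))                   # (batch, seq, 2*hidden)
        att = torch.softmax(h @ self.label_queries.T, dim=1)   # (batch, seq, labels)
        per_label = torch.einsum("bsl,bsh->blh", att, h)       # (batch, labels, 2*hidden)
        return self.out(per_label).squeeze(-1)                 # (batch, labels) logits

model = LabelwiseAttentionBiGRU(num_labels=10)                 # small label set for demo
logits = model(torch.randint(0, 10000, (2, 50)))
print(logits.shape)                                            # torch.Size([2, 10])
```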
Extracting Symptoms and their Status from Clinical Conversations | This paper describes novel models tailored for a new application, that of
extracting the symptoms mentioned in clinical conversations along with their
status. Lack of any publicly available corpus in this privacy-sensitive domain
led us to develop our own corpus, consisting of about 3K conversations
annotated by professional medical scribes. We propose two novel deep learning
approaches to infer the symptom names and their status: (1) a new hierarchical
span-attribute tagging (SAT) model, trained using curriculum learning, and (2)
a variant of sequence-to-sequence model which decodes the symptoms and their
status from a few speaker turns within a sliding window over the conversation.
This task stems from a realistic application of assisting medical providers in
capturing symptoms mentioned by patients from their clinical conversations. To
reflect this application, we define multiple metrics. From inter-rater
agreement, we find that the task is inherently difficult. We conduct
comprehensive evaluations on several contrasting conditions and observe that
the performance of the models ranges from an F-score of 0.5 to 0.8 depending on
the condition. Our analysis not only reveals the inherent challenges of the
task, but also provides useful directions to improve the models.
| 2019 | Computation and Language |
Variational Pretraining for Semi-supervised Text Classification | We introduce VAMPIRE, a lightweight pretraining framework for effective text
classification when data and computing resources are limited. We pretrain a
unigram document model as a variational autoencoder on in-domain, unlabeled
data and use its internal states as features in a downstream classifier.
Empirically, we show the relative strength of VAMPIRE against computationally
expensive contextual embeddings and other popular semi-supervised baselines
under low resource settings. We also find that fine-tuning to in-domain data is
crucial to achieving decent performance from contextual embeddings when working
with limited supervision. We accompany this paper with code to pretrain and use
VAMPIRE embeddings in downstream tasks.
| 2019 | Computation and Language |
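A rough sketch of the core component, a unigram (bag-of-words) variational autoencoder whose latent code can be reused as downstream features; the layer sizes are assumptions, and the authors' released code is the reference implementation.

```python
# Sketch of a bag-of-words VAE: encode a document's word-count vector to
# a latent code, decode to a unigram distribution. Sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BowVAE(nn.Module):
    def __init__(self, vocab=2000, latent=64):
        super().__init__()
        self.enc = nn.Linear(vocab, 256)
        self.mu, self.logvar = nn.Linear(256, latent), nn.Linear(256, latent)
        self.dec = nn.Linear(latent, vocab)

    def forward(self, bow):                       # bow: (batch, vocab) counts
        h = torch.relu(self.enc(bow))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = F.log_softmax(self.dec(z), dim=-1)
        nll = -(bow * recon).sum(-1).mean()       # multinomial reconstruction loss
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return nll + kl, mu                       # mu doubles as a downstream feature

model = BowVAE()
loss, features = model(torch.randint(0, 3, (4, 2000)).float())
print(loss.item(), features.shape)                # features: torch.Size([4, 64])
```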
Energy and Policy Considerations for Deep Learning in NLP | Recent progress in hardware and methodology for training neural networks has
ushered in a new generation of large networks trained on abundant data. These
models have obtained notable gains in accuracy across many NLP tasks. However,
these accuracy improvements depend on the availability of exceptionally large
computational resources that necessitate similarly substantial energy
consumption. As a result these models are costly to train and develop, both
financially, due to the cost of hardware and electricity or cloud compute time,
and environmentally, due to the carbon footprint required to fuel modern tensor
processing hardware. In this paper we bring this issue to the attention of NLP
researchers by quantifying the approximate financial and environmental costs of
training a variety of recently successful neural network models for NLP. Based
on these findings, we propose actionable recommendations to reduce costs and
improve equity in NLP research and practice.
| 2019 | Computation and Language |
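The style of estimate this paper relies on can be illustrated with back-of-the-envelope arithmetic; every constant below is a placeholder, not a figure from the paper.

```python
# Back-of-the-envelope energy/CO2 estimate for a training run.
# All constants are illustrative placeholders, not the paper's figures.
gpus = 8
avg_power_watts = 250          # assumed average draw per GPU
hours = 24 * 7                 # assumed one week of training
pue = 1.58                     # assumed datacenter power usage effectiveness
co2_per_kwh = 0.4              # assumed grid intensity, kg CO2e per kWh

kwh = gpus * avg_power_watts * hours * pue / 1000.0
co2_kg = kwh * co2_per_kwh
print(f"{kwh:.0f} kWh, ~{co2_kg:.0f} kg CO2e")
```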
An Imitation Learning Approach to Unsupervised Parsing | Recently, there has been an increasing interest in unsupervised parsers that
optimize semantically oriented objectives, typically using reinforcement
learning. Unfortunately, the learned trees often do not match actual syntax
trees well. Shen et al. (2018) propose a structured attention mechanism for
language modeling (PRPN), which induces better syntactic structures but relies
on ad hoc heuristics. Also, their model lacks interpretability as it is not
grounded in parsing actions. In our work, we propose an imitation learning
approach to unsupervised parsing, where we transfer the syntactic knowledge
induced by the PRPN to a Tree-LSTM model with discrete parsing actions. Its
policy is then refined by Gumbel-Softmax training towards a semantically
oriented objective. We evaluate our approach on the All Natural Language
Inference dataset and show that it achieves a new state of the art in terms of
parsing $F$-score, outperforming our base models, including the PRPN.
| 2019 | Computation and Language |
SParC: Cross-Domain Semantic Parsing in Context | We present SParC, a dataset for cross-domain Semantic Parsing in Context that
consists of 4,298 coherent question sequences (12k+ individual questions
annotated with SQL queries). It is obtained from controlled user interactions
with 200 complex databases over 138 domains. We provide an in-depth analysis of
SParC and show that it introduces new challenges compared to existing datasets.
SParC (1) demonstrates complex contextual dependencies, (2) has greater semantic
diversity, and (3) requires generalization to unseen domains due to its
cross-domain nature and the unseen databases at test time. We experiment with
two state-of-the-art text-to-SQL models adapted to the context-dependent,
cross-domain setup. The best model obtains an exact match accuracy of 20.2%
over all questions and less than 10% over all interaction sequences, indicating
that the cross-domain setting and the contextual phenomena of the dataset
present significant challenges for future research. The dataset, baselines, and
leaderboard are released at https://yale-lily.github.io/sparc.
| 2019 | Computation and Language |
Survey on Publicly Available Sinhala Natural Language Processing Tools
and Research | Sinhala is the native language of the Sinhalese people who make up the
largest ethnic group of Sri Lanka. The language belongs to the globe-spanning
language tree, Indo-European. However, due to poverty in both linguistic and
economic capital, Sinhala, in the perspective of Natural Language Processing
tools and research, remains a resource-poor language which has neither the
economic drive its cousin English has nor the sheer push of the law of numbers
a language such as Chinese has. A number of research groups from Sri Lanka have
noticed this dearth and the resultant dire need for proper tools and research
for Sinhala natural language processing. However, due to various reasons, these
attempts seem to lack coordination and awareness of each other. The objective
of this paper is to fill that gap of a comprehensive literature survey of the
publicly available Sinhala natural language tools and research so that the
researchers working in this field can better utilize contributions of their
peers. As such, we shall be uploading this paper to arXiv and updating it
periodically to reflect the advances made in the field.
| 2024 | Computation and Language |
Explain Yourself! Leveraging Language Models for Commonsense Reasoning | Deep learning models perform poorly on tasks that require commonsense
reasoning, which often necessitates some form of world-knowledge or reasoning
over information not immediately present in the input. We collect human
explanations for commonsense reasoning in the form of natural language
sequences and highlighted annotations in a new dataset called Common Sense
Explanations (CoS-E). We use CoS-E to train language models to automatically
generate explanations that can be used during training and inference in a novel
Commonsense Auto-Generated Explanation (CAGE) framework. CAGE improves the
state-of-the-art by 10% on the challenging CommonsenseQA task. We further study
commonsense reasoning in DNNs using both human and auto-generated explanations
including transfer to out-of-domain tasks. Empirical results indicate that we
can effectively leverage language models for commonsense reasoning.
| 2019 | Computation and Language |
Training Temporal Word Embeddings with a Compass | Temporal word embeddings have been proposed to support the analysis of word
meaning shifts during time and to study the evolution of languages. Different
approaches have been proposed to generate vector representations of words that
embed their meaning during a specific time interval. However, the training
process used in these approaches is complex, may be inefficient or it may
require large text corpora. As a consequence, these approaches may be difficult
to apply in resource-scarce domains or by scientists with limited in-depth
knowledge of embedding models. In this paper, we propose a new heuristic to
train temporal word embeddings based on the Word2vec model. The heuristic
consists in using atemporal vectors as a reference, i.e., as a compass, when
training the representations specific to a given time interval. The use of the
compass simplifies the training process and makes it more efficient.
Experiments conducted using state-of-the-art datasets and methodologies suggest
that our approach outperforms or equals comparable approaches while being more
robust in terms of the required corpus size.
| 2019 | Computation and Language |
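A toy sketch of the compass idea under stated assumptions: context vectors trained once on the whole corpus are frozen, and each time slice trains only its own target vectors against that shared space. The corpus, dimensions, and update rule below are illustrative, not the authors' implementation.

```python
# Toy SGNS-style sketch of the "compass" idea: context vectors trained
# once on the full corpus are frozen; each time slice then trains its own
# target vectors against that shared compass. Everything here (data,
# sizes, update rule) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, lr = 50, 16, 0.05
compass_context = rng.normal(scale=0.1, size=(vocab, dim))   # trained once, then frozen

def train_slice(pairs, epochs=5):
    """pairs: list of (target_id, context_id, label), label 1=observed, 0=negative."""
    target = rng.normal(scale=0.1, size=(vocab, dim))         # slice-specific vectors
    for _ in range(epochs):
        for t, c, y in pairs:
            score = 1.0 / (1.0 + np.exp(-target[t] @ compass_context[c]))
            grad = (score - y) * compass_context[c]
            target[t] -= lr * grad                            # only targets are updated
    return target

pairs_1990s = [(1, 2, 1), (1, 7, 0), (3, 2, 1), (3, 9, 0)]
pairs_2010s = [(1, 9, 1), (1, 2, 0), (3, 2, 1), (3, 7, 0)]
emb_1990s = train_slice(pairs_1990s)
emb_2010s = train_slice(pairs_2010s)
# Because both slices share the same frozen context space, vectors for the
# same word are directly comparable across time.
print(np.round(emb_1990s[1] @ emb_2010s[1], 3))
```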
An Analysis of Emotion Communication Channels in Fan Fiction: Towards
Emotional Storytelling | The centrality of emotion in stories told by humans is underpinned by
numerous studies in literature and psychology. The research in automatic
storytelling has recently turned towards emotional storytelling, in which
characters' emotions play an important role in the plot development. However,
these studies mainly use emotion to generate propositional statements in the
form "A feels affection towards B" or "A confronts B". At the same time,
emotional behavior does not boil down to such propositional descriptions, as
humans display complex and highly variable patterns in communicating their
emotions, both verbally and non-verbally. In this paper, we analyze how
emotions are expressed non-verbally in a corpus of fan fiction short stories.
Our analysis shows that stories written by humans convey character emotions
along various non-verbal channels. We find that some non-verbal channels, such
as facial expressions and voice characteristics of the characters, are more
strongly associated with joy, while gestures and body postures are more likely
to occur with trust. Based on our analysis, we argue that automatic
storytelling systems should take variability of emotion into account when
generating descriptions of characters' emotions.
| 2019 | Computation and Language |
Shift-of-Perspective Identification Within Legal Cases | Arguments, counter-arguments, facts, and evidence obtained via documents
related to previous court cases are essential for legal professionals.
Therefore, the process of automatic information extraction from documents
containing legal opinions related to court cases can be considered to be of
significant importance. This study is focused on the identification of
sentences in legal opinion texts which convey different perspectives on a
certain topic or entity. We combined several approaches based on semantic
analysis, open information extraction, and sentiment analysis to achieve our
objective. Then, our methodology was evaluated with the help of human judges.
The outcomes of the evaluation demonstrate that our system is successful in
detecting situations where two sentences deliver different opinions on the same
topic or entity. The proposed methodology can be used to facilitate other
information extraction tasks related to the legal domain. One such task is the
automated detection of counter-arguments for a given argument. Another is the
identification of opponent parties in a court case.
| 2019 | Computation and Language |
GCDT: A Global Context Enhanced Deep Transition Architecture for
Sequence Labeling | Current state-of-the-art systems for sequence labeling are typically based on
the family of Recurrent Neural Networks (RNNs). However, the shallow
connections between consecutive hidden states of RNNs and insufficient modeling
of global information restrict the potential performance of those models. In
this paper, we try to address these issues, and thus propose a Global Context
enhanced Deep Transition architecture for sequence labeling named GCDT. We
deepen the state transition path at each position in a sentence, and further
assign every token with a global representation learned from the entire
sentence. Experiments on two standard sequence labeling tasks show that, given
only training data and the ubiquitous word embeddings (GloVe), our GCDT
achieves 91.96 F1 on the CoNLL03 NER task and 95.43 F1 on the CoNLL2000
Chunking task, which outperforms the best reported results under the same
settings. Furthermore, by leveraging BERT as an additional resource, we
establish new state-of-the-art results with 93.47 F1 on NER and 97.30 F1 on
Chunking.
| 2019 | Computation and Language |
Robust Neural Machine Translation with Doubly Adversarial Inputs | Neural machine translation (NMT) often suffers from the vulnerability to
noisy perturbations in the input. We propose an approach to improving the
robustness of NMT models, which consists of two parts: (1) attack the
translation model with adversarial source examples; (2) defend the translation
model with adversarial target inputs to improve its robustness against the
adversarial source inputs. For the generation of adversarial inputs, we propose
a gradient-based method to craft adversarial examples informed by the
translation loss over the clean inputs. Experimental results on Chinese-English
and English-German translation tasks demonstrate that our approach achieves
significant improvements ($2.8$ and $1.6$ BLEU points) over Transformer on
standard clean benchmarks as well as exhibiting higher robustness on noisy
data.
| 2019 | Computation and Language |
Bridging the Gap between Training and Inference for Neural Machine
Translation | Neural Machine Translation (NMT) generates target words sequentially in the
way of predicting the next word conditioned on the context words. At training
time, it predicts with the ground truth words as context while at inference it
has to generate the entire sequence from scratch. This discrepancy of the fed
context leads to error accumulation along the way. Furthermore, word-level
training requires strict matching between the generated sequence and the ground
truth sequence which leads to overcorrection over different but reasonable
translations. In this paper, we address these issues by sampling context words
not only from the ground truth sequence but also from the predicted sequence by
the model during training, where the predicted sequence is selected with a
sentence-level optimum. Experimental results on Chinese->English and WMT'14
English->German translation tasks demonstrate that our approach can achieve
significant improvements on multiple datasets.
| 2019 | Computation and Language |
Unsupervised Pivot Translation for Distant Languages | Unsupervised neural machine translation (NMT) has attracted a lot of
attention recently. While state-of-the-art methods for unsupervised translation
usually perform well between similar languages (e.g., English-German
translation), they perform poorly between distant languages, because
unsupervised alignment does not work well for distant languages. In this work,
we introduce unsupervised pivot translation for distant languages, which
translates a language to a distant language through multiple hops, and the
unsupervised translation on each hop is relatively easier than the original
direct translation. We propose a learning to route (LTR) method to choose the
translation path between the source and target languages. LTR is trained on
language pairs whose best translation path is available and is applied on the
unseen language pairs for path selection. Experiments on 20 languages and 294
distant language pairs demonstrate the advantages of the unsupervised pivot
translation for distant languages, as well as the effectiveness of the proposed
LTR for path selection. Specifically, in the best case, LTR achieves an
improvement of 5.58 BLEU points over the conventional direct unsupervised
method.
| 2019 | Computation and Language |
Second-order Co-occurrence Sensitivity of Skip-Gram with Negative
Sampling | We simulate first- and second-order context overlap and show that Skip-Gram
with Negative Sampling is similar to Singular Value Decomposition in capturing
second-order co-occurrence information, while Pointwise Mutual Information is
agnostic to it. We support the results with an empirical study finding that the
models react differently when provided with additional second-order
information. Our findings reveal a basic property of Skip-Gram with Negative
Sampling and point towards an explanation of its success on a variety of tasks.
| 2019 | Computation and Language |
Fine-Grained Entity Typing in Hyperbolic Space | How can we represent hierarchical information present in large type
inventories for entity typing? We study the ability of hyperbolic embeddings to
capture hierarchical relations between mentions in context and their target
types in a shared vector space. We evaluate on two datasets and investigate two
different techniques for creating a large hierarchical entity type inventory:
from an expert-generated ontology and by automatically mining type
co-occurrences. We find that the hyperbolic model yields improvements over its
Euclidean counterpart in some, but not all cases. Our analysis suggests that
the adequacy of this geometry depends on the granularity of the type inventory
and the way hierarchical relations are inferred.
| 2019 | Computation and Language |
Derivational Morphological Relations in Word Embeddings | Derivation is a type of word-formation process which creates new words from
existing ones by adding, changing or deleting affixes. In this paper, we
explore the potential of word embeddings to identify properties of word
derivations in the morphologically rich Czech language. We extract derivational
relations between pairs of words from DeriNet, a Czech lexical network, which
organizes almost one million Czech lemmata into derivational trees. For each
such pair, we compute the difference of the embeddings of the two words, and
perform unsupervised clustering of the resulting vectors. Our results show that
these clusters largely match manually annotated semantic categories of the
derivational relations (e.g. the relation 'bake--baker' belongs to category
'actor', and a correct clustering puts it into the same cluster as
'govern--governor').
| 2019 | Computation and Language |
Cross-Lingual Training for Automatic Question Generation | Automatic question generation (QG) is a challenging problem in natural
language understanding. QG systems are typically built assuming access to a
large number of training instances where each instance is a question and its
corresponding answer. For a new language, such training instances are hard to
obtain, making the QG problem even more challenging. Using this as our
motivation, we study the reuse of an available large QG dataset in a secondary
language (e.g. English) to learn a QG model for a primary language (e.g. Hindi)
of interest. For the primary language, we assume access to a large amount of
monolingual text but only a small QG dataset. We propose a cross-lingual QG
model which uses the following training regime: (i) Unsupervised pretraining of
language models in both primary and secondary languages and (ii) joint
supervised training for QG in both languages. We demonstrate the efficacy of
our proposed approach using two different primary languages, Hindi and Chinese.
We also create and release a new question answering dataset for Hindi
consisting of 6555 sentences.
| 2019 | Computation and Language |
Measuring the compositionality of noun-noun compounds over time | We present work in progress on the temporal progression of compositionality
in noun-noun compounds. Previous work has proposed computational methods for
determining the compositionality of compounds. These methods try to
automatically determine how transparent the meaning of the compound as a whole
is with respect to the meaning of its parts. We hypothesize that such a
property might change over time. We use the time-stamped Google Books corpus
for our diachronic investigations, and first examine whether the vector-based
semantic spaces extracted from this corpus are able to predict compositionality
ratings, despite their inherent limitations. We find that using temporal
information helps predict the ratings, although correlation with the ratings
is lower than reported for other corpora. Finally, we show changes in
compositionality over time for a selection of compounds.
| 2019 | Computation and Language |
Analysis of Automatic Annotation Suggestions for Hard Discourse-Level
Tasks in Expert Domains | Many complex discourse-level tasks can aid domain experts in their work but
require costly expert annotations for data creation. To speed up and ease
annotations, we investigate the viability of automatically generated annotation
suggestions for such tasks. As an example, we choose a task that is
particularly hard for both humans and machines: the segmentation and
classification of epistemic activities in diagnostic reasoning texts. We create
and publish a new dataset covering two domains and carefully analyse the
suggested annotations. We find that suggestions have positive effects on
annotation speed and performance, while not introducing noteworthy biases.
Envisioning suggestion models that improve with newly annotated texts, we
contrast methods for continuous model adjustment and suggest the most effective
setup for suggestions in future expert tasks.
| 2019 | Computation and Language |
Generating Question-Answer Hierarchies | The process of knowledge acquisition can be viewed as a question-answer game
between a student and a teacher in which the student typically starts by asking
broad, open-ended questions before drilling down into specifics (Hintikka,
1981; Hakkarainen and Sintonen, 2002). This pedagogical perspective motivates a
new way of representing documents. In this paper, we present SQUASH
(Specificity-controlled Question-Answer Hierarchies), a novel and challenging
text generation task that converts an input document into a hierarchy of
question-answer pairs. Users can click on high-level questions (e.g., "Why did
Frodo leave the Fellowship?") to reveal related but more specific questions
(e.g., "Who did Frodo leave with?"). Using a question taxonomy loosely based on
Lehnert (1978), we classify questions in existing reading comprehension
datasets as either "general" or "specific". We then use these labels as input
to a pipelined system centered around a conditional neural language model. We
extensively evaluate the quality of the generated QA hierarchies through
crowdsourced experiments and report strong empirical results.
| 2019 | Computation and Language |
Cross-Lingual Syntactic Transfer through Unsupervised Adaptation of
Invertible Projections | Cross-lingual transfer is an effective way to build syntactic analysis tools
in low-resource languages. However, transfer is difficult when transferring to
typologically distant languages, especially when neither annotated target data
nor parallel corpora are available. In this paper, we focus on methods for
cross-lingual transfer to distant languages and propose to learn a generative
model with a structured prior that utilizes labeled source data and unlabeled
target data jointly. The parameters of the source and target models are softly
shared through a regularized log likelihood objective. An invertible projection
is employed to learn a new interlingual latent embedding space that compensates
for imperfect cross-lingual word embedding input. We evaluate our method on two
syntactic tasks: part-of-speech (POS) tagging and dependency parsing. On the
Universal Dependency Treebanks, we use English as the only source corpus and
transfer to a wide range of target languages. On the 10 languages in this
dataset that are distant from English, our method yields an average of 5.2%
absolute improvement on POS tagging and 8.3% absolute improvement on dependency
parsing over a direct transfer method using state-of-the-art discriminative
models.
| 2021 | Computation and Language |
Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in
Pretrained Embeddings | Given a small corpus $\mathcal D_T$ pertaining to a limited set of focused
topics, our goal is to train embeddings that accurately capture the sense of
words in the topic in spite of the limited size of $\mathcal D_T$. These
embeddings may be used in various tasks involving $\mathcal D_T$. A popular
strategy in limited data settings is to adapt pre-trained embeddings $\mathcal
E$ trained on a large corpus. To correct for sense drift, fine-tuning,
regularization, projection, and pivoting have been proposed recently. Among
these, regularization informed by a word's corpus frequency performed well, but
we improve upon it using a new regularizer based on the stability of its
cooccurrence with other words. However, a thorough comparison across ten
topics, spanning three tasks, with standardized settings of hyper-parameters,
reveals that even the best embedding adaptation strategies provide small gains
beyond well-tuned baselines, which many earlier comparisons ignored. In a bold
departure from adapting pretrained embeddings, we propose using $\mathcal D_T$
to probe, attend to, and borrow fragments from any large, topic-rich source
corpus (such as Wikipedia), which need not be the corpus used to pretrain
embeddings. This step is made scalable and practical by suitable indexing. We
reach the surprising conclusion that even limited corpus augmentation is more
useful than adapting embeddings, which suggests that non-dominant sense
information may be irrevocably obliterated from pretrained embeddings and
cannot be salvaged by adaptation.
| 2019 | Computation and Language |
Conversing by Reading: Contentful Neural Conversation with On-demand
Machine Reading | Although neural conversation models are effective in learning how to produce
fluent responses, their primary challenge lies in knowing what to say to make
the conversation contentful and non-vacuous. We present a new end-to-end
approach to contentful neural conversation that jointly models response
generation and on-demand machine reading. The key idea is to provide the
conversation model with relevant long-form text on the fly as a source of
external knowledge. The model performs QA-style reading comprehension on this
text in response to each conversational turn, thereby allowing for more focused
integration of external knowledge than has been possible in prior approaches.
To support further research on knowledge-grounded conversation, we introduce a
new large-scale conversation dataset grounded in external web pages (2.8M
turns, 7.4M sentences of grounding). Both human evaluation and automated
metrics show that our approach results in more contentful responses compared to
a variety of previous methods, improving both the informativeness and diversity
of generated output.
| 2019 | Computation and Language |
Syntactically Supervised Transformers for Faster Neural Machine
Translation | Standard decoders for neural machine translation autoregressively generate a
single target token per time step, which slows inference especially for long
outputs. While architectural advances such as the Transformer fully parallelize
the decoder computations at training time, inference still proceeds
sequentially. Recent developments in non- and semi-autoregressive decoding
produce multiple tokens per time step independently of the others, which
improves inference speed but deteriorates translation quality. In this work, we
propose the syntactically supervised Transformer (SynST), which first
autoregressively predicts a chunked parse tree before generating all of the
target tokens in one shot conditioned on the predicted parse. A series of
controlled experiments demonstrates that SynST decodes sentences ~ 5x faster
than the baseline autoregressive Transformer while achieving higher BLEU scores
than most competing methods on En-De and En-Fr datasets.
| 2019 | Computation and Language |
From Receptive to Productive: Learning to Use Confusing Words through
Automatically Selected Example Sentences | Knowing how to use words appropriately has been a key to improving language
proficiency. Previous studies typically discuss how students learn receptively
to select the correct candidate from a set of confusing words in the
fill-in-the-blank task where specific context is given. In this paper, we go
one step further, assisting students to learn to use confusing words
appropriately in a productive task: sentence translation. We leverage the
GiveMeExample system, which suggests example sentences for each confusing word,
to achieve this goal. In this study, students learn to differentiate the
confusing words by reading the example sentences, and then choose the
appropriate word(s) to complete the sentence translation task. Results show
students made substantial progress in terms of sentence structure. In addition,
highly proficient students better managed to learn confusing words. In view of
the influence of the first language on learners, we further propose an
effective approach to improve the quality of the suggested sentences.
| 2019 | Computation and Language |
Towards Scalable and Reliable Capsule Networks for Challenging NLP
Applications | Obstacles hindering the development of capsule networks for challenging NLP
applications include poor scalability to large output spaces and less reliable
routing processes. In this paper, we introduce: 1) an agreement score to
evaluate the performance of routing processes at instance level; 2) an adaptive
optimizer to enhance the reliability of routing; 3) capsule compression and
partial routing to improve the scalability of capsule networks. We validate our
approach on two NLP tasks, namely: multi-label text classification and question
answering. Experimental results show that our approach considerably improves
over strong competitors on both tasks. In addition, we gain the best results in
low-resource settings with few training instances.
| 2019 | Computation and Language |
Modeling financial analysts' decision making via the pragmatics and
semantics of earnings calls | Every fiscal quarter, companies hold earnings calls in which company
executives respond to questions from analysts. After these calls, analysts
often change their price target recommendations, which are used in equity
research reports to help investors make decisions. In this paper, we examine
analysts' decision making behavior as it pertains to the language content of
earnings calls. We identify a set of 20 pragmatic features of analysts'
questions which we correlate with analysts' pre-call investor recommendations.
We also analyze the degree to which semantic and pragmatic features from an
earnings call complement market data in predicting analysts' post-call changes
in price targets. Our results show that earnings calls are moderately
predictive of analysts' decisions even though these decisions are influenced by
a number of other factors including private communication with company
executives and market conditions. A breakdown of model errors indicates
disparate performance on calls from different market sectors.
| 2019 | Computation and Language |
Visually Grounded Neural Syntax Acquisition | We present the Visually Grounded Neural Syntax Learner (VG-NSL), an approach
for learning syntactic representations and structures without any explicit
supervision. The model learns by looking at natural images and reading paired
captions. VG-NSL generates constituency parse trees of texts, recursively
composes representations for constituents, and matches them with images. We
define concreteness of constituents by their matching scores with images, and
use it to guide the parsing of text. Experiments on the MSCOCO data set show
that VG-NSL outperforms various unsupervised parsing approaches that do not use
visual grounding, in terms of F1 scores against gold parse trees. We find that
VG-NSL is much more stable with respect to the choice of random initialization
and the amount of training data. We also find that the concreteness acquired by
VG-NSL correlates well with a similar measure defined by linguists. Finally, we
also apply VG-NSL to multiple languages in the Multi30K data set, showing that
our model consistently outperforms prior unsupervised approaches.
| 2019 | Computation and Language |
Semi-supervised Stochastic Multi-Domain Learning using Variational
Inference | Supervised models of NLP rely on large collections of text which closely
resemble the intended testing setting. Unfortunately, matching text is often not
available in sufficient quantity, and moreover, within any domain of text, data
is often highly heterogeneous. In this paper we propose a method to distill the
important domain signal as part of a multi-domain learning system, using a
latent variable model in which parts of a neural model are stochastically gated
based on the inferred domain. We compare the use of discrete versus continuous
latent variables, operating in a domain-supervised or a domain semi-supervised
setting, where the domain is known only for a subset of training inputs. We
show that our model leads to substantial performance improvements over
competitive benchmark domain adaptation methods, including methods using
adversarial learning.
| 2019 | Computation and Language |
Compositional Questions Do Not Necessitate Multi-hop Reasoning | Multi-hop reading comprehension (RC) questions are challenging because they
require reading and reasoning over multiple paragraphs. We argue that it can be
difficult to construct large multi-hop RC datasets. For example, even highly
compositional questions can be answered with a single hop if they target
specific entity types, or the facts needed to answer them are redundant. Our
analysis is centered on HotpotQA, where we show that single-hop reasoning can
solve much more of the dataset than previously thought. We introduce a
single-hop BERT-based RC model that achieves 67 F1---comparable to
state-of-the-art multi-hop models. We also design an evaluation setting where
humans are not shown all of the necessary paragraphs for the intended multi-hop
reasoning but can still answer over 80% of questions. Together with detailed
error analysis, these results suggest there should be an increasing focus on
the role of evidence in multi-hop reasoning and possibly even a shift towards
information retrieval style evaluations with large and diverse evidence
collections.
| 2019 | Computation and Language |
Multi-hop Reading Comprehension through Question Decomposition and
Rescoring | Multi-hop Reading Comprehension (RC) requires reasoning and aggregation
across several paragraphs. We propose a system for multi-hop RC that decomposes
a compositional question into simpler sub-questions that can be answered by
off-the-shelf single-hop RC models. Since annotations for such decomposition
are expensive, we recast sub-question generation as a span prediction problem
and show that our method, trained using only 400 labeled examples, generates
sub-questions that are as effective as human-authored sub-questions. We also
introduce a new global rescoring approach that considers each decomposition
(i.e. the sub-questions and their answers) to select the best final answer,
greatly improving overall performance. Our experiments on HotpotQA show that
this approach achieves the state-of-the-art results, while providing
explainable evidence for its decision making in the form of sub-questions.
| 2019 | Computation and Language |
Preference-based Interactive Multi-Document Summarisation | Interactive NLP is a promising paradigm to close the gap between automatic
NLP systems and the human upper bound. Preference-based interactive learning
has been successfully applied, but the existing methods require several
thousand interaction rounds even in simulations with perfect user feedback. In
this paper, we study preference-based interactive summarisation. To reduce the
number of interaction rounds, we propose the Active Preference-based
ReInforcement Learning (APRIL) framework. APRIL uses Active Learning to query
the user, Preference Learning to learn a summary ranking function from the
preferences, and neural Reinforcement Learning to efficiently search for the
(near-)optimal summary. Our results show that users can easily provide reliable
preferences over summaries and that APRIL outperforms the state-of-the-art
preference-based interactive method in both simulation and real-user
experiments.
| 2019 | Computation and Language |
A Wind of Change: Detecting and Evaluating Lexical Semantic Change
across Times and Domains | We perform an interdisciplinary large-scale evaluation for detecting lexical
semantic divergences in a diachronic and in a synchronic task: semantic sense
changes across time, and semantic sense changes across domains. Our work
addresses the superficialness and lack of comparison in assessing models of
diachronic lexical change, by bringing together and extending benchmark models
on a common state-of-the-art evaluation task. In addition, we demonstrate that
the same evaluation task and modelling approaches can successfully be utilised
for the synchronic detection of domain-specific sense divergences in the field
of term extraction.
| 2019 | Computation and Language |
On the Compositionality Prediction of Noun Phrases using Poincar\'e
Embeddings | The compositionality degree of multiword expressions indicates to what extent
the meaning of a phrase can be derived from the meaning of its constituents and
their grammatical relations. Prediction of (non)-compositionality is a task
that has been frequently addressed with distributional semantic models. We
introduce a novel technique to blend hierarchical information with
distributional information for predicting compositionality. In particular, we
use hypernymy information of the multiword and its constituents encoded in the
form of the recently introduced Poincar\'e embeddings in addition to the
distributional information to detect compositionality for noun phrases. Using a
weighted average of the distributional similarity and a Poincar\'e similarity
function, we obtain consistent and substantial, statistically significant
improvement across three gold standard datasets over state-of-the-art models
based on distributional information only. Unlike traditional approaches that
solely use an unsupervised setting, we have also framed the problem as a
supervised task, obtaining comparable improvements. Further, we publicly
release our Poincar\'e embeddings, which are trained on the output of
handcrafted lexical-syntactic patterns on a large corpus.
| 2,019 | Computation and Language |
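As a rough illustration of the weighted similarity described in the preceding abstract, the numpy sketch below blends a cosine similarity in a distributional space with a Poincar\'e-distance-based similarity; the interpolation weight and the conversion from hyperbolic distance to a bounded similarity are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def poincare_distance(u, v):
    # Hyperbolic distance between two points inside the unit ball.
    diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * diff / denom)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def compositionality_score(dist_phrase, dist_head, poin_phrase, poin_head, alpha=0.5):
    """Blend distributional and hyperbolic evidence for one phrase/constituent pair.
    dist_*: distributional (Euclidean) vectors; poin_*: Poincare-ball vectors."""
    distributional_sim = cosine(dist_phrase, dist_head)
    # Turn a hyperbolic distance into a bounded similarity (illustrative choice).
    hyperbolic_sim = 1.0 / (1.0 + poincare_distance(poin_phrase, poin_head))
    return alpha * distributional_sim + (1.0 - alpha) * hyperbolic_sim

# Toy usage with random vectors (Poincare vectors must lie inside the unit ball).
rng = np.random.default_rng(0)
d, p = rng.normal(size=(2, 50)), rng.normal(size=(2, 10))
p = 0.3 * p / np.linalg.norm(p, axis=1, keepdims=True)
print(compositionality_score(d[0], d[1], p[0], p[1]))
```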
RankQA: Neural Question Answering with Answer Re-Ranking | The conventional paradigm in neural question answering (QA) for narrative
content is limited to a two-stage process: first, relevant text passages are
retrieved and, subsequently, a neural network for machine comprehension
extracts the likeliest answer. However, both stages are largely isolated in the
status quo and, hence, information from the two phases is never properly fused.
In contrast, this work proposes RankQA: RankQA extends the conventional
two-stage process in neural QA with a third stage that performs an additional
answer re-ranking. The re-ranking leverages different features that are
directly extracted from the QA pipeline, i.e., a combination of retrieval and
comprehension features. While our intentionally simple design allows for an
efficient, data-sparse estimation, it nevertheless outperforms more complex QA
systems by a significant margin: in fact, RankQA achieves state-of-the-art
performance on 3 out of 4 benchmark datasets. Furthermore, its performance is
especially superior in settings where the size of the corpus is dynamic. Here
the answer re-ranking provides an effective remedy against the underlying
noise-information trade-off due to a variable corpus size. As a consequence,
RankQA represents a novel, powerful, and thus challenging baseline for future
research in content-based QA.
| 2,019 | Computation and Language |
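A minimal sketch of the third-stage re-ranking described above: each answer candidate carries features drawn from both the retrieval and the comprehension stages, and a simple linear scorer picks the final answer. The feature names and weights below are hypothetical; RankQA learns its ranker from data and uses a richer feature set.

```python
def rerank(candidates, weights):
    """candidates: list of dicts holding retrieval + comprehension features for one answer."""
    def score(c):
        return sum(weights[name] * value for name, value in c["features"].items())
    return max(candidates, key=score)

candidates = [
    {"answer": "Paris", "features": {"retrieval_score": 0.8, "reader_prob": 0.55, "answer_freq": 3}},
    {"answer": "Lyon",  "features": {"retrieval_score": 0.4, "reader_prob": 0.70, "answer_freq": 1}},
]
weights = {"retrieval_score": 1.0, "reader_prob": 2.0, "answer_freq": 0.1}
print(rerank(candidates, weights)["answer"])
```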
Improving Relation Extraction by Pre-trained Language Representations | Current state-of-the-art relation extraction methods typically rely on a set
of lexical, syntactic, and semantic features, explicitly computed in a
pre-processing step. Training feature extraction models requires additional
annotated language resources, which severely restricts the applicability and
portability of relation extraction to novel languages. Similarly,
pre-processing introduces an additional source of error. To address these
limitations, we introduce TRE, a Transformer for Relation Extraction, extending
the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike
previous relation extraction models, TRE uses pre-trained deep language
representations instead of explicit linguistic features to inform the relation
classification and combines it with the self-attentive Transformer architecture
to effectively model long-range dependencies between entity mentions. TRE
allows us to learn implicit linguistic features solely from plain text corpora
by unsupervised pre-training, before fine-tuning the learned language
representations on the relation extraction task. TRE obtains a new
state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets,
achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a
significant increase in sample efficiency. With only 20% of the training
examples, TRE matches the performance of our baselines and our model trained
from scratch on 100% of the TACRED dataset. We open-source our trained models,
experiments, and source code.
| 2,019 | Computation and Language |
Shared-Private Bilingual Word Embeddings for Neural Machine Translation | Word embedding is central to neural machine translation (NMT), which has
attracted intensive research interest in recent years. In NMT, the source
embedding plays the role of the entrance while the target embedding acts as the
terminal. These layers occupy most of the model parameters for representation
learning. Furthermore, they indirectly interface via a soft-attention
mechanism, which makes them comparatively isolated. In this paper, we propose
shared-private bilingual word embeddings, which give a closer relationship
between the source and target embeddings, and which also reduce the number of
model parameters. For similar source and target words, their embeddings tend to
share a part of the features and they cooperatively learn these common
representation units. Experiments on 5 language pairs belonging to 6 different
language families and written in 5 different alphabets demonstrate that the
proposed model provides a significant performance boost over the strong
baselines with dramatically fewer model parameters.
| 2,019 | Computation and Language |
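A toy sketch of the shared-private idea, assuming similar source/target words have already been paired: each pair shares its first few embedding features and keeps a private remainder, which is where the parameter savings come from. The split point and the pairing criterion are assumptions, not the paper's parameterization.

```python
import numpy as np

class SharedPrivateEmbedding:
    """Paired source/target words share the first `n_shared` features and keep
    private remainders (a sketch of the idea, not the trained NMT embedding)."""

    def __init__(self, pairs, dim=8, n_shared=4, seed=0):
        rng = np.random.default_rng(seed)
        self.shared = {i: rng.normal(size=n_shared) for i in range(len(pairs))}
        self.private_src = {s: rng.normal(size=dim - n_shared) for s, _ in pairs}
        self.private_tgt = {t: rng.normal(size=dim - n_shared) for _, t in pairs}
        self.pair_id = {w: i for i, (s, t) in enumerate(pairs) for w in (s, t)}

    def source(self, word):
        return np.concatenate([self.shared[self.pair_id[word]], self.private_src[word]])

    def target(self, word):
        return np.concatenate([self.shared[self.pair_id[word]], self.private_tgt[word]])

emb = SharedPrivateEmbedding([("cat", "Katze"), ("dog", "Hund")])
print(emb.source("cat")[:4] == emb.target("Katze")[:4])  # shared half is identical
```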
Word-based Domain Adaptation for Neural Machine Translation | In this paper, we empirically investigate applying word-level weights to
adapt neural machine translation to e-commerce domains, where small e-commerce
datasets and large out-of-domain datasets are available. In order to mine
in-domain like words in the out-of-domain datasets, we compute word weights by
using a domain-specific and a non-domain-specific language model followed by
smoothing and binary quantization. The baseline model is trained on mixed
in-domain and out-of-domain datasets. Experimental results on English to
Chinese e-commerce domain translation show that compared to continuing training
without word weights, it improves MT quality by up to 2.11% BLEU absolute and
1.59% TER. We have also trained models using fine-tuning on the in-domain data.
Pre-training a model with word weights improves fine-tuning by up to 1.24% BLEU
absolute and 1.64% TER.
| 2,018 | Computation and Language |
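A minimal sketch of the word-weight computation from the preceding abstract, assuming simple add-one-smoothed unigram models in place of the paper's language models; the threshold implements the binary quantization step.

```python
import math
from collections import Counter

def word_weights(in_domain_tokens, out_domain_tokens, smooth=1.0, threshold=0.0):
    """Return {word: 0 or 1} flags marking words that look in-domain.

    A word is weighted 1 when its smoothed log-probability under the in-domain
    unigram model exceeds its log-probability under the out-of-domain model by
    more than `threshold`; the unigram models and the cut-off are illustrative
    stand-ins for the paper's language models, smoothing and quantization.
    """
    c_in, c_out = Counter(in_domain_tokens), Counter(out_domain_tokens)
    vocab = set(c_in) | set(c_out)
    n_in, n_out, v = sum(c_in.values()), sum(c_out.values()), len(vocab)

    def logp(counts, total, w):
        return math.log((counts[w] + smooth) / (total + smooth * v))

    return {w: int(logp(c_in, n_in, w) - logp(c_out, n_out, w) > threshold) for w in vocab}

weights = word_weights("buy cheap phone case free shipping".split(),
                       "the parliament discussed the new budget today".split())
print(weights["phone"], weights["parliament"])
```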
Word Embeddings for the Armenian Language: Intrinsic and Extrinsic
Evaluation | In this work, we intrinsically and extrinsically evaluate and compare
existing word embedding models for the Armenian language. Alongside, new
embeddings are presented, trained using GloVe, fastText, CBOW, SkipGram
algorithms. We adapt and use the word analogy task in intrinsic evaluation of
embeddings. For extrinsic evaluation, two tasks are employed: morphological
tagging and text classification. Tagging is performed on a deep neural network,
using ArmTDP v2.3 dataset. For text classification, we propose a corpus of news
articles categorized into 7 classes. The datasets are made public to serve as
benchmarks for future models.
| 2,019 | Computation and Language |
Matching the Blanks: Distributional Similarity for Relation Learning | General purpose relation extractors, which can model arbitrary relations, are
a core aspiration in information extraction. Efforts have been made to build
general purpose extractors that represent relations with their surface forms,
or which jointly embed surface forms with relations from an existing knowledge
graph. However, both of these approaches are limited in their ability to
generalize. In this paper, we build on extensions of Harris' distributional
hypothesis to relations, as well as recent advances in learning text
representations (specifically, BERT), to build task agnostic relation
representations solely from entity-linked text. We show that these
representations significantly outperform previous work on exemplar based
relation extraction (FewRel) even without using any of that task's training
data. We also show that models initialized with our task agnostic
representations, and then tuned on supervised relation extraction datasets,
significantly outperform the previous methods on SemEval 2010 Task 8, KBP37,
and TACRED.
| 2,019 | Computation and Language |
Building a Production Model for Retrieval-Based Chatbots | Response suggestion is an important task for building human-computer
conversation systems. Recent approaches to conversation modeling have
introduced new model architectures with impressive results, but relatively
little attention has been paid to whether these models would be practical in a
production setting. In this paper, we describe the unique challenges of
building a production retrieval-based conversation system, which selects
outputs from a whitelist of candidate responses. To address these challenges,
we propose a dual encoder architecture which performs rapid inference and
scales well with the size of the whitelist. We also introduce and compare two
methods for generating whitelists, and we carry out a comprehensive analysis of
the model and whitelists. Experimental results on a large, proprietary help
desk chat dataset, including both offline metrics and a human evaluation,
indicate production-quality performance and illustrate key lessons about
conversation modeling in practice.
| 2,019 | Computation and Language |
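A minimal sketch of the dual-encoder inference path described above: whitelist responses are encoded once offline, and each incoming query needs only one encoding plus a matrix-vector product, which is what makes the approach scale with whitelist size. The mean-of-word-vectors encoder is a stand-in for the trained neural encoders.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {w: rng.normal(size=16) for w in
         "how do i reset my password thanks for your help order status where is".split()}

def encode(text):
    # Stand-in encoder: mean of word vectors; production would use a trained encoder.
    vecs = [VOCAB[w] for w in text.split() if w in VOCAB]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

whitelist = ["how do i reset my password", "where is my order", "thanks for your help"]
response_matrix = np.stack([encode(r) for r in whitelist])  # precomputed offline

def suggest(query, top_k=1):
    scores = response_matrix @ encode(query)  # one matrix-vector product per request
    return [whitelist[i] for i in np.argsort(-scores)[:top_k]]

print(suggest("i need to reset my password"))
```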
Data-to-text Generation with Entity Modeling | Recent approaches to data-to-text generation have shown great promise thanks
to the use of large-scale datasets and the application of neural network
architectures which are trained end-to-end. These models rely on representation
learning to select content appropriately, structure it coherently, and
verbalize it grammatically, treating entities as nothing more than vocabulary
tokens. In this work we propose an entity-centric neural architecture for
data-to-text generation. Our model creates entity-specific representations
which are dynamically updated. Text is generated conditioned on the data input
and entity memory representations using hierarchical attention at each time
step. We present experiments on the RotoWire benchmark and a (five times
larger) new dataset on the baseball domain which we create. Our results show
that the proposed model outperforms competitive baselines in automatic and
human evaluation.
| 2,019 | Computation and Language |
Learning Word Embeddings with Domain Awareness | Word embeddings are traditionally trained on a large corpus in an
unsupervised setting, with no specific design for incorporating domain
knowledge. This can lead to unsatisfactory performances when training data
originate from heterogeneous domains. In this paper, we propose two novel
mechanisms for domain-aware word embedding training, namely domain indicator
and domain attention, which integrate domain-specific knowledge into the widely
used SG and CBOW models, respectively. The two methods are based on a joint
learning paradigm and ensure that words in a target domain are intensively
focused when trained on a source domain corpus. Qualitative and quantitative
evaluation confirm the validity and effectiveness of our models. Compared to
baseline methods, our method is particularly effective in near-cold-start
scenarios.
| 2,019 | Computation and Language |
Assessing incrementality in sequence-to-sequence models | Since their inception, encoder-decoder models have successfully been applied
to a wide array of problems in computational linguistics. The most recent
successes are predominantly due to the use of different variations of attention
mechanisms, but their cognitive plausibility is questionable. In particular,
because past representations can be revisited at any point in time,
attention-centric methods seem to lack an incentive to build up incrementally
more informative representations of incoming sentences. This way of processing
stands in stark contrast with the way in which humans are believed to process
language: continuously and rapidly integrating new information as it is
encountered. In this work, we propose three novel metrics to assess the
behavior of RNNs with and without an attention mechanism and identify key
differences in the way the different model types process sentences.
| 2,019 | Computation and Language |
Dissecting Content and Context in Argumentative Relation Analysis | When assessing relations between argumentative units (e.g., support or
attack), computational systems often exploit disclosing indicators or markers
that are not part of elementary argumentative units (EAUs) themselves, but are
gained from their context (position in paragraph, preceding tokens, etc.). We
show that this dependency is much stronger than previously assumed. In fact, we
show that by completely masking the EAU text spans and only feeding information
from their context, a competitive system may function even better. We argue
that an argument analysis system that relies more on discourse context than the
argument's content is unsafe, since it can easily be tricked. To alleviate this
issue, we separate argumentative units from their context such that the system
is forced to model and rely on an EAU's content. We show that the resulting
classification system is more robust, and argue that such models are better
suited for predicting argumentative relations across documents.
| 2,019 | Computation and Language |
Classifying the reported ability in clinical mobility descriptions | Assessing how individuals perform different activities is key information for
modeling health states of individuals and populations. Descriptions of activity
performance in clinical free text are complex, including syntactic negation and
similarities to textual entailment tasks. We explore a variety of methods for
the novel task of classifying four types of assertions about activity
performance: Able, Unable, Unclear, and None (no information). We find that
ensembling an SVM trained with lexical features and a CNN achieves 77.9% macro
F1 score on our task, and yields nearly 80% recall on the rare Unclear and
Unable samples. Finally, we highlight several challenges in classifying
performance assertions, including capturing information about sources of
assistance, incorporating syntactic structure and negation scope, and handling
new modalities at test time. Our findings establish a strong baseline for this
novel task, and identify intriguing areas for further research.
| 2,019 | Computation and Language |
Deep Contextualized Biomedical Abbreviation Expansion | Automatic identification and expansion of ambiguous abbreviations are
essential for biomedical natural language processing applications, such as
information retrieval and question answering systems. In this paper, we present
the DEep Contextualized Biomedical Abbreviation Expansion (DECBAE) model. DECBAE
automatically collects substantial and relatively clean annotated contexts for
950 ambiguous abbreviations from PubMed abstracts using a simple heuristic.
Then it utilizes BioELMo to extract the contextualized features of words, and
feeds those features to abbreviation-specific bidirectional LSTMs, where the
hidden states of the ambiguous abbreviations are used to assign the exact
definitions. Our DECBAE model outperforms other baselines by large margins,
achieving average accuracy of 0.961 and macro-F1 of 0.917 on the dataset. It
also surpasses human performance for expanding a sample abbreviation, and
remains robust in imbalanced, low-resource and clinical settings.
| 2,019 | Computation and Language |
Clinical Concept Extraction for Document-Level Coding | The text of clinical notes can be a valuable source of patient information
and clinical assessments. Historically, the primary approach for exploiting
clinical notes has been information extraction: linking spans of text to
concepts in a detailed domain ontology. However, recent work has demonstrated
the potential of supervised machine learning to extract document-level codes
directly from the raw text of clinical notes. We propose to bridge the gap
between the two approaches with two novel syntheses: (1) treating extracted
concepts as features, which are used to supplement or replace the text of the
note; (2) treating extracted concepts as labels, which are used to learn a
better representation of the text. Unfortunately, the resulting concepts do not
yield performance gains on the document-level clinical coding task. We explore
possible explanations and future research directions.
| 2,019 | Computation and Language |
Effective Use of Variational Embedding Capacity in Expressive End-to-End
Speech Synthesis | Recent work has explored sequence-to-sequence latent variable models for
expressive speech synthesis (supporting control and transfer of prosody and
style), but has not presented a coherent framework for understanding the
trade-offs between the competing methods. In this paper, we propose embedding
capacity (the amount of information the embedding contains about the data) as a
unified method of analyzing the behavior of latent variable models of speech,
comparing existing heuristic (non-variational) methods to variational methods
that are able to explicitly constrain capacity using an upper bound on
representational mutual information. In our proposed model (Capacitron), we
show that by adding conditional dependencies to the variational posterior such
that it matches the form of the true posterior, the same model can be used for
high-precision prosody transfer, text-agnostic style transfer, and generation
of natural-sounding prior samples. For multi-speaker models, Capacitron is able
to preserve target speaker identity during inter-speaker prosody transfer and
when drawing samples from the latent prior. Lastly, we introduce a method for
decomposing embedding capacity hierarchically across two sets of latents,
allowing a portion of the latent variability to be specified and the remaining
variability sampled from a learned prior. Audio examples are available on the
web.
| 2,019 | Computation and Language |
Improving Low-Resource Cross-lingual Document Retrieval by Reranking
with Deep Bilingual Representations | In this paper, we propose to boost low-resource cross-lingual document
retrieval performance with deep bilingual query-document representations. We
match queries and documents in both source and target languages with four
components, each of which is implemented as a term interaction-based deep
neural network with cross-lingual word embeddings as input. By including query
likelihood scores as extra features, our model effectively learns to rerank the
retrieved documents by using a small number of relevance labels for
low-resource language pairs. Due to the shared cross-lingual word embedding
space, the model can also be directly applied to another language pair without
any training label. Experimental results on the MATERIAL dataset show that our
model outperforms the competitive translation-based baselines on
English-Swahili, English-Tagalog, and English-Somali cross-lingual information
retrieval tasks.
| 2,019 | Computation and Language |
Making Asynchronous Stochastic Gradient Descent Work for Transformers | Asynchronous stochastic gradient descent (SGD) is attractive from a speed
perspective because workers do not wait for synchronization. However, the
Transformer model converges poorly with asynchronous SGD, resulting in
substantially lower quality compared to synchronous SGD. To investigate why
this is the case, we isolate differences between asynchronous and synchronous
methods to investigate batch size and staleness effects. We find that summing
several asynchronous updates, rather than applying them immediately, restores
convergence behavior. With this hybrid method, Transformer training for a neural
machine translation task reaches a near-convergence level 1.36x faster in
single-node multi-GPU training with no impact on model quality.
| 2,019 | Computation and Language |
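A toy sketch of the hybrid scheme described above: worker gradients are buffered and applied as one combined update instead of immediately. A one-parameter least-squares problem stands in for Transformer training, and the buffer size and learning rate are arbitrary choices.

```python
import numpy as np

def grad(theta, x, y):
    # Gradient of a least-squares objective, standing in for a Transformer loss.
    return 2 * x * (x * theta - y)

rng = np.random.default_rng(0)
theta, lr, accumulate = 0.0, 0.05, 4
buffer, data = [], [(x, 3.0 * x + rng.normal(scale=0.1)) for x in rng.uniform(1, 2, 200)]

for x, y in data:
    # Each "worker" gradient is added to a buffer instead of being applied at once.
    buffer.append(grad(theta, x, y))
    if len(buffer) == accumulate:
        # Apply the accumulated update in one step (averaged here so the
        # learning rate need not be rescaled with the buffer size).
        theta -= lr * np.mean(buffer)
        buffer.clear()

print(round(theta, 2))  # approaches the true slope of 3
```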
This Email Could Save Your Life: Introducing the Task of Email Subject
Line Generation | Given the overwhelming number of emails, an effective subject line becomes
essential to better inform the recipient of the email's content. In this paper,
we propose and study the task of email subject line generation: automatically
generating an email subject line from the email body. We create the first
dataset for this task and find that email subject line generation favors
extremely abstractive summaries, which differentiates it from news headline
generation or news single document summarization. We then develop a novel deep
learning method and compare it to several baselines as well as recent
state-of-the-art text summarization systems. We also investigate the efficacy
of several automatic metrics based on correlations with human judgments and
propose a new automatic evaluation metric. Our system outperforms competitive
baselines given both automatic and human evaluations. To our knowledge, this is
the first work to tackle the problem of effective email subject line
generation.
| 2,019 | Computation and Language |
Sentence Centrality Revisited for Unsupervised Summarization | Single document summarization has enjoyed renewed interests in recent years
thanks to the popularity of neural network models and the availability of
large-scale datasets. In this paper we develop an unsupervised approach arguing
that it is unrealistic to expect large-scale and high-quality training data to
be available or created for different types of summaries, domains, or
languages. We revisit a popular graph-based ranking algorithm and modify how
node (aka sentence) centrality is computed in two ways: (a)~we employ BERT, a
state-of-the-art neural representation learning model to better capture
sentential meaning and (b)~we build graphs with directed edges arguing that the
contribution of any two nodes to their respective centrality is influenced by
their relative position in a document. Experimental results on three news
summarization datasets representative of different languages and writing styles
show that our approach outperforms strong baselines by a wide margin.
| 2,019 | Computation and Language |
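The modified centrality computation from the preceding abstract can be sketched compactly; below, bag-of-words cosine similarity stands in for BERT sentence representations, and the forward/backward edge weights are illustrative rather than the tuned values from the paper.

```python
import numpy as np

def sentence_vectors(sentences):
    # Stand-in for BERT: normalized bag-of-words vectors.
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = np.zeros((len(sentences), len(vocab)))
    for i, s in enumerate(sentences):
        for w in s.lower().split():
            vecs[i, index[w]] += 1
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def directed_centrality(sentences, w_before=0.3, w_after=1.0):
    """Edges pointing forward in the document count more than edges pointing
    back (illustrative asymmetry; the paper tunes it on held-out data)."""
    v = sentence_vectors(sentences)
    sim = v @ v.T
    scores = []
    for i in range(len(sentences)):
        scores.append(w_before * sim[i, :i].sum() + w_after * sim[i, i + 1:].sum())
    return scores

doc = ["The company reported record profits this quarter.",
       "Profits rose on strong demand for its cloud services.",
       "The CEO also mentioned a new office opening."]
ranked = sorted(zip(doc, directed_centrality(doc)), key=lambda p: -p[1])
print(ranked[0][0])   # highest-centrality sentence
```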
Domain Adaptive Dialog Generation via Meta Learning | Domain adaptation is an essential task in dialog system building because
there are so many new dialog tasks created for different needs every day.
Collecting and annotating training data for these new tasks is costly since it
involves real user interactions. We propose a domain adaptive dialog generation
method based on meta-learning (DAML). DAML is an end-to-end trainable dialog
system model that learns from multiple rich-resource tasks and then adapts to
new domains with minimal training samples. We train a dialog system model using
multiple rich-resource single-domain dialog data by applying the model-agnostic
meta-learning algorithm to the dialog domain. The model is capable of learning a
competitive dialog system on a new domain with only a few training examples in
an efficient manner. The two-step gradient updates in DAML enable the model to
learn general features across multiple tasks. We evaluate our method on a
simulated dialog dataset and achieve state-of-the-art performance, which is
generalizable to new tasks.
| 2,019 | Computation and Language |
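A first-order sketch of the two-step gradient update mentioned above, on a toy one-dimensional problem: an inner adaptation step per task, then an outer step on the post-adaptation loss. The quadratic per-task losses and learning rates are stand-ins for the dialog model and its hyper-parameters, and full DAML/MAML would differentiate through the inner step rather than use this first-order shortcut.

```python
import numpy as np

def loss_grad(theta, target):
    """Quadratic per-task loss (theta - target)^2, standing in for a dialog loss."""
    return (theta - target) ** 2, 2 * (theta - target)

tasks = [1.0, 2.0, 3.0]            # three "rich-resource domains"
theta, inner_lr, outer_lr = 0.0, 0.1, 0.05

for _ in range(200):
    outer_grads = []
    for task in tasks:
        _, g = loss_grad(theta, task)
        adapted = theta - inner_lr * g            # step 1: task-specific adaptation
        _, g_adapted = loss_grad(adapted, task)
        outer_grads.append(g_adapted)             # step 2: first-order outer gradient
    theta -= outer_lr * np.mean(outer_grads)      # meta-update on the initialization

print(round(theta, 2))   # ends near 2.0, the initialization that adapts fastest on average
```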
Seeing Things from a Different Angle: Discovering Diverse Perspectives
about Claims | One key consequence of the information revolution is a significant increase
and a contamination of our information supply. The practice of fact checking
won't suffice to eliminate the biases in text data we observe, as the degree of
factuality alone does not determine whether biases exist in the spectrum of
opinions visible to us. To better understand controversial issues, one needs to
view them from a diverse yet comprehensive set of perspectives. For example,
there are many ways to respond to a claim such as "animals should have lawful
rights", and these responses form a spectrum of perspectives, each with a
stance relative to this claim and, ideally, with evidence supporting it.
Inherently, this is a natural language understanding task, and we propose to
address it as such. Specifically, we propose the task of substantiated
perspective discovery where, given a claim, a system is expected to discover a
diverse set of well-corroborated perspectives that take a stance with respect
to the claim. Each perspective should be substantiated by evidence paragraphs
which summarize pertinent results and facts. We construct PERSPECTRUM, a
dataset of claims, perspectives and evidence, making use of online debate
websites to create the initial data collection, and augmenting it using search
engines in order to expand and diversify our dataset. We use crowd-sourcing to
filter out noise and ensure high-quality data. Our dataset contains 1k claims,
accompanied with pools of 10k and 8k perspective sentences and evidence
paragraphs, respectively. We provide a thorough analysis of the dataset to
highlight key underlying language understanding challenges, and show that human
baselines across multiple subtasks far outperform machine baselines built upon
state-of-the-art NLP techniques. This poses a challenge and opportunity for the
NLP community to address.
| 2,019 | Computation and Language |
A Survey on Neural Network Language Models | As the core component of Natural Language Processing (NLP) system, Language
Model (LM) can provide word representation and probability indication of word
sequences. Neural Network Language Models (NNLMs) overcome the curse of
dimensionality and improve the performance of traditional LMs. A survey on
NNLMs is performed in this paper. The structure of classic NNLMs is described
first, and then some major improvements are introduced and analyzed. We
summarize and compare corpora and toolkits of NNLMs. Further, some research
directions of NNLMs are discussed.
| 2,019 | Computation and Language |
Probing for Semantic Classes: Diagnosing the Meaning Content of Word
Embeddings | Word embeddings typically represent different meanings of a word in a single
conflated vector. Empirical analysis of embeddings of ambiguous words is
currently limited by the small size of manually annotated resources and by the
fact that word senses are treated as unrelated individual concepts. We present
a large dataset based on manual Wikipedia annotations and word senses, where
word senses from different words are related by semantic classes. This is the
basis for novel diagnostic tests for an embedding's content: we probe word
embeddings for semantic classes and analyze the embedding space by classifying
embeddings into semantic classes. Our main findings are: (i) Information about
a sense is generally represented well in a single-vector embedding - if the
sense is frequent. (ii) A classifier can accurately predict whether a word is
single-sense or multi-sense, based only on its embedding. (iii) Although rare
senses are not well represented in single-vector embeddings, this does not have
a negative impact on an NLP application whose performance depends on frequent
senses.
| 2,019 | Computation and Language |
Learning to Predict Novel Noun-Noun Compounds | We introduce temporally and contextually-aware models for the novel task of
predicting unseen but plausible concepts, as conveyed by noun-noun compounds in
a time-stamped corpus. We train compositional models on observed compounds,
more specifically the composed distributed representations of their
constituents across a time-stamped corpus, while giving it corrupted instances
(where head or modifier are replaced by a random constituent) as negative
evidence. The model captures generalisations over this data and learns what
combinations give rise to plausible compounds and which ones do not. After
training, we query the model for the plausibility of automatically generated
novel combinations and verify whether the classifications are accurate. For our
best model, we find that in around 85% of the cases, the novel compounds
generated are attested in previously unseen data. An additional estimated 5%
are plausible despite not being attested in the recent corpus, based on
judgments from independent human raters.
| 2,019 | Computation and Language |
LSTM Networks Can Perform Dynamic Counting | In this paper, we systematically assess the ability of standard recurrent
networks to perform dynamic counting and to encode hierarchical
representations. All the neural models in our experiments are designed to be
small-sized networks both to prevent them from memorizing the training sets and
to visualize and interpret their behaviour at test time. Our results
demonstrate that the Long Short-Term Memory (LSTM) networks can learn to
recognize the well-balanced parenthesis language (Dyck-$1$) and the shuffles of
multiple Dyck-$1$ languages, each defined over different parenthesis-pairs, by
emulating simple real-time $k$-counter machines. To the best of our knowledge,
this work is the first study to introduce the shuffle languages to analyze the
computational power of neural networks. We also show that a single-layer LSTM
with only one hidden unit is practically sufficient for recognizing the
Dyck-$1$ language. However, none of our recurrent networks was able to yield a
good performance on the Dyck-$2$ language learning task, which requires a model
to have a stack-like mechanism for recognition.
| 2,019 | Computation and Language |
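The real-time k-counter machines that the LSTMs are shown to emulate in the preceding abstract can be written out directly; a minimal recognizer for Dyck-1 and for the shuffle of several Dyck-1 languages over distinct parenthesis pairs follows.

```python
def is_dyck1(s, pairs=(("(", ")"),)):
    """Real-time k-counter recognizer: one counter per parenthesis pair.

    Accepts the shuffle of the Dyck-1 languages defined by `pairs`; with a
    single pair this is plain Dyck-1 (well-balanced parentheses).
    """
    counters = [0] * len(pairs)
    for ch in s:
        for i, (open_ch, close_ch) in enumerate(pairs):
            if ch == open_ch:
                counters[i] += 1
            elif ch == close_ch:
                counters[i] -= 1
                if counters[i] < 0:       # a closer with no matching opener
                    return False
    return all(c == 0 for c in counters)

print(is_dyck1("(()(()))"))                                  # True
print(is_dyck1("())("))                                      # False
print(is_dyck1("([)]", pairs=(("(", ")"), ("[", "]"))))      # True: in the shuffle of two Dyck-1s
```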
Encouraging Paragraph Embeddings to Remember Sentence Identity Improves
Classification | While paragraph embedding models are remarkably effective for downstream
classification tasks, what they learn and encode into a single vector remains
opaque. In this paper, we investigate a state-of-the-art paragraph embedding
method proposed by Zhang et al. (2017) and discover that it cannot reliably
tell whether a given sentence occurs in the input paragraph or not. We
formulate a sentence content task to probe for this basic linguistic property
and find that even a much simpler bag-of-words method has no trouble solving
it. This result motivates us to replace the reconstruction-based objective of
Zhang et al. (2017) with our sentence content probe objective in a
semi-supervised setting. Despite its simplicity, our objective improves over
paragraph reconstruction in terms of (1) downstream classification accuracies
on benchmark datasets, (2) faster training, and (3) better generalization
ability.
| 2,019 | Computation and Language |
Question Answering as Global Reasoning over Semantic Abstractions | We propose a novel method for exploiting the semantic structure of text to
answer multiple-choice questions. The approach is especially suitable for
domains that require reasoning over a diverse set of linguistic constructs but
have limited training data. To address these challenges, we present the first
system, to the best of our knowledge, that reasons over a wide range of
semantic abstractions of the text, which are derived using off-the-shelf,
general-purpose, pre-trained natural language modules such as semantic role
labelers, coreference resolvers, and dependency parsers. Representing multiple
abstractions as a family of graphs, we translate question answering (QA) into a
search for an optimal subgraph that satisfies certain global and local
properties. This formulation generalizes several prior structured QA systems.
Our system, SEMANTICILP, demonstrates strong performance on two domains
simultaneously. In particular, on a collection of challenging science QA
datasets, it outperforms various state-of-the-art approaches, including neural
models, broad coverage information retrieval, and specialized techniques using
structured knowledge bases, by 2%-6%.
| 2,019 | Computation and Language |
Happy Together: Learning and Understanding Appraisal From Natural
Language | In this paper, we explore various approaches for learning two types of
appraisal components from happy language. We focus on 'agency' of the author
and the 'sociality' involved in happy moments based on the HappyDB dataset. We
develop models based on deep neural networks for the task, including uni- and
bi-directional long short-term memory networks, with and without attention. We
also experiment with a number of novel embedding methods, such as embedding
from neural machine translation (as in CoVe) and embedding from language models
(as in ELMo). We compare our results to those acquired by several traditional
machine learning methods. Our best models achieve 87.97% accuracy on agency and
93.13% accuracy on sociality, both of which are significantly higher than our
baselines.
| 2,019 | Computation and Language |
UBC-NLP at SemEval-2019 Task 6: Ensemble Learning of Offensive Content
With Enhanced Training Data | We examine learning offensive content on Twitter with limited, imbalanced
data. For this purpose, we investigate the utility of using various data
enhancement methods with a host of classical ensemble classifiers. Among the 75
participating teams in SemEval-2019 sub-task B, our system ranks 6th (with
0.706 macro F1-score). For sub-task C, among the 65 participating teams, our
system ranks 9th (with 0.587 macro F1-score).
| 2,019 | Computation and Language |
Gendered Pronoun Resolution using BERT and an extractive question
answering formulation | The resolution of ambiguous pronouns is a longstanding challenge in Natural
Language Understanding. Recent studies have suggested gender bias among
state-of-the-art coreference resolution systems. As an example, Google AI
Language team recently released a gender-balanced dataset and showed that
performance of these coreference resolvers is significantly limited on the
dataset. In this paper, we propose an extractive question answering (QA)
formulation of pronoun resolution task that overcomes this limitation and shows
much lower gender bias (0.99) on their dataset. This system uses fine-tuned
representations from the pre-trained BERT model and outperforms the existing
baseline by a significant margin (22.2% absolute improvement in F1 score)
without using any hand-engineered features. This QA framework is equally
performant even without the knowledge of the candidate antecedents of the
pronoun. An ensemble of QA and BERT-based multiple choice and sequence
classification models further improves the F1 (23.3% absolute improvement upon
the baseline). This ensemble model was submitted to the shared task for the 1st
ACL workshop on Gender Bias for Natural Language Processing. It ranked 9th on
the final official leaderboard. Source code is available at
https://github.com/rakeshchada/corefqa
| 2,019 | Computation and Language |
Argument Generation with Retrieval, Planning, and Realization | Automatic argument generation is an appealing but challenging task. In this
paper, we study the specific problem of counter-argument generation, and
present a novel framework, CANDELA. It consists of a powerful retrieval system
and a novel two-step generation model, where a text planning decoder first
decides on the main talking points and a proper language style for each
sentence, then a content realization decoder reflects the decisions and
constructs an informative paragraph-level argument. Furthermore, our generation
model is empowered by a retrieval system indexed with 12 million articles
collected from Wikipedia and popular English news media, which provides access
to high-quality content with diversity. Automatic evaluation on a large-scale
dataset collected from Reddit shows that our model yields significantly higher
BLEU, ROUGE, and METEOR scores than the state-of-the-art and non-trivial
comparisons. Human evaluation further indicates that our system arguments are
more appropriate for refutation and richer in content.
| 2,019 | Computation and Language |
Is Attention Interpretable? | Attention mechanisms have recently boosted performance on a range of NLP
tasks. Because attention layers explicitly weight input components'
representations, it is also often assumed that attention can be used to
identify information that models found important (e.g., specific contextualized
word tokens). We test whether that assumption holds by manipulating attention
weights in already-trained text classification models and analyzing the
resulting differences in their predictions. While we observe some ways in which
higher attention weights correlate with greater impact on model predictions, we
also find many ways in which this does not hold, i.e., where gradient-based
rankings of attention weights better predict their effects than their
magnitudes. We conclude that while attention noisily predicts input components'
overall importance to a model, it is by no means a fail-safe indicator.
| 2,019 | Computation and Language |
BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent
Summarization | Most existing text summarization datasets are compiled from the news domain,
where summaries have a flattened discourse structure. In such datasets,
summary-worthy content often appears in the beginning of input articles.
Moreover, large segments from input articles are present verbatim in their
respective summaries. These issues impede the learning and evaluation of
systems that can understand an article's global content structure as well as
produce abstractive summaries with high compression ratio. In this work, we
present a novel dataset, BIGPATENT, consisting of 1.3 million records of U.S.
patent documents along with human written abstractive summaries. Compared to
existing summarization datasets, BIGPATENT has the following properties: i)
summaries contain a richer discourse structure with more recurring entities,
ii) salient content is evenly distributed in the input, and iii) lesser and
shorter extractive fragments are present in the summaries. Finally, we train
and evaluate baselines and popular learning models on BIGPATENT to shed light
on new challenges and motivate future directions for summarization research.
| 2,019 | Computation and Language |
Out-of-Vocabulary Embedding Imputation with Grounded Language
Information by Graph Convolutional Networks | Due to the ubiquitous use of embeddings as input representations for a wide
range of natural language tasks, imputation of embeddings for rare and unseen
words is a critical problem in language processing. Embedding imputation
involves learning representations for rare or unseen words during the training
of an embedding model, often in a post-hoc manner. In this paper, we propose an
approach for embedding imputation which uses grounded information in the form
of a knowledge graph. This is in contrast to existing approaches which
typically make use of vector space properties or subword information. We
propose an online method to construct a graph from grounded information and
design an algorithm to map from the resulting graphical structure to the space
of the pre-trained embeddings. Finally, we evaluate our approach on a range of
rare and unseen word tasks across various domains and show that our model can
learn better representations. For example, on the Card-660 task our method
improves Pearson's and Spearman's correlation coefficients upon the
state-of-the-art by 11% and 17.8% respectively using GloVe embeddings.
| 2,020 | Computation and Language |
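A minimal one-layer graph-convolution sketch of the imputation idea above: propagate grounded node features over the graph and project them into the pre-trained embedding space. The toy graph, features and single linear layer are illustrative assumptions; in the paper the mapping is trained so that known words land near their pre-trained vectors.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution layer: symmetric-normalized propagation A_hat X W."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight, 0.0)

rng = np.random.default_rng(0)
n_nodes, feat_dim, emb_dim = 5, 12, 8
adj = np.array([[0, 1, 1, 0, 0],                       # toy knowledge-graph edges;
                [1, 0, 0, 1, 0],                       # node 4 plays the role of the
                [1, 0, 0, 0, 1],                       # rare word whose embedding
                [0, 1, 0, 0, 0],                       # we want to impute
                [0, 0, 1, 0, 0]], dtype=float)
features = rng.normal(size=(n_nodes, feat_dim))        # grounded node features
weight = rng.normal(size=(feat_dim, emb_dim)) * 0.1    # would be learned to match
                                                       # known words' pre-trained vectors
imputed = gcn_layer(adj, features, weight)[4]
print(imputed.shape)
```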
Sequence-to-Nuggets: Nested Entity Mention Detection via Anchor-Region
Networks | Sequential labeling-based NER approaches restrict each word belonging to at
most one entity mention, which will face a serious problem when recognizing
nested entity mentions. In this paper, we propose to resolve this problem by
modeling and leveraging the head-driven phrase structures of entity mentions,
i.e., although a mention can nest other mentions, they will not share the same
head word. Specifically, we propose Anchor-Region Networks (ARNs), a
sequence-to-nuggets architecture for nested mention detection. ARNs first
identify anchor words (i.e., possible head words) of all mentions, and then
recognize the mention boundaries for each anchor word by exploiting regular
phrase structures. Furthermore, we also design Bag Loss, an objective function
which can train ARNs in an end-to-end manner without using any anchor word
annotation. Experiments show that ARNs achieve state-of-the-art performance
on three standard nested entity mention detection benchmarks.
| 2,019 | Computation and Language |
Generalized Data Augmentation for Low-Resource Translation | Translation to or from low-resource languages (LRLs) poses challenges for
machine translation in terms of both adequacy and fluency. Data augmentation
utilizing large amounts of monolingual data is regarded as an effective way to
alleviate these problems. In this paper, we propose a general framework for
data augmentation in low-resource machine translation that not only uses
target-side monolingual data, but also pivots through a related high-resource
language (HRL). Specifically, we experiment with a two-step pivoting method to
convert high-resource data to the LRL, making use of available resources to
better approximate the true data distribution of the LRL. First, we inject LRL
words into HRL sentences through an induced bilingual dictionary. Second, we
further edit these modified sentences using a modified unsupervised machine
translation framework. Extensive experiments on four low-resource datasets show
that under extreme low-resource settings, our data augmentation techniques
improve translation quality by up to 1.5 to 8 BLEU points compared to
supervised back-translation baselines.
| 2,019 | Computation and Language |
Open-Domain Targeted Sentiment Analysis via Span-Based Extraction and
Classification | Open-domain targeted sentiment analysis aims to detect opinion targets along
with their sentiment polarities from a sentence. Prior work typically
formulates this task as a sequence tagging problem. However, such formulation
suffers from problems such as huge search space and sentiment inconsistency. To
address these problems, we propose a span-based extract-then-classify
framework, where multiple opinion targets are directly extracted from the
sentence under the supervision of target span boundaries, and corresponding
polarities are then classified using their span representations. We further
investigate three approaches under this framework, namely the pipeline, joint,
and collapsed models. Experiments on three benchmark datasets show that our
approach consistently outperforms the sequence tagging baseline. Moreover, we
find that the pipeline model achieves the best performance compared with the
other two models.
| 2,019 | Computation and Language |
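A schematic sketch of the extract-then-classify pipeline from the preceding abstract: target spans are taken from boundary predictions (pretend ones here) and polarities are classified from pooled span representations; the random vectors and weights stand in for the trained extractor and classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = "the battery life is great but the screen is dim".split()
hidden = rng.normal(size=(len(tokens), 16))       # stand-in for encoder states

# Step 1: extraction -- pretend boundary scores selected these target spans.
spans = [(1, 2), (7, 7)]                          # "battery life", "screen"

# Step 2: classification from span representations (random weights stand in
# for the trained polarity classifier).
w_polarity = rng.normal(size=(16, 3))             # negative / neutral / positive
labels = ["negative", "neutral", "positive"]

for start, end in spans:
    span_repr = hidden[start:end + 1].mean(axis=0)     # pooled span representation
    polarity = labels[int(np.argmax(span_repr @ w_polarity))]
    print(" ".join(tokens[start:end + 1]), "->", polarity)
```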
A Survey on Neural Machine Reading Comprehension | Enabling a machine to read and comprehend natural language documents so
that it can answer questions remains an elusive challenge. In recent
years, the popularity of deep learning and the establishment of large-scale
datasets have both promoted the prosperity of Machine Reading Comprehension.
This paper aims to present how to utilize neural networks to build a Reader,
introduces some classic models, and analyzes the improvements they make.
Further, we also point out the defects of existing models and future research
directions.
| 2,019 | Computation and Language |
Topic-Aware Neural Keyphrase Generation for Social Media Language | A huge volume of user-generated content is daily produced on social media. To
facilitate automatic language understanding, we study keyphrase prediction,
distilling salient information from massive posts. While most existing methods
extract words from source posts to form keyphrases, we propose a
sequence-to-sequence (seq2seq) based neural keyphrase generation framework,
enabling absent keyphrases to be created. Moreover, our model, being
topic-aware, allows joint modeling of corpus-level latent topic
representations, which helps alleviate the data sparsity widely exhibited
in social media language. Experiments on three datasets collected from English
and Chinese social media platforms show that our model significantly
outperforms both extraction and generation models that do not exploit latent
topics. Further discussions show that our model learns meaningful topics, which
interprets its superiority in social media keyphrase generation.
| 2,019 | Computation and Language |
Automatically Identifying Complaints in Social Media | Complaining is a basic speech act regularly used in human and computer
mediated communication to express a negative mismatch between reality and
expectations in a particular situation. Automatically identifying complaints in
social media is of utmost importance for organizations or brands to improve the
customer experience or in developing dialogue systems for handling and
responding to complaints. In this paper, we introduce the first systematic
analysis of complaints in computational linguistics. We collect a new annotated
data set of written complaints expressed in English on Twitter.\footnote{Data
and code is available here:
\url{https://github.com/danielpreotiuc/complaints-social-media}} We present an
extensive linguistic analysis of complaining as a speech act in social media
and train strong feature-based and neural models of complaints across nine
domains achieving a predictive performance of up to 79 F1 using distant
supervision.
| 2,019 | Computation and Language |
Learning to combine Grammatical Error Corrections | The field of Grammatical Error Correction (GEC) has produced various systems
to deal with focused phenomena or general text editing. We propose an automatic
way to combine black-box systems. Our method automatically detects the strength
of a system or the combination of several systems per error type, improving
precision and recall while optimizing $F$ score directly. We show consistent
improvement over the best standalone system in all the configurations tested.
This approach also outperforms average ensembling of different RNN models with
random initializations.
In addition, we analyze the use of BERT for GEC, reporting promising results.
We also present a spellchecker created for this task, which outperforms
standard spellcheckers on the spellchecking task.
This paper describes a system submission to the Building Educational Applications
2019 Shared Task: Grammatical Error Correction.
Combining the output of top BEA 2019 shared task systems using our approach,
currently holds the highest reported score in the open phase of the BEA 2019
shared task, improving F0.5 by 3.7 points over the best result reported.
| 2,019 | Computation and Language |
Multimodal Logical Inference System for Visual-Textual Entailment | A large amount of research about multimodal inference across text and vision
has been recently developed to obtain visually grounded word and sentence
representations. In this paper, we use logic-based representations as unified
meaning representations for texts and images and present an unsupervised
multimodal logical inference system that can effectively prove entailment
relations between them. We show that by combining semantic parsing and theorem
proving, the system can handle semantically complex sentences for
visual-textual inference.
| 2,019 | Computation and Language |
The University of Helsinki submissions to the WMT19 news translation
task | In this paper, we present the University of Helsinki submissions to the WMT
2019 shared task on news translation in three language pairs: English-German,
English-Finnish and Finnish-English. This year, we focused first on cleaning
and filtering the training data using multiple data-filtering approaches,
resulting in much smaller and cleaner training sets. For English-German, we
trained both sentence-level transformer models and compared different
document-level translation approaches. For Finnish-English and English-Finnish
we focused on different segmentation approaches, and we also included a
rule-based system for English-Finnish.
| 2,019 | Computation and Language |
CAiRE_HKUST at SemEval-2019 Task 3: Hierarchical Attention for Dialogue
Emotion Classification | Detecting emotion from dialogue is a challenge that has not yet been
extensively surveyed. One could consider the emotion of each dialogue turn to
be independent, but in this paper, we introduce a hierarchical approach to
classify emotion, hypothesizing that the current emotional state depends on
previous latent emotions. We benchmark several feature-based classifiers using
pre-trained word and emotion embeddings, state-of-the-art end-to-end neural
network models, and Gaussian processes for automatic hyper-parameter search. In
our experiments, hierarchical architectures consistently give significant
improvements, and our best model achieves a 76.77% F1-score on the test set.
| 2,019 | Computation and Language |
GLTR: Statistical Detection and Visualization of Generated Text | The rapid improvement of language models has raised the specter of abuse of
text generation systems. This progress motivates the development of simple
methods for detecting generated text that can be used by and explained to
non-experts. We develop GLTR, a tool to support humans in detecting whether a
text was generated by a model. GLTR applies a suite of baseline statistical
methods that can detect generation artifacts across common sampling schemes. In
a human-subjects study, we show that the annotation scheme provided by GLTR
improves the human detection-rate of fake text from 54% to 72% without any
prior training. GLTR is open-source and publicly deployed, and has already been
widely used to detect generated outputs.
| 2,019 | Computation and Language |
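The core statistic GLTR visualizes, the rank of each observed token under a language model's prediction, can be sketched with a context-free stand-in model; the real tool queries a large neural LM and also reports probabilities and entropies.

```python
from collections import Counter

# A unigram "language model" estimated from a tiny corpus stands in for the
# large neural LM that GLTR actually queries.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
counts = Counter(corpus)
ranking = [w for w, _ in counts.most_common()]     # most likely word first

def token_ranks(text):
    """Rank of each token under the stand-in model; human-written text tends to
    contain more low-probability (high-rank) tokens than sampled text."""
    return [(w, ranking.index(w) + 1 if w in ranking else len(ranking) + 1)
            for w in text.split()]

print(token_ranks("the dog sat on the mat"))
print(token_ranks("the zebra pondered on the mat"))
```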
Hierarchical Representation in Neural Language Models: Suppression and
Recovery of Expectations | Deep learning sequence models have led to a marked increase in performance
for a range of Natural Language Processing tasks, but it remains an open
question whether they are able to induce proper hierarchical generalizations
for representing natural language from linear input alone. Work using
artificial languages as training input has shown that LSTMs are capable of
inducing the stack-like data structures required to represent context-free and
certain mildly context-sensitive languages---formal language classes which
correspond in theory to the hierarchical structures of natural language. Here
we present a suite of experiments probing whether neural language models
trained on linguistic data induce these stack-like data structures and deploy
them while incrementally predicting words. We study two natural language
phenomena: center embedding sentences and syntactic island constraints on the
filler--gap dependency. In order to properly predict words in these structures,
a model must be able to temporarily suppress certain expectations and then
recover those expectations later, essentially pushing and popping these
expectations on a stack. Our results provide evidence that models can
successfully suppress and recover expectations in many cases, but do not fully
recover their previous grammatical state.
| 2,019 | Computation and Language |