Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64, 1.99k-2.02k) | Categories (stringclasses, 1 value) |
---|---|---|---|
Neural Metric Learning for Fast End-to-End Relation Extraction | Relation extraction (RE) is an indispensable information extraction task in
several disciplines. RE models typically assume that named entity recognition
(NER) is already performed in a previous step by another independent model.
Several recent efforts, under the theme of end-to-end RE, seek to exploit
inter-task correlations by modeling both NER and RE tasks jointly. Earlier work
in this area commonly reduces the task to a table-filling problem wherein an
additional expensive decoding step involving beam search is applied to obtain
globally consistent cell labels. In efforts that do not employ table-filling,
global optimization in the form of CRFs with Viterbi decoding for the NER
component is still necessary for competitive performance. We introduce a novel
neural architecture utilizing the table structure, based on repeated
applications of 2D convolutions for pooling local dependency and metric-based
features, that improves on the state-of-the-art without the need for global
optimization. We validate our model on the ADE and CoNLL04 datasets for
end-to-end RE and demonstrate an $\approx 1\%$ gain (in F-score) over prior best
results with training and testing times that are seven to ten times faster ---
the latter highly advantageous for time-sensitive end user applications.
| 2,019 | Computation and Language |
A Multi-Task Learning Framework for Extracting Drugs and Their
Interactions from Drug Labels | Preventable adverse drug reactions as a result of medical errors present a
growing concern in modern medicine. As drug-drug interactions (DDIs) may cause
adverse reactions, being able to extract DDIs from drug labels into
machine-readable form is an important effort in effectively deploying drug
safety information. The DDI track of TAC 2018 introduces two large
hand-annotated test sets for the task of extracting DDIs from structured
product labels with linkage to standard terminologies. Herein, we describe our
approach to tackling tasks one and two of the DDI track, which correspond to
named entity recognition (NER) and sentence-level relation extraction
respectively. Namely, our approach resembles a multi-task learning framework
designed to jointly model various sub-tasks including NER and interaction type
and outcome prediction. On NER, our system ranked second (among eight teams) at
33.00% and 38.25% F1 on Test Sets 1 and 2 respectively. On relation extraction,
our system ranked second (among four teams) at 21.59% and 23.55% on Test Sets 1
and 2 respectively.
| 2,019 | Computation and Language |
Story Ending Prediction by Transferable BERT | Recent advances, such as GPT and BERT, have shown success in incorporating a
pre-trained transformer language model and fine-tuning operation to improve
downstream NLP systems. However, this framework still has some fundamental
problems in effectively incorporating supervised knowledge from other related
tasks. In this study, we investigate a transferable BERT (TransBERT) training
framework, which can transfer not only general language knowledge from
large-scale unlabeled data but also specific kinds of knowledge from various
semantically related supervised tasks, for a target task. Particularly, we
propose utilizing three kinds of transfer tasks, including natural language
inference, sentiment classification, and next action prediction, to further
train BERT based on a pre-trained model. This enables the model to get a better
initialization for the target task. We take story ending prediction as the
target task to conduct experiments. The final result, an accuracy of 91.8%,
dramatically outperforms previous state-of-the-art baseline methods. Several
comparative experiments give some helpful suggestions on how to select transfer
tasks. Error analysis shows the strengths and weaknesses of BERT-based
models for story ending prediction.
| 2,019 | Computation and Language |
Cross-referencing using Fine-grained Topic Modeling | Cross-referencing, which links passages of text to other related passages,
can be a valuable study aid for facilitating comprehension of a text. However,
cross-referencing requires first, a comprehensive thematic knowledge of the
entire corpus, and second, a focused search through the corpus specifically to
find such useful connections. Due to this, cross-reference resources are
prohibitively expensive and exist only for the most well-studied texts (e.g.
religious texts). We develop a topic-based system for automatically producing
candidate cross-references which can be easily verified by human annotators.
Our system utilizes fine-grained topic modeling with thousands of highly
nuanced and specific topics to identify verse pairs which are topically
related. We demonstrate that our system can be cost-effective compared to
having annotators acquire the expertise necessary to produce cross-reference
resources unaided.
| 2,019 | Computation and Language |
Human-like machine thinking: Language guided imagination | Human thinking requires the brain to understand the meaning of language
expression and to properly organize the flow of thoughts using language.
However, current natural language processing models are primarily limited to
word probability estimation. Here, we propose a Language Guided Imagination
(LGI) network that incrementally learns the meaning and usage of numerous words
and syntaxes, aiming to form a human-like machine thinking process. LGI
contains three subsystems: (1) a vision system, comprising an encoder that
disentangles the input or imagined scenarios into abstract population
representations and an imagination decoder that reconstructs imagined scenarios
from higher-level representations; (2) a language system, comprising a
binarizer that transforms text symbols into binary vectors, an IPS (mimicking
the human intraparietal sulcus, implemented by an LSTM) that extracts quantity
information from the input text, and a textizer that converts binary vectors
back into text symbols; and (3) a PFC (mimicking the human prefrontal cortex,
implemented by an LSTM) that combines language and vision representations and
predicts text symbols and manipulated images accordingly. LGI incrementally
learned eight different syntaxes (or tasks), with which a machine thinking loop
was formed and validated through proper interaction between the language and
vision systems. The paper provides a new architecture that lets a machine
learn, understand, and use language in a human-like way, which could ultimately
enable it to construct fictitious 'mental' scenarios and possess intelligence.
| 2,019 | Computation and Language |
Microblog Hashtag Generation via Encoding Conversation Contexts | Automatic hashtag annotation plays an important role in content understanding
for microblog posts. To date, progress made in this field has been restricted
to phrase selection from limited candidates, or word-level hashtag discovery
using topic models. Unlike previous work, which considers hashtags to be
inseparable, ours is the first effort to annotate hashtags with a novel
sequence generation framework that views a hashtag as a short sequence of
words. Moreover, to address the data sparsity issue in processing short
microblog posts, we propose to jointly model the target posts and the
conversation contexts initiated by them with bidirectional attention. Extensive
experimental results on two large-scale datasets, newly collected from English
Twitter and Chinese Weibo, show that our model significantly outperforms
state-of-the-art models based on classification. Further studies demonstrate
our model's ability to effectively generate rare and even unseen hashtags,
which is not possible for most existing methods.
| 2,019 | Computation and Language |
BERTSel: Answer Selection with Pre-trained Models | Recently, pre-trained models have been the dominant paradigm in natural
language processing. They achieved remarkable state-of-the-art performance
across a wide range of related tasks, such as textual entailment, natural
language inference, question answering, etc. BERT, proposed by Devlin et al.,
achieved markedly better results on the GLUE leaderboard with a deep
transformer architecture. Despite its soaring popularity, however, BERT has not
yet been applied to answer selection. This task differs from others in a few
ways: first, modeling the relevance and correctness of candidates matters more
than semantic relatedness and syntactic structure; second, the length of an
answer may differ from that of other candidates and questions. In this paper,
we are the first to explore the performance of fine-tuning BERT for answer
selection. We achieve state-of-the-art results across five popular datasets, demonstrating
the success of pre-trained models in this task.
| 2,019 | Computation and Language |
Semantic flow in language networks | In this study we propose a framework to characterize documents based on their
semantic flow. The proposed framework encompasses a network-based model that
connects sentences based on their semantic similarity. Semantic fields are
detected using standard community detection methods. As the story unfolds,
transitions between semantic fields are represented in Markov networks, which
in turn are characterized via network motifs (subgraphs). Here we show that the
proposed framework can be used to classify books according to their style and
publication dates. Remarkably, even without a systematic optimization of
parameters, philosophy and investigative books were discriminated with an
accuracy rate of 92.5%. Because this model captures semantic features of texts,
it could be used as an additional feature in traditional network-based models
of texts that capture only syntactic/stylistic information, as is the case for
word adjacency (co-occurrence) networks.
| 2,020 | Computation and Language |
Learning to Memorize in Neural Task-Oriented Dialogue Systems | In this thesis, we leverage the neural copy mechanism and memory-augmented
neural networks (MANNs) to address existing challenges in neural task-oriented
dialogue learning. We show the effectiveness of our strategy by achieving good
performance in multi-domain dialogue state tracking, retrieval-based dialogue
systems, and generation-based dialogue systems. We first propose a transferable
dialogue state generator (TRADE) that leverages its copy mechanism to get rid
of dialogue ontology and share knowledge between domains. We also evaluate
unseen domain dialogue state tracking and show that TRADE enables zero-shot
dialogue state tracking and can adapt to new few-shot domains without
forgetting the previous domains. Second, we utilize MANNs to improve
retrieval-based dialogue learning. They are able to capture dialogue sequential
dependencies and memorize long-term information. We also propose a recorded
delexicalization copy strategy to replace real entity values with ordered
entity types. Our models are shown to surpass other retrieval baselines,
especially when the conversation has a large number of turns. Lastly, we tackle
generation-based dialogue learning with two proposed models, the
memory-to-sequence (Mem2Seq) and global-to-local memory pointer network (GLMP).
Mem2Seq is the first model to combine multi-hop memory attention with the idea
of the copy mechanism. GLMP further introduces the concept of response
sketching and double pointers copying. We show that GLMP achieves the
state-of-the-art performance on human evaluation.
| 2,019 | Computation and Language |
DivGraphPointer: A Graph Pointer Network for Extracting Diverse
Keyphrases | Keyphrase extraction from documents is useful to a variety of applications
such as information retrieval and document summarization. This paper presents
an end-to-end method called DivGraphPointer for extracting a set of diversified
keyphrases from a document. DivGraphPointer combines the advantages of
traditional graph-based ranking methods and recent neural network-based
approaches. Specifically, given a document, a word graph is constructed from
the document based on word proximity and is encoded with graph convolutional
networks, which effectively capture document-level word salience by modeling
long-range dependency between words in the document and aggregating multiple
appearances of identical words into one node. Furthermore, we propose a
diversified pointer network to generate a set of diverse keyphrases out of the
word graph in the decoding process. Experimental results on five benchmark data
sets show that our proposed method significantly outperforms the existing
state-of-the-art approaches.
| 2,019 | Computation and Language |
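As a rough illustration of the graph-construction step described in the DivGraphPointer abstract above, the sketch below builds a word-proximity graph from a toy document in plain numpy. The window size, the use of raw co-occurrence counts as edge weights, and the merging of repeated words into a single node are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def build_word_graph(tokens, window=2):
    # Merge identical words into one node; connect words co-occurring within `window` tokens.
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    adj = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            u, v = index[w], index[tokens[j]]
            if u != v:
                adj[u, v] += 1.0   # proximity-based edge weight (a simple assumption)
                adj[v, u] += 1.0
    return vocab, adj

tokens = "graph pointer networks extract diverse keyphrases from the word graph".split()
vocab, adj = build_word_graph(tokens)
print(vocab)
print(adj.sum(axis=1))  # node degree loosely reflects document-level word salience
```

A graph convolutional encoder and the diversified decoding step would operate on top of such a structure; both are omitted here.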
Structured Summarization of Academic Publications | We propose SUSIE, a novel summarization method that can work with
state-of-the-art summarization models in order to produce structured scientific
summaries for academic articles. We also created PMC-SA, a new dataset of
academic publications, suitable for the task of structured summarization with
neural networks. We apply SUSIE combined with three different summarization
models on the new PMC-SA dataset and we show that the proposed method improves
the performance of all models by as much as 4 ROUGE points.
| 2,019 | Computation and Language |
Earlier Attention? Aspect-Aware LSTM for Aspect-Based Sentiment Analysis | Aspect-based sentiment analysis (ABSA) aims to predict fine-grained
sentiments of comments with respect to given aspect terms or categories. In
previous ABSA methods, the importance of aspect has been realized and verified.
Most existing LSTM-based models take aspect into account via the attention
mechanism, where the attention weights are calculated after the context is
modeled in the form of contextual vectors. However, during context modeling,
classic LSTM cells may have already discarded aspect-related information and
retained aspect-irrelevant information, which leaves room for generating more
effective context representations. This paper proposes a novel
variant of LSTM, termed as aspect-aware LSTM (AA-LSTM), which incorporates
aspect information into LSTM cells in the context modeling stage before the
attention mechanism. Therefore, our AA-LSTM can dynamically produce
aspect-aware contextual representations. We experiment with several
representative LSTM-based models by replacing the classic LSTM cells with the
AA-LSTM cells. Experimental results on SemEval-2014 Datasets demonstrate the
effectiveness of AA-LSTM.
| 2,019 | Computation and Language |
Correlation Coefficients and Semantic Textual Similarity | A large body of research into semantic textual similarity has focused on
constructing state-of-the-art embeddings using sophisticated modelling, careful
choice of learning signals and many clever tricks. By contrast, little
attention has been devoted to similarity measures between these embeddings,
with cosine similarity being used unquestioningly in the majority of cases. In
this work, we illustrate that for all common word vectors, cosine similarity is
essentially equivalent to the Pearson correlation coefficient, which provides
some justification for its use. We thoroughly characterise cases where Pearson
correlation (and thus cosine similarity) is unfit as a similarity measure.
Importantly, we show that Pearson correlation is appropriate for some word
vectors but not others. When it is not appropriate, we illustrate how common
non-parametric rank correlation coefficients can be used instead to
significantly improve performance. We support our analysis with a series of
evaluations on word-level and sentence-level semantic textual similarity
benchmarks. On the latter, we show that even the simplest averaged word vectors
compared by rank correlation easily rival the strongest deep representations
compared by cosine similarity.
| 2,019 | Computation and Language |
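To make the comparison above concrete, here is a small, self-contained sketch (with random toy vectors, not the paper's embeddings) contrasting cosine similarity, Pearson correlation, and a rank-based alternative on averaged word vectors.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["a", "cat", "sat", "dog", "ran"]}

def sentence_vector(words):
    # simplest baseline: average the word vectors of a sentence
    return np.mean([emb[w] for w in words], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

u = sentence_vector(["a", "cat", "sat"])
v = sentence_vector(["a", "dog", "ran"])
print("cosine  :", round(cosine(u, v), 3))
print("pearson :", round(pearsonr(u, v)[0], 3))   # nearly identical to cosine when coordinate means are ~0
print("spearman:", round(spearmanr(u, v)[0], 3))  # rank correlation, the non-parametric alternative
```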
Predicting Annotation Difficulty to Improve Task Routing and Model
Performance for Biomedical Information Extraction | Modern NLP systems require high-quality annotated data. In specialized
domains, expert annotations may be prohibitively expensive. An alternative is
to rely on crowdsourcing to reduce costs at the risk of introducing noise. In
this paper we demonstrate that directly modeling instance difficulty can be
used to improve model performance, and to route instances to appropriate
annotators. Our difficulty prediction model combines two learned
representations: a `universal' encoder trained on out-of-domain data, and a
task-specific encoder. Experiments on a complex biomedical information
extraction task using expert and lay annotators show that: (i) simply excluding
from the training data instances predicted to be difficult yields a small boost
in performance; (ii) using difficulty scores to weight instances during
training provides further, consistent gains; (iii) assigning instances
predicted to be difficult to domain experts is an effective strategy for task
routing. Our experiments confirm the expectation that for specialized tasks
expert annotations are higher quality than crowd labels, and hence preferable
to obtain if practical. Moreover, augmenting small amounts of expert data with
a larger set of lay annotations leads to further improvements in model
performance.
| 2,019 | Computation and Language |
HellaSwag: Can a Machine Really Finish Your Sentence? | Recent work by Zellers et al. (2018) introduced a new task of commonsense
natural language inference: given an event description such as "A woman sits at
a piano," a machine must select the most likely followup: "She sets her fingers
on the keys." With the introduction of BERT, near human-level performance was
reached. Does this mean that machines can perform human-level commonsense
inference?
In this paper, we show that commonsense inference still proves difficult for
even state-of-the-art models, by presenting HellaSwag, a new challenge dataset.
Though its questions are trivial for humans (>95% accuracy), state-of-the-art
models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data
collection paradigm wherein a series of discriminators iteratively select an
adversarial set of machine-generated wrong answers. AF proves to be
surprisingly robust. The key insight is to scale up the length and complexity
of the dataset examples towards a critical 'Goldilocks' zone wherein generated
text is ridiculous to humans, yet often misclassified by state-of-the-art
models.
Our construction of HellaSwag, and its resulting difficulty, sheds light on
the inner workings of deep pretrained models. More broadly, it suggests a new
path forward for NLP research, in which benchmarks co-evolve with the evolving
state-of-the-art in an adversarial way, so as to present ever-harder
challenges.
| 2,019 | Computation and Language |
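A minimal sketch of the Adversarial Filtering loop described above: easy-to-detect wrong answers are repeatedly swapped for candidates the discriminator finds harder. The toy scoring heuristic, the candidate pool, and the fixed number of rounds are stand-ins for the trained discriminators and generated endings used in the paper.

```python
import random

random.seed(0)

def detector_score(ending):
    # Stand-in for a trained discriminator; lower score = harder to spot as machine-written.
    return abs(len(ending.split()) - 8) / 8.0

def adversarial_filter(wrong_endings, candidate_pool, rounds=3, keep=4):
    endings = list(wrong_endings)
    for _ in range(rounds):
        endings.sort(key=detector_score)              # hardest first, easiest last
        endings = endings[:keep - 1]                  # drop the most easily detected ending
        replacement = min(random.sample(candidate_pool, 10), key=detector_score)
        endings.append(replacement)                   # add a harder machine-generated candidate
    return endings

pool = [" ".join(["word"] * n) for n in range(1, 16)]
print(adversarial_filter(["too short", "a much much much longer generated ending here ok", "x"], pool))
```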
Target Based Speech Act Classification in Political Campaign Text | We study pragmatics in political campaign text, through analysis of speech
acts and the target of each utterance. We propose a new annotation schema
incorporating domain-specific speech acts, such as commissive-action, and
present a novel annotated corpus of media releases and speech transcripts from
the 2016 Australian election cycle. We show how speech acts and target
referents can be modeled as sequential classification, and evaluate several
techniques, exploiting contextualized word representations, semi-supervised
learning, task dependencies and speaker meta-data.
| 2,019 | Computation and Language |
PaperRobot: Incremental Draft Generation of Scientific Ideas | We present a PaperRobot who performs as an automatic research assistant by
(1) conducting deep understanding of a large collection of human-written papers
in a target domain and constructing comprehensive background knowledge graphs
(KGs); (2) creating new ideas by predicting links from the background KGs, by
combining graph attention and contextual text attention; (3) incrementally
writing some key elements of a new paper based on memory-attention networks:
from the input title along with predicted related entities to generate a paper
abstract, from the abstract to generate conclusion and future work, and finally
from future work to generate a title for a follow-on paper. Turing Tests, where
a biomedical domain expert is asked to compare a system output and a
human-authored string, show that PaperRobot-generated abstracts, conclusion and
future work sections, and new titles are chosen over human-written ones up to
30%, 24% and 12% of the time, respectively.
| 2,020 | Computation and Language |
The Unexpected Unexpected and the Expected Unexpected: How People's
Conception of the Unexpected is Not That Unexpected | The answers people give when asked to 'think of the unexpected' for everyday
event scenarios appear to be more expected than unexpected. There are expected
unexpected outcomes that closely adhere to the given information in a scenario,
based on familiar disruptions and common plan-failures. There are also
unexpected unexpected outcomes that are more inventive, that depart from given
information, adding new concepts/actions. However, people tend to
conceive of the unexpected as the former more than the latter. Study 1 tests
these proposals by analysing the object-concepts people mention in their
reports of the unexpected and the agreement between their answers. Study 2
shows that object-choices are weakly influenced by recency, that is, the order
of sentences in the scenario. The implications of these results for ideas in
philosophy, psychology, and computing are discussed.
| 2,019 | Computation and Language |
Interpretable Neural Predictions with Differentiable Binary Variables | The success of neural networks comes hand in hand with a desire for more
interpretability. We focus on text classifiers and make them more interpretable
by having them provide a justification, a rationale, for their predictions. We
approach this problem by jointly training two neural network models: a latent
model that selects a rationale (i.e. a short and informative part of the input
text), and a classifier that learns from the words in the rationale alone.
Previous work proposed to assign binary latent masks to input positions and to
promote short selections via sparsity-inducing penalties such as L0
regularisation. We propose a latent model that mixes discrete and continuous
behaviour allowing at the same time for binary selections and gradient-based
training without REINFORCE. In our formulation, we can tractably compute the
expected value of penalties such as L0, which allows us to directly optimise
the model towards a pre-specified text selection rate. We show that our
approach is competitive with previous work on rationale extraction, and explore
further uses in attention mechanisms.
| 2,020 | Computation and Language |
A Neural, Interactive-predictive System for Multimodal Sequence to
Sequence Tasks | We present a demonstration of a neural interactive-predictive system for
tackling multimodal sequence to sequence tasks. The system generates text
predictions to different sequence to sequence tasks: machine translation, image
and video captioning. These predictions are revised by a human agent, who
introduces corrections in the form of characters. The system reacts to each
correction, providing alternative hypotheses complying with the feedback
provided by the user. The final objective is to reduce the human effort
required during this correction process.
This system is implemented following a client-server architecture. For
accessing the system, we developed a website, which communicates with the
neural model, hosted in a local server. From this website, the different tasks
can be tackled following the interactive-predictive framework. We open-source
all the code developed for building this system. The demonstration is hosted at
http://casmacat.prhlt.upv.es/interactive-seq2seq.
| 2,019 | Computation and Language |
Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate
Representation | We present a neural approach called IRNet for complex and cross-domain
Text-to-SQL. IRNet aims to address two challenges: 1) the mismatch between
intents expressed in natural language (NL) and the implementation details in
SQL; 2) the challenge in predicting columns caused by the large number of
out-of-domain words. Instead of synthesizing a SQL query end-to-end, IRNet
decomposes the synthesis process into three phases. In the first phase, IRNet
performs a schema linking over a question and a database schema. Then, IRNet
adopts a grammar-based neural model to synthesize a SemQL query which is an
intermediate representation that we design to bridge NL and SQL. Finally, IRNet
deterministically infers a SQL query from the synthesized SemQL query with
domain knowledge. On the challenging Text-to-SQL benchmark Spider, IRNet
achieves 46.7% accuracy, obtaining 19.5% absolute improvement over previous
state-of-the-art approaches. At the time of writing, IRNet achieves the first
position on the Spider leaderboard.
| 2,019 | Computation and Language |
Target Conditioned Sampling: Optimizing Data Selection for Multilingual
Neural Machine Translation | To improve low-resource Neural Machine Translation (NMT) with multilingual
corpora, training on the most related high-resource language only is often more
effective than using all data available (Neubig and Hu, 2018). However, it is
possible that an intelligent data selection strategy can further improve
low-resource NMT with data from other auxiliary languages. In this paper, we
seek to construct a sampling distribution over all multilingual data, so that
it minimizes the training loss of the low-resource language. Based on this
formulation, we propose an efficient algorithm, Target Conditioned Sampling
(TCS), which first samples a target sentence, and then conditionally samples
its source sentence. Experiments show that TCS brings significant gains of up
to 2 BLEU on three of four languages we test, with minimal training overhead.
| 2,019 | Computation and Language |
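The sketch below illustrates the two-step sampling scheme named above on a toy multilingual corpus: first draw a target sentence, then draw a source sentence conditioned on it. The target distribution and per-language weights here are arbitrary placeholders; the actual method derives them so as to minimise the low-resource training loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# (target_sentence, source_sentence, auxiliary_language)
corpus = [("hello", "bonjour", "fr"), ("hello", "hola", "es"),
          ("thanks", "merci", "fr"), ("thanks", "gracias", "es")]

q_target = {"hello": 0.5, "thanks": 0.5}   # placeholder target-side distribution
lang_weight = {"fr": 0.7, "es": 0.3}       # placeholder conditional weights per language

def tcs_sample():
    targets = list(q_target)
    y = targets[rng.choice(len(targets), p=[q_target[t] for t in targets])]
    pairs = [c for c in corpus if c[0] == y]          # candidates sharing the sampled target
    w = np.array([lang_weight[c[2]] for c in pairs])
    x = pairs[rng.choice(len(pairs), p=w / w.sum())]  # conditionally sample the source side
    return x  # one (target, source, language) training example

print([tcs_sample()[:2] for _ in range(3)])
```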
Enriching Pre-trained Language Model with Entity Information for
Relation Classification | Relation classification is an important NLP task to extract relations between
entities. The state-of-the-art methods for relation classification are
primarily based on Convolutional or Recurrent Neural Networks. Recently, the
pre-trained BERT model achieves very successful results in many NLP
classification / sequence labeling tasks. Relation classification differs from
those tasks in that it relies on information of both the sentence and the two
target entities. In this paper, we propose a model that both leverages the
pre-trained BERT language model and incorporates information from the target
entities to tackle the relation classification task. We locate the target
entities and transfer the information through the pre-trained architecture and
incorporate the corresponding encoding of the two entities. We achieve
significant improvement over the state-of-the-art method on the SemEval-2010
task 8 relational dataset.
| 2,019 | Computation and Language |
Word Usage Similarity Estimation with Sentence Representations and
Automatic Substitutes | Usage similarity estimation addresses the semantic proximity of word
instances in different contexts. We apply contextualized (ELMo and BERT) word
and sentence embeddings to this task, and propose supervised models that
leverage these representations for prediction. Our models are further assisted
by lexical substitute annotations automatically assigned to word instances by
context2vec, a neural model that relies on a bidirectional LSTM. We perform an
extensive comparison of existing word and sentence representations on benchmark
datasets addressing both graded and binary similarity. The best performing
models outperform previous methods in both settings.
| 2,019 | Computation and Language |
Generating Logical Forms from Graph Representations of Text and Entities | Structured information about entities is critical for many semantic parsing
tasks. We present an approach that uses a Graph Neural Network (GNN)
architecture to incorporate information about relevant entities and their
relations during parsing. Combined with a decoder copy mechanism, this approach
provides a conceptually simple mechanism to generate logical forms with
entities. We demonstrate that this approach is competitive with the
state-of-the-art across several tasks without pre-training, and outperforms
existing approaches when combined with BERT pre-training.
| 2,019 | Computation and Language |
A Seq-to-Seq Transformer Premised Temporal Convolutional Network for
Chinese Word Segmentation | Prevalent approaches to the Chinese word segmentation task mostly rely on
Bi-LSTM neural networks. However, Bi-LSTM-based methods have some inherent
drawbacks: they are hard to parallelize, inefficient at applying dropout to
inhibit overfitting, and inefficient at capturing character information at
distant positions of a long sentence for the word segmentation task. In this
work, we propose a sequence-to-sequence transformer model for Chinese word
segmentation, which is premised on a type of convolutional neural network
called the temporal convolutional network. The model uses the temporal
convolutional network to construct the encoder, builds the decoder from a
single fully-connected layer, applies dropout to inhibit overfitting, captures
character information at distant positions of a sentence by adding encoder
layers, binds a Conditional Random Field model to train parameters, and uses
the Viterbi algorithm to infer the final Chinese word segmentation result.
Experiments on traditional and simplified Chinese corpora show that the model's
segmentation performance is equivalent to that of the Bi-LSTM-based methods,
while the model offers far greater parallelism than Bi-LSTM-based models.
| 2,019 | Computation and Language |
Non-Autoregressive Neural Text-to-Speech | In this work, we propose ParaNet, a non-autoregressive seq2seq model that
converts text to spectrogram. It is fully convolutional and brings 46.7 times
speed-up over the lightweight Deep Voice 3 at synthesis, while obtaining
reasonably good speech quality. ParaNet also produces stable alignment between
text and speech on the challenging test sentences by iteratively improving the
attention in a layer-by-layer manner. Furthermore, we build the parallel
text-to-speech system and test various parallel neural vocoders, which can
synthesize speech from text through a single feed-forward pass. We also explore
a novel VAE-based approach to train the inverse autoregressive flow (IAF) based
parallel vocoder from scratch, which avoids the need for distillation from a
separately trained WaveNet as previous work.
| 2,020 | Computation and Language |
Answering while Summarizing: Multi-task Learning for Multi-hop QA with
Evidence Extraction | Question answering (QA) using textual sources for purposes such as reading
comprehension (RC) has attracted much attention. This study focuses on the task
of explainable multi-hop QA, which requires the system to return the answer
with evidence sentences by reasoning and gathering disjoint pieces of the
reference texts. It proposes the Query Focused Extractor (QFE) model for
evidence extraction and uses multi-task learning with the QA model. QFE is
inspired by extractive summarization models; compared with the existing method,
which extracts each evidence sentence independently, it sequentially extracts
evidence sentences by using an RNN with an attention mechanism on the question
sentence. This enables QFE to consider the dependencies among the evidence
sentences and cover important information in the question sentence.
Experimental results show that QFE with a simple RC baseline model achieves a
state-of-the-art evidence extraction score on HotpotQA. Although designed for
RC, it also achieves a state-of-the-art evidence extraction score on FEVER,
which is a recognizing textual entailment task on a large textual database.
| 2,019 | Computation and Language |
CNNs found to jump around more skillfully than RNNs: Compositional
generalization in seq2seq convolutional networks | Lake and Baroni (2018) introduced the SCAN dataset probing the ability of
seq2seq models to capture compositional generalizations, such as inferring the
meaning of "jump around" 0-shot from the component words. Recurrent networks
(RNNs) were found to completely fail the most challenging generalization cases.
We test here a convolutional network (CNN) on these tasks, reporting hugely
improved performance with respect to RNNs. Despite the big improvement, the CNN
has however not induced systematic rules, suggesting that the difference
between compositional and non-compositional behaviour is not clear-cut.
| 2,019 | Computation and Language |
Generic Multilayer Network Data Analysis with the Fusion of Content and
Structure | Multi-feature data analysis (e.g., on Facebook, LinkedIn) is challenging
especially if one wants to do it efficiently and retain the flexibility by
choosing features of interest for analysis. Features (e.g., age, gender,
relationship, political view etc.) can be explicitly given from datasets, but
also can be derived from content (e.g., political view based on Facebook
posts). Analysis from multiple perspectives is needed to understand the
datasets (or subsets of it) and to infer meaningful knowledge. For example, the
influence of age, location, and marital status on political views may need to
be inferred separately (or in combination). In this paper, we adapt multilayer
network (MLN) analysis, a nontraditional approach, to model the Facebook
datasets, integrate content analysis, and conduct analysis, which is driven by
a list of desired application based queries. Our experimental analysis shows
the flexibility and efficiency of the proposed approach when modeling and
analyzing datasets with multiple features.
| 2,019 | Computation and Language |
MultiWiki: Interlingual Text Passage Alignment in Wikipedia | In this article we address the problem of text passage alignment across
interlingual article pairs in Wikipedia. We develop methods that enable the
identification and interlinking of text passages written in different languages
and containing overlapping information. Interlingual text passage alignment can
enable Wikipedia editors and readers to better understand language-specific
context of entities, provide valuable insights into cultural differences and
build a basis for qualitative analysis of the articles. An important challenge
in this context is the trade-off between the granularity of the extracted text
passages and the precision of the alignment. Whereas short text passages can
result in more precise alignment, longer text passages can facilitate a better
overview of the differences in an article pair. To better understand these
aspects from the user perspective, we conduct a user study on the example of
the German, Russian, and English Wikipedia and collect a user-annotated
benchmark. Then we propose MultiWiki -- a method that adopts an integrated
approach to the text passage alignment using semantic similarity measures and
greedy algorithms and achieves precise results with respect to the user-defined
alignment. MultiWiki demonstration is publicly available and currently supports
four language pairs.
| 2,017 | Computation and Language |
Approximating probabilistic models as weighted finite automata | Weighted finite automata (WFA) are often used to represent probabilistic
models, such as $n$-gram language models, since they are efficient for
recognition tasks in time and space. The probabilistic source to be represented
as a WFA, however, may come in many forms. Given a generic probabilistic model
over sequences, we propose an algorithm to approximate it as a weighted finite
automaton such that the Kullback-Leibler divergence between the source model and
the WFA target model is minimized. The proposed algorithm involves a counting
step and a difference of convex optimization step, both of which can be
performed efficiently. We demonstrate the usefulness of our approach on various
tasks, including distilling $n$-gram models from neural models, building
compact language models, and building open-vocabulary character models. The
algorithms used for these experiments are available in an open-source software
library.
| 2,021 | Computation and Language |
AMR Parsing as Sequence-to-Graph Transduction | We propose an attention-based model that treats AMR parsing as
sequence-to-graph transduction. Unlike most AMR parsers that rely on
pre-trained aligners, external semantic resources, or data augmentation, our
proposed parser is aligner-free, and it can be effectively trained with limited
amounts of labeled AMR data. Our experimental results outperform all previously
reported SMATCH scores, on both AMR 2.0 (76.3% F1 on LDC2017T10) and AMR 1.0
(70.2% F1 on LDC2014T12).
| 2,019 | Computation and Language |
A realistic and robust model for Chinese word segmentation | A realistic Chinese word segmentation tool must adapt to textual variations
with minimal training input and yet be robust enough to yield reliable
segmentation results for all variants. Various lexicon-driven approaches to
Chinese segmentation, e.g. [1,16], achieve high f-scores yet require massive
training for any variation. Text-driven approach, e.g. [12], can be easily
adapted for domain and genre changes yet has difficulty matching the high
f-scores of the lexicon-driven approaches. In this paper, we refine and
implement an innovative text-driven word boundary decision (WBD) segmentation
model proposed in [15]. The WBD model treats word segmentation simply and
efficiently as a binary decision on whether to realize the natural textual
break between two adjacent characters as a word boundary. The WBD model allows
simple and quick training data preparation by converting characters into contextual
vectors for learning the word boundary decision. Machine learning experiments
with four different classifiers show that training with 1,000 vectors and 1
million vectors achieve comparable and reliable results. In addition, when
applied to SigHAN Bakeoff 3 competition data, the WBD model produces OOV recall
rates that are higher than all published results. Unlike all previous work, our
OOV recall rate is comparable to our own F-score. Both experiments support the
claim that the WBD model is a realistic model for Chinese word segmentation as
it can be easily adapted to new variants with robust results. In
conclusion, we will discuss linguistic ramifications as well as future
implications for the WBD approach.
| 2,019 | Computation and Language |
Transferable Multi-Domain State Generator for Task-Oriented Dialogue
Systems | Over-dependence on domain ontology and lack of knowledge sharing across
domains are two practical and yet less studied problems of dialogue state
tracking. Existing approaches generally fall short in tracking unknown slot
values during inference and often have difficulties in adapting to new domains.
In this paper, we propose a Transferable Dialogue State Generator (TRADE) that
generates dialogue states from utterances using a copy mechanism, facilitating
knowledge transfer when predicting (domain, slot, value) triplets not
encountered during training. Our model is composed of an utterance encoder, a
slot gate, and a state generator, which are shared across domains. Empirical
results demonstrate that TRADE achieves state-of-the-art joint goal accuracy of
48.62% for the five domains of MultiWOZ, a human-human dialogue dataset. In
addition, we show its transferring ability by simulating zero-shot and few-shot
dialogue state tracking for unseen domains. TRADE achieves 60.58% joint goal
accuracy in one of the zero-shot domains, and is able to adapt to few-shot
cases without forgetting already trained domains.
| 2,019 | Computation and Language |
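TRADE's state generator builds on a copy mechanism; the snippet below sketches the generic pointer-generator mixture that such generators use, in which a soft gate combines a vocabulary distribution with attention-based copying from the dialogue history. The uniform vocabulary distribution, the toy attention weights, and the gate value are illustrative only, and TRADE's actual encoder, slot gate, and decoding details are omitted.

```python
import numpy as np

def copy_generate(p_vocab, attn_over_history, history_token_ids, p_gen):
    # final distribution = p_gen * generate-from-vocab + (1 - p_gen) * copy-from-history
    p_copy = np.zeros_like(p_vocab)
    for attn, tok in zip(attn_over_history, history_token_ids):
        p_copy[tok] += attn               # scatter attention mass onto tokens seen in the history
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy

vocab_size = 6
p_vocab = np.full(vocab_size, 1.0 / vocab_size)   # uniform, for illustration
attn = np.array([0.7, 0.2, 0.1])                  # attention over 3 history tokens
history = [4, 2, 4]                               # token ids occurring in the dialogue history
print(copy_generate(p_vocab, attn, history, p_gen=0.3))
```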
Sampling from Stochastic Finite Automata with Applications to CTC
Decoding | Stochastic finite automata arise naturally in many language and speech
processing tasks. They include stochastic acceptors, which represent certain
probability distributions over random strings. We consider the problem of
efficient sampling: drawing random string variates from the probability
distribution represented by stochastic automata and transformations of those.
We show that path-sampling is effective and can be efficient if the
epsilon-graph of a finite automaton is acyclic. We provide an algorithm that
ensures this by conflating epsilon-cycles within strongly connected components.
Sampling is also effective in the presence of non-injective transformations of
strings. We illustrate this in the context of decoding for Connectionist
Temporal Classification (CTC), where the predictive probabilities yield
auxiliary sequences which are transformed into shorter labeling strings. We can
sample efficiently from the transformed labeling distribution and use this in
two different strategies for finding the most probable CTC labeling.
| 2,019 | Computation and Language |
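As a concrete illustration of path-sampling from a stochastic acceptor, the toy automaton below draws random strings by walking transitions according to their probabilities and stopping with a per-state stop probability. The automaton itself and the absence of epsilon transitions are simplifying assumptions; the paper's epsilon-cycle conflation and CTC-specific transformations are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# state -> list of (next_state, symbol, probability); stop_prob plus outgoing arcs sum to 1 per state
transitions = {0: [(1, "a", 0.6), (2, "b", 0.3)],
               1: [(2, "b", 0.5), (1, "a", 0.2)],
               2: []}
stop_prob = {0: 0.1, 1: 0.3, 2: 1.0}

def sample_string(start=0):
    state, out = start, []
    while rng.random() >= stop_prob[state]:
        arcs = transitions[state]
        probs = np.array([p for _, _, p in arcs]) / (1.0 - stop_prob[state])
        state, symbol, _ = arcs[rng.choice(len(arcs), p=probs)]
        out.append(symbol)
    return "".join(out)

print([sample_string() for _ in range(5)])
```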
A Comparative Analysis of Distributional Term Representations for Author
Profiling in Social Media | Author Profiling (AP) aims at predicting specific characteristics from a
group of authors by analyzing their written documents. Much research has
focused on determining suitable features for modeling writing patterns from
authors. Reported results indicate that content-based features continue to be
the most relevant and discriminant features for solving this task. Thus, in
this paper, we present a thorough analysis regarding the appropriateness of
different distributional term representations (DTR) for the AP task. In this
regard, we introduce a novel framework for supervised AP using these
representations and, supported by it, we conduct a comparative analysis of
representations such as DOR, TCOR, SSR, and word2vec on the AP problem. We also
compare the performance of the DTRs against classic approaches including
popular topic-based methods. The obtained results indicate that DTRs are
suitable for solving the AP task in social media domains as they achieve
competitive results while providing meaningful interpretability.
| 2,019 | Computation and Language |
EventKG - the Hub of Event Knowledge on the Web - and Biographical
Timeline Generation | One of the key requirements to facilitate the semantic analytics of
information regarding contemporary and historical events on the Web, in the
news and in social media is the availability of reference knowledge
repositories containing comprehensive representations of events, entities and
temporal relations. Existing knowledge graphs, with popular examples including
DBpedia, YAGO and Wikidata, focus mostly on entity-centric information and are
insufficient in terms of their coverage and completeness with respect to events
and temporal relations. In this article we address this limitation, formalise
the concept of a temporal knowledge graph and present its instantiation -
EventKG. EventKG is a multilingual event-centric temporal knowledge graph that
incorporates over 690 thousand events and over 2.3 million temporal relations
obtained from several large-scale knowledge graphs and semi-structured sources
and makes them available through a canonical RDF representation. Whereas
popular entities often possess hundreds of relations within a temporal
knowledge graph such as EventKG, generating a concise overview of the most
important temporal relations for a given entity is a challenging task. In this
article we demonstrate an application of EventKG to biographical timeline
generation, where we adopt a distant supervision method to identify relations
most relevant for an entity biography. Our evaluation results provide insights
on the characteristics of EventKG and demonstrate the effectiveness of the
proposed biographical timeline generation method.
| 2,019 | Computation and Language |
Acoustic-to-Word Models with Conversational Context Information | Conversational context information, higher-level knowledge that spans across
sentences, can help to recognize a long conversation. However, existing speech
recognition models are typically built at a sentence level, and thus it may not
capture important conversational context information. The recent progress in
end-to-end speech recognition enables integrating context with other available
information (e.g., acoustic, linguistic resources) and directly recognizing
words from speech. In this work, we present a direct acoustic-to-word,
end-to-end speech recognition model capable of utilizing the conversational
context to better process long conversations. We evaluate our proposed approach
on the Switchboard conversational speech corpus and show that our system
outperforms a standard end-to-end speech recognition system.
| 2,019 | Computation and Language |
Sample Efficient Text Summarization Using a Single Pre-Trained
Transformer | Language model (LM) pre-training has resulted in impressive performance and
sample efficiency on a variety of language understanding tasks. However, it
remains unclear how to best use pre-trained LMs for generation tasks such as
abstractive summarization, particularly to enhance sample efficiency. In these
sequence-to-sequence settings, prior work has experimented with loading
pre-trained weights into the encoder and/or decoder networks, but used
non-pre-trained encoder-decoder attention weights. We instead use a pre-trained
decoder-only network, where the same Transformer LM both encodes the source and
generates the summary. This ensures that all parameters in the network,
including those governing attention over source states, have been pre-trained
before the fine-tuning step. Experiments on the CNN/Daily Mail dataset show
that our pre-trained Transformer LM substantially improves over pre-trained
Transformer encoder-decoder networks in limited-data settings. For instance, it
achieves 13.1 ROUGE-2 using only 1% of the training data (~3000 examples),
while pre-trained encoder-decoder models score 2.3 ROUGE-2.
| 2,019 | Computation and Language |
Look Again at the Syntax: Relational Graph Convolutional Network for
Gendered Ambiguous Pronoun Resolution | Gender bias has been found in existing coreference resolvers. In order to
eliminate gender bias, a gender-balanced dataset Gendered Ambiguous Pronouns
(GAP) has been released and the best baseline model achieves only 66.9% F1.
Bidirectional Encoder Representations from Transformers (BERT) has broken
several NLP task records and can be used on the GAP dataset. However,
fine-tuning BERT on a specific task is computationally expensive. In this paper, we propose
an end-to-end resolver by combining pre-trained BERT with Relational Graph
Convolutional Network (R-GCN). R-GCN is used for digesting structural syntactic
information and learning better task-specific embeddings. Empirical results
demonstrate that, under explicit syntactic supervision and without the need to
fine tune BERT, R-GCN's embeddings outperform the original BERT embeddings on
the coreference task. Our work significantly improves the snippet-context
baseline F1 score on GAP dataset from 66.9% to 80.3%. We participated in the
2019 GAP Coreference Shared Task, and our codes are available online.
| 2,019 | Computation and Language |
Domain adaptation for part-of-speech tagging of noisy user-generated
text | The performance of a Part-of-speech (POS) tagger is highly dependent on the
domain of the processed text, and for many domains there is no or only very
little training data available. This work addresses the problem of POS tagging
noisy user-generated text using a neural network. We propose an architecture
that trains an out-of-domain model on a large newswire corpus, and transfers
those weights by using them as a prior for a model trained on the target domain
(a dataset of German Tweets) for which very few annotations are
available. The neural network has two standard bidirectional LSTMs at its core.
However, we find it crucial to also encode a set of task-specific features, and
to obtain reliable (source-domain and target-domain) word representations.
Experiments with different regularization techniques such as early stopping,
dropout and fine-tuning the domain adaptation prior weights are conducted. Our
best model uses external weights from the out-of-domain model, as well as
feature embeddings, pre-trained word and sub-word embeddings and achieves a
tagging accuracy of slightly over 90%, improving on the previous state of the
art for this task.
| 2,019 | Computation and Language |
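The abstract above describes using out-of-domain weights as a prior for the target-domain model. One common way to realise such a prior, shown below as an assumed sketch rather than the paper's exact formulation, is an L2 penalty pulling the target-domain weights toward the source-domain ones (equivalent to a Gaussian prior centred on them); the strength `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def loss_with_weight_prior(task_loss, weights, prior_weights, lam=0.01):
    # target-domain loss + L2 pull toward the out-of-domain (source) weights
    penalty = sum(np.sum((w - w0) ** 2) for w, w0 in zip(weights, prior_weights))
    return task_loss + lam * penalty

w_source = [np.ones((3, 3))]          # weights learned on the large newswire corpus
w_target = [np.ones((3, 3)) + 0.1]    # weights being fine-tuned on the target-domain data
print(loss_with_weight_prior(task_loss=2.5, weights=w_target, prior_weights=w_source))
```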
Augmenting Data with Mixup for Sentence Classification: An Empirical
Study | Mixup, a recently proposed data augmentation method that linearly interpolates
the inputs and modeling targets of random sample pairs, has demonstrated its
capability of significantly improving the predictive accuracy of
state-of-the-art networks for image classification. However, how this technique
can be applied to natural language processing (NLP) tasks, and how effective it
is on them, has not been investigated. In this paper, we propose two
strategies for the adaptation of Mixup to sentence classification: one performs
interpolation on word embeddings and another on sentence embeddings. We conduct
experiments to evaluate our methods using several benchmark datasets. Our
studies show that such interpolation strategies serve as an effective, domain
independent data augmentation approach for sentence classification, and can
result in significant accuracy improvement for both CNN and LSTM models.
| 2,019 | Computation and Language |
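The sentence-embedding variant described above can be sketched in a few lines: two sentence representations and their one-hot labels are mixed with a Beta-distributed coefficient. The alpha value and the use of averaged word vectors as the sentence embedding are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x_a, x_b, y_a, y_b, alpha=0.2):
    lam = rng.beta(alpha, alpha)                       # interpolation coefficient
    return lam * x_a + (1 - lam) * x_b, lam * y_a + (1 - lam) * y_b

x_a, x_b = rng.normal(size=8), rng.normal(size=8)      # e.g. averaged word embeddings of two sentences
y_a, y_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # one-hot class labels
x_mix, y_mix = mixup(x_a, x_b, y_a, y_b)
print(x_mix.shape, y_mix)
```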
Corpus Augmentation by Sentence Segmentation for Low-Resource Neural
Machine Translation | Neural Machine Translation (NMT) has been proven to achieve impressive
results. The NMT system translation results depend strongly on the size and
quality of parallel corpora. Nevertheless, for many language pairs, no
rich-resource parallel corpora exist. As described in this paper, we propose a
corpus augmentation method by segmenting long sentences in a corpus using
back-translation and generating pseudo-parallel sentence pairs. Experimental
results on Japanese-Chinese and Chinese-Japanese translation with the
Japanese-Chinese scientific paper excerpt corpus (ASPEC-JC) show that the
method improves translation performance.
| 2,019 | Computation and Language |
Recent Advances in Neural Question Generation | Emerging research in Neural Question Generation (NQG) has started to
integrate a larger variety of inputs, and generating questions requiring higher
levels of cognition. These trends point to NQG as a bellwether for NLP, about
how human intelligence embodies the skills of curiosity and integration.
We present a comprehensive survey of neural question generation, examining
the corpora, methodologies, and evaluation methods. From this, we elaborate on
what we see as emerging trends in NQG, in terms of the learning paradigms,
input modalities, and cognitive levels it considers. We end by pointing
out the potential directions ahead.
| 2,019 | Computation and Language |
From web crawled text to project descriptions: automatic summarizing of
social innovation projects | In the past decade, social innovation projects have gained the attention of
policy makers, as they address important social issues in an innovative manner.
A database of social innovation is an important source of information that can
expand collaboration between social innovators, drive policy and serve as an
important resource for research. Such a database needs to have projects
described and summarized. In this paper, we propose and compare several methods
(e.g. SVM-based, recurrent neural network based, ensembled) for describing
projects based on the text that is available on project websites. We also
address and propose a new metric for automated evaluation of summaries based on
topic modelling.
| 2,019 | Computation and Language |
A Joint Named-Entity Recognizer for Heterogeneous Tag-sets Using a Tag
Hierarchy | We study a variant of domain adaptation for named-entity recognition where
multiple, heterogeneously tagged training sets are available. Furthermore, the
test tag-set is not identical to any individual training tag-set. Yet, the
relations between all tags are provided in a tag hierarchy, covering the test
tags as a combination of training tags. This setting occurs when various
datasets are created using different annotation schemes. This is also the case
of extending a tag-set with a new tag by annotating only the new tag in a new
dataset. We propose to use the given tag hierarchy to jointly learn a neural
network that shares its tagging layer among all tag-sets. We compare this model
to combining independent models and to a model based on the multitasking
approach. Our experiments show the benefit of the tag-hierarchy model,
especially when facing non-trivial consolidation of tag-sets.
| 2,019 | Computation and Language |
Sentence Length | The distribution of sentence length in ordinary language is not well captured
by the existing models. Here we survey previous models of sentence length and
present our random walk model that offers both a better fit with the data and a
better understanding of the distribution. We develop a generalization of KL
divergence, discuss measuring the noise inherent in a corpus, and present a
hyperparameter-free Bayesian model comparison method that has strong conceptual
ties to Minimal Description Length modeling. The models we obtain require only
a few dozen bits, orders of magnitude less than the naive nonparametric MDL
models would.
| 2,019 | Computation and Language |
Simplified Neural Unsupervised Domain Adaptation | Unsupervised domain adaptation (UDA) is the task of modifying a statistical
model trained on labeled data from a source domain to achieve better
performance on data from a target domain, with access to only unlabeled data in
the target domain. Existing state-of-the-art UDA approaches use neural networks
to learn representations that can predict the values of a subset of important
features called "pivot features." In this work, we show that it is possible to
improve on these methods by jointly training the representation learner with
the task learner, and examine the importance of existing pivot selection
methods.
| 2,023 | Computation and Language |
FastSpeech: Fast, Robust and Controllable Text to Speech | Neural network based end-to-end text to speech (TTS) has significantly
improved the quality of synthesized speech. Prominent methods (e.g., Tacotron
2) usually first generate mel-spectrogram from text, and then synthesize speech
from the mel-spectrogram using vocoder such as WaveNet. Compared with
traditional concatenative and statistical parametric approaches, neural network
based end-to-end models suffer from slow inference speed, and the synthesized
speech is usually not robust (i.e., some words are skipped or repeated) and
lacks controllability (voice speed or prosody control). In this work, we
propose a novel feed-forward network based on Transformer to generate
mel-spectrogram in parallel for TTS. Specifically, we extract attention
alignments from an encoder-decoder based teacher model for phoneme duration
prediction, which is used by a length regulator to expand the source phoneme
sequence to match the length of the target mel-spectrogram sequence for
parallel mel-spectrogram generation. Experiments on the LJSpeech dataset show
that our parallel model matches autoregressive models in terms of speech
quality, nearly eliminates the problem of word skipping and repeating in
particularly hard cases, and can adjust voice speed smoothly. Most importantly,
compared with autoregressive Transformer TTS, our model speeds up
mel-spectrogram generation by 270x and the end-to-end speech synthesis by 38x.
Therefore, we call our model FastSpeech.
| 2,019 | Computation and Language |
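The length regulator mentioned above has a particularly simple core: each phoneme's hidden state is repeated according to its predicted duration so the expanded sequence matches the mel-spectrogram length. The sketch below shows only that expansion step with made-up durations; duration prediction and the Transformer blocks are omitted.

```python
import numpy as np

def length_regulator(phoneme_hidden, durations):
    # repeat each phoneme's hidden vector `duration` times along the time axis
    return np.repeat(phoneme_hidden, durations, axis=0)

hidden = np.arange(12.0).reshape(4, 3)      # 4 phonemes, hidden size 3
durations = np.array([2, 1, 3, 2])          # predicted mel frames per phoneme
print(length_regulator(hidden, durations).shape)   # (8, 3): 8 mel frames
```

Scaling the predicted durations before expansion is what allows the voice speed to be adjusted smoothly.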
Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy
Lifting, the Rest Can Be Pruned | Multi-head self-attention is a key component of the Transformer, a
state-of-the-art architecture for neural machine translation. In this work we
evaluate the contribution made by individual attention heads in the encoder to
the overall performance of the model and analyze the roles played by them. We
find that the most important and confident heads play consistent and often
linguistically-interpretable roles. When pruning heads using a method based on
stochastic gates and a differentiable relaxation of the L0 penalty, we observe
that specialized heads are last to be pruned. Our novel pruning method removes
the vast majority of heads without seriously affecting performance. For
example, on the English-Russian WMT dataset, pruning 38 out of 48 encoder heads
results in a drop of only 0.15 BLEU.
| 2,019 | Computation and Language |
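The pruning mechanism mentioned above relies on stochastic gates with a differentiable L0 relaxation. The sketch below shows the standard hard-concrete gate construction such methods build on; parameter values (log_alpha, beta, gamma, zeta) and function names are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a hard-concrete stochastic gate, a common differentiable
# L0 relaxation used to prune attention heads. One gate per head multiplies
# that head's output; gates that collapse to exactly zero prune the head.
import numpy as np

def sample_gate(log_alpha, beta=0.66, gamma=-0.1, zeta=1.1, rng=None):
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(log_alpha))
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    s_bar = s * (zeta - gamma) + gamma
    return np.clip(s_bar, 0.0, 1.0)          # gate in [0, 1]

def expected_l0(log_alpha, beta=0.66, gamma=-0.1, zeta=1.1):
    # Probability that a gate is non-zero; summed over heads this acts as
    # a differentiable proxy for the number of heads kept.
    return 1.0 / (1.0 + np.exp(-(log_alpha - beta * np.log(-gamma / zeta))))

log_alpha = np.array([-3.0, 0.0, 3.0])        # one learnable parameter per head
print(sample_gate(log_alpha))                 # sampled gate values
print(expected_l0(log_alpha).sum())           # expected number of open heads
```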
Multi-hop Reading Comprehension via Deep Reinforcement Learning based
Document Traversal | Reading Comprehension has received significant attention in recent years as
high quality Question Answering (QA) datasets have become available. Despite
state-of-the-art methods achieving strong overall accuracy, Multi-Hop (MH)
reasoning remains particularly challenging. To address MH-QA specifically, we
propose a Deep Reinforcement Learning based method capable of learning
sequential reasoning across large collections of documents so as to pass a
query-aware, fixed-size context subset to existing models for answer
extraction. Our method comprises two stages: a linker, which decomposes
the provided support documents into a graph of sentences, and an extractor,
which learns where to look based on the current question and already-visited
sentences. The result of the linker is a novel graph structure at the sentence
level that preserves logical flow while still allowing rapid movement between
documents. Importantly, we demonstrate that the sparsity of the resultant graph
is invariant to context size. This translates to fewer decisions required from
the Deep-RL trained extractor, allowing the system to scale effectively to
large collections of documents.
The importance of sequential decision making in the document traversal step
is demonstrated by comparison to standard IE methods, and we additionally
introduce a BM25-based IR baseline that retrieves documents relevant to the
query only. We examine the integration of our method with existing models on
the recently proposed QAngaroo benchmark and achieve consistent increases in
accuracy across the board, as well as a 2-3x reduction in training time.
| 2,019 | Computation and Language |
GWU NLP Lab at SemEval-2019 Task 3: EmoContext: Effective Contextual
Information in Models for Emotion Detection in Sentence-level in a Multigenre
Corpus | In this paper we present an emotion classifier model submitted to the
SemEval-2019 Task 3: EmoContext. The task objective is to classify emotion
(i.e. happy, sad, angry) in a 3-turn conversational data set. We formulate the
task as a classification problem and introduce a Gated Recurrent Neural Network
(GRU) model with an attention layer, which is bootstrapped with contextual
information and trained with a multigenre corpus. We utilize different word
embeddings to empirically select the most suited one to represent our features.
We train the model with a multigenre emotion corpus to leverage all available
training sets and bootstrap the results. We achieved an overall F1-score of
56.05% and placed 144th.
| 2,019 | Computation and Language |
MCScript2.0: A Machine Comprehension Corpus Focused on Script Events and
Participants | We introduce MCScript2.0, a machine comprehension corpus for the end-to-end
evaluation of script knowledge. MCScript2.0 contains approx. 20,000 questions
on approx. 3,500 texts, crowdsourced based on a new collection process that
results in challenging questions. Half of the questions cannot be answered from
the reading texts, but require the use of commonsense and, in particular,
script knowledge. We give a thorough analysis of our corpus and show that while
the task is not challenging to humans, existing machine comprehension models
fail to perform well on the data, even if they make use of a commonsense
knowledge base. The dataset is available at
http://www.sfb1102.uni-saarland.de/?page_id=2582
| 2,019 | Computation and Language |
An Investigation of Transfer Learning-Based Sentiment Analysis in
Japanese | Text classification approaches have usually required task-specific model
architectures and huge labeled datasets. Recently, thanks to the rise of
text-based transfer learning techniques, it is possible to pre-train a language
model in an unsupervised manner and leverage it to perform effectively on
downstream tasks. In this work we focus on Japanese and show the potential use
of transfer learning techniques in text classification. Specifically, we
perform binary and multi-class sentiment classification on the Rakuten product
review and Yahoo movie review datasets. We show that transfer learning-based
approaches perform better than task-specific models trained on 3 times as much
data. Furthermore, these approaches perform just as well when the language
model is pre-trained on only 1/30 of the data. We release our pre-trained models and
code as open source.
| 2,019 | Computation and Language |
Misspelling Oblivious Word Embeddings | In this paper we present a method to learn word embeddings that are resilient
to misspellings. Existing word embeddings have limited applicability to
malformed texts, which contain a non-negligible amount of out-of-vocabulary
words. We propose a method combining FastText with subwords and a supervised
task of learning misspelling patterns. In our method, misspellings of each word
are embedded close to their correct variants. We train these embeddings on a
new dataset we are releasing publicly. Finally, we experimentally show the
advantages of this approach on both intrinsic and extrinsic NLP tasks using
public test sets.
| 2,019 | Computation and Language |
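The core training signal described above, embedding misspellings close to their correct variants, can be sketched compactly. The snippet below is an illustration only: the FastText-style bag-of-character-n-grams embedding, the toy word pairs, and the cosine loss are assumptions standing in for the paper's actual objective.

```python
# Minimal sketch: a bag-of-character-n-grams vector for a misspelling is
# pulled toward the vector of the correct word via a cosine loss, which
# would be minimised jointly with the usual embedding objective.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
ngram_vecs = {}  # lazily initialised n-gram embeddings

def char_ngrams(word, n=3):
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def embed(word):
    grams = char_ngrams(word)
    for g in grams:
        ngram_vecs.setdefault(g, rng.normal(size=DIM) * 0.1)
    return np.mean([ngram_vecs[g] for g in grams], axis=0)

def cosine_loss(v, w):
    return 1.0 - v @ w / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-8)

# Toy (misspelling, correct word) pairs.
for wrong, right in [("langage", "language"), ("modle", "model")]:
    print(wrong, "->", right, "loss:", round(cosine_loss(embed(wrong), embed(right)), 3))
```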
Why Didn't You Listen to Me? Comparing User Control of Human-in-the-Loop
Topic Models | To address the lack of comparative evaluation of Human-in-the-Loop Topic
Modeling (HLTM) systems, we implement and evaluate three contrasting HLTM
modeling approaches using simulation experiments. These approaches extend
previously proposed frameworks, including constraints and informed prior-based
methods. Users should have a sense of control in HLTM systems, so we propose a
control metric to measure whether refinement operations' results match users'
expectations. Informed prior-based methods provide better control than
constraints, but constraints yield higher quality topics.
| 2,019 | Computation and Language |
Fair is Better than Sensational: Man is to Doctor as Woman is to Doctor | Analogies such as "man is to king as woman is to X" are often used to
illustrate the amazing power of word embeddings. Concurrently, they have also
been used to expose how strongly human biases are encoded in vector spaces
built on natural language, like "man is to computer programmer as woman is to
homemaker". Recent work has shown that analogies are in fact not such a
diagnostic for bias, and other methods have been proven to be more apt to the
task. However, beside the intrinsic problems with the analogy task as a bias
detection tool, in this paper we show that a series of issues related to how
analogies have been implemented and used might have yielded a distorted picture
of bias in word embeddings. Human biases are present in word embeddings and
need to be addressed. Analogies, though, are probably not the right tool to do
so. Also, the way they have been most often used has exacerbated some possibly
non-existing biases and perhaps hid others. Because they are still widely
popular, and some of them have become classics within and outside the NLP
community, we deem it important to provide a series of clarifications that
should put well-known, and potentially new cases into the right perspective.
| 2,019 | Computation and Language |
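One implementation detail the abstract above alludes to is that standard analogy code typically excludes the input words from the candidate answers, so "doctor" can never be returned for "man : doctor :: woman : ?". The sketch below illustrates that detail with a vanilla 3CosAdd query over random toy vectors; it is not the paper's experimental setup.

```python
# Illustrative 3CosAdd analogy query, with and without excluding the input
# words from the candidate set. Toy vectors are random and purely
# illustrative of the mechanics, not of any real embedding space.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["man", "woman", "king", "queen", "doctor", "nurse"]
E = {w: rng.normal(size=16) for w in vocab}
norm = lambda v: v / np.linalg.norm(v)

def analogy(a, b, a2, exclude_inputs=True):
    target = norm(norm(E[b]) - norm(E[a]) + norm(E[a2]))
    candidates = [w for w in vocab
                  if not (exclude_inputs and w in {a, b, a2})]
    return max(candidates, key=lambda w: norm(E[w]) @ target)

print(analogy("man", "doctor", "woman", exclude_inputs=True))   # can never answer "doctor"
print(analogy("man", "doctor", "woman", exclude_inputs=False))  # "doctor" is allowed
```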
Training language GANs from Scratch | Generative Adversarial Networks (GANs) enjoy great success at image
generation, but have proven difficult to train in the domain of natural
language. Challenges with gradient estimation, optimization instability, and
mode collapse have led practitioners to resort to maximum likelihood
pre-training, followed by small amounts of adversarial fine-tuning. The
benefits of GAN fine-tuning for language generation are unclear, as the
resulting models produce comparable or worse samples than traditional language
models. We show it is in fact possible to train a language GAN from scratch --
without maximum likelihood pre-training. We combine existing techniques such as
large batch sizes, dense rewards and discriminator regularization to stabilize
and improve language GANs. The resulting model, ScratchGAN, performs comparably
to maximum likelihood training on EMNLP2017 News and WikiText-103 corpora
according to quality and diversity metrics.
| 2,020 | Computation and Language |
An Analysis of Source-Side Grammatical Errors in NMT | The quality of Neural Machine Translation (NMT) has been shown to
significantly degrade when confronted with source-side noise. We present the
first large-scale study of state-of-the-art English-to-German NMT on real
grammatical noise, by evaluating on several Grammar Correction corpora. We
present methods for evaluating NMT robustness without true references, and we
use them for extensive analysis of the effects that different grammatical
errors have on the NMT output. We also introduce a technique for visualizing
the divergence distribution caused by a source-side error, which allows for
additional insights.
| 2,019 | Computation and Language |
Personalizing Dialogue Agents via Meta-Learning | Existing personalized dialogue models use human designed persona descriptions
to improve dialogue consistency. Collecting such descriptions from existing
dialogues is expensive and requires hand-crafted feature designs. In this
paper, we propose to extend Model-Agnostic Meta-Learning (MAML)(Finn et al.,
2017) to personalized dialogue learning without using any persona descriptions.
Our model learns to quickly adapt to new personas by leveraging only a few
dialogue samples collected from the same user, which is fundamentally different
from conditioning the response on the persona descriptions. Empirical results
on Persona-chat dataset (Zhang et al., 2018) indicate that our solution
outperforms non-meta-learning baselines using automatic evaluation metrics, and
in terms of human-evaluated fluency and consistency.
| 2,019 | Computation and Language |
Outline Generation: Understanding the Inherent Content Structure of
Documents | In this paper, we introduce and tackle the Outline Generation (OG) task,
which aims to unveil the inherent content structure of a multi-paragraph
document by identifying its potential sections and generating the corresponding
section headings. Without loss of generality, the OG task can be viewed as a
novel structured summarization task. To generate a sound outline, an ideal OG
model should be able to capture three levels of coherence, namely the coherence
between context paragraphs, that between a section and its heading, and that
between context headings. The first one is the foundation for section
identification, while the latter two are critical for consistent heading
generation. In this work, we formulate the OG task as a hierarchical structured
prediction problem, i.e., to first predict a sequence of section boundaries and
then a sequence of section headings accordingly. We propose a novel
hierarchical structured neural generation model, named HiStGen, for the task.
Our model attempts to capture the three-level coherence via the following ways.
First, we introduce a Markov paragraph dependency mechanism between context
paragraphs for section identification. Second, we employ a section-aware
attention mechanism to ensure the semantic coherence between a section and its
heading. Finally, we leverage a Markov heading dependency mechanism and a
review mechanism between context headings to improve the consistency and
eliminate duplication between section headings. In addition, we build a novel
WIKIOG dataset, a public collection which consists of over 1.75 million
document-outline pairs for research on the OG task. Experimental results on our
benchmark dataset demonstrate that our model can significantly outperform
several state-of-the-art sequential generation models for the OG task.
| 2,019 | Computation and Language |
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions | In this paper we study yes/no questions that are naturally occurring ---
meaning that they are generated in unprompted and unconstrained settings. We
build a reading comprehension dataset, BoolQ, of such questions, and show that
they are unexpectedly challenging. They often query for complex, non-factoid
information, and require difficult entailment-like inference to solve. We also
explore the effectiveness of a range of transfer learning baselines. We find
that transferring from entailment data is more effective than transferring from
paraphrase or extractive QA data, and that it, surprisingly, continues to be
very beneficial even when starting from massive pre-trained language models
such as BERT. Our best method trains BERT on MultiNLI and then re-trains it on
our train set. It achieves 80.4% accuracy compared to 90% accuracy of human
annotators (and 62% majority-baseline), leaving a significant gap for future
work.
| 2,019 | Computation and Language |
A Dual Reinforcement Learning Framework for Unsupervised Text Style
Transfer | Unsupervised text style transfer aims to transfer the underlying style of
text but keep its main content unchanged without parallel data. Most existing
methods typically follow two steps: first separating the content from the
original style, and then fusing the content with the desired style. However,
the separation in the first step is challenging because the content and style
interact in subtle ways in natural language. Therefore, in this paper, we
propose a dual reinforcement learning framework to directly transfer the style
of the text via a one-step mapping model, without any separation of content and
style. Specifically, we consider the learning of the source-to-target and
target-to-source mappings as a dual task, and two rewards are designed based on
such a dual structure to reflect the style accuracy and content preservation,
respectively. In this way, the two one-step mapping models can be trained via
reinforcement learning, without any use of parallel data. Automatic evaluations
show that our model outperforms the state-of-the-art systems by a large margin,
especially with more than 8 BLEU points improvement averaged on two benchmark
datasets. Human evaluations also validate the effectiveness of our model in
terms of style accuracy, content preservation and fluency. Our code and data,
including outputs of all baselines and our model are available at
https://github.com/luofuli/DualLanST.
| 2,019 | Computation and Language |
mu-Forcing: Training Variational Recurrent Autoencoders for Text
Generation | It has been previously observed that training Variational Recurrent
Autoencoders (VRAE) for text generation suffers from a serious uninformative
latent variable problem. The model collapses into a plain language model
that totally ignores the latent variables and can only generate repetitive and
dull samples. In this paper, we explore the reason behind this issue and
propose an effective regularizer based approach to address it. The proposed
method directly injects extra constraints on the posteriors of latent variables
into the learning process of VRAE, which can flexibly and stably control the
trade-off between the KL term and the reconstruction term, making the model
learn dense and meaningful latent representations. The experimental results
show that the proposed method outperforms several strong baselines and can make
the model learn interpretable latent variables and generate diverse meaningful
sentences. Furthermore, the proposed method can perform well without using
other strategies, such as KL annealing.
| 2,019 | Computation and Language |
Incorporating Context and External Knowledge for Pronoun Coreference
Resolution | Linking pronominal expressions to the correct references requires, in many
cases, better analysis of the contextual information and external knowledge. In
this paper, we propose a two-layer model for pronoun coreference resolution
that leverages both context and external knowledge, where a knowledge attention
mechanism is designed to ensure that the model leverages the appropriate source of
external knowledge based on the context. Experimental results demonstrate
the validity and effectiveness of our model, where it outperforms
state-of-the-art models by a large margin.
| 2,019 | Computation and Language |
Contextual Out-of-Domain Utterance Handling With Counterfeit Data
Augmentation | Neural dialog models often lack robustness to anomalous user input and
produce inappropriate responses, which leads to a frustrating user experience.
Although there are a set of prior approaches to out-of-domain (OOD) utterance
detection, they share a few restrictions: they rely on OOD data or multiple
sub-domains, and their OOD detection is context-independent, which leads to
suboptimal performance in a dialog. The goal of this paper is to propose a
novel OOD detection method that does not require OOD data by utilizing
counterfeit OOD turns in the context of a dialog. For the sake of fostering
further research, we also release new dialog datasets which are 3 publicly
available dialog corpora augmented with OOD turns in a controllable way. Our
method outperforms state-of-the-art dialog models equipped with a conventional
OOD detection mechanism by a large margin in the presence of OOD utterances.
| 2,019 | Computation and Language |
Using Deep Networks and Transfer Learning to Address Disinformation | We apply an ensemble pipeline composed of a character-level convolutional
neural network (CNN) and a long short-term memory (LSTM) as a general tool for
addressing a range of disinformation problems. We also demonstrate the ability
to use this architecture to transfer knowledge from labeled data in one domain
to related (supervised and unsupervised) tasks. Character-level neural networks
and transfer learning are particularly valuable tools in the disinformation
space because of the messy nature of social media, lack of labeled data, and
the multi-channel tactics of influence campaigns. We demonstrate their
effectiveness in several tasks relevant for detecting disinformation: spam
emails, review bombing, political sentiment, and conversation clustering.
| 2,019 | Computation and Language |
Human vs. Muppet: A Conservative Estimate of Human Performance on the
GLUE Benchmark | The GLUE benchmark (Wang et al., 2019b) is a suite of language understanding
tasks which has seen dramatic progress in the past year, with average
performance moving from 70.0 at launch to 83.9, state of the art at the time of
writing (May 24, 2019). Here, we measure human performance on the benchmark, in
order to learn whether significant headroom remains for further progress. We
provide a conservative estimate of human performance on the benchmark through
crowdsourcing: Our annotators are non-experts who must learn each task from a
brief set of instructions and 20 examples. In spite of limited training, these
annotators robustly outperform the state of the art on six of the nine GLUE
tasks and achieve an average score of 87.1. Given the fast pace of progress
however, the headroom we observe is quite limited. To reproduce the data-poor
setting that our annotators must learn in, we also train the BERT model (Devlin
et al., 2019) in limited-data regimes, and conclude that low-resource sentence
classification remains a challenge for modern neural network approaches to text
understanding.
| 2,019 | Computation and Language |
What Syntactic Structures block Dependencies in RNN Language Models? | Recurrent Neural Networks (RNNs) trained on a language modeling task have
been shown to acquire a number of non-local grammatical dependencies with some
success. Here, we provide new evidence that RNN language models are sensitive
to hierarchical syntactic structure by investigating the filler--gap dependency
and constraints on it, known as syntactic islands. Previous work is
inconclusive about whether RNNs learn to attenuate their expectations for gaps
in island constructions in particular or in any sufficiently complex syntactic
environment. This paper gives new evidence for the former by providing control
studies that have been lacking so far. We demonstrate that two state-of-the-art
RNN models are able to maintain the filler--gap dependency through
unbounded sentential embeddings and are also sensitive to the hierarchical
relationship between the filler and the gap. Next, we demonstrate that the
models are able to maintain possessive pronoun gender expectations through
island constructions---this control case rules out the possibility that island
constructions block all information flow in these networks. We also evaluate
three untested island constraints: coordination islands, left branch islands,
and sentential subject islands. Models are able to learn left branch islands
and learn coordination islands gradiently, but fail to learn sentential subject
islands. Through these controls and new tests, we provide evidence that model
behavior is due to finer-grained expectations than gross syntactic complexity,
but also that the models are conspicuously un-humanlike in some of their
performance characteristics.
| 2,019 | Computation and Language |
A Call for Prudent Choice of Subword Merge Operations in Neural Machine
Translation | Most neural machine translation systems are built upon subword units
extracted by methods such as Byte-Pair Encoding (BPE) or wordpiece. However,
the choice of the number of merge operations is generally made by following
existing recipes. In this paper, we conduct a systematic exploration on
different numbers of BPE merge operations to understand how it interacts with
the model architecture, the strategy to build vocabularies and the language
pair. Our exploration could provide guidance for selecting proper BPE
configurations in the future. Most prominently: we show that for LSTM-based
architectures, it is necessary to experiment with a wide range of different BPE
operations as there is no typical optimal BPE configuration, whereas for
Transformer architectures, smaller BPE size tends to be a typically optimal
choice. We urge the community to make prudent choices with subword merge
operations, as our experiments indicate that a sub-optimal BPE configuration
alone could easily reduce the system performance by 3-4 BLEU points.
| 2,019 | Computation and Language |
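To make the "number of merge operations" hyperparameter discussed above concrete, here is a compact version of the standard BPE merge-learning loop (in the spirit of the common reference implementation) on a toy word-frequency table; it is a generic illustration, not the exact toolkit or corpus used in the paper.

```python
# Standard byte-pair-encoding merge learning: repeatedly count adjacent
# symbol pairs weighted by word frequency and merge the most frequent pair.
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    vocab = {tuple(w) + ("</w>",): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1]); i += 2
                else:
                    out.append(symbols[i]); i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

print(learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, num_merges=10))
```

The length of the `merges` list is exactly the hyperparameter whose choice the abstract argues should be made carefully rather than copied from existing recipes.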
Debiasing Word Embeddings Improves Multimodal Machine Translation | In recent years, pretrained word embeddings have proved useful for multimodal
neural machine translation (NMT) models to address the shortage of available
datasets. However, the integration of pretrained word embeddings has not yet
been explored extensively. Further, pretrained word embeddings in high
dimensional spaces have been reported to suffer from the hubness problem.
Although some debiasing techniques have been proposed to address this problem
for other natural language processing tasks, they have seldom been studied for
multimodal NMT models. In this study, we examine various kinds of word
embeddings and introduce two debiasing techniques for three multimodal NMT
models and two language pairs -- English-German translation and English-French
translation. With our optimal settings, the overall performance of multimodal
models was improved by up to +1.93 BLEU and +2.02 METEOR for English-German
translation and +1.73 BLEU and +0.95 METEOR for English-French translation.
| 2,019 | Computation and Language |
Designing a Symbolic Intermediate Representation for Neural Surface
Realization | Generated output from neural NLG systems often contains errors such as
hallucination, repetition or contradiction. This work focuses on designing a
symbolic intermediate representation to be used in multi-stage neural
generation with the intention of reducing the frequency of failed outputs. We
show that surface realization from this intermediate representation is of high
quality and when the full system is applied to the E2E dataset it outperforms
the winner of the E2E challenge. Furthermore, by breaking out the surface
realization step from typically end-to-end neural systems, we also provide a
framework for non-neural content selection and planning systems to potentially
take advantage of semi-supervised pretraining of neural surface realization
models.
| 2,019 | Computation and Language |
SuperCaptioning: Image Captioning Using Two-dimensional Word Embedding | Language and vision are processed as two different modalities in current work for
image captioning. However, recent work on Super Characters method shows the
effectiveness of two-dimensional word embedding, which converts text
classification problem into image classification problem. In this paper, we
propose the SuperCaptioning method, which borrows the idea of two-dimensional
word embedding from Super Characters method, and processes the information of
language and vision together in one single CNN model. The experimental results
on Flickr30k data show that the proposed method gives high-quality image captions.
An interactive demo is ready to show at the workshop.
| 2,019 | Computation and Language |
Soft Contextual Data Augmentation for Neural Machine Translation | While data augmentation is an important trick to boost the accuracy of deep
learning methods in computer vision tasks, its study in natural language tasks
is still very limited. In this paper, we present a novel data augmentation
method for neural machine translation. Different from previous augmentation
methods that randomly drop, swap or replace words with other words in a
sentence, we softly augment a randomly chosen word in a sentence by its
contextual mixture of multiple related words. More accurately, we replace the
one-hot representation of a word by a distribution (provided by a language
model) over the vocabulary, i.e., replacing the embedding of this word by a
weighted combination of multiple semantically similar words. Since the weights
of those words depend on the contextual information of the word to be replaced,
the newly generated sentences capture much richer information than previous
augmentation methods. Experimental results on both small scale and large scale
machine translation datasets demonstrate the superiority of our method over
strong baselines.
| 2,019 | Computation and Language |
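The soft replacement described above can be sketched in a few lines: instead of the original word's embedding, a chosen position receives the expectation of all word embeddings under a language-model distribution for that position. The toy vocabulary, embedding matrix, and LM probabilities below are made-up stand-ins, not the paper's models.

```python
# Minimal sketch of soft contextual augmentation: replace one word's
# embedding by a probability-weighted mixture of all embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "dog", "sat", "ran"]
E = rng.normal(size=(len(vocab), 8))          # embedding matrix, |V| x d

def soft_embed(lm_probs):
    """Weighted combination of all embeddings: sum_w p(w | context) * E[w]."""
    p = np.asarray(lm_probs)
    return p @ E

sentence = ["the", "cat", "sat"]
augment_pos = 1                                # softly replace "cat"
lm_probs = [0.05, 0.50, 0.40, 0.03, 0.02]      # hypothetical LM distribution
embedded = [E[vocab.index(w)] for w in sentence]
embedded[augment_pos] = soft_embed(lm_probs)   # now a mixture of "cat", "dog", ...
print(embedded[augment_pos].shape)             # (8,)
```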
ESA: Entity Summarization with Attention | Entity summarization aims at creating brief but informative descriptions of
entities from knowledge graphs. While previous work mostly focused on
traditional techniques such as clustering algorithms and graph models, we ask
how to apply deep learning methods to this task. In this paper we propose
ESA, a neural network with supervised attention mechanisms for entity
summarization. Specifically, we calculate attention weights for facts in each
entity, and rank facts to generate reliable summaries. We explore techniques to
solve the difficult learning problems presented by ESA, and demonstrate the
effectiveness of our model in comparison with the state-of-the-art methods.
Experimental results show that our model improves the quality of the entity
summaries in both F-measure and MAP.
| 2,020 | Computation and Language |
Are Sixteen Heads Really Better than One? | Attention is a powerful and ubiquitous mechanism for allowing neural models
to focus on particular salient pieces of information by taking their weighted
average when making predictions. In particular, multi-headed attention is a
driving force behind many recent state-of-the-art NLP models such as
Transformer-based MT models and BERT. These models apply multiple attention
mechanisms in parallel, with each attention "head" potentially focusing on
different parts of the input, which makes it possible to express sophisticated
functions beyond the simple weighted average. In this paper we make the
surprising observation that even if models have been trained using multiple
heads, in practice, a large percentage of attention heads can be removed at
test time without significantly impacting performance. In fact, some layers can
even be reduced to a single head. We further examine greedy algorithms for
pruning down models, and the potential speed, memory efficiency, and accuracy
improvements obtainable therefrom. Finally, we analyze the results with respect
to which parts of the model are more reliant on having multiple heads, and
provide precursory evidence that training dynamics play a role in the gains
provided by multi-head attention.
| 2,019 | Computation and Language |
Hashing based Answer Selection | Answer selection is an important subtask of question answering (QA), where
deep models usually achieve better performance. Most deep models adopt
question-answer interaction mechanisms, such as attention, to get vector
representations for answers. When these interaction based deep models are
deployed for online prediction, the representations of all answers need to be
recalculated for each question. This procedure is time-consuming for deep
models with complex encoders like BERT which usually have better accuracy than
simple encoders. One possible solution is to store the matrix representation
(encoder output) of each answer in memory to avoid recalculation. But this will
bring large memory cost. In this paper, we propose a novel method, called
hashing based answer selection (HAS), to tackle this problem. HAS adopts a
hashing strategy to learn a binary matrix representation for each answer, which
can dramatically reduce the memory cost for storing the matrix representations
of answers. Hence, HAS can adopt complex encoders like BERT in the model, but
the online prediction of HAS is still fast with a low memory cost. Experimental
results on three popular answer selection datasets show that HAS can outperform
existing models to achieve state-of-the-art performance.
| 2,019 | Computation and Language |
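The memory-saving idea in the abstract above, storing binary rather than floating-point answer representations, can be illustrated with a simplified stand-in. The binarization (sign thresholding), bit packing, and scoring function below are assumptions for illustration, not the paper's learned hashing or scoring model.

```python
# Hedged sketch: pre-compute an encoder output matrix per answer, binarise
# it, store only packed bits, and score a question vector against the codes.
import numpy as np

rng = np.random.default_rng(0)
D, L = 16, 10                                       # hidden size, answer length

def binarize(matrix):
    bits = (matrix > 0).astype(np.uint8)            # {0,1} instead of float32
    return np.packbits(bits, axis=-1)               # far smaller to store

answer_matrices = [rng.normal(size=(L, D)) for _ in range(3)]
stored = [binarize(m) for m in answer_matrices]     # kept in memory

def score(question_vec, packed):
    bits = np.unpackbits(packed, axis=-1)[:, :D].astype(np.float32) * 2 - 1
    return float(np.max(bits @ question_vec))       # best-matching answer position

q = rng.normal(size=D)
print([round(score(q, s), 2) for s in stored])
```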
Gated Group Self-Attention for Answer Selection | Answer selection (answer ranking) is one of the key steps in many kinds of
question answering (QA) applications, where deep models have achieved
state-of-the-art performance. Among these deep models, recurrent neural network
(RNN) based models are most popular, typically with better performance than
convolutional neural network (CNN) based models. Nevertheless, it is difficult
for RNN based models to capture the information about long-range dependency
among words in the sentences of questions and answers. In this paper, we
propose a new deep model, called gated group self-attention (GGSA), for answer
selection. GGSA is inspired by global self-attention, which was originally
proposed for machine translation and has not been explored in answer selection.
GGSA tackles a problem of global self-attention, namely that local and global
information cannot be well distinguished. Furthermore, an interaction mechanism
between questions and answers is also proposed to enhance GGSA by a residual
structure. Experimental results on two popular QA datasets show that GGSA can
outperform existing answer selection models to achieve state-of-the-art
performance. Furthermore, GGSA can also achieve higher accuracy than global
self-attention for the answer selection task, with a lower computation cost.
| 2,019 | Computation and Language |
SemBleu: A Robust Metric for AMR Parsing Evaluation | Evaluating AMR parsing accuracy involves comparing pairs of AMR graphs. The
major evaluation metric, SMATCH (Cai and Knight, 2013), searches for one-to-one
mappings between the nodes of two AMRs with a greedy hill-climbing algorithm,
which leads to search errors. We propose SEMBLEU, a robust metric that extends
BLEU (Papineni et al., 2002) to AMRs. It does not suffer from search errors and
considers non-local correspondences in addition to local ones. SEMBLEU is fully
content-driven and punishes situations where a system's output does not
preserve most information from the input. Preliminary experiments on both
sentence and corpus levels show that SEMBLEU has slightly higher consistency
with human judgments than SMATCH. Our code is available at
http://github.com/freesunshine0316/sembleu.
| 2,019 | Computation and Language |
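The idea of extending BLEU from token n-grams to graphs, as described above, can be illustrated with a toy reduction: collect bags of node-label k-grams along directed edges of each graph and compute a clipped precision between system and reference bags. This is a simplified illustration only, not the official SEMBLEU scorer (which also handles smoothing and brevity).

```python
# Toy sketch: path k-grams over a small labelled directed graph, plus a
# clipped precision between system and reference k-gram bags.
from collections import Counter

def path_ngrams(nodes, edges, max_k=3):
    grams = Counter(tuple([lab]) for lab in nodes.values())       # unigrams
    frontier = [(n, (nodes[n],)) for n in nodes]
    for _ in range(max_k - 1):
        nxt = []
        for node, path in frontier:
            for src, dst in edges:
                if src == node:
                    new = path + (nodes[dst],)
                    grams[new] += 1
                    nxt.append((dst, new))
        frontier = nxt
    return grams

def clipped_precision(sys_g, ref_g):
    overlap = sum(min(c, ref_g[g]) for g, c in sys_g.items())
    return overlap / max(sum(sys_g.values()), 1)

gold = path_ngrams({0: "want", 1: "boy", 2: "go"}, [(0, 1), (0, 2), (2, 1)])
sys_ = path_ngrams({0: "want", 1: "boy", 2: "leave"}, [(0, 1), (0, 2)])
print(round(clipped_precision(sys_, gold), 3))    # penalises the wrong/missing parts
```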
TIGS: An Inference Algorithm for Text Infilling with Gradient Search | Text infilling is defined as a task for filling in the missing part of a
sentence or paragraph, which is suitable for many real-world natural language
generation scenarios. However, given a well-trained sequential generative
model, generating missing symbols conditioned on the context is challenging for
existing greedy approximate inference algorithms. In this paper, we propose an
iterative inference algorithm based on gradient search, which is the first
inference algorithm that can be broadly applied to any neural sequence
generative models for text infilling tasks. We compare the proposed method with
strong baselines on three text infilling tasks with various mask ratios and
different mask strategies. The results show that our proposed method is
effective and efficient for fill-in-the-blank tasks, consistently outperforming
all baselines.
| 2,019 | Computation and Language |
Evaluation of basic modules for isolated spelling error correction in
Polish texts | Spelling error correction is an important problem in natural language
processing, as a prerequisite for good performance in downstream tasks as well
as an important feature in user-facing applications. For texts in Polish
language, there exist works on specific error correction solutions, often
developed for dealing with specialized corpora, but not evaluations of many
different approaches on big resources of errors. We begin to address this
problem by testing some basic and promising methods on PlEWi, a corpus of
annotated spelling errors extracted from Polish Wikipedia. These modules may be
further combined with appropriate solutions for error detection and context
awareness. Following our results, combining edit distance with cosine distance
of semantic vectors may be suggested for interpretable systems, while an LSTM,
particularly enhanced by ELMo embeddings, seems to offer the best raw
performance.
| 2,019 | Computation and Language |
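The interpretable combination suggested by the results above, edit distance plus cosine distance of semantic vectors, can be sketched as a simple candidate-ranking score. The toy vectors, the stand-in vector for the misspelling, and the mixing weight are assumptions for illustration, not the evaluated system.

```python
# Minimal sketch: rank correction candidates by a weighted sum of
# Levenshtein distance and cosine distance between word vectors.
import numpy as np

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=16) for w in ["kot", "kto", "rok"]}

def cos_dist(u, v):
    return 1 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def rank(misspelling, candidates, alpha=0.5):
    v_err = rng.normal(size=16)            # stand-in for the misspelling's vector
    return sorted(candidates,
                  key=lambda c: alpha * levenshtein(misspelling, c)
                              + (1 - alpha) * cos_dist(v_err, vectors[c]))

print(rank("kt", ["kot", "kto", "rok"]))
```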
Simple and Effective Curriculum Pointer-Generator Networks for Reading
Comprehension over Long Narratives | This paper tackles the problem of reading comprehension over long narratives
where documents easily span over thousands of tokens. We propose a curriculum
learning (CL) based Pointer-Generator framework for reading/sampling over large
documents, enabling diverse training of the neural model based on the notion of
alternating contextual difficulty. This can be interpreted as a form of domain
randomization and/or generative pretraining during training. To this end, the
usage of the Pointer-Generator softens the requirement of having the answer
within the context, enabling us to construct diverse training samples for
learning. Additionally, we propose a new Introspective Alignment Layer (IAL),
which reasons over decomposed alignments using block-based self-attention. We
evaluate our proposed method on the NarrativeQA reading comprehension
benchmark, achieving state-of-the-art performance, improving existing baselines
by $51\%$ relative improvement on BLEU-4 and $17\%$ relative improvement on
Rouge-L. Extensive ablations confirm the effectiveness of our proposed IAL and
CL components.
| 2,019 | Computation and Language |
When to reply? Context Sensitive Models to Predict Instructor
Interventions in MOOC Forums | Course instructors often need to selectively
participate in student discussion threads, due to their limited bandwidth and
the lopsided student--instructor ratio on online forums. We propose the first deep
learning models for this binary prediction problem. We design novel attention-based
models to infer the amount of latent context necessary to predict
instructor intervention. Such models also allow themselves to be tuned to
instructor's preference to intervene early or late. Our three proposed
attentive model variants to infer the latent context improve over the
state-of-the-art by a significant, large margin of 11% in F1 and 10% in recall,
on average. Further, introspection of attention helps us better understand which
aspects of a discussion post propagate through the discussion thread and
prompt instructor intervention.
| 2,019 | Computation and Language |
Where's My Head? Definition, Dataset and Models for Numeric Fused-Heads
Identification and Resolution | We provide the first computational treatment of fused-heads constructions
(FH), focusing on the numeric fused-heads (NFH). FH constructions are noun
phrases (NPs) in which the head noun is missing and is said to be `fused' with
its dependent modifier. This missing information is implicit and is important
for sentence understanding. The missing references are easily filled in by
humans but pose a challenge for computational models. We formulate the handling
of FH as a two-stage process: identification of the FH construction and
resolution of the missing head. We explore the NFH phenomena in large corpora
of English text and create (1) a dataset and a highly accurate method for NFH
identification; (2) a 10k examples (1M tokens) crowd-sourced dataset of NFH
resolution; and (3) a neural baseline for the NFH resolution task. We release
our code and dataset, in the hope of fostering further research into this challenging
problem.
| 2,019 | Computation and Language |
Extreme Multi-Label Legal Text Classification: A case study in EU
Legislation | We consider the task of Extreme Multi-Label Text Classification (XMTC) in the
legal domain. We release a new dataset of 57k legislative documents from
EURLEX, the European Union's public document database, annotated with concepts
from EUROVOC, a multidisciplinary thesaurus. The dataset is substantially
larger than previous EURLEX datasets and suitable for XMTC, few-shot and
zero-shot learning. Experimenting with several neural classifiers, we show that
BIGRUs with self-attention outperform the current multi-label state-of-the-art
methods, which employ label-wise attention. Replacing CNNs with BIGRUs in
label-wise attention networks leads to the best overall performance.
| 2,019 | Computation and Language |
Commonsense Properties from Query Logs and Question Answering Forums | Commonsense knowledge about object properties, human behavior and general
concepts is crucial for robust AI applications. However, automatic acquisition
of this knowledge is challenging because of sparseness and bias in online
sources. This paper presents Quasimodo, a methodology and tool suite for
distilling commonsense properties from non-standard web sources. We devise
novel ways of tapping into search-engine query logs and QA forums, and
combining the resulting candidate assertions with statistical cues from
encyclopedias, books and image tags in a corroboration step. Unlike prior work
on commonsense knowledge bases, Quasimodo focuses on salient properties that
are typically associated with certain objects or concepts. Extensive
evaluations, including extrinsic use-case studies, show that Quasimodo provides
better coverage than state-of-the-art baselines with comparable quality.
| 2,019 | Computation and Language |
Levenshtein Transformer | Modern neural sequence generation models are built to either generate tokens
step-by-step from scratch or (iteratively) modify a sequence of tokens bounded
by a fixed length. In this work, we develop Levenshtein Transformer, a new
partially autoregressive model devised for more flexible and amenable sequence
generation. Unlike previous approaches, the atomic operations of our model are
insertion and deletion. The combination of them facilitates not only generation
but also sequence refinement allowing dynamic length changes. We also propose a
set of new training techniques dedicated to them, effectively exploiting one as
the other's learning signal thanks to their complementary nature. Experiments
applying the proposed model achieve comparable performance but much-improved
efficiency on both generation (e.g. machine translation, text summarization)
and refinement tasks (e.g. automatic post-editing). We further confirm the
flexibility of our model by showing that a Levenshtein Transformer trained for
machine translation can straightforwardly be used for automatic post-editing.
| 2,019 | Computation and Language |
Harry Potter and the Action Prediction Challenge from Natural Language | We explore the challenge of action prediction from textual descriptions of
scenes, a testbed to approximate whether text inference can be used to predict
upcoming actions. As a case study, we consider the world of the Harry Potter
fantasy novels and infer what spell will be cast next given a fragment of a
story. Spells act as keywords that abstract actions (e.g. 'Alohomora' to open a
door) and denote a response to the environment. This idea is used to
automatically build HPAC, a corpus containing 82,836 samples and 85 actions. We
then evaluate different baselines. Among the tested models, an LSTM-based
approach obtains the best performance for frequent actions and large scene
descriptions, but approaches such as logistic regression behave well on
infrequent actions.
| 2,019 | Computation and Language |
CIF: Continuous Integrate-and-Fire for End-to-End Speech Recognition | In this paper, we propose a novel soft and monotonic alignment mechanism used
for sequence transduction. It is inspired by the integrate-and-fire model in
spiking neural networks and is employed in an encoder-decoder framework built
from continuous functions, hence the name Continuous Integrate-and-Fire
(CIF). Applied to the ASR task, CIF not only allows a concise calculation, but
also supports online recognition and acoustic boundary positioning, making it
suitable for various ASR scenarios. Several supporting strategies are also
proposed to alleviate the unique problems of the CIF-based model. With the joint
action of these methods, the CIF-based model shows competitive performance.
Notably, it achieves a word error rate (WER) of 2.86% on the test-clean of
Librispeech and creates new state-of-the-art result on Mandarin telephone ASR
benchmark.
| 2,020 | Computation and Language |
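The integrate-and-fire behaviour described above can be sketched as a simple accumulation loop: per-frame weights are integrated, and whenever the integral crosses a threshold the weighted encoder states collected so far are emitted as one label-level vector, with the leftover weight carried forward. This is a toy illustration under assumed frame weights, not the authors' implementation.

```python
# Toy continuous integrate-and-fire: accumulate frame weights, fire a
# label-level vector when the integral crosses 1.0, carry the remainder.
import numpy as np

def cif(frames, weights, threshold=1.0):
    fired, acc_w, acc_v = [], 0.0, np.zeros_like(frames[0])
    for h, a in zip(frames, weights):
        if acc_w + a < threshold:
            acc_w += a
            acc_v = acc_v + a * h
        else:
            used = threshold - acc_w              # part of this frame closes the unit
            fired.append(acc_v + used * h)
            acc_w = a - used                      # remainder starts the next unit
            acc_v = (a - used) * h
    return fired

frames = [np.full(4, float(i)) for i in range(6)]
weights = [0.4, 0.3, 0.5, 0.6, 0.2, 0.7]          # arbitrary per-frame weights
print(len(cif(frames, weights)))                  # number of emitted label vectors
```

Because the emitted boundaries depend only on weights seen so far, the same mechanism supports the online recognition and acoustic boundary positioning mentioned in the abstract.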
HUMBO: Bridging Response Generation and Facial Expression Synthesis | Spoken dialogue systems that assist users to solve complex tasks such as
movie ticket booking have become an emerging research topic in artificial
intelligence and natural language processing areas. With a well-designed
dialogue system as an intelligent personal assistant, people can accomplish
certain tasks more easily via natural language interactions. Today there are
several virtual intelligent assistants in the market; however, most systems
only focus on textual or vocal interaction. In this paper, we present HUMBO, a
system aiming at generating dialogue responses and simultaneously synthesizing
corresponding facial expressions for better multimodal interaction.
HUMBO can (1) let users determine the appearances of virtual assistants by a
single image, and (2) generate coherent emotional utterances and facial
expressions on the user-provided image. This is not only a brand new research
direction but more importantly, an ultimate step toward more human-like virtual
assistants.
| 2,021 | Computation and Language |
AgentGraph: Towards Universal Dialogue Management with Structured Deep
Reinforcement Learning | Dialogue policy plays an important role in task-oriented spoken dialogue
systems. It determines how to respond to users. The recently proposed deep
reinforcement learning (DRL) approaches have been used for policy optimization.
However, these deep models are still challenging for two reasons: 1) Many
DRL-based policies are not sample-efficient. 2) Most models don't have the
capability of policy transfer between different domains. In this paper, we
propose a universal framework, AgentGraph, to tackle these two problems. The
proposed AgentGraph is the combination of GNN-based architecture and DRL-based
algorithm. It can be regarded as one of the multi-agent reinforcement learning
approaches. Each agent corresponds to a node in a graph, which is defined
according to the dialogue domain ontology. When making a decision, each agent
can communicate with its neighbors on the graph. Under AgentGraph framework, we
further propose Dual GNN-based dialogue policy, which implicitly decomposes the
decision in each turn into a high-level global decision and a low-level local
decision. Experiments show that AgentGraph models significantly outperform
traditional reinforcement learning approaches on most of the 18 tasks of the
PyDial benchmark. Moreover, when transferred from the source task to a target
task, these models not only have acceptable initial performance but also
converge much faster on the target task.
| 2,019 | Computation and Language |
Combating Adversarial Misspellings with Robust Word Recognition | To combat adversarial spelling mistakes, we propose placing a word
recognition model in front of the downstream classifier. Our word recognition
models build upon the RNN semi-character architecture, introducing several new
backoff strategies for handling rare and unseen words. Trained to recognize
words corrupted by random adds, drops, swaps, and keyboard mistakes, our method
achieves 32% relative (and 3.3% absolute) error reduction over the vanilla
semi-character model. Notably, our pipeline confers robustness on the
downstream classifier, outperforming both adversarial training and
off-the-shelf spell checkers. Against a BERT model fine-tuned for sentiment
analysis, a single adversarially-chosen character attack lowers accuracy from
90.3% to 45.8%. Our defense restores accuracy to 75%. Surprisingly, better word
recognition does not always entail greater robustness. Our analysis reveals
that robustness also depends upon a quantity that we denote the sensitivity.
| 2,019 | Computation and Language |
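The semi-character representation that the word recognition models above build on encodes a word as its first character, a bag of its interior characters, and its last character, which is largely invariant to internal transpositions. The sketch below illustrates that encoding with invented names and a lowercase-only alphabet; it is not the paper's full RNN pipeline.

```python
# Minimal sketch of a semi-character word encoding: one-hot first character,
# bag-of-characters for the interior, one-hot last character.
import numpy as np
import string

ALPHA = string.ascii_lowercase
IDX = {c: i for i, c in enumerate(ALPHA)}

def semi_character(word):
    first = np.zeros(len(ALPHA)); last = np.zeros(len(ALPHA)); bag = np.zeros(len(ALPHA))
    w = [c for c in word.lower() if c in IDX]
    if w:
        first[IDX[w[0]]] = 1
        last[IDX[w[-1]]] = 1
        for c in w[1:-1]:
            bag[IDX[c]] += 1
    return np.concatenate([first, bag, last])       # length 3 * 26 = 78

# An internal transposition leaves the encoding unchanged:
print(np.array_equal(semi_character("question"), semi_character("qusetion")))  # True
```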
Using Neural Networks for Relation Extraction from Biomedical Literature | Using different sources of information to support the automated extraction of
relations between biomedical concepts contributes to the development of our
understanding of biological systems. The primary comprehensive source of these
relations is biomedical literature. Several relation extraction approaches have
been proposed to identify relations between concepts in biomedical literature,
namely, using neural network algorithms. The use of multichannel architectures
composed of multiple data representations, as in deep neural networks, is
leading to state-of-the-art results. The right combination of data
representations can eventually lead us to even higher evaluation scores in
relation extraction tasks. Thus, biomedical ontologies play a fundamental role
by providing semantic and ancestry information about an entity. The
incorporation of biomedical ontologies has already been proved to enhance
previous state-of-the-art results.
| 2,020 | Computation and Language |
A Self-Attention Joint Model for Spoken Language Understanding in
Situational Dialog Applications | Spoken language understanding (SLU) acts as a critical component in
goal-oriented dialog systems. It typically involves identifying the speaker's
intent and extracting semantic slots from user utterances, which are known as
intent detection (ID) and slot filling (SF). The SLU problem has been intensively
investigated in recent years. However, these methods just constrain SF results
grammatically, solve ID and SF independently, or do not fully utilize the
mutual impact of the two tasks. This paper proposes a multi-head self-attention
joint model with a conditional random field (CRF) layer and a prior mask. The
experiments show the effectiveness of our model, as compared with
state-of-the-art models. Meanwhile, online education in China has made great
progress in the last few years. But there are few intelligent educational
dialog applications for students to learn foreign languages. Hence, we design
an intelligent dialog robot equipped with different scenario settings to help
students learn communication skills.
| 2,019 | Computation and Language |
VQVAE Unsupervised Unit Discovery and Multi-scale Code2Spec Inverter for
Zerospeech Challenge 2019 | We describe our submitted system for the ZeroSpeech Challenge 2019. The
current challenge theme addresses the difficulty of constructing a speech
synthesizer without any text or phonetic labels and requires a system that can
(1) discover subword units in an unsupervised way, and (2) synthesize the
speech with a target speaker's voice. Moreover, the system should also balance
the discrimination score ABX, the bit-rate compression rate, and the
naturalness and the intelligibility of the constructed voice. To tackle these
problems and achieve the best trade-off, we utilize a vector quantized
variational autoencoder (VQ-VAE) and a multi-scale codebook-to-spectrogram
(Code2Spec) inverter trained by mean square error and adversarial loss. The
VQ-VAE maps the speech into a latent space, quantizes it to the nearest
codebook entry, and produces a compressed representation. Next, the inverter
generates a magnitude spectrogram in the target voice, given the codebook
vectors from VQ-VAE. In our experiments, we also investigated several other
clustering algorithms, including K-Means and GMM, and compared them with the
VQ-VAE result on ABX scores and bit rates. Our proposed approach significantly
improved the intelligibility (in CER), the MOS, and discrimination ABX scores
compared to the official ZeroSpeech 2019 baseline or even the topline.
| 2,019 | Computation and Language |
XLDA: Cross-Lingual Data Augmentation for Natural Language Inference and
Question Answering | While natural language processing systems often focus on a single language,
multilingual transfer learning has the potential to improve performance,
especially for low-resource languages. We introduce XLDA, cross-lingual data
augmentation, a method that replaces a segment of the input text with its
translation in another language. XLDA enhances performance of all 14 tested
languages of the cross-lingual natural language inference (XNLI) benchmark.
With improvements of up to $4.8\%$, training with XLDA achieves
state-of-the-art performance for Greek, Turkish, and Urdu. XLDA is in contrast
to, and performs markedly better than, a more naive approach that aggregates
examples in various languages in a way that each example is solely in one
language. On the SQuAD question answering task, we see that XLDA provides a
$1.0\%$ performance increase on the English evaluation set. Comprehensive
experiments suggest that most languages are effective as cross-lingual
augmentors, that XLDA is robust to a wide range of translation quality, and
that XLDA is even more effective for randomly initialized models than for
pretrained models.
| 2,019 | Computation and Language |
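The augmentation described above is simple to illustrate: one segment of an input pair is swapped for its translation in another language while the other segment stays in the original language. The tiny translation table, function name, and example sentences below are invented stand-ins for real translated corpora.

```python
# Toy sketch of cross-lingual data augmentation for an NLI-style pair:
# replace the premise with a pre-translated version in a sampled language.
import random

translations = {  # hypothetical pre-translated premises
    "A man is playing a guitar.": {"de": "Ein Mann spielt Gitarre.",
                                   "fr": "Un homme joue de la guitare."},
}

def xlda(premise, hypothesis, languages=("de", "fr"), rng=random.Random(0)):
    lang = rng.choice(languages)
    return translations.get(premise, {}).get(lang, premise), hypothesis

print(xlda("A man is playing a guitar.", "Someone is making music."))
```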
One-Shot Learning for Text-to-SQL Generation | Most deep learning approaches for text-to-SQL generation are limited to the
WikiSQL dataset, which only supports very simple queries. Recently,
template-based and sequence-to-sequence approaches were proposed to support
complex queries, which contain join queries, nested queries, and other types.
However, Finegan-Dollak et al. (2018) demonstrated that both approaches
lack the ability to generate SQL of unseen templates. In this paper, we propose
a template-based one-shot learning model for the text-to-SQL generation so that
the model can generate SQL of an untrained template based on a single example.
First, we classify the SQL template using the Matching Network that is
augmented by our novel architecture Candidate Search Network. Then, we fill the
variable slots in the predicted template using the Pointer Network. We show
that our model outperforms state-of-the-art approaches for various text-to-SQL
datasets in two aspects: 1) the SQL generation accuracy for the trained
templates, and 2) the adaptability to the unseen SQL templates based on a
single example without any additional training.
| 2,019 | Computation and Language |
Compositional pre-training for neural semantic parsing | Semantic parsing is the process of translating natural language utterances
into logical forms, which has many important applications such as question
answering and instruction following. Sequence-to-sequence models have been very
successful across many NLP tasks. However, a lack of task-specific prior
knowledge can be detrimental to the performance of these models. Prior work has
used frameworks for inducing grammars over the training examples, which capture
conditional independence properties that the model can leverage. Inspired by
recent success stories such as BERT, we set out to extend this augmentation
framework into two stages. The first stage is to pre-train using a corpus of
augmented examples in an unsupervised manner. The second stage is to fine-tune
to a domain-specific task. In addition, since the pre-training stage is
separate from the training on the main task we also expand the universe of
possible augmentations without causing catastrophic interference. We also propose
a novel data augmentation strategy that interchanges tokens that co-occur in
similar contexts to produce new training pairs. We demonstrate that the
proposed two-stage framework is beneficial for improving the parsing accuracy
in a standard dataset called GeoQuery for the task of generating logical forms
from a set of questions about the US geography.
| 2,019 | Computation and Language |
Target-Guided Open-Domain Conversation | Many real-world open-domain conversation applications have specific goals to
achieve during open-ended chats, such as recommendation, psychotherapy,
education, etc. We study the problem of imposing conversational goals on
open-domain chat agents. In particular, we want a conversational system to chat
naturally with human and proactively guide the conversation to a designated
target subject. The problem is challenging as no public data is available for
learning such a target-guided strategy. We propose a structured approach that
introduces coarse-grained keywords to control the intended content of system
responses. We then attain smooth conversation transition through turn-level
supervised learning, and drive the conversation towards the target with
discourse-level constraints. We further derive a keyword-augmented conversation
dataset for the study. Quantitative and human evaluations show our system can
produce meaningful and effective conversations, significantly improving over
other approaches.
| 2,019 | Computation and Language |