Titles | Abstracts | Years | Categories |
---|---|---|---|
Simplify the Usage of Lexicon in Chinese NER | Recently, many works have tried to augment the performance of Chinese named
entity recognition (NER) using word lexicons. As a representative, Lattice-LSTM
(Zhang and Yang, 2018) has achieved new benchmark results on several public
Chinese NER datasets. However, Lattice-LSTM has a complex model architecture.
This limits its application in many industrial areas where real-time NER
responses are needed.
In this work, we propose a simple but effective method for incorporating the
word lexicon into the character representations. This method avoids designing a
complicated sequence modeling architecture, and for any neural NER model, it
requires only subtle adjustment of the character representation layer to
introduce the lexicon information. Experimental studies on four benchmark
Chinese NER datasets show that our method achieves an inference speed up to
6.15 times faster than those of state-of-the-art methods, along with a better
performance. The experimental results also show that the proposed method can be
easily incorporated with pre-trained models like BERT.
| 2020 | Computation and Language |
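The soft-lexicon idea summarized in the abstract above can be illustrated with a minimal sketch (not the authors' code; the function and variable names are my own): each character representation is augmented with pooled embeddings of the lexicon words matching at that position, grouped by the character's role (Begin/Middle/End/Single) in the matched word.

```python
import numpy as np

def soft_lexicon_features(sentence, lexicon_embs, char_embs, dim):
    """sentence: list of characters; lexicon_embs / char_embs: dicts mapping
    word / character to np.ndarray of shape (dim,). Returns one feature
    vector per character (hypothetical helper, for illustration only)."""
    n = len(sentence)
    groups = [{"B": [], "M": [], "E": [], "S": []} for _ in range(n)]
    # Enumerate every substring; if it is a lexicon word, record its embedding
    # under the positional role it assigns to each character it covers.
    for i in range(n):
        for j in range(i + 1, n + 1):
            word = "".join(sentence[i:j])
            if word not in lexicon_embs:
                continue
            emb = lexicon_embs[word]
            if j - i == 1:
                groups[i]["S"].append(emb)
            else:
                groups[i]["B"].append(emb)
                groups[j - 1]["E"].append(emb)
                for k in range(i + 1, j - 1):
                    groups[k]["M"].append(emb)
    features = []
    for i, ch in enumerate(sentence):
        pooled = [np.mean(groups[i][g], axis=0) if groups[i][g] else np.zeros(dim)
                  for g in ("B", "M", "E", "S")]
        char_vec = char_embs.get(ch, np.zeros(dim))
        # Character embedding concatenated with four pooled lexicon embeddings.
        features.append(np.concatenate([char_vec] + pooled))
    return features
```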
Bidirectional Context-Aware Hierarchical Attention Network for Document
Understanding | The Hierarchical Attention Network (HAN) has made great strides, but it
suffers from a major limitation: at level 1, each sentence is encoded in complete
isolation. In this work, we propose and compare several modifications of HAN in
which the sentence encoder is able to make context-aware attentional decisions
(CAHAN). Furthermore, we propose a bidirectional document encoder that
processes the document forwards and backwards, using the preceding and
following sentences as context. Experiments on three large-scale sentiment and
topic classification datasets show that the bidirectional version of CAHAN
outperforms HAN everywhere, with only a modest increase in computation time.
While results are promising, we expect the superiority of CAHAN to be even more
evident on tasks requiring a deeper understanding of the input documents, such
as abstractive summarization. Code is publicly available.
| 2019 | Computation and Language |
Automatically Identifying Comparator Groups on Twitter for Digital
Epidemiology of Pregnancy Outcomes | Despite the prevalence of adverse pregnancy outcomes such as miscarriage,
stillbirth, birth defects, and preterm birth, their causes are largely unknown.
We seek to advance the use of social media for observational studies of
pregnancy outcomes by developing a natural language processing pipeline for
automatically identifying users from which to select comparator groups on
Twitter. We annotated 2361 tweets by users who have announced their pregnancy
on Twitter, which were used to train and evaluate supervised machine learning
algorithms as a basis for automatically detecting women who have reported that
their pregnancy had reached term and their baby was born at a normal weight.
Upon further processing the tweet-level predictions of a majority voting-based
ensemble classifier, the pipeline achieved a user-level F1-score of 0.933, with
a precision of 0.947 and a recall of 0.920. Our pipeline will be deployed to
identify large comparator groups for studying pregnancy outcomes on Twitter.
| 2019 | Computation and Language |
Tackling Online Abuse: A Survey of Automated Abuse Detection Methods | Abuse on the Internet represents an important societal problem of our time.
Millions of Internet users face harassment, racism, personal attacks, and other
types of abuse on online platforms. The psychological effects of such abuse on
individuals can be profound and lasting. Consequently, over the past few years,
there has been a substantial research effort towards automated abuse detection
in the field of natural language processing (NLP). In this paper, we present a
comprehensive survey of the methods that have been proposed to date, thus
providing a platform for further development of this area. We describe the
existing datasets and review the computational approaches to abuse detection,
analyzing their strengths and limitations. We discuss the main trends that
emerge, highlight the challenges that remain, outline possible solutions, and
propose guidelines for ethics and explainability.
| 2020 | Computation and Language |
Few-shot Text Classification with Distributional Signatures | In this paper, we explore meta-learning for few-shot text classification.
Meta-learning has shown strong performance in computer vision, where low-level
patterns are transferable across learning tasks. However, directly applying
this approach to text is challenging--lexical features highly informative for
one task may be insignificant for another. Thus, rather than learning solely
from words, our model also leverages their distributional signatures, which
encode pertinent word occurrence patterns. Our model is trained within a
meta-learning framework to map these signatures into attention scores, which
are then used to weight the lexical representations of words. We demonstrate
that our model consistently outperforms prototypical networks learned on
lexical knowledge (Snell et al., 2017) in both few-shot text classification and
relation classification by a significant margin across six benchmark datasets
(20.0% on average in 1-shot classification).
| 2020 | Computation and Language |
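A compact, hypothetical sketch of the attention mechanism the abstract describes: per-word distributional statistics (here just an inverse-frequency term and a class-discriminativeness proxy, both illustrative stand-ins for the paper's signatures) are mapped to attention scores that weight the word embeddings.

```python
import numpy as np

def attention_weighted_repr(words, embeddings, corpus_freq, discrim, scorer):
    """words: tokens of one example; embeddings: dict word -> np.ndarray;
    corpus_freq / discrim: dicts of per-word statistics (placeholders for the
    paper's signatures); scorer: callable mapping a 2-d signature to a scalar,
    e.g. a small learned MLP."""
    sigs = np.array([[1.0 / (1.0 + corpus_freq.get(w, 0.0)),
                      discrim.get(w, 1.0)] for w in words])
    scores = np.array([scorer(s) for s in sigs])
    attn = np.exp(scores) / np.exp(scores).sum()   # softmax over the words
    vecs = np.stack([embeddings[w] for w in words])
    return attn @ vecs                              # attention-weighted average
```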
Build it Break it Fix it for Dialogue Safety: Robustness from
Adversarial Human Attack | The detection of offensive language in the context of a dialogue has become
an increasingly important application of natural language processing. The
detection of trolls in public forums (Galán-García et al., 2016), and the
deployment of chatbots in the public domain (Wolf et al., 2017) are two
examples that show the necessity of guarding against adversarially offensive
behavior on the part of humans. In this work, we develop a training scheme for
a model to become robust to such human attacks by an iterative build it, break
it, fix it strategy with humans and models in the loop. In detailed experiments
we show this approach is considerably more robust than previous systems.
Further, we show that offensive language used within a conversation critically
depends on the dialogue context, and cannot be treated as a single-sentence
offensive-language detection task, as in most previous work. Our newly collected tasks
and methods will be made open source and publicly available.
| 2019 | Computation and Language |
CFO: A Framework for Building Production NLP Systems | This paper introduces a novel orchestration framework, called CFO
(COMPUTATION FLOW ORCHESTRATOR), for building, experimenting with, and
deploying interactive NLP (Natural Language Processing) and IR (Information
Retrieval) systems to production environments. We then demonstrate a question
answering system built using this framework which incorporates state-of-the-art
BERT based MRC (Machine Reading Comprehension) with IR components to enable
end-to-end answer retrieval. Results from the demo system are shown to be high
quality in both academic and industry domain specific settings. Finally, we
discuss best practices when (pre-)training BERT based MRC models for production
systems.
| 2019 | Computation and Language |
Transductive Auxiliary Task Self-Training for Neural Multi-Task Models | Multi-task learning and self-training are two common ways to improve a
machine learning model's performance in settings with limited training data.
Drawing heavily on ideas from those two approaches, we suggest transductive
auxiliary task self-training: training a multi-task model on (i) a combination
of main and auxiliary task training data, and (ii) test instances with
auxiliary task labels which a single-task version of the model has previously
generated. We perform extensive experiments on 86 combinations of languages and
tasks. Our results show that, on average, transductive auxiliary task
self-training improves absolute accuracy by up to 9.56% over the pure
multi-task model for dependency relation tagging and by up to 13.03% for
semantic tagging.
| 2019 | Computation and Language |
UDS--DFKI Submission to the WMT2019 Similar Language Translation Shared
Task | In this paper we present the UDS-DFKI system submitted to the Similar
Language Translation shared task at WMT 2019. The first edition of this shared
task featured data from three pairs of similar languages: Czech and Polish,
Hindi and Nepali, and Portuguese and Spanish. Participants could choose to
participate in any of these three tracks and submit system outputs in any
translation direction. We report the results obtained by our system in
translating from Czech to Polish and comment on the impact of out-of-domain
test data in the performance of our system. UDS-DFKI achieved competitive
performance ranking second among ten teams in Czech to Polish translation.
| 2019 | Computation and Language |
Improving CAT Tools in the Translation Workflow: New Approaches and
Evaluation | This paper describes strategies to improve an existing web-based
computer-aided translation (CAT) tool entitled CATaLog Online. CATaLog Online
provides a post-editing environment with simple yet helpful project management
tools. It offers translation suggestions from translation memories (TM),
machine translation (MT), and automatic post-editing (APE) and records detailed
logs of post-editing activities. To test the new approaches proposed in this
paper, we carried out a user study on an English--German translation task using
CATaLog Online. User feedback revealed that the users preferred using CATaLog
Online over existing CAT tools in some respects, especially by selecting the
output of the MT system and taking advantage of the color scheme for TM
suggestions.
| 2019 | Computation and Language |
The Transference Architecture for Automatic Post-Editing | In automatic post-editing (APE) it makes sense to condition post-editing (pe)
decisions on both the source (src) and the machine translated text (mt) as
input. This has led to multi-source encoder based APE approaches. A research
challenge now is the search for architectures that best support the capture,
preparation and provision of src and mt information and its integration with pe
decisions. In this paper we present a new multi-source APE model, called
transference. Unlike previous approaches, it (i) uses a transformer encoder
block for src, (ii) followed by a decoder block, but without masking for
self-attention on mt, which effectively acts as a second encoder combining src ->
mt, and (iii) feeds this representation into a final decoder block generating
pe. Our model outperforms the state-of-the-art by 1 BLEU point on the WMT 2016,
2017, and 2018 English--German APE shared tasks (PBSMT and NMT). We further
investigate the importance of our newly introduced second encoder and find that
using too few layers does hurt performance, while reducing the
number of layers of the decoder does not matter much.
| 2019 | Computation and Language |
Learning Conceptual-Contextual Embeddings for Medical Text | External knowledge is often useful for natural language understanding tasks.
We introduce a contextual text representation model called
Conceptual-Contextual (CC) embeddings, which incorporates structured knowledge
into text representations. Unlike entity embedding methods, our approach
encodes a knowledge graph into a context model. CC embeddings can be easily
reused for a wide range of tasks just like pre-trained language models. Our
model effectively encodes the huge UMLS database by leveraging semantic
generalizability. Experiments on electronic health records (EHRs) and medical
text processing benchmarks showed that our model gives a major boost to the
performance of supervised medical NLP tasks.
| 2020 | Computation and Language |
Generating an Overview Report over Many Documents | How to efficiently generate an accurate, well-structured overview report
(ORPT) over thousands of related documents is challenging. A well-structured
ORPT consists of sections of multiple levels (e.g., sections and subsections).
None of the existing multi-document summarization (MDS) algorithms is directed
toward this task. To overcome this obstacle, we present NDORGS (Numerous
Documents' Overview Report Generation Scheme) that integrates text filtering,
keyword scoring, single-document summarization (SDS), topic modeling, MDS, and
title generation to generate a coherent, well-structured ORPT. We then devise a
multi-criteria evaluation method using techniques of text mining and
multi-attribute decision making on a combination of human judgments, running
time, information coverage, and topic diversity. We evaluate ORPTs generated by
NDORGS on two large corpora of documents, where one is classified and the other
unclassified. We show that, using Saaty's pairwise comparison 9-point scale and
under TOPSIS, the ORPTs generated from single-document summaries whose length is 20% of the
original documents are the best overall on both datasets.
| 2019 | Computation and Language |
Language Graph Distillation for Low-Resource Machine Translation | Neural machine translation on low-resource languages is challenging due to the
lack of bilingual sentence pairs. Previous works usually solve the low-resource
translation problem with knowledge transfer in a multilingual setting. In this
paper, we propose the concept of Language Graph and further design a novel
graph distillation algorithm that boosts the accuracy of low-resource
translations in the graph with forward and backward knowledge distillation.
Preliminary experiments on the TED talks multilingual dataset demonstrate the
effectiveness of our proposed method. Specifically, we improve the low-resource
translation pair by more than 3.13 points in terms of BLEU score.
| 2019 | Computation and Language |
Hard but Robust, Easy but Sensitive: How Encoder and Decoder Perform in
Neural Machine Translation | Neural machine translation (NMT) typically adopts the encoder-decoder
framework. A good understanding of the characteristics and functionalities of
the encoder and decoder can help to explain the pros and cons of the framework,
and design better models for NMT. In this work, we conduct an empirical study
on the encoder and the decoder in NMT, taking Transformer as an example. We
find that 1) the decoder handles an easier task than the encoder in NMT, 2) the
decoder is more sensitive to the input noise than the encoder, and 3) the
preceding words/tokens in the decoder provide strong conditional information,
which accounts for the two observations above. We hope those observations can
shed light on the characteristics of the encoder and decoder and inspire future
research on NMT.
| 2019 | Computation and Language |
A Sensitivity Analysis of Attention-Gated Convolutional Neural Networks
for Sentence Classification | In this paper, we investigate the effect of different hyperparameters as well
as different combinations of hyperparameter settings on the performance of the
Attention-Gated Convolutional Neural Networks (AGCNNs), e.g., the kernel window
size, the number of feature maps, the keep rate of the dropout layer, and the
activation function. We draw practical advice from a wide range of empirical
results. Through the sensitivity analysis, we further improve the
hyperparameter settings of AGCNNs. Experiments show that our proposals could
achieve an average of 0.81% and 0.67% improvements on AGCNN-NLReLU-rand and
AGCNN-SELU-rand, respectively; and an average of 0.47% and 0.45% improvements
on AGCNN-NLReLU-static and AGCNN-SELU-static, respectively.
| 2019 | Computation and Language |
EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation | In this paper, we investigate the emotion recognition ability of the
pre-trained language model, namely BERT. Exploiting BERT's two-sentence input
structure, we adapt BERT to continuous dialogue emotion prediction tasks, which
rely heavily on sentence-level context-aware understanding. The experiments show
that by mapping the continuous dialogue into
a causal utterance pair, which is constructed by the utterance and the reply
utterance, models can better capture the emotions of the reply utterance. The
present method achieves micro F1 scores of 0.815 and 0.885 on the test sets of
Friends and EmotionPush, respectively.
| 2019 | Computation and Language |
Message Passing Attention Networks for Document Understanding | Graph neural networks have recently emerged as a very effective framework for
processing graph-structured data. These models have achieved state-of-the-art
performance in many tasks. Most graph neural networks can be described in terms
of message passing, vertex update, and readout functions. In this paper, we
represent documents as word co-occurrence networks and propose an application
of the message passing framework to NLP, the Message Passing Attention network
for Document understanding (MPAD). We also propose several hierarchical
variants of MPAD. Experiments conducted on 10 standard text classification
datasets show that our architectures are competitive with the state-of-the-art.
Ablation studies reveal further insights about the impact of the different
components on performance. Code is publicly available at:
https://github.com/giannisnik/mpad .
| 2019 | Computation and Language |
Leveraging Sentence Similarity in Natural Language Generation: Improving
Beam Search using Range Voting | We propose a method for natural language generation, choosing the most
representative output rather than the most likely output. By viewing the
language generation process from the voting theory perspective, we define
representativeness using range voting and a similarity measure. The proposed
method can be applied when generating from any probabilistic language model,
including n-gram models and neural network models. We evaluate different
similarity measures on an image captioning task and a machine translation task,
and show that our method generates longer and more diverse sentences, providing
a solution to the common problem of short outputs being preferred over longer
and more informative ones. The generated sentences obtain higher BLEU scores,
particularly when the beam size is large. We also perform a human evaluation on
both tasks and find that the outputs generated using our method are rated
higher.
| 2020 | Computation and Language |
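A hedged sketch of the range-voting selection described above: every candidate "votes" for every candidate with a score equal to their similarity, weighted by the voter's model probability, and the candidate with the highest total is returned. The unigram-overlap similarity is only an illustrative choice, not necessarily the measure used in the paper.

```python
import math

def most_representative(candidates, log_probs, similarity):
    """candidates: list of token sequences from beam search; log_probs: their
    model log-probabilities; similarity: callable(a, b) -> float in [0, 1]."""
    probs = [math.exp(lp) for lp in log_probs]
    best, best_score = None, float("-inf")
    for cand in candidates:
        # Range voting: every candidate receives, from every voter, a score
        # equal to their similarity, weighted by the voter's probability.
        score = sum(p * similarity(cand, voter)
                    for voter, p in zip(candidates, probs))
        if score > best_score:
            best, best_score = cand, score
    return best

def unigram_overlap(a, b):
    """Illustrative similarity measure (Jaccard over token sets)."""
    a, b = set(a), set(b)
    return len(a & b) / max(len(a | b), 1)
```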
Understanding Undesirable Word Embedding Associations | Word embeddings are often criticized for capturing undesirable word
associations such as gender stereotypes. However, methods for measuring and
removing such biases remain poorly understood. We show that for any embedding
model that implicitly does matrix factorization, debiasing vectors post hoc
using subspace projection (Bolukbasi et al., 2016) is, under certain
conditions, equivalent to training on an unbiased corpus. We also prove that
WEAT, the most common association test for word embeddings, systematically
overestimates bias. Given that the subspace projection method is provably
effective, we use it to derive a new measure of association called the
$\textit{relational inner product association}$ (RIPA). Experiments with RIPA
reveal that, on average, skipgram with negative sampling (SGNS) does not make
most words any more gendered than they are in the training corpus. However, for
gender-stereotyped words, SGNS actually amplifies the gender association in the
corpus.
| 2019 | Computation and Language |
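The abstract defines RIPA as a relational inner product association; the sketch below shows one plausible reading (the relation-vector construction and normalization are my assumptions, not necessarily the paper's exact recipe).

```python
import numpy as np

def relation_vector(pairs, vectors):
    """pairs: defining word pairs, e.g. [("man", "woman"), ("he", "she")];
    vectors: dict word -> np.ndarray. Returns a unit relation vector."""
    diffs = [vectors[a] - vectors[b] for a, b in pairs]
    v = np.mean(diffs, axis=0)
    return v / np.linalg.norm(v)

def ripa(word, rel_vec, vectors):
    """Association of `word` with the relation: an inner product."""
    return float(np.dot(vectors[word], rel_vec))
```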
Concurrent Parsing of Constituency and Dependency | Constituent and dependency representations of syntactic structure share many
linguistic and computational characteristics; this paper therefore makes the
first attempt to introduce a new model capable of parsing constituents and
dependencies at the same time, so that either parser can enhance the other. In
particular, we evaluate the effect of different shared network components and
empirically verify that dependency parsing may benefit much more from
constituent parsing structure.
The proposed parser achieves new state-of-the-art performance for both
parsing tasks, constituent and dependency on PTB and CTB benchmarks.
| 2019 | Computation and Language |
TDAM: a Topic-Dependent Attention Model for Sentiment Analysis | We propose a topic-dependent attention model for sentiment classification and
topic extraction. Our model assumes that a global topic embedding is shared
across documents and employs an attention mechanism to derive local topic
embedding for words and sentences. These are subsequently incorporated in a
modified Gated Recurrent Unit (GRU) for sentiment classification and extraction
of topics bearing different sentiment polarities. Those topics emerge from the
words' local topic embeddings learned by the internal attention of the GRU
cells in the context of a multi-task learning framework. In this paper, we
present the hierarchical architecture, the new GRU unit and the experiments
conducted on users' reviews which demonstrate classification performance on a
par with the state-of-the-art methodologies for sentiment classification and
topic coherence outperforming the current approaches for supervised topic
extraction. In addition, our model is able to extract coherent aspect-sentiment
clusters despite using no aspect-level annotations for training.
| 2019 | Computation and Language |
RefNet: A Reference-aware Network for Background Based Conversation | Existing conversational systems tend to generate generic responses. Recently,
Background Based Conversations (BBCs) have been introduced to address this
issue. Here, the generated responses are grounded in some background
information. While the proposed methods for BBCs are able to generate more
informative responses, they either cannot generate natural responses or have
difficulty in locating the right background information. In this paper, we
propose a Reference-aware Network (RefNet) to address the two issues. Unlike
existing methods that generate responses token by token, RefNet incorporates a
novel reference decoder that provides an alternative way to learn to directly
cite a semantic unit (e.g., a span containing complete semantic information)
from the background. Experimental results show that RefNet significantly
outperforms state-of-the-art methods in terms of both automatic and human
evaluations, indicating that RefNet can generate more appropriate and
human-like responses.
| 2019 | Computation and Language |
TwistBytes -- Hierarchical Classification at GermEval 2019: walking the
fine line (of recall and precision) | We present here our approach to the GermEval 2019 Task 1 - Shared Task on
hierarchical classification of German blurbs. We achieved first place in the
hierarchical subtask B and second place on the root node, flat classification
subtask A. In subtask A, we applied a simple multi-feature TF-IDF extraction
method using different n-gram range and stopword removal, on each feature
extraction module. The classifier on top was a standard linear SVM. For the
hierarchical classification, we used a local approach, which was more
lightweight but similar to the one used in subtask A. The key point of
our approach was the application of a post-processing step to cope with the
multi-label aspect of the task, increasing the recall without surpassing the
precision score.
| 2019 | Computation and Language |
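A rough approximation of the flat subtask-A setup described above, using scikit-learn; the specific n-gram ranges and the tiny stop-word list are illustrative placeholders, not the team's actual configuration.

```python
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

german_stopwords = ["der", "die", "das", "und", "in"]  # illustrative list only

# Several TF-IDF feature extractors with different n-gram ranges, combined,
# with a standard linear SVM on top wrapped for multi-label prediction.
features = FeatureUnion([
    ("word_1_2", TfidfVectorizer(analyzer="word", ngram_range=(1, 2),
                                 stop_words=german_stopwords)),
    ("word_1_3", TfidfVectorizer(analyzer="word", ngram_range=(1, 3))),
    ("char_2_4", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
])
model = Pipeline([("features", features),
                  ("svm", OneVsRestClassifier(LinearSVC()))])

# Usage: texts is a list of blurb strings, labels a list of label sets.
# mlb = MultiLabelBinarizer()
# model.fit(texts, mlb.fit_transform(labels))
# predicted = mlb.inverse_transform(model.predict(texts))
```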
Transfer in Deep Reinforcement Learning using Knowledge Graphs | Text adventure games, in which players must make sense of the world through
text descriptions and declare actions through text descriptions, provide a
stepping stone toward grounding action in language. Prior work has demonstrated
that using a knowledge graph as a state representation and question-answering
to pre-train a deep Q-network facilitates faster control policy transfer. In
this paper, we explore the use of knowledge graphs as a representation for
domain knowledge transfer for training text-adventure playing reinforcement
learning agents. Our methods are tested across multiple computer generated and
human authored games, varying in domain and complexity, and demonstrate that
our transfer learning methods let us learn a higher-quality control policy
faster.
| 2019 | Computation and Language |
Recurrent Graph Syntax Encoder for Neural Machine Translation | Syntax-incorporated machine translation models have been proven successful in
improving the model's reasoning and meaning preservation ability. In this
paper, we propose a simple yet effective graph-structured encoder, the
Recurrent Graph Syntax Encoder, dubbed \textbf{RGSE}, which enhances the
ability to capture useful syntactic information. RGSE operates over a
standard encoder (recurrent or self-attention encoder), regarding recurrent
network units as graph nodes and injecting syntactic dependencies as edges, such
that RGSE models syntactic dependencies and sequential information
(\textit{i.e.}, word order) simultaneously. Our approach achieves considerable
improvements over several syntax-aware NMT models in English$\Rightarrow$German
and English$\Rightarrow$Czech translation tasks. The RGSE-equipped big model
obtains competitive results compared with the state-of-the-art model in the WMT14
En-De task. Extensive analysis further verifies that RGSE could benefit long
sentence modeling, and produces better translations.
| 2019 | Computation and Language |
Long and Diverse Text Generation with Planning-based Hierarchical
Variational Model | Existing neural methods for data-to-text generation are still struggling to
produce long and diverse texts: they are insufficient to model input data
dynamically during generation, to capture inter-sentence coherence, or to
generate diversified expressions. To address these issues, we propose a
Planning-based Hierarchical Variational Model (PHVM). Our model first plans a
sequence of groups (each group is a subset of input items to be covered by a
sentence) and then realizes each sentence conditioned on the planning result
and the previously generated context, thereby decomposing long text generation
into dependent sentence generation sub-tasks. To capture expression diversity,
we devise a hierarchical latent structure where a global planning latent
variable models the diversity of reasonable planning and a sequence of local
latent variables controls sentence realization. Experiments show that our model
outperforms state-of-the-art baselines in long and diverse text generation.
| 2019 | Computation and Language |
Question Answering based Clinical Text Structuring Using Pre-trained
Language Model | Clinical text structuring is a critical and fundamental task for clinical
research. Traditional methods such as task-specific end-to-end models and
pipeline models usually suffer from a lack of datasets and from error propagation.
In this paper, we present a question answering based clinical text structuring
(QA-CTS) task to unify different specific tasks and make dataset shareable. A
novel model that aims to introduce domain-specific features (e.g., clinical
named entity information) into pre-trained language model is also proposed for
QA-CTS task. Experimental results on Chinese pathology reports collected from
Ruijing Hospital demonstrate that our presented QA-CTS task is very effective in
improving the performance on specific tasks. Our proposed model also competes
favorably with strong baseline models in specific tasks.
| 2019 | Computation and Language |
Bilingual Lexicon Induction with Semi-supervision in Non-Isometric
Embedding Spaces | Recent work on bilingual lexicon induction (BLI) has frequently depended
either on aligned bilingual lexicons or on distribution matching, often with an
assumption about the isometry of the two spaces. We propose a technique to
quantitatively estimate this assumption of the isometry between two embedding
spaces and empirically show that this assumption weakens as the languages in
question become increasingly etymologically distant. We then propose Bilingual
Lexicon Induction with Semi-Supervision (BLISS) --- a semi-supervised approach
that relaxes the isometric assumption while leveraging both limited aligned
bilingual lexicons and a larger set of unaligned word embeddings, as well as a
novel hubness filtering technique. Our proposed method obtains state of the art
results on 15 of 18 language pairs on the MUSE dataset, and does particularly
well when the embedding spaces don't appear to be isometric. In addition, we
also show that adding supervision stabilizes the learning procedure, and is
effective even with minimal supervision.
| 2019 | Computation and Language |
Memory limitations are hidden in grammar | The ability to produce and understand an unlimited number of different
sentences is a hallmark of human language. Linguists have sought to define the
essence of this generative capacity using formal grammars that describe the
syntactic dependencies between constituents, independent of the computational
limitations of the human brain. Here, we evaluate this independence assumption
by sampling sentences uniformly from the space of possible syntactic
structures. We find that the average dependency distance between syntactically
related words, a proxy for memory limitations, is less than expected by chance
in a collection of state-of-the-art classes of dependency grammars. Our
findings indicate that memory limitations have permeated grammatical
descriptions, suggesting that it may be impossible to build a parsimonious
theory of human linguistic productivity independent of non-linguistic cognitive
constraints.
| 2022 | Computation and Language |
Align, Mask and Select: A Simple Method for Incorporating Commonsense
Knowledge into Language Representation Models | The state-of-the-art pre-trained language representation models, such as
Bidirectional Encoder Representations from Transformers (BERT), rarely
incorporate commonsense knowledge or other knowledge explicitly. We propose a
pre-training approach for incorporating commonsense knowledge into language
representation models. We construct a commonsense-related multi-choice question
answering dataset for pre-training a neural language representation model. The
dataset is created automatically by our proposed "align, mask, and select"
(AMS) method. We also investigate different pre-training tasks. Experimental
results demonstrate that pre-training models using the proposed approach
followed by fine-tuning achieve significant improvements over previous
state-of-the-art models on two commonsense-related benchmarks, including
CommonsenseQA and Winograd Schema Challenge. We also observe that fine-tuned
models after the proposed pre-training approach maintain comparable performance
on other NLP tasks, such as sentence classification and natural language
inference tasks, compared to the original BERT models. These results verify
that the proposed approach, while significantly improving commonsense-related
NLP tasks, does not degrade the general language representation capabilities.
| 2020 | Computation and Language |
Fast End-to-End Wikification | Wikification of large corpora is beneficial for various NLP applications.
Existing methods focus on quality of results rather than run-time, and are
therefore infeasible for large data. Here, we introduce RedW, a run-time
oriented Wikification solution, based on Wikipedia redirects, that can Wikify
massive corpora with competitive performance. We further propose an efficient
method for estimating RedW confidence, opening the door for applying more
demanding methods only on top of RedW lower-confidence results. Our
experimental results support the validity of the proposed approach.
| 2019 | Computation and Language |
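A simplified sketch in the spirit of the redirect-based approach described above (RedW itself may differ): redirect titles and page titles are indexed to canonical pages, and the longest matching span starting at each position is linked.

```python
def build_index(redirects, titles):
    """redirects: dict alias -> canonical title; titles: iterable of page titles."""
    index = {t.lower(): t for t in titles}
    index.update({a.lower(): c for a, c in redirects.items()})
    return index

def wikify(tokens, index, max_span=5):
    """Greedily link the longest surface form starting at each position."""
    links, i = [], 0
    while i < len(tokens):
        for j in range(min(len(tokens), i + max_span), i, -1):  # longest first
            surface = " ".join(tokens[i:j]).lower()
            if surface in index:
                links.append((i, j, index[surface]))  # (start, end, page title)
                i = j
                break
        else:
            i += 1
    return links
```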
Style Transfer for Texts: Retrain, Report Errors, Compare with Rewrites | This paper shows that standard assessment methodology for style transfer has
several significant problems. First, the standard metrics for style accuracy
and semantics preservation vary significantly on different re-runs. Therefore
one has to report error margins for the obtained results. Second, starting with
certain values of bilingual evaluation understudy (BLEU) between input and
output and accuracy of the sentiment transfer, the optimization of these two
standard metrics diverges from the intuitive goal of the style transfer task.
Finally, due to the nature of the task itself, there is a specific dependence
between these two metrics that could be easily manipulated. Under these
circumstances, we suggest taking BLEU between input and human-written
reformulations into consideration for benchmarks. We also propose three new
architectures that outperform state of the art in terms of this metric.
| 2019 | Computation and Language |
Are You for Real? Detecting Identity Fraud via Dialogue Interactions | Identity fraud detection is of great importance in many real-world scenarios
such as the financial industry. However, few studies addressed this problem
before. In this paper, we focus on identity fraud detection in loan
applications and propose to solve this problem with a novel interactive
dialogue system which consists of two modules. One is the knowledge graph (KG)
constructor organizing the personal information for each loan applicant. The
other is structured dialogue management that can dynamically generate a series
of questions based on the personal KG to ask the applicants and determine their
identity states. We also present a heuristic user simulator based on problem
analysis to evaluate our method. Experiments have shown that the trainable
dialogue system can effectively detect fraudsters, and achieve higher
recognition accuracy compared with rule-based systems. Furthermore, our learned
dialogue strategies are interpretable and flexible, which can help promote
real-world applications.
| 2019 | Computation and Language |
Fine-grained Sentiment Analysis with Faithful Attention | While the general task of textual sentiment classification has been widely
studied, much less research looks specifically at sentiment between a specified
source and target. To tackle this problem, we experimented with a
state-of-the-art relation extraction model. Surprisingly, we found that despite
reasonable performance, the model's attention was often systematically
misaligned with the words that contribute to sentiment. Thus, we directly
trained the model's attention with human rationales and improved our model
performance by a robust 4~8 points on all tasks we defined on our data sets. We
also present a rigorous analysis of the model's attention, both trained and
untrained, using novel and intuitive metrics. Our results show that untrained
attention does not provide faithful explanations; however, trained attention
with concisely annotated human rationales not only increases performance, but
also brings faithful explanations. Encouragingly, a small amount of annotated
human rationales suffice to correct the attention in our task.
| 2019 | Computation and Language |
Automated email Generation for Targeted Attacks using Natural Language | With an increasing number of malicious attacks, the number of people and
organizations falling prey to social engineering attacks is proliferating.
Despite considerable research in mitigation systems, attackers continually
improve their modus operandi by using sophisticated machine learning, natural
language processing techniques with an intent to launch successful targeted
attacks aimed at deceiving detection mechanisms as well as the victims. We
propose a system for advanced email masquerading attacks using Natural Language
Generation (NLG) techniques. Using legitimate as well as an influx of varying
malicious content, the proposed deep learning system generates \textit{fake}
emails with malicious content, customized depending on the attacker's intent.
The system leverages Recurrent Neural Networks (RNNs) for automated text
generation. We also focus on the performance of the generated emails in
defeating statistical detectors, and compare and analyze the emails using a
proposed baseline.
| 2019 | Computation and Language |
Message Passing for Complex Question Answering over Knowledge Graphs | Question answering over knowledge graphs (KGQA) has evolved from simple
single-fact questions to complex questions that require graph traversal and
aggregation. We propose a novel approach for complex KGQA that uses
unsupervised message passing, which propagates confidence scores obtained by
parsing an input question and matching terms in the knowledge graph to a set of
possible answers. First, we identify entity, relationship, and class names
mentioned in a natural language question, and map these to their counterparts
in the graph. Then, the confidence scores of these mappings propagate through
the graph structure to locate the answer entities. Finally, these are
aggregated depending on the identified question type. This approach can be
efficiently implemented as a series of sparse matrix multiplications mimicking
joins over small local subgraphs. Our evaluation results show that the proposed
approach outperforms the state-of-the-art on the LC-QuAD benchmark. Moreover,
we show that the performance of the approach depends only on the quality of the
question interpretation results, i.e., given a correct relevance score
distribution, our approach always produces a correct answer ranking. Our error
analysis reveals correct answers missing from the benchmark dataset and
inconsistencies in the DBpedia knowledge graph. Finally, we provide a
comprehensive evaluation of the proposed approach accompanied with an ablation
study and an error analysis, which showcase the pitfalls for each of the
question answering components in more detail.
| 2019 | Computation and Language |
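An illustrative sketch (not the authors' implementation) of score propagation via sparse matrix products as the abstract describes: relation-specific adjacency matrices carry confidence from matched entities toward candidate answer entities.

```python
import numpy as np
from scipy.sparse import csr_matrix  # adjacency matrices are assumed sparse

def propagate(entity_scores, relation_scores, adjacency):
    """entity_scores: (n_entities,) confidences from entity matching;
    relation_scores: dict relation -> confidence from relation matching;
    adjacency: dict relation -> csr_matrix of shape (n_entities, n_entities)."""
    answer_scores = np.zeros_like(entity_scores)
    for rel, conf in relation_scores.items():
        if rel in adjacency:
            # One sparse matrix-vector product per matched relation: scores
            # flow from matched entities to the entities they are linked to.
            answer_scores += conf * adjacency[rel].T.dot(entity_scores)
    return answer_scores  # aggregated score per candidate answer entity
```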
Neural Architectures for Nested NER through Linearization | We propose two neural network architectures for nested named entity
recognition (NER), a setting in which named entities may overlap and also be
labeled with more than one label. We encode the nested labels using a
linearized scheme. In our first proposed approach, the nested labels are
modeled as multilabels corresponding to the Cartesian product of the nested
labels in a standard LSTM-CRF architecture. In the second one, the nested NER
is viewed as a sequence-to-sequence problem, in which the input sequence
consists of the tokens and output sequence of the labels, using hard attention
on the word whose label is being predicted. The proposed methods outperform the
nested NER state of the art on four corpora: ACE-2004, ACE-2005, GENIA and
Czech CNEC. We also enrich our architectures with the recently published
contextual embeddings: ELMo, BERT and Flair, reaching further improvements for
the four nested entity corpora. In addition, we report flat NER
state-of-the-art results for CoNLL-2002 Dutch and Spanish and for CoNLL-2003
English.
| 2019 | Computation and Language |
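A tiny illustration of the first (multilabel) encoding mentioned above: every token receives the concatenation of the tags of all entities covering it, so a standard LSTM-CRF can treat each concatenation as one flat label. The tag scheme and example are illustrative, not the paper's exact linearization.

```python
def multilabel_encoding(tokens, entities):
    """entities: list of (start, end, type) spans, end exclusive; spans may nest."""
    labels = [[] for _ in tokens]
    for start, end, etype in sorted(entities):
        for i in range(start, end):
            if end - start == 1:
                pos = "U"          # unit-length entity
            elif i == start:
                pos = "B"
            elif i == end - 1:
                pos = "L"
            else:
                pos = "I"
            labels[i].append(f"{pos}-{etype}")
    return ["|".join(tags) if tags else "O" for tags in labels]

# multilabel_encoding(["in", "the", "United", "States"],
#                     [(2, 4, "GPE"), (1, 4, "LOC")])
# -> ["O", "B-LOC", "I-LOC|B-GPE", "L-LOC|L-GPE"]
```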
UDPipe at SIGMORPHON 2019: Contextualized Embeddings, Regularization
with Morphological Categories, Corpora Merging | We present our contribution to the SIGMORPHON 2019 Shared Task:
Crosslinguality and Context in Morphology, Task 2: contextual morphological
analysis and lemmatization.
We submitted a modification of UDPipe 2.0, one of the best-performing systems
of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal
Dependencies, and an overall winner of the 2018 Shared Task on Extrinsic
Parser Evaluation.
As our first improvement, we use the pretrained contextualized embeddings
(BERT) as additional inputs to the network; secondly, we use individual
morphological features as regularization; and finally, we merge the selected
corpora of the same language.
In the lemmatization task, our system exceeds all the submitted systems by a
wide margin with lemmatization accuracy 95.78 (second best was 95.00, third
94.46). In the morphological analysis, our system placed a close second: our
morphological analysis accuracy was 93.19, the winning system's 93.23.
| 2019 | Computation and Language |
Encoder-Agnostic Adaptation for Conditional Language Generation | Large pretrained language models have changed the way researchers approach
discriminative natural language understanding tasks, leading to the dominance
of approaches that adapt a pretrained model for arbitrary downstream tasks.
However, it is an open question how to use similar techniques for language
generation. Early results in the encoder-agnostic setting have been mostly
negative. In this work we explore methods for adapting a pretrained language
model to arbitrary conditional input. We observe that pretrained transformer
models are sensitive to large parameter changes during tuning. We therefore
propose an adaptation that directly injects arbitrary conditioning into self
attention, an approach we call pseudo self attention. Through experiments on
four diverse conditional text generation tasks we show that this
encoder-agnostic technique outperforms strong baselines, produces coherent
generations, and is data efficient.
| 2019 | Computation and Language |
Why So Down? The Role of Negative (and Positive) Pointwise Mutual
Information in Distributional Semantics | In distributional semantics, the pointwise mutual information
($\mathit{PMI}$) weighting of the cooccurrence matrix performs far better than
raw counts. There is, however, an issue with unobserved pair cooccurrences as
$\mathit{PMI}$ goes to negative infinity. This problem is aggravated by
unreliable statistics from finite corpora which lead to a large number of such
pairs. A common practice is to clip negative $\mathit{PMI}$
($\mathit{\texttt{-} PMI}$) at $0$, also known as Positive $\mathit{PMI}$
($\mathit{PPMI}$). In this paper, we investigate alternative ways of dealing
with $\mathit{\texttt{-} PMI}$ and, more importantly, study the role that
negative information plays in the performance of a low-rank, weighted
factorization of different $\mathit{PMI}$ matrices. Using various semantic and
syntactic tasks as probes into models which use either negative or positive
$\mathit{PMI}$ (or both), we find that most of the encoded semantics and syntax
come from positive $\mathit{PMI}$, in contrast to $\mathit{\texttt{-} PMI}$
which contributes almost exclusively syntactic information. Our findings deepen
our understanding of distributional semantics, while also introducing novel
$PMI$ variants and grounding the popular $PPMI$ measure.
| 2019 | Computation and Language |
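For reference, the standard definitions behind the discussion above, as a small sketch: $PMI(w,c)=\log\frac{p(w,c)}{p(w)\,p(c)}$, and PPMI clips negatives at zero. Unobserved pairs, which would give negative infinity, are simply masked here; how to treat them is exactly what the paper studies.

```python
import numpy as np

def pmi_matrix(counts, positive=True):
    """counts: np.ndarray (n_words, n_contexts) of co-occurrence counts."""
    total = counts.sum()
    p_wc = counts / total
    p_w = p_wc.sum(axis=1, keepdims=True)
    p_c = p_wc.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_wc / (p_w * p_c))
    pmi[counts == 0] = 0.0          # unobserved pairs: mask the -inf entries
    if positive:
        pmi = np.maximum(pmi, 0.0)  # PPMI: clip negative PMI at zero
    return pmi
```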
The Natural Selection of Words: Finding the Features of Fitness | We introduce a dataset for studying the evolution of words, constructed from
WordNet and the Google Books Ngram Corpus. The dataset tracks the evolution of
4,000 synonym sets (synsets), containing 9,000 English words, from 1800 AD to
2000 AD. We present a supervised learning algorithm that is able to predict the
future leader of a synset: the word in the synset that will have the highest
frequency. The algorithm uses features based on a word's length, the characters
in the word, and the historical frequencies of the word. It can predict change
of leadership (including the identity of the new leader) fifty years in the
future, with an F-score considerably above random guessing. Analysis of the
learned models provides insight into the causes of change in the leader of a
synset. The algorithm confirms observations linguists have made, such as the
trend to replace the -ise suffix with -ize, the rivalry between the -ity and
-ness suffixes, and the struggle between economy (shorter words are easier to
remember and to write) and clarity (longer words are more distinctive and less
likely to be confused with one another). The results indicate that integration
of the Google Books Ngram Corpus with WordNet has significant potential for
improving our understanding of how language evolves.
| 2019 | Computation and Language |
Universal Adversarial Triggers for Attacking and Analyzing NLP | Adversarial examples highlight model vulnerabilities and are useful for
evaluation and interpretation. We define universal adversarial triggers:
input-agnostic sequences of tokens that trigger a model to produce a specific
prediction when concatenated to any input from a dataset. We propose a
gradient-guided search over tokens which finds short trigger sequences (e.g.,
one word for classification and four words for language modeling) that
successfully trigger the target prediction. For example, triggers cause SNLI
entailment accuracy to drop from 89.94% to 0.55%, 72% of "why" questions in
SQuAD to be answered "to kill american people", and the GPT-2 language model to
spew racist output even when conditioned on non-racial contexts. Furthermore,
although the triggers are optimized using white-box access to a specific model,
they transfer to other models for all tasks we consider. Finally, since
triggers are input-agnostic, they provide an analysis of global model behavior.
For instance, they confirm that SNLI models exploit dataset biases and help to
diagnose heuristics learned by reading comprehension models.
| 2021 | Computation and Language |
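A condensed sketch of the gradient-guided token search described above, under assumptions about the model interface (it must expose the gradient of the loss with respect to the trigger token embeddings); the real attack additionally uses beam search and repeated iterations over the trigger positions.

```python
import torch

def hotflip_candidates(grad_at_trigger, embedding_matrix, k=10):
    """grad_at_trigger: (trigger_len, dim) gradient of the loss w.r.t. the
    current trigger token embeddings; embedding_matrix: (vocab_size, dim).
    Returns, per trigger position, the k token ids whose substitution most
    decreases the loss under a first-order approximation."""
    # Swapping token v in at position i changes the loss by roughly
    # (e_v - e_current) . grad_i, so we rank tokens by e_v . grad_i.
    scores = grad_at_trigger @ embedding_matrix.T      # (trigger_len, vocab)
    return torch.topk(-scores, k, dim=1).indices       # most loss-decreasing
```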
Teacher-Student Framework Enhanced Multi-domain Dialogue Generation | Dialogue systems dealing with multi-domain tasks are in high demand. How to
record the state remains a key problem in a task-oriented dialogue system.
Normally we use human-defined features as dialogue states and apply a state
tracker to extract these features. However, the performance of such a system is
limited by the error propagation of a state tracker. In this paper, we propose
a dialogue generation model that needs no external state trackers and still
benefits from human-labeled semantic data. By using a teacher-student
framework, several teacher models are first trained in their individual
domains, learning dialogue policies from labeled states. The learned
knowledge and experience are then merged and transferred to a universal student
model, which takes raw utterances as its input. Experiments show that the
dialogue system trained under our framework outperforms one that uses a belief
tracker.
| 2020 | Computation and Language |
CBOWRA: A Representation Learning Approach for Medication Anomaly
Detection | Electronic health record is an important source for clinical researches and
applications, and errors inevitably occur in the data, which could lead to
severe damage to both patients and hospital services. One such error is a
mismatch between diagnosis and prescription, which we refer to as 'medication
anomaly' in this paper; clinicians used to identify and correct such errors
manually. With the development of machine learning techniques, researchers are able
to train specific model for the task, but the process still requires expert
knowledge to construct proper features, and few semantic relations are
considered. In this paper, we propose a simple, yet effective detection method
that tackles the problem by detecting the semantic inconsistency between
diagnoses and prescriptions. Unlike traditional outlier or anomaly detection,
the scheme uses continuous bag of words to construct the semantic connection
between specific central words and their surrounding context. The detection of
medication anomaly is transformed into identifying the least possible central
word based on given context. To help distinguish the anomaly from normal
context, we also incorporate a ranking accumulation strategy. The experiments
were conducted on two real hospital electronic medical record datasets, and the top-N
accuracy of the proposed method increased by 3.91 to 10.91% and 0.68 to 2.13%
on the two datasets, respectively, which is highly competitive with other traditional
machine learning-based approaches.
| 2019 | Computation and Language |
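A schematic sketch of the detection idea summarized above: diagnosis context embeddings are averaged into a CBOW-style context vector, candidate central words (medications) are ranked against it, and a prescription with a suspiciously low rank is flagged. The embedding dictionaries are placeholders, not the paper's artifacts, and the ranking-accumulation step is omitted.

```python
import numpy as np

def anomaly_rank(context_words, central_word, in_emb, out_emb):
    """in_emb / out_emb: dicts word -> np.ndarray (CBOW input/output embeddings,
    placeholders for whatever embeddings are trained on the records)."""
    ctx = np.mean([in_emb[w] for w in context_words if w in in_emb], axis=0)
    scores = {w: float(np.dot(ctx, v)) for w, v in out_emb.items()}
    ranking = sorted(scores, key=scores.get, reverse=True)
    # A large rank means the prescribed item is an unlikely "central word"
    # for this diagnostic context, i.e. a potential medication anomaly.
    return ranking.index(central_word)
```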
Discriminative Topic Mining via Category-Name Guided Text Embedding | Mining a set of meaningful and distinctive topics automatically from massive
text corpora has broad applications. Existing topic models, however, typically
work in a purely unsupervised way, which often generate topics that do not fit
users' particular needs and yield suboptimal performance on downstream tasks.
We propose a new task, discriminative topic mining, which leverages a set of
user-provided category names to mine discriminative topics from text corpora.
This new task not only helps a user understand clearly and distinctively the
topics he/she is most interested in, but also directly benefits keyword-driven
classification tasks. We develop CatE, a novel category-name guided text
embedding method for discriminative topic mining, which effectively leverages
minimal user guidance to learn a discriminative embedding space and discover
category representative terms in an iterative manner. We conduct a
comprehensive set of experiments to show that CatE mines high-quality set of
topics guided by category names only, and benefits a variety of downstream
applications including weakly-supervised classification and lexical entailment
direction identification.
| 2020 | Computation and Language |
Latent-Variable Non-Autoregressive Neural Machine Translation with
Deterministic Inference Using a Delta Posterior | Although neural machine translation models have reached high translation quality,
the autoregressive nature makes inference difficult to parallelize and leads to
high translation latency. Inspired by recent refinement-based approaches, we
propose LaNMT, a latent-variable non-autoregressive model with continuous
latent variables and deterministic inference procedure. In contrast to existing
approaches, we use a deterministic inference algorithm to find the target
sequence that maximizes the lowerbound to the log-probability. During
inference, the length of translation automatically adapts itself. Our
experiments show that the lowerbound can be greatly increased by running the
inference algorithm, resulting in significantly improved translation quality.
Our proposed model closes the performance gap between non-autoregressive and
autoregressive approaches on ASPEC Ja-En dataset with 8.6x faster decoding. On
WMT'14 En-De dataset, our model narrows the gap with autoregressive baseline to
2.0 BLEU points with 12.5x speedup. By decoding multiple initial latent
variables in parallel and rescoring with a teacher model, the proposed model
further brings the gap down to 1.0 BLEU point on WMT'14 En-De task with 6.8x
speedup.
| 2019 | Computation and Language |
ARAML: A Stable Adversarial Training Framework for Text Generation | Most of the existing generative adversarial networks (GAN) for text
generation suffer from the instability of reinforcement learning training
algorithms such as policy gradient, leading to unstable performance. To tackle
this problem, we propose a novel framework called Adversarial Reward Augmented
Maximum Likelihood (ARAML). During adversarial training, the discriminator
assigns rewards to samples which are acquired from a stationary distribution
near the data rather than the generator's distribution. The generator is
optimized with maximum likelihood estimation augmented by the discriminator's
rewards instead of policy gradient. Experiments show that our model can
outperform state-of-the-art text GANs with a more stable training process.
| 2019 | Computation and Language |
CA-EHN: Commonsense Analogy from E-HowNet | Embedding commonsense knowledge is crucial for end-to-end models to
generalize inference beyond training corpora. However, existing word analogy
datasets have tended to be handcrafted, involving permutations of hundreds of
words with only dozens of pre-defined relations, mostly morphological relations
and named entities. In this work, we model commonsense knowledge down to
word-level analogical reasoning by leveraging E-HowNet, an ontology that
annotates 88K Chinese words with their structured sense definitions and English
translations. We present CA-EHN, the first commonsense word analogy dataset
containing 90,505 analogies covering 5,656 words and 763 relations. Experiments
show that CA-EHN stands out as a great indicator of how well word
representations embed commonsense knowledge. The dataset is publicly available
at https://github.com/ckiplab/CA-EHN.
| 2020 | Computation and Language |
Prosodic Phrase Alignment for Machine Dubbing | Dubbing is a type of audiovisual translation where dialogues are translated
and enacted so that they give the impression that the media is in the target
language. It requires a careful alignment of dubbed recordings with the lip
movements of performers in order to achieve visual coherence. In this paper, we
deal with the specific problem of prosodic phrase synchronization within the
framework of machine dubbing. Our methodology exploits the attention mechanism
output in neural machine translation to find plausible phrasing for the
translated dialogue lines and then uses them to condition their synthesis. Our
initial work in this field records a speech rate ratio comparable to professional
dubbing translation, and an improvement in terms of lip-syncing of long dialogue
lines.
| 2019 | Computation and Language |
GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge | Word Sense Disambiguation (WSD) aims to find the exact sense of an ambiguous
word in a particular context. Traditional supervised methods rarely take into
consideration the lexical resources like WordNet, which are widely utilized in
knowledge-based methods. Recent studies have shown the effectiveness of
incorporating gloss (sense definition) into neural networks for WSD. However,
compared with traditional word expert supervised methods, they have not
achieved much improvement. In this paper, we focus on how to better leverage
gloss knowledge in a supervised neural WSD system. We construct context-gloss
pairs and propose three BERT-based models for WSD. We fine-tune the pre-trained
BERT model on SemCor3.0 training corpus and the experimental results on several
English all-words WSD benchmark datasets show that our approach outperforms the
state-of-the-art systems.
| 2020 | Computation and Language |
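A minimal sketch of the context-gloss pairing described above: each (context, gloss) pair is scored by a BERT sentence-pair classifier and the best-scoring gloss wins. The checkpoint name is a placeholder; in practice the classifier must first be fine-tuned on SemCor-style context-gloss pairs before its scores are meaningful.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Placeholder checkpoint; fine-tune the classification head on context-gloss
# pairs (label 1 = gloss matches the intended sense) before using it.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)

def disambiguate(context, glosses):
    """context: sentence containing the target word; glosses: list of
    (sense_id, gloss text) candidates. Returns the best-scoring sense."""
    best_sense, best_score = None, float("-inf")
    for sense_id, gloss in glosses:
        inputs = tokenizer(context, gloss, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        score = logits[0, 1].item()   # logit of the "gloss matches" class
        if score > best_score:
            best_sense, best_score = sense_id, score
    return best_sense
```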
Density Matrices with Metric for Derivational Ambiguity | Recent work on vector-based compositional natural language semantics has
proposed the use of density matrices to model lexical ambiguity and (graded)
entailment (e.g. Piedeleu et al 2015, Bankova et al 2019, Sadrzadeh et al
2018). Ambiguous word meanings, in this work, are represented as mixed states,
and the compositional interpretation of phrases out of their constituent parts
takes the form of a strongly monoidal functor sending the derivational
morphisms of a pregroup syntax to linear maps in FdHilb. Our aims in this paper
are threefold. Firstly, we replace the pregroup front end by a Lambek
categorial grammar with directional implications expressing a word's
selectional requirements. By the Curry-Howard correspondence, the derivations
of the grammar's type logic are associated with terms of the (ordered) linear
lambda calculus; these terms can be read as programs for compositional meaning
assembly with density matrices as the target semantic spaces. Secondly, we
extend on the existing literature and introduce a symmetric, nondegenerate
bilinear form called a "metric" that defines a canonical isomorphism between a
vector space and its dual, allowing us to keep a distinction between left and
right implication. Thirdly, we use this metric to define density matrix spaces
in a directional form, modeling the ubiquitous derivational ambiguity of
natural language syntax, and show how this allows an integrated treatment of
lexical and derivational forms of ambiguity controlled at the level of the
interpretation.
| 2020 | Computation and Language |
Deep Contextualized Word Embeddings in Transition-Based and Graph-Based
Dependency Parsing -- A Tale of Two Parsers Revisited | Transition-based and graph-based dependency parsers have previously been
shown to have complementary strengths and weaknesses: transition-based parsers
exploit rich structural features but suffer from error propagation, while
graph-based parsers benefit from global optimization but have restricted
feature scope. In this paper, we show that, even though some details of the
picture have changed after the switch to neural networks and continuous
representations, the basic trade-off between rich features and global
optimization remains essentially the same. Moreover, we show that deep
contextualized word embeddings, which allow parsers to pack information about
global sentence structure into local feature representations, benefit
transition-based parsers more than graph-based parsers, making the two
approaches virtually equivalent in terms of both accuracy and error profile. We
argue that the reason is that these representations help prevent search errors
and thereby allow transition-based parsers to better exploit their inherent
strength of making accurate local decisions. We support this explanation by an
error analysis of parsing experiments on 13 languages.
| 2,019 | Computation and Language |
Evaluating Contextualized Embeddings on 54 Languages in POS Tagging,
Lemmatization and Dependency Parsing | We present an extensive evaluation of three recently proposed methods for
contextualized embeddings on 89 corpora in 54 languages of the Universal
Dependencies 2.3 in three tasks: POS tagging, lemmatization, and dependency
parsing. Employing BERT, Flair, and ELMo as pretrained embedding inputs in a
strong baseline of UDPipe 2.0, one of the best-performing systems of the CoNLL
2018 Shared Task and an overall winner of the EPE 2018, we present a one-to-one
comparison of the three contextualized word embedding methods, as well as a
comparison with word2vec-like pretrained embeddings and with end-to-end
character-level word embeddings. We report state-of-the-art results in all
three tasks as compared to results on UD 2.2 in the CoNLL 2018 Shared Task.
| 2,019 | Computation and Language |
LXMERT: Learning Cross-Modality Encoder Representations from
Transformers | Vision-and-language reasoning requires an understanding of visual concepts,
language semantics, and, most importantly, the alignment and relationships
between these two modalities. We thus propose the LXMERT (Learning
Cross-Modality Encoder Representations from Transformers) framework to learn
these vision-and-language connections. In LXMERT, we build a large-scale
Transformer model that consists of three encoders: an object relationship
encoder, a language encoder, and a cross-modality encoder. Next, to endow our
model with the capability of connecting vision and language semantics, we
pre-train the model with large amounts of image-and-sentence pairs, via five
diverse representative pre-training tasks: masked language modeling, masked
object prediction (feature regression and label classification), cross-modality
matching, and image question answering. These tasks help in learning both
intra-modality and cross-modality relationships. After fine-tuning from our
pre-trained parameters, our model achieves the state-of-the-art results on two
visual question answering datasets (i.e., VQA and GQA). We also show the
generalizability of our pre-trained cross-modality model by adapting it to a
challenging visual-reasoning task, NLVR2, and improve the previous best result
by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies
to prove that both our novel model components and pre-training strategies
significantly contribute to our strong results; and also present several
attention visualizations for the different encoders. Code and pre-trained
models publicly available at: https://github.com/airsplay/lxmert
| 2,019 | Computation and Language |
Controversy in Context | With the growing interest in social applications of Natural Language
Processing and Computational Argumentation, a natural question is how
controversial a given concept is. Prior works relied on Wikipedia's metadata
and on content analysis of the articles pertaining to a concept in question.
Here we show that the immediate textual context of a concept is strongly
indicative of this property, and, using simple and language-independent
machine-learning tools, we leverage this observation to achieve
state-of-the-art results in controversiality prediction. In addition, we
analyze and make available a new dataset of concepts labeled for
controversiality. It is significantly larger than existing datasets, and grades
concepts on a 0-10 scale, rather than treating controversiality as a binary
label.
| 2,019 | Computation and Language |
Learning document embeddings along with their uncertainties | The majority of text modelling techniques yield only point estimates of
document embeddings and lack in capturing the uncertainty of the estimates.
These uncertainties give a notion of how well the embeddings represent a
document. We present Bayesian subspace multinomial model (Bayesian SMM), a
generative log-linear model that learns to represent documents in the form of
Gaussian distributions, thereby encoding the uncertainty in its covariance.
Additionally, in the proposed Bayesian SMM, we address a commonly encountered
problem of intractability that appears during variational inference in
mixed-logit models. We also present a generative Gaussian linear classifier for
topic identification that exploits the uncertainty in document embeddings. Our
intrinsic evaluation using perplexity measure shows that the proposed Bayesian
SMM fits the data better as compared to the state-of-the-art neural variational
document model on Fisher speech and 20Newsgroups text corpora. Our topic
identification experiments show that the proposed systems are robust to
over-fitting on unseen test data. The topic ID results show that the proposed
model outperforms state-of-the-art unsupervised topic models and achieves comparable results to the state-of-the-art fully supervised discriminative
models.
| 2,020 | Computation and Language |
MoEL: Mixture of Empathetic Listeners | Previous research on empathetic dialogue systems has mostly focused on
generating responses given certain emotions. However, being empathetic not only
requires the ability of generating emotional responses, but more importantly,
requires the understanding of user emotions and replying appropriately. In this
paper, we propose a novel end-to-end approach for modeling empathy in dialogue
systems: Mixture of Empathetic Listeners (MoEL). Our model first captures the
user emotions and outputs an emotion distribution. Based on this, MoEL will
softly combine the output states of the appropriate Listener(s), which are each
optimized to react to certain emotions, and generate an empathetic response.
Human evaluations on the empathetic-dialogues (Rashkin et al., 2018) dataset confirm that MoEL outperforms the multitask training baseline in terms of empathy,
relevance, and fluency. Furthermore, the case study on generated responses of
different Listeners shows high interpretability of our model.
| 2,019 | Computation and Language |
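The soft combination step described in the MoEL abstract can be pictured with a few lines of tensor code; the shapes and the simple weighted sum below are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: an emotion distribution predicted from the dialogue context
# weights the output states of per-emotion listener decoders.
import torch

n_emotions, hidden = 32, 300
emotion_logits = torch.randn(n_emotions)            # from an emotion tracker (assumed)
listener_states = torch.randn(n_emotions, hidden)   # one output state per listener (assumed)

weights = torch.softmax(emotion_logits, dim=-1)     # emotion distribution
combined = (weights.unsqueeze(-1) * listener_states).sum(dim=0)
print(combined.shape)  # torch.Size([300]); this state feeds the shared response decoder
```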
Improving Neural Machine Translation with Pre-trained Representation | Monolingual data has been demonstrated to be helpful in improving the
translation quality of neural machine translation (NMT). Current methods, however, are limited to word-level knowledge, such as generating synthetic parallel data or extracting information from word embeddings. In contrast, the power of sentence-level contextual knowledge, which is more complex and diverse and plays an important role in natural language generation, has not been fully exploited. In this paper, we propose a novel structure that leverages
monolingual data to acquire sentence-level contextual representations. Then, we
design a framework for integrating both source and target sentence-level
representations into NMT model to improve the translation quality. Experimental
results on Chinese-English, German-English machine translation tasks show that
our proposed model achieves improvement over strong Transformer baselines,
while experiments on English-Turkish further demonstrate the effectiveness of
our approach in the low-resource scenario.
| 2,019 | Computation and Language |
Latent Relation Language Models | In this paper, we propose Latent Relation Language Models (LRLMs), a class of
language models that parameterizes the joint distribution over the words in a
document and the entities that occur therein via knowledge graph relations.
This model has a number of attractive properties: it not only improves language
modeling performance, but is also able to annotate the posterior probability of
entity spans for a given text through relations. Experiments demonstrate
empirical improvements over both a word-based baseline language model and a
previous approach that incorporates knowledge graph information. Qualitative
analysis further demonstrates the proposed model's ability to learn to predict
appropriate relations in context.
| 2,019 | Computation and Language |
Copy-Enhanced Heterogeneous Information Learning for Dialogue State
Tracking | Dialogue state tracking (DST) is an essential component in task-oriented
dialogue systems, which estimates user goals at every dialogue turn. However,
most previous approaches suffer from the following problems. Many discriminative models, especially end-to-end (E2E) models, struggle to extract unknown values that are not in the candidate ontology; previous generative models, which can extract unknown values from utterances, degrade performance because they ignore the semantic information of the pre-defined ontology. Besides, previous generative models usually need a hand-crafted list
to normalize the generated values. How to integrate the semantic information of
pre-defined ontology and dialogue text (heterogeneous texts) to generate
unknown values and improve performance becomes a severe challenge. In this
paper, we propose a Copy-Enhanced Heterogeneous Information Learning model with
multiple encoder-decoder for DST (CEDST), which can effectively generate all
possible values including unknown values by copying values from heterogeneous
texts. Meanwhile, CEDST can effectively decompose the large state space into
several small state spaces through multi-encoder, and employ multi-decoder to
make full use of the reduced spaces to generate values. This multi-encoder-decoder architecture significantly improves performance. Experiments show that CEDST
can achieve state-of-the-art results on two datasets and our constructed
datasets with many unknown values.
| 2,019 | Computation and Language |
Fine-tuning BERT for Joint Entity and Relation Extraction in Chinese
Medical Text | Entity and relation extraction is a necessary step in structuring medical text. However, the feature extraction ability of the bidirectional long short-term memory network used in existing models does not achieve the best effect. At
the same time, the language model has achieved excellent results in more and
more natural language processing tasks. In this paper, we present a focused
attention model for the joint entity and relation extraction task. Our model
integrates well-known BERT language model into joint learning through dynamic
range attention mechanism, thus improving the feature representation ability of
shared parameter layer. Experimental results on coronary angiography texts
collected from Shuguang Hospital show that the F1-score of named entity
recognition and relation classification tasks reach 96.89% and 88.51%, exceeding state-of-the-art methods by 1.65% and 1.22%, respectively.
| 2,019 | Computation and Language |
Restricted Recurrent Neural Networks | Recurrent Neural Network (RNN) and its variations such as Long Short-Term
Memory (LSTM) and Gated Recurrent Unit (GRU), have become standard building
blocks for learning online data of sequential nature in many research areas,
including natural language processing and speech data analysis. In this paper,
we present a new methodology to significantly reduce the number of parameters
in RNNs while maintaining performance that is comparable or even better than
classical RNNs. The new proposal, referred to as Restricted Recurrent Neural
Network (RRNN), restricts the weight matrices corresponding to the input data
and hidden states at each time step to share a large proportion of parameters.
The new architecture can be regarded as a compression of its classical
counterpart, but it does not require pre-training or sophisticated parameter
fine-tuning, both of which are major issues in most existing compression
techniques. Experiments on natural language modeling show that compared with
its classical counterpart, the restricted recurrent architecture generally
produces comparable results at about a 50% compression rate. In particular, the Restricted LSTM can outperform the classical RNN with an even smaller number of parameters.
| 2,020 | Computation and Language |
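To make the parameter-sharing idea in the Restricted RNN abstract concrete, here is a hedged sketch in which the input-to-hidden and hidden-to-hidden projections reuse one shared matrix plus small private blocks; the exact sharing scheme, sizes, and class name are assumptions for illustration only, not the paper's architecture.

```python
# Hedged sketch: both recurrent projections reuse `self.shared`, so the cell
# has roughly half the parameters of a classical RNN cell of the same size.
import torch
import torch.nn as nn

class RestrictedRNNCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, private: int = 32):
        super().__init__()
        assert input_size == hidden_size, "sketch assumes equal sizes for sharing"
        self.shared = nn.Linear(input_size, hidden_size - private, bias=False)
        self.x_private = nn.Linear(input_size, private, bias=False)
        self.h_private = nn.Linear(hidden_size, private, bias=False)
        self.bias = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x, h):
        # The large shared block is applied to both the input and the hidden state;
        # only a small slice of the projection is private to each.
        xh = torch.cat([self.shared(x), self.x_private(x)], dim=-1)
        hh = torch.cat([self.shared(h), self.h_private(h)], dim=-1)
        return torch.tanh(xh + hh + self.bias)

cell = RestrictedRNNCell(128, 128)
h = torch.zeros(4, 128)
for t in range(10):
    h = cell(torch.randn(4, 128), h)
print(h.shape)
```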
On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word
Embedding Learning | Cross-lingual word embeddings are vector representations of words in
different languages where words with similar meaning are represented by similar
vectors, regardless of the language. Recent developments which construct these
embeddings by aligning monolingual spaces have shown that accurate alignments
can be obtained with little or no supervision. However, the focus has been on a
particular controlled scenario for evaluation, and there is no strong evidence
on how current state-of-the-art systems would fare with noisy text or for
language pairs with major linguistic differences. In this paper we present an
extensive evaluation over multiple cross-lingual embedding models, analyzing
their strengths and limitations with respect to different variables such as
target language, training corpora and amount of supervision. Our conclusions
put in doubt the view that high-quality cross-lingual embeddings can always be
learned without much supervision.
| 2,020 | Computation and Language |
Predict Emoji Combination with Retrieval Strategy | As emojis are widely used in social media, people not only use an emoji to
express their emotions or mention things but also extend its usage to represent complicated emotions, concepts, or activities by combining multiple emojis. In this work, we study how an emoji combination, a consecutive emoji sequence, is
used like a new language. We propose a novel algorithm called Retrieval
Strategy to predict what emoji combination follows given a short text as
context. Our algorithm treats emoji combinations as phrases in a language, ranking sets of emoji combinations like retrieving words from a dictionary. We show that our algorithm substantially improves the F1 score from 0.141 to 0.204 on the emoji combination prediction task.
| 2,019 | Computation and Language |
Dialog State Tracking with Reinforced Data Augmentation | Neural dialog state trackers are generally limited due to the lack of
quantity and diversity of annotated training data. In this paper, we address
this difficulty by proposing a reinforcement learning (RL) based framework for
data augmentation that can generate high-quality data to improve the neural
state tracker. Specifically, we introduce a novel contextual bandit generator
to learn fine-grained augmentation policies that can generate new effective
instances by choosing suitable replacements for the specific context. Moreover,
by alternately learning between the generator and the state tracker, we can
keep refining the generative policies to generate more high-quality training
data for the neural state tracker. Experimental results on the WoZ and MultiWoZ
(restaurant) datasets demonstrate that the proposed framework significantly
improves the performance over the state-of-the-art models, especially with
limited training data.
| 2,019 | Computation and Language |
Improving Captioning for Low-Resource Languages by Cycle Consistency | Improving the captioning performance on low-resource languages by leveraging
English caption datasets has received increasing research interest in recent
years. Existing works mainly fall into two categories: translation-based and
alignment-based approaches. In this paper, we propose to combine the merits of
both approaches in one unified architecture. Specifically, we use a pre-trained
English caption model to generate high-quality English captions, and then take
both the image and generated English captions to generate low-resource language
captions. We improve the captioning performance by adding the cycle consistency
constraint on the cycle of image regions, English words, and low-resource
language words. Moreover, our architecture has a flexible design which enables
it to benefit from large monolingual English caption datasets. Experimental
results demonstrate that our approach outperforms the state-of-the-art methods
on common evaluation metrics. The attention visualization also shows that the
proposed approach really improves the fine-grained alignment between words and
image regions.
| 2,019 | Computation and Language |
A Multi-Turn Emotionally Engaging Dialog Model | Open-domain dialog systems (also known as chatbots) have increasingly drawn
attention in natural language processing. Some of the recent work aims at
incorporating affect information into sequence-to-sequence neural dialog
modeling, making the response emotionally richer, while others use hand-crafted
rules to determine the desired emotion response. However, they do not
explicitly learn the subtle emotional interactions captured in human dialogs.
In this paper, we propose a multi-turn dialog system that learns to generate emotional responses, something that so far only humans know how to do. Compared
with two baseline models, offline experiments show that our method performs the
best in perplexity scores. Further human evaluations confirm that our chatbot
can keep track of the conversation context and generate emotionally more
appropriate responses while performing equally well on grammar.
| 2,020 | Computation and Language |
Disentangling Latent Emotions of Word Embeddings on Complex Emotional
Narratives | Word embedding models such as GloVe are widely used in natural language
processing (NLP) research to convert words into vectors. Here, we provide a
preliminary guide to probe latent emotions in text through GloVe word vectors.
First, we trained a neural network model to predict continuous emotion valence
ratings by taking linguistic inputs from Stanford Emotional Narratives Dataset
(SEND). After interpreting the weights in the model, we found that only a few
dimensions of the word vectors contributed to expressing emotions in text, and
words were clustered on the basis of their emotional polarities. Furthermore,
we performed a linear transformation that projected high dimensional embedded
vectors into an emotion space. Based on NRC Emotion Lexicon (EmoLex), we
visualized the entanglement of emotions in the lexicon by using both projected
and raw GloVe word vectors. We showed that, in the proposed emotion space, we
were able to better disentangle emotions than using raw GloVe vectors alone. In
addition, we found that the sum vectors of different pairs of emotion words
successfully captured expressed human feelings in the EmoLex. For example, the
sum of two embedded word vectors expressing Joy and Trust, which together express Love, shared high similarity (similarity score .62) with the embedded vector expressing Optimism. On the contrary, this sum vector was dissimilar (similarity score -.19) to the embedded vector expressing Remorse. In
this paper, we argue that through the proposed emotion space, arithmetic of
emotions is preserved in the word vectors. The affective representation
uncovered in the emotion vector space could shed some light on how to help machines disentangle the emotions expressed in word embeddings.
| 2,019 | Computation and Language |
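The "arithmetic of emotions" claim above can be illustrated with a few lines of vector code; the snippet uses random stand-in vectors so it runs anywhere, and only with real GloVe vectors (and the projection into the proposed emotion space) would the similarity values resemble those reported.

```python
# Hedged sketch: compare the sum of the Joy and Trust vectors with the
# Optimism and Remorse vectors via cosine similarity.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# glove = {word: 300-d vector}; random stand-ins are used here so the snippet
# runs, but the numbers are only meaningful with real GloVe embeddings.
rng = np.random.default_rng(0)
glove = {w: rng.standard_normal(300) for w in ["joy", "trust", "optimism", "remorse"]}

love_like = glove["joy"] + glove["trust"]           # composite emotion vector
print("sim(joy+trust, optimism):", cosine(love_like, glove["optimism"]))
print("sim(joy+trust, remorse): ", cosine(love_like, glove["remorse"]))
```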
Replication of the Keyword Extraction part of the paper "'Without the
Clutter of Unimportant Words': Descriptive Keyphrases for Text Visualization" | "Keyword Extraction" refers to the task of automatically identifying the most
relevant and informative phrases in natural language text. As we are deluged
with large amounts of text data in many different forms and content - emails,
blogs, tweets, Facebook posts, academic papers, news articles - the task of
"making sense" of all this text by somehow summarizing them into a coherent
structure assumes paramount importance. Keyword extraction - a well-established
problem in Natural Language Processing - can help us here. In this report, we
construct and test three different hypotheses (all related to the task of
keyword extraction) that take us one step closer to understanding how to
meaningfully identify and extract "descriptive" keyphrases. The work reported
here was done as part of replicating the study by Chuang et al. [3].
| 2,019 | Computation and Language |
Rating for Parents: Predicting Children Suitability Rating for Movies
Based on Language of the Movies | Film culture has grown tremendously in recent years. The large number of streaming services makes films one of the most convenient forms of entertainment in today's world. Films can help us learn and inspire societal
change. But they can also negatively affect viewers. In this paper, our goal is
to predict the suitability of the movie content for children and young adults
based on scripts. The criterion that we use to measure suitability is the MPAA
rating that is specifically designed for this purpose. We propose an RNN based
architecture with attention that jointly models the genre and the emotions in
the script to predict the MPAA rating. We achieve a 78% weighted F1-score with a classification model that outperforms the traditional machine learning method by 6%.
| 2,019 | Computation and Language |
Empirical Evaluation of Multi-task Learning in Deep Neural Networks for
Natural Language Processing | Multi-Task Learning (MTL) aims at boosting the overall performance of each
individual task by leveraging useful information contained in multiple related
tasks. It has shown great success in natural language processing (NLP).
Currently, a number of MTL architectures and learning mechanisms have been proposed for various NLP tasks. However, there has been no systematic, in-depth exploration and comparison of different MTL architectures and learning mechanisms. In this paper, we conduct a thorough examination
of typical MTL methods on a broad range of representative NLP tasks. Our
primary goal is to understand the merits and demerits of existing MTL methods
in NLP tasks, thus devising new hybrid architectures intended to combine their
strengths.
| 2,020 | Computation and Language |
A Multi-level Neural Network for Implicit Causality Detection in Web
Texts | Mining causality from text is a complex and crucial natural language understanding task corresponding to human cognition. Existing approaches to this problem can be grouped into two primary categories: feature-engineering-based and neural-model-based methods. In this paper, we find that the former has incomplete coverage and inherent errors but provides prior knowledge, while the latter leverages context information but performs insufficient causal inference. To address these limitations, we propose a novel causality detection model named MCDN that explicitly models the causal reasoning process and, furthermore, exploits the advantages of both methods. Specifically, we adopt multi-head self-attention to acquire semantic features at the word level and develop the SCRN to infer causality at the segment level. To the best of our knowledge, this is the first time the Relation Network has been applied to causality tasks. The experimental results show that: 1) the proposed approach achieves strong performance on causality detection; 2) further analysis demonstrates the effectiveness and robustness of MCDN.
| 2,021 | Computation and Language |
Polly Want a Cracker: Analyzing Performance of Parroting on Paraphrase
Generation Datasets | Paraphrase generation is an interesting and challenging NLP task which has
numerous practical applications. In this paper, we analyze datasets commonly
used for paraphrase generation research, and show that simply parroting input
sentences surpasses state-of-the-art models in the literature when evaluated on
standard metrics. Our findings illustrate that a model could be seemingly adept
at generating paraphrases, despite only making trivial changes to the input
sentence or even none at all.
| 2,019 | Computation and Language |
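A toy version of the parroting baseline from the abstract above is easy to write down: the "model output" is simply the input sentence, scored against the reference paraphrases with BLEU. The example sentences are hypothetical, and the paper's exact metrics and datasets may differ.

```python
# Hedged sketch of the parroting baseline: copy the input and score it with BLEU.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# (input sentence, reference paraphrases) pairs -- hypothetical examples.
data = [
    ("how do i learn python quickly", ["what is the fastest way to learn python"]),
    ("is it healthy to skip breakfast", ["does skipping breakfast harm your health"]),
]

hypotheses = [src.split() for src, _ in data]                   # parrot the input
references = [[ref.split() for ref in refs] for _, refs in data]

score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(f"Parroting BLEU: {score:.3f}")
```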
Parsimonious Morpheme Segmentation with an Application to Enriching Word
Embeddings | Traditionally, many text-mining tasks treat individual word-tokens as the
finest meaningful semantic granularity. However, in many languages and
specialized corpora, words are composed by concatenating semantically
meaningful subword structures. Word-level analysis cannot leverage the semantic
information present in such subword structures. With regard to word embedding
techniques, this leads to not only poor embeddings for infrequent words in
long-tailed text corpora but also weak capabilities for handling
out-of-vocabulary words. In this paper we propose MorphMine for unsupervised
morpheme segmentation. MorphMine applies a parsimony criterion to
hierarchically segment words into the fewest number of morphemes at each level
of the hierarchy. This leads to longer shared morphemes at each level of
segmentation. Experiments show that MorphMine segments words in a variety of
languages into human-verified morphemes. Additionally, we experimentally
demonstrate that utilizing MorphMine morphemes to enrich word embeddings
consistently improves embedding quality on a variety of embedding
evaluations and a downstream language modeling task.
| 2,019 | Computation and Language |
PubLayNet: largest dataset ever for document layout analysis | Recognizing the layout of unstructured digital documents is an important step
when parsing the documents into structured machine-readable format for
downstream applications. Deep neural networks that are developed for computer
vision have been proven to be an effective method to analyze layout of document
images. However, document layout datasets that are currently publicly available
are several magnitudes smaller than established computing vision datasets.
Models have to be trained by transfer learning from a base model that is
pre-trained on a traditional computer vision dataset. In this paper, we develop
the PubLayNet dataset for document layout analysis by automatically matching
the XML representations and the content of over 1 million PDF articles that are
publicly available on PubMed Central. The size of the dataset is comparable to
established computer vision datasets, containing over 360 thousand document
images, where typical document layout elements are annotated. The experiments
demonstrate that deep neural networks trained on PubLayNet accurately recognize
the layout of scientific articles. The pre-trained models are also a more effective base model for transfer learning on a different document domain. We
release the dataset (https://github.com/ibm-aur-nlp/PubLayNet) to support
development and evaluation of more advanced models for document layout
analysis.
| 2,019 | Computation and Language |
Similarity Learning for Authorship Verification in Social Media | Authorship verification tries to answer the question if two documents with
unknown authors were written by the same author or not. A range of successful
technical approaches has been proposed for this task, many of which are based
on traditional linguistic features such as n-grams. These algorithms achieve
good results for certain types of written documents like books and novels.
Forensic authorship verification for social media, however, is a much more
challenging task since messages tend to be relatively short, with a large
variety of different genres and topics. At this point, traditional methods
based on features like n-grams have had limited success. In this work, we
propose a new neural network topology for similarity learning that
significantly improves the performance on the author verification task with
such challenging data sets.
| 2,019 | Computation and Language |
Representing text as abstract images enables image classifiers to also
simultaneously classify text | We introduce a novel method for converting text data into abstract image
representations, which allows image-based processing techniques (e.g. image
classification networks) to be applied to text-based comparison problems. We
apply the technique to entity disambiguation of inventor names in US patents.
The method involves converting text from each pairwise comparison between two
inventor name records into a 2D RGB (stacked) image representation. We then
train an image classification neural network to discriminate between such
pairwise comparison images, and use the trained network to label each pair of
records as either matched (same inventor) or non-matched (different inventors),
obtaining highly accurate results. Our new text-to-image representation method
could also be used more broadly for other NLP comparison problems, such as
disambiguation of academic publications, or for problems that require
simultaneous classification of both text and image datasets.
| 2,020 | Computation and Language |
GeoSQA: A Benchmark for Scenario-based Question Answering in the
Geography Domain at High School Level | Scenario-based question answering (SQA) has attracted increasing research
attention. It typically requires retrieving and integrating knowledge from
multiple sources, and applying general knowledge to a specific case described
by a scenario. SQA widely exists in the medical, geography, and legal
domains---both in practice and in the exams. In this paper, we introduce the
GeoSQA dataset. It consists of 1,981 scenarios and 4,110 multiple-choice
questions in the geography domain at high school level, where diagrams (e.g.,
maps, charts) have been manually annotated with natural language descriptions
to benefit NLP research. Benchmark results on a variety of state-of-the-art
methods for question answering, textual entailment, and reading comprehension
demonstrate the unique challenges presented by SQA for future research.
| 2,019 | Computation and Language |
Towards Better Understanding of Spontaneous Conversations: Overcoming
Automatic Speech Recognition Errors With Intent Recognition | In this paper, we present a method for correcting automatic speech
recognition (ASR) errors using a finite state transducer (FST) intent
recognition framework. Intent recognition is a powerful technique for dialog
flow management in turn-oriented, human-machine dialogs. This technique can
also be very useful in the context of human-human dialogs, though it serves a
different purpose of key insight extraction from conversations. We argue that
currently available intent recognition techniques are not applicable to
human-human dialogs due to the complex structure of turn-taking and various
disfluencies encountered in spontaneous conversations, exacerbated by speech
recognition errors and scarcity of domain-specific labeled data. Without
efficient key insight extraction techniques, raw human-human dialog transcripts
remain largely unexploited.
Our contribution consists of a novel FST for intent indexing and an algorithm
for fuzzy intent search over the lattice - a compact graph encoding of ASR's
hypotheses. We also develop a pruning strategy to constrain the fuzziness of
the FST index search. Extracted intents represent linguistic domain knowledge
and help us improve (rescore) the original transcript. We compare our method
with a baseline, which uses only the most likely transcript hypothesis (best
path), and find an increase in the total number of recognized intents by 25%.
| 2,019 | Computation and Language |
Are We Modeling the Task or the Annotator? An Investigation of Annotator
Bias in Natural Language Understanding Datasets | Crowdsourcing has been the prevalent paradigm for creating natural language
understanding datasets in recent years. A common crowdsourcing practice is to
recruit a small number of high-quality workers, and have them massively
generate examples. Having only a few workers generate the majority of examples
raises concerns about data diversity, especially when workers freely generate
sentences. In this paper, we perform a series of experiments showing these
concerns are evident in three recent NLP datasets. We show that model
performance improves when training with annotator identifiers as features, and
that models are able to recognize the most productive annotators. Moreover, we
show that often models do not generalize well to examples from annotators that
did not contribute to the training set. Our findings suggest that annotator
bias should be monitored during dataset creation, and that test set annotators
should be disjoint from training set annotators.
| 2,019 | Computation and Language |
Evaluating Defensive Distillation For Defending Text Processing Neural
Networks Against Adversarial Examples | Adversarial examples are artificially modified input samples which lead to
misclassifications, while not being detectable by humans. These adversarial
examples are a challenge for many tasks such as image and text classification,
especially as research shows that many adversarial examples are transferable
between different classifiers. In this work, we evaluate the performance of a
popular defensive strategy for adversarial examples called defensive
distillation, which can be successful in hardening neural networks against
adversarial examples in the image domain. However, instead of applying
defensive distillation to networks for image classification, we examine, for
the first time, its performance on text classification tasks and also evaluate
its effect on the transferability of adversarial text examples. Our results
indicate that defensive distillation has only a minimal impact on text-classifying neural networks: it neither helps increase their robustness against adversarial examples nor prevents the transferability of adversarial examples between neural networks.
| 2,019 | Computation and Language |
It Takes Nine to Smell a Rat: Neural Multi-Task Learning for
Check-Worthiness Prediction | We propose a multi-task deep-learning approach for estimating the
check-worthiness of claims in political debates. Given a political debate, such
as the 2016 US Presidential and Vice-Presidential ones, the task is to predict
which statements in the debate should be prioritized for fact-checking. While
different fact-checking organizations would naturally make different choices
when analyzing the same debate, we show that it pays to learn from multiple
sources simultaneously (PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago
Tribune, The Guardian, and Washington Post) in a multi-task learning setup,
even when a particular source is chosen as a target to imitate. Our evaluation
shows state-of-the-art results on a standard dataset for the task of
check-worthiness prediction.
| 2,019 | Computation and Language |
WikiCREM: A Large Unsupervised Corpus for Coreference Resolution | Pronoun resolution is a major area of natural language understanding.
However, large-scale training sets are still scarce, since manually labelling
data is costly. In this work, we introduce WikiCREM (Wikipedia CoREferences
Masked), a large-scale yet accurate dataset of pronoun disambiguation
instances. We use a language-model-based approach for pronoun resolution in
combination with our WikiCREM dataset. We compare a series of models on a
collection of diverse and challenging coreference resolution problems, where we
match or outperform previous state-of-the-art approaches on 6 out of 7
datasets, such as GAP, DPR, WNLI, PDP, WinoBias, and WinoGender. We release our
model to be used off-the-shelf for solving pronoun disambiguation.
| 2,019 | Computation and Language |
"Mask and Infill" : Applying Masked Language Model to Sentiment Transfer | This paper focuses on the task of sentiment transfer on non-parallel text,
which modifies sentiment attributes (e.g., positive or negative) of sentences
while preserving their attribute-independent content. Due to the limited
capability of RNN-based encoder-decoder structures to capture deep and long-range
dependencies among words, previous works can hardly generate satisfactory
sentences from scratch. When humans convert the sentiment attribute of a
sentence, a simple but effective approach is to only replace the original
sentimental tokens in the sentence with target sentimental expressions, instead
of building a new sentence from scratch. Such a process is very similar to the
task of Text Infilling or Cloze, which could be handled by a deep bidirectional
Masked Language Model (e.g., BERT). We therefore propose a two-step approach, "Mask and
Infill". In the mask step, we separate style from content by masking the
positions of sentimental tokens. In the infill step, we retrofit MLM to
Attribute Conditional MLM, to infill the masked positions by predicting words
or phrases conditioned on the context and target sentiment. We evaluate our
model on two review datasets with quantitative, qualitative, and human
evaluations. Experimental results demonstrate that our models improve
state-of-the-art performance.
| 2,019 | Computation and Language |
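The two-step "Mask and Infill" procedure can be sketched with an off-the-shelf masked LM; note that the tiny sentiment lexicon and the plain BERT fill-mask used here are stand-ins, since the paper retrofits the MLM into an attribute-conditional MLM so that the infilled words follow the target sentiment.

```python
# Hedged sketch of the mask-then-infill pipeline with a vanilla masked LM.
# Requires the `transformers` library; downloads bert-base-uncased on first run.
from transformers import pipeline

SENTIMENT_LEXICON = {"terrible", "awful", "bad", "great", "amazing", "good"}  # toy lexicon

def mask_sentiment_tokens(sentence: str, mask_token: str = "[MASK]") -> str:
    # Step 1 (mask): hide sentiment-bearing words, keep attribute-independent content.
    return " ".join(mask_token if w.lower() in SENTIMENT_LEXICON else w
                    for w in sentence.split())

masked = mask_sentiment_tokens("the food here is terrible")
print(masked)  # "the food here is [MASK]"

# Step 2 (infill): a masked LM predicts the hidden slot; the paper's
# attribute-conditional MLM would additionally be conditioned on the target sentiment.
fill = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill(masked)[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```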
Populating Web Scale Knowledge Graphs using Distantly Supervised
Relation Extraction and Validation | In this paper, we propose a fully automated system to extend knowledge graphs
using external information from web-scale corpora. The designed system
leverages a deep learning based technology for relation extraction that can be
trained by a distantly supervised approach. In addition to that, the system
uses a deep learning approach for knowledge base completion by utilizing the
global structure information of the induced KG to further refine the confidence
of the newly discovered relations. The designed system does not require any
effort for adaptation to new languages and domains as it does not use any
hand-labeled data, NLP analytics and inference rules. Our experiments,
performed on a popular academic benchmark demonstrate that the suggested system
boosts the performance of relation extraction by a wide margin, reporting error
reductions of 50%, resulting in relative improvement of up to 100%. Also, a
web-scale experiment conducted to extend DBPedia with knowledge from Common
Crawl shows that our system is not only scalable but also does not require any
adaptation cost, while yielding substantial accuracy gain.
| 2,019 | Computation and Language |
X-SQL: reinforce schema representation with context | In this work, we present X-SQL, a new network architecture for the problem of
parsing natural language to SQL query. X-SQL proposes to enhance the structural
schema representation with the contextual output from BERT-style pre-training
model, and together with type information to learn a new schema representation
for downstream tasks. We evaluate X-SQL on the WikiSQL dataset and show that it achieves new state-of-the-art performance.
| 2,019 | Computation and Language |
Multi-passage BERT: A Globally Normalized BERT Model for Open-domain
Question Answering | BERT model has been successfully applied to open-domain QA tasks. However,
previous work trains BERT by viewing passages corresponding to the same
question as independent training instances, which may cause incomparable scores
for answers from different passages. To tackle this issue, we propose a
multi-passage BERT model to globally normalize answer scores across all
passages of the same question, and this change enables our QA model to find better
answers by utilizing more passages. In addition, we find that splitting
articles into passages of 100 words with a sliding window improves
performance by 4%. By leveraging a passage ranker to select high-quality
passages, multi-passage BERT gains an additional 2%. Experiments on four standard benchmarks show that our multi-passage BERT outperforms all state-of-the-art
models on all benchmarks. In particular, on the OpenSQuAD dataset, our model
gains 21.4% EM and 21.5% $F_1$ over all non-BERT models, and 5.8% EM and 6.5%
$F_1$ over BERT-based models.
| 2,019 | Computation and Language |
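The effect of global normalization described above can be seen in a small numeric sketch: per-passage softmax makes scores incomparable across passages, while one softmax over all candidate spans of the same question makes them directly comparable. The logits below are made-up numbers; in the paper they come from a shared BERT reader.

```python
# Hedged sketch: normalize answer-span scores across all passages of a question.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Raw span scores for three candidate answers per passage (assumed values).
passage_span_logits = [
    np.array([2.1, 0.3, -1.0]),   # passage 1
    np.array([4.5, 1.2, 0.0]),    # passage 2 -- contains the real answer
    np.array([2.0, 1.9, 1.8]),    # passage 3
]

per_passage = [softmax(s) for s in passage_span_logits]        # not comparable
global_probs = softmax(np.concatenate(passage_span_logits))    # comparable

print("best per-passage probs:", [p.max().round(2) for p in per_passage])
print("best global span:", int(global_probs.argmax()), global_probs.max().round(2))
```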
Entropy-Enhanced Multimodal Attention Model for Scene-Aware Dialogue
Generation | With increasing information from social media, there are more and more videos
available. Therefore, the ability to reason on a video is important and
deserves to be discussed. The Dialog System Technology Challenge (DSTC7)
(Yoshino et al. 2018) proposed an Audio Visual Scene-aware Dialog (AVSD) task,
which contains five modalities including video, dialogue history, summary, and
caption, as a scene-aware environment. In this paper, we propose the
entropy-enhanced dynamic memory network (DMN) to effectively model video
modality. The attention-based GRU in the proposed model can improve the model's
ability to comprehend and memorize sequential information. The entropy
mechanism can sharpen the attention distribution, so each to-be-answered question can focus more specifically on a small set of video segments. After
the entropy-enhanced DMN secures the video context, we apply an attention model
that incorporates the summary and caption to generate an accurate answer given the
question about the video. In the official evaluation, our system can achieve
improved performance against the released baseline model for both subjective
and objective evaluation metrics.
| 2,019 | Computation and Language |
Denoising based Sequence-to-Sequence Pre-training for Text Generation | This paper presents a new sequence-to-sequence (seq2seq) pre-training method
PoDA (Pre-training of Denoising Autoencoders), which learns representations
suitable for text generation tasks. Unlike encoder-only (e.g., BERT) or
decoder-only (e.g., OpenAI GPT) pre-training approaches, PoDA jointly
pre-trains both the encoder and decoder by denoising the noise-corrupted text,
and it also has the advantage of keeping the network architecture unchanged in
the subsequent fine-tuning stage. Meanwhile, we design a hybrid model of
Transformer and pointer-generator networks as the backbone architecture for
PoDA. We conduct experiments on two text generation tasks: abstractive
summarization, and grammatical error correction. Results on four datasets show
that PoDA can improve model performance over strong baselines without using any
task-specific techniques and significantly speed up convergence.
| 2,019 | Computation and Language |
Relation-Aware Entity Alignment for Heterogeneous Knowledge Graphs | Entity alignment is the task of linking entities with the same real-world
identity from different knowledge graphs (KGs), which has been recently
dominated by embedding-based methods. Such approaches work by learning KG
representations so that entity alignment can be performed by measuring the
similarities between entity embeddings. While promising, prior works in the
field often fail to properly capture complex relation information that commonly
exists in multi-relational KGs, leaving much room for improvement. In this
paper, we propose a novel Relation-aware Dual-Graph Convolutional Network
(RDGCN) to incorporate relation information via attentive interactions between
the knowledge graph and its dual relation counterpart, and further capture
neighboring structures to learn better entity representations. Experiments on
three real-world cross-lingual datasets show that our approach delivers better
and more robust results over the state-of-the-art alignment methods by learning
better KG representations.
| 2,019 | Computation and Language |
Revisiting Semantic Representation and Tree Search for Similar Question
Retrieval | This paper studies the performance of BERT combined with a tree structure on the short-sentence ranking task. In a retrieval-based question answering system, we retrieve the question most similar to the query by ranking all the questions in the dataset. Ranking all the sentences with a neural ranker requires scoring every sentence pair, which consumes a large amount of time. We therefore design a search tree and combine it with the deep model to solve this problem. We fine-tune BERT on the training data to obtain semantic vectors (sentence embeddings) for the test data. We use all the sentence embeddings of the test data to build a tree based on k-means and perform beam search at prediction time given a query sentence. We conduct experiments on the semantic textual similarity dataset Quora Question Pairs, processed for sentence ranking. Experimental results show that our methods outperform the strong baseline. Our tree accelerates prediction by 500%-1000% without losing much ranking accuracy.
| 2,019 | Computation and Language |
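A simplified sketch of the k-means tree plus beam search idea follows; the random embeddings stand in for fine-tuned BERT sentence vectors, a flat k-means layer stands in for the full tree, and plain L2 distance stands in for the neural ranker, so this only illustrates how the beam restricts the candidate set.

```python
# Hedged sketch: cluster sentence embeddings, then search only the clusters
# whose centroids are closest to the query (the "beam") instead of all sentences.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
corpus_emb = rng.standard_normal((10_000, 128)).astype(np.float32)  # assumed embeddings
query_emb = rng.standard_normal(128).astype(np.float32)

kmeans = KMeans(n_clusters=64, n_init=4, random_state=0).fit(corpus_emb)

def beam_search(query, beam_width=4, top_k=5):
    # Step 1: keep only the beam_width clusters closest to the query.
    centroid_dist = np.linalg.norm(kmeans.cluster_centers_ - query, axis=1)
    beam = np.argsort(centroid_dist)[:beam_width]
    candidates = np.flatnonzero(np.isin(kmeans.labels_, beam))
    # Step 2: exact ranking only within the surviving candidates.
    dist = np.linalg.norm(corpus_emb[candidates] - query, axis=1)
    return candidates[np.argsort(dist)[:top_k]]

print(beam_search(query_emb))  # indices of the top-ranked sentences
```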
Argument Invention from First Principles | Competitive debaters often find themselves facing a challenging task -- how
to debate a topic they know very little about, with only minutes to prepare,
and without access to books or the Internet? What they often do is rely on
"first principles", commonplace arguments which are relevant to many topics,
and which they have refined in past debates.
In this work we aim to explicitly define a taxonomy of such principled
recurring arguments, and, given a controversial topic, to automatically
identify which of these arguments are relevant to the topic.
As far as we know, this is the first time that this approach to argument
invention is formalized and made explicit in the context of NLP.
The main goal of this work is to show that it is possible to define such a
taxonomy. While the taxonomy suggested here should be thought of as a "first attempt", it is nonetheless coherent, covers the relevant topics well, and
coincides with what professional debaters actually argue in their speeches, and
facilitates automatic argument invention for new topics.
| 2,019 | Computation and Language |
Text Summarization with Pretrained Encoders | Bidirectional Encoder Representations from Transformers (BERT) represents the
latest incarnation of pretrained language models which have recently advanced a
wide range of natural language processing tasks. In this paper, we showcase how
BERT can be usefully applied in text summarization and propose a general
framework for both extractive and abstractive models. We introduce a novel
document-level encoder based on BERT which is able to express the semantics of
a document and obtain representations for its sentences. Our extractive model
is built on top of this encoder by stacking several inter-sentence Transformer
layers. For abstractive summarization, we propose a new fine-tuning schedule
which adopts different optimizers for the encoder and the decoder as a means of
alleviating the mismatch between the two (the former is pretrained while the
latter is not). We also demonstrate that a two-staged fine-tuning approach can
further boost the quality of the generated summaries. Experiments on three
datasets show that our model achieves state-of-the-art results across the board
in both extractive and abstractive settings. Our code is available at
https://github.com/nlpyang/PreSumm
| 2,019 | Computation and Language |
Compositionality decomposed: how do neural networks generalise? | Despite a multitude of empirical studies, little consensus exists on whether
neural networks are able to generalise compositionally, a controversy that, in
part, stems from a lack of agreement about what it means for a neural model to
be compositional. As a response to this controversy, we present a set of tests
that provide a bridge between, on the one hand, the vast amount of linguistic
and philosophical theory about compositionality of language and, on the other,
the successful neural models of language. We collect different interpretations
of compositionality and translate them into five theoretically grounded tests
for models that are formulated on a task-independent level. In particular, we
provide tests to investigate (i) if models systematically recombine known parts
and rules (ii) if models can extend their predictions beyond the length they
have seen in the training data (iii) if models' composition operations are
local or global (iv) if models' predictions are robust to synonym substitutions
and (v) if models favour rules or exceptions during training. To demonstrate
the usefulness of this evaluation paradigm, we instantiate these five tests on
a highly compositional data set which we dub PCFG SET and apply the resulting
tests to three popular sequence-to-sequence models: a recurrent, a
convolution-based and a transformer model. We provide an in-depth analysis of
the results, which uncover the strengths and weaknesses of these three
architectures and point to potential areas of improvement.
| 2,020 | Computation and Language |
Controllable Dual Skew Divergence Loss for Neural Machine Translation | In sequence prediction tasks like neural machine translation, training with
cross-entropy loss often leads to models that overgeneralize and plunge into
local optima. In this paper, we propose an extended loss function called
\emph{dual skew divergence} (DSD) that integrates two symmetric terms on KL
divergences with a balanced weight. We empirically discovered that such a
balanced weight plays a crucial role in applying the proposed DSD loss into
deep models. Thus we eventually develop a controllable DSD loss for
general-purpose scenarios. Our experiments indicate that switching to the DSD
loss after the convergence of ML training helps models escape local optima and
stimulates stable performance improvements. Our evaluations on the WMT 2014
English-German and English-French translation tasks demonstrate that the
proposed loss, as a general and convenient means for NMT training, indeed brings
performance improvement in comparison to strong baselines.
| 2,021 | Computation and Language |
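As a rough picture of a loss built from two symmetric, skewed KL terms balanced by a weight, consider the sketch below; the precise dual skew divergence used in the paper may differ, so both the skew mixtures and the balancing scheme should be read as illustrative assumptions.

```python
# Hedged sketch of a loss combining two symmetric, skewed KL terms with a weight.
import torch

def kl(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    return (p * ((p + eps).log() - (q + eps).log())).sum(dim=-1)

def dual_skew_divergence(p, q, alpha: float = 0.9, beta: float = 0.5):
    """beta balances the two skewed KL terms; alpha controls the skew mixtures."""
    skew_pq = kl(p, alpha * q + (1 - alpha) * p)   # KL(p || skewed q)
    skew_qp = kl(q, alpha * p + (1 - alpha) * q)   # KL(q || skewed p)
    return (beta * skew_pq + (1 - beta) * skew_qp).mean()

model_probs = torch.softmax(torch.randn(8, 32000), dim=-1)   # decoder output (assumed)
target_probs = torch.zeros(8, 32000).scatter_(1, torch.randint(0, 32000, (8, 1)), 1.0)
print(dual_skew_divergence(model_probs, target_probs))
```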
NE-LP: Normalized Entropy and Loss Prediction based Sampling for Active
Learning in Chinese Word Segmentation on EHRs | Electronic Health Records (EHRs) in hospital information systems contain
patients' diagnosis and treatments, so EHRs are essential to clinical data
mining. Of all the tasks in the mining process, Chinese Word Segmentation (CWS)
is a fundamental and important one, and most state-of-the-art methods greatly
rely on large-scale manually-annotated data. Since annotation is
time-consuming and expensive, efforts have been devoted to techniques, such as
active learning, to locate the most informative samples for modeling. In this
paper, we follow the trend and present an active learning method for CWS in
EHRs. Specifically, a new sampling strategy combining Normalized Entropy with
Loss Prediction (NE-LP) is proposed to select the most representative data.
Meanwhile, to minimize the computational cost of learning, we propose a joint
model including a word segmenter and a loss prediction model. Furthermore, to
capture interactions between adjacent characters, bigram features are also
applied in the joint model. To illustrate the effectiveness of NE-LP, we
conducted experiments on EHRs collected from the Shuguang Hospital Affiliated
to Shanghai University of Traditional Chinese Medicine. The results demonstrate
that NE-LP consistently outperforms conventional uncertainty-based sampling
strategies for active learning in CWS.
| 2,020 | Computation and Language |
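The sampling score described in the NE-LP abstract can be pictured as "normalized tag entropy plus predicted loss"; the simple additive combination, the four-tag (B/M/E/S) scheme, and the random pool below are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: score unlabeled sentences by normalized entropy plus a
# predicted loss, then pick the highest-scoring ones for annotation.
import numpy as np

def normalized_entropy(tag_probs: np.ndarray) -> float:
    """tag_probs: (sentence_length, n_tags) distribution per character."""
    ent = -(tag_probs * np.log(tag_probs + 1e-12)).sum(axis=-1)
    return float(ent.mean() / np.log(tag_probs.shape[-1]))   # scale to [0, 1]

def ne_lp_score(tag_probs: np.ndarray, predicted_loss: float) -> float:
    return normalized_entropy(tag_probs) + predicted_loss

rng = np.random.default_rng(0)
pool = []
for i in range(1000):                                            # unlabeled pool
    probs = rng.dirichlet(np.ones(4), size=rng.integers(5, 40))  # B/M/E/S tags
    pool.append((i, ne_lp_score(probs, predicted_loss=float(rng.random()))))

to_annotate = sorted(pool, key=lambda x: x[1], reverse=True)[:50]
print([idx for idx, _ in to_annotate[:10]])
```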
Dialogue Coherence Assessment Without Explicit Dialogue Act Labels | Recent dialogue coherence models use the coherence features designed for
monologue texts, e.g. nominal entities, to represent utterances and then
explicitly augment them with dialogue-relevant features, e.g., dialogue act
labels. This approach has two drawbacks: (a) the semantics of utterances is limited to
entity mentions, and (b) the performance of coherence models strongly relies on
the quality of the input dialogue act labels. We address these issues by
introducing a novel approach to dialogue coherence assessment. We use dialogue
act prediction as an auxiliary task in a multi-task learning scenario to obtain
informative utterance representations for coherence assessment. Our approach
alleviates the need for explicit dialogue act labels during evaluation. The
results of our experiments show that our model substantially (more than 20
accuracy points) outperforms its strong competitors on the DailyDialogue
corpus, and performs on par with them on the SwitchBoard corpus for ranking
dialogues concerning their coherence.
| 2,020 | Computation and Language |
Unsupervised Lemmatization as Embeddings-Based Word Clustering | We focus on the task of unsupervised lemmatization, i.e. grouping together
inflected forms of one word under one label (a lemma) without the use of
annotated training data. We propose to perform agglomerative clustering of word
forms with a novel distance measure. Our distance measure is based on the
observation that inflections of the same word tend to be similar both
string-wise and in meaning. We therefore combine word embedding cosine
similarity, serving as a proxy to the meaning similarity, with Jaro-Winkler
edit distance. Our experiments on 23 languages show our approach to be
promising, surpassing the baseline on 23 of the 28 evaluation datasets.
| 2,019 | Computation and Language |
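The distance measure above (embedding similarity blended with string similarity) is simple enough to sketch directly; difflib's ratio stands in for Jaro-Winkler, random vectors stand in for real embeddings, and the 50/50 blend is an assumed weighting, so the output only demonstrates the clustering mechanics.

```python
# Hedged sketch: agglomerative clustering of word forms with a distance that
# blends embedding cosine similarity (meaning) and string similarity (form).
import difflib
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

words = ["walk", "walked", "walking", "walks", "talk", "talked", "talking"]
rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(50) for w in words}            # placeholder embeddings

def distance(w1: str, w2: str) -> float:
    cos = np.dot(emb[w1], emb[w2]) / (np.linalg.norm(emb[w1]) * np.linalg.norm(emb[w2]))
    string_sim = difflib.SequenceMatcher(None, w1, w2).ratio()  # stand-in for Jaro-Winkler
    return 1.0 - 0.5 * (cos + string_sim)                       # blend meaning and form

n = len(words)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = distance(words[i], words[j])

labels = fcluster(linkage(squareform(dist), method="average"), t=3, criterion="maxclust")
print(dict(zip(words, labels)))   # word forms grouped under candidate lemmas
```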
Unsupervised Text Summarization via Mixed Model Back-Translation | Back-translation based approaches have recently led to significant progress
in unsupervised sequence-to-sequence tasks such as machine translation or style
transfer. In this work, we extend the paradigm to the problem of learning a
sentence summarization system from unaligned data. We present several initial
models which rely on the asymmetrical nature of the task to perform the first
back-translation step, and demonstrate the value of combining the data created
by these diverse initialization methods. Our system outperforms the current
state-of-the-art for unsupervised sentence summarization from fully unaligned
data by over 2 ROUGE, and matches the performance of recent semi-supervised
approaches.
| 2,019 | Computation and Language |