Titles | Abstracts | Years | Categories |
---|---|---|---|
Zero-Shot Fine-Grained Style Transfer: Leveraging Distributed Continuous
Style Representations to Transfer To Unseen Styles | Text style transfer is usually performed using attributes that can take a
handful of discrete values (e.g., positive to negative reviews). In this work,
we introduce an architecture that can leverage pre-trained consistent
continuous distributed style representations and use them to transfer to an
attribute unseen during training, without requiring any re-tuning of the style
transfer model. We demonstrate the method by training an architecture to
transfer text conveying one sentiment to another sentiment, using a
fine-grained set of over 20 sentiment labels rather than the binary
positive/negative often used in style transfer. Our experiments show that this
model can then rewrite text to match a target sentiment that was unseen during
training.
| 2019 | Computation and Language |
Improving BERT Fine-tuning with Embedding Normalization | Large pre-trained sentence encoders like BERT start a new chapter in natural
language processing. A common practice to apply pre-trained BERT to sequence
classification tasks (e.g., classification of sentences or sentence pairs) is
by feeding the embedding of the [CLS] token (in the last layer) to a task-specific
classification layer, and then fine-tuning the model parameters of BERT and the
classifier jointly. In this paper, we conduct a systematic analysis over several
sequence classification datasets to examine the embedding values of the [CLS] token
before the fine-tuning phase, and present the biased embedding distribution
issue---i.e., embedding values of [CLS] concentrate on a few dimensions and are
non-zero centered. Such a biased embedding poses a challenge to the optimization
process during fine-tuning, as gradients of the [CLS] embedding may explode and
result in degraded model performance. We further propose several simple yet
effective normalization methods to modify the [CLS] embedding during fine-tuning.
Compared with the previous practice, a neural classification model with the
normalized embedding shows improvements on several text classification tasks,
demonstrating the effectiveness of our method.
| 2020 | Computation and Language |
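As an illustration of the idea in the abstract above, here is a minimal PyTorch sketch, assuming a layer-norm-style zero-centering and rescaling of the [CLS] embedding before the task classifier; the paper's exact normalization variants are not reproduced, so treat this as one plausible instance rather than the authors' implementation.

```python
# A minimal sketch (not the authors' exact method): zero-center and rescale the
# [CLS] embedding before the task-specific classifier.
import torch
import torch.nn as nn

class NormalizedClsClassifier(nn.Module):
    def __init__(self, hidden_size: int, num_labels: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, cls_embedding: torch.Tensor) -> torch.Tensor:
        # cls_embedding: (batch, hidden_size), taken from BERT's last layer.
        mu = cls_embedding.mean(dim=-1, keepdim=True)
        sigma = cls_embedding.std(dim=-1, keepdim=True)
        normalized = (cls_embedding - mu) / (sigma + self.eps)  # zero-centered, unit variance
        return self.classifier(normalized)
```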
Un systeme de lemmatisation pour les applications de TALN | This paper presents a stemming method for Arabic texts based on linguistic
techniques from natural language processing. The method relies on the notion of
the scheme (pattern), one of the strong points of Arabic morphology. The advantage
of this approach is that it does not use a dictionary of inflections but instead
performs smart, dynamic recognition of the different words of the language.
| 2019 | Computation and Language |
Evaluating Voice Conversion-based Privacy Protection against Informed
Attackers | Speech data conveys sensitive speaker attributes like identity or accent.
With a small amount of found data, such attributes can be inferred and
exploited for malicious purposes: voice cloning, spoofing, etc. Anonymization
aims to make the data unlinkable, i.e., ensure that no utterance can be linked
to its original speaker. In this paper, we investigate anonymization methods
based on voice conversion. In contrast to prior work, we argue that various
linkage attacks can be designed depending on the attackers' knowledge about the
anonymization scheme. We compare two frequency warping-based conversion methods
and a deep learning based method in three attack scenarios. The utility of
converted speech is measured via the word error rate achieved by automatic
speech recognition, while privacy protection is assessed by the increase in
equal error rate achieved by state-of-the-art i-vector or x-vector based
speaker verification. Our results show that voice conversion schemes are unable
to effectively protect against an attacker that has extensive knowledge of the
type of conversion and how it has been applied, but may provide some protection
against less knowledgeable attackers.
| 2020 | Computation and Language |
Can Neural Image Captioning be Controlled via Forced Attention? | Learned dynamic weighting of the conditioning signal (attention) has been
shown to improve neural language generation in a variety of settings. The
weights applied when generating a particular output sequence have also been
viewed as providing a potentially explanatory insight into the internal
workings of the generator. In this paper, we reverse the direction of this
connection and ask whether we can control the model's output by controlling its
attention. Specifically, we take a standard neural image captioning model that
uses attention, and fix the attention to pre-determined areas in the image. We
evaluate whether the resulting output is more likely to mention the class of the
object in that area than the normally generated caption. We introduce three
effective methods to control the attention and find that they produce the
expected results in up to 28.56% of cases.
| 2019 | Computation and Language |
Language Model-Driven Unsupervised Neural Machine Translation | Unsupervised neural machine translation (NMT) is associated with noise and
errors in the synthetic data used for vanilla back-translation. Here, we
explicitly exploit a language model (LM) to drive the construction of an
unsupervised NMT system. This involves two steps. First, we initialize NMT models
using synthetic data generated via temporary statistical machine translation (SMT).
Second, unlike vanilla back-translation, we formulate a weight function that
scores synthetic data at each step of subsequent iterative training; this steers
unsupervised training toward an improved outcome. We present the detailed
mathematical construction of our method. Experiments on the WMT2014 English-French
and WMT2016 English-German and English-Russian translation tasks revealed that
our method outperforms the best prior systems by more than 3 BLEU points.
| 2019 | Computation and Language |
Word Sense Disambiguation using Knowledge-based Word Similarity | In natural language processing, word-sense disambiguation (WSD) is an open
problem concerned with identifying the correct sense of words in a particular
context. To address this problem, we introduce a novel knowledge-based WSD
system. We suggest the adoption of two methods in our system. First, we suggest
a novel method to encode the word vector representation by considering the
graphical semantic relationships from the lexical knowledge-base. Second, we
propose a method for extracting the contextual words from the text for
analyzing an ambiguous word based on the similarity of word vector
representations. To validate the effectiveness of our WSD system, we conducted
experiments on the five benchmark English WSD corpora (Senseval-02,
Senseval-03, SemEval-07, SemEval-13, and SemEval-15). The obtained results
demonstrated that the suggested methods significantly enhanced the WSD
performance. Furthermore, our system outperformed the existing knowledge-based
WSD systems and showed a performance comparable to that of the state-of-the-art
supervised WSD systems.
| 2020 | Computation and Language |
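A rough sketch of the second idea in the abstract above: select context words by word-vector similarity to the ambiguous word and score candidate senses against them. The `embeddings` and `sense_vecs` dictionaries are placeholders, and the scoring is a simplification of the paper's knowledge-based system.

```python
# A minimal sketch (placeholders, not the paper's system).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def select_context(target_vec, candidate_words, embeddings, top_k=5):
    # Keep only the context words whose vectors are closest to the ambiguous target word.
    scored = [(w, cosine(target_vec, embeddings[w])) for w in candidate_words if w in embeddings]
    return [w for w, _ in sorted(scored, key=lambda x: -x[1])[:top_k]]

def disambiguate(sense_vecs, context_words, embeddings):
    # Choose the sense whose vector best matches the selected context words on average.
    def score(vec):
        sims = [cosine(vec, embeddings[w]) for w in context_words if w in embeddings]
        return sum(sims) / max(len(sims), 1)
    return max(sense_vecs, key=lambda s: score(sense_vecs[s]))
```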
Decompressing Knowledge Graph Representations for Link Prediction | This paper studies the problem of predicting missing relationships between
entities in knowledge graphs through learning their representations. Currently,
the majority of existing link prediction models employ simple but intuitive
scoring functions and relatively small embedding size so that they could be
applied to large-scale knowledge graphs. However, these properties also
restrict the ability to learn more expressive and robust features. Therefore,
diverging from most prior works, which focus on designing new objective functions,
we propose DeCom, a simple but effective mechanism to boost the performance of
existing link predictors such as DistMult and ComplEx by extracting more
expressive features while preventing overfitting through the addition of just a
few extra parameters. Specifically, embeddings of entities and relationships are
first decompressed to a more expressive and robust space by decompression
functions, and knowledge graph embedding models are then trained in this new
feature space. Experimental results on several benchmark knowledge graphs and
advanced link prediction systems demonstrate the generalization ability and
effectiveness of our method. In particular, RESCAL + DeCom achieves
state-of-the-art performance on the FB15k-237 benchmark across all evaluation
metrics. In addition, we show that, compared with DeCom, explicitly increasing
the embedding size significantly increases the number of parameters but does not
achieve a comparable performance improvement.
| 2019 | Computation and Language |
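A minimal sketch of the decompression idea described above, assuming a small shared MLP that maps compact embeddings into a larger space before a standard DistMult score; layer sizes and the ReLU are illustrative choices, not the authors' configuration.

```python
# A minimal sketch (assumptions, not the authors' code): compact embeddings are
# "decompressed" into a larger feature space before DistMult scoring.
import torch
import torch.nn as nn

class DeComDistMult(nn.Module):
    def __init__(self, n_entities, n_relations, small_dim=64, big_dim=256):
        super().__init__()
        self.ent = nn.Embedding(n_entities, small_dim)    # compact stored embeddings
        self.rel = nn.Embedding(n_relations, small_dim)
        # Decompression functions: few extra parameters, shared across all entities/relations.
        self.decompress_ent = nn.Sequential(nn.Linear(small_dim, big_dim), nn.ReLU())
        self.decompress_rel = nn.Sequential(nn.Linear(small_dim, big_dim), nn.ReLU())

    def score(self, head, relation, tail):
        h = self.decompress_ent(self.ent(head))
        r = self.decompress_rel(self.rel(relation))
        t = self.decompress_ent(self.ent(tail))
        return (h * r * t).sum(dim=-1)  # DistMult scoring in the decompressed space
```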
Learning to Order Sub-questions for Complex Question Answering | Answering complex questions involving multiple entities and relations is a
challenging task. Logically, the answer to a complex question should be derived
by decomposing the complex question into multiple simple sub-questions and then
answering those sub-questions. Existing work has followed this strategy but has
not attempted to optimize the order of how those sub-questions are answered. As
a result, the sub-questions are answered in an arbitrary order, leading to a
larger search space and a higher risk of missing an answer. In this paper, we
propose a novel reinforcement learning (RL) approach to answering complex
questions that can learn a policy to dynamically decide which sub-question
should be answered at each stage of reasoning. We leverage the expected
value-variance criterion to enable the learned policy to balance between the
risk and utility of answering a sub-question. Experimental results show that the
RL approach can substantially improve the optimality of ordering the
sub-questions, leading to improved accuracy of question answering. The proposed
method for learning to order sub-questions is general and can thus be
potentially combined with many existing ideas for answering complex questions
to enhance their performance.
| 2019 | Computation and Language |
BP-Transformer: Modelling Long-Range Context via Binary Partitioning | The Transformer model is widely successful on many natural language
processing tasks. However, the quadratic complexity of self-attention limits its
application to long text. In this paper, adopting a fine-to-coarse attention
mechanism on multi-scale spans via binary partitioning (BP), we propose
BP-Transformer (BPT for short). BPT yields $O(k\cdot n\log (n/k))$ connections
where $k$ is a hyperparameter that controls the density of attention. BPT strikes
a good balance between computational complexity and model capacity. A series of
experiments on text classification, machine translation and language modeling
shows that BPT outperforms previous self-attention models on long text. Our code,
hyperparameters and CUDA kernels for sparse attention are available in PyTorch.
| 2019 | Computation and Language |
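To make the binary partitioning concrete, here is a toy sketch that enumerates the multi-scale spans of a binary partition tree over a token sequence; the full BPT attention pattern (fine spans near a token, coarser spans farther away, with density controlled by $k$) is not reproduced, so this only illustrates the span construction.

```python
# A minimal sketch (an illustration, not the released code): collect the spans of
# a binary partition tree over positions [start, end).
def binary_partition_spans(start, end, spans=None):
    """Recursively collect all (start, end) spans of the binary partition tree."""
    if spans is None:
        spans = []
    spans.append((start, end))
    if end - start > 1:
        mid = (start + end) // 2
        binary_partition_spans(start, mid, spans)
        binary_partition_spans(mid, end, spans)
    return spans

if __name__ == "__main__":
    # For a sequence of 8 tokens, list every node of the partition tree.
    for span in binary_partition_spans(0, 8):
        print(span)
```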
Zero-shot Cross-lingual Dialogue Systems with Transferable Latent
Variables | Despite the surging demands for multilingual task-oriented dialog systems
(e.g., Alexa, Google Home), there has been less research done in multilingual
or cross-lingual scenarios. Hence, we propose a zero-shot adaptation of
task-oriented dialogue systems to low-resource languages. To tackle this
challenge, we first use a set of very few parallel word pairs to refine the
aligned cross-lingual word-level representations. We then employ a latent
variable model to cope with the variance of similar sentences across different
languages, which is induced by imperfect cross-lingual alignments and inherent
differences between languages. Finally, the experimental results show that even
though we utilize far fewer external resources, our model achieves better
adaptation performance for the natural language understanding tasks (i.e.,
intent detection and slot filling) compared to the current state-of-the-art
model in the zero-shot scenario.
| 2019 | Computation and Language |
DialogAct2Vec: Towards End-to-End Dialogue Agent by Multi-Task
Representation Learning | In end-to-end dialogue modeling and agent learning, it is important to (1)
effectively learn knowledge from data, and (2) fully utilize heterogeneous
information, e.g., dialogue act flow and utterances. However, the majority of
existing methods cannot simultaneously satisfy the two conditions. For example,
rule definition and data labeling during system design take too much manual
work, and sequence-to-sequence methods model only one side's utterance
information. In this paper, we propose a novel joint end-to-end model based on
multi-task representation learning, named DialogAct2Vec, which captures knowledge
from heterogeneous information by automatically learning knowledgeable
low-dimensional embeddings from data. The model requires little manual
intervention in system design, and we find that
the multi-task learning can greatly improve the effectiveness of representation
learning. Extensive experiments on a public dataset for restaurant reservation
show that the proposed method leads to significant improvements against the
state-of-the-art baselines on both the act prediction task and utterance
prediction task.
| 2019 | Computation and Language |
A unified sequence-to-sequence front-end model for Mandarin
text-to-speech synthesis | In a Mandarin text-to-speech (TTS) system, the front-end text processing module
significantly influences the intelligibility and naturalness of the synthesized
speech. Building a typical pipeline-based front-end consisting of multiple
individual components requires extensive effort. In this paper, we propose a
unified sequence-to-sequence front-end model for Mandarin TTS that converts raw
text to linguistic features directly. Compared to the pipeline-based front-end,
our unified front-end achieves comparable performance in polyphone disambiguation
and prosodic word prediction, and improves intonation phrase prediction by 0.0738
in F1 score. We also combined the unified front-end with Tacotron and WaveRNN to
build a Mandarin TTS system. The speech it synthesized obtained a MOS (4.38)
comparable to the pipeline-based front-end (4.37) and close to human recordings
(4.49).
| 2019 | Computation and Language |
Text classification with pixel embedding | We propose a novel framework to understand the text by converting sentences
or articles into video-like 3-dimensional tensors. Each frame, corresponding to
a slice of the tensor, is a word image that is rendered by the word's shape.
The length of the tensor equals the number of words in the sentence or article.
The proposed transformation from text to a 3-dimensional tensor makes it very
convenient to implement an $n$-gram model with convolutional neural networks for
text analysis. Concretely, we impose a 3-dimensional convolutional kernel on the
3-dimensional text tensor. The first two dimensions of the convolutional kernel
size equal the size of the word image, and the last dimension of the kernel size
is $n$. That is, every time we slide the 3-dimensional kernel over a word
sequence, the convolution covers $n$ word images and outputs a scalar. By
iterating this process for each $n$-gram along the sentence or article with
multiple kernels, we obtain a 2-dimensional feature map. A subsequent
1-dimensional max-over-time pooling is applied to this feature map, and three
fully-connected layers are finally used for text classification. Experiments on
several text classification datasets demonstrate surprisingly strong performance
of the proposed model in comparison with existing methods.
| 2021 | Computation and Language |
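A minimal PyTorch sketch of the $n$-gram convolution over a "video-like" text tensor described above; the image size, kernel count and classifier widths are assumptions, and the kernel's depth axis plays the role of the $n$-word window.

```python
# A minimal sketch (dimensions are assumptions): a 3D convolution over a tensor of
# rendered word images, followed by max-over-time pooling and three FC layers.
import torch
import torch.nn as nn

class PixelNGramClassifier(nn.Module):
    def __init__(self, img_h=16, img_w=16, n=3, num_kernels=64, num_classes=4):
        super().__init__()
        # Kernel covers n consecutive words in depth and the full word image in height/width.
        self.conv = nn.Conv3d(1, num_kernels, kernel_size=(n, img_h, img_w))
        self.fc = nn.Sequential(
            nn.Linear(num_kernels, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, text_tensor):
        # text_tensor: (batch, 1, num_words, img_h, img_w)
        feat = self.conv(text_tensor).squeeze(-1).squeeze(-1)  # (batch, kernels, num_words - n + 1)
        pooled = torch.max(feat, dim=-1).values                # max-over-time pooling
        return self.fc(pooled)
```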
TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer
Sentence Selection | We propose TANDA, an effective technique for fine-tuning pre-trained
Transformer models for natural language tasks. Specifically, we first transfer
a pre-trained model into a model for a general task by fine-tuning it with a
large and high-quality dataset. We then perform a second fine-tuning step to
adapt the transferred model to the target domain. We demonstrate the benefits
of our approach for answer sentence selection, which is a well-known inference
task in Question Answering. We built a large-scale dataset to enable the
transfer step, exploiting the Natural Questions dataset. Our approach
establishes the state of the art on two well-known benchmarks, WikiQA and
TREC-QA, achieving MAP scores of 92% and 94.3%, respectively, which largely
outperform the previous highest scores of 83.4% and 87.5%, obtained in very
recent work. We empirically show that TANDA generates more stable and robust
models, reducing the effort required for selecting optimal hyper-parameters.
Additionally, we show that the transfer step of TANDA makes the adaptation step
more robust to noise. This enables a more effective use of noisy datasets for
fine-tuning. Finally, we also confirm the positive impact of TANDA in an
industrial setting, using domain specific datasets subject to different types
of noise.
| 2019 | Computation and Language |
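A minimal sketch of the transfer-then-adapt recipe described above, written as a generic PyTorch fine-tuning loop; the model interface, learning rates and data loaders are placeholders rather than the authors' setup.

```python
# A minimal sketch of the two-step "transfer then adapt" recipe (not the authors'
# code): fine-tune once on a large general answer-selection dataset, then
# fine-tune the resulting weights again on the small target dataset.
import torch

def fine_tune(model, loader, epochs=1, lr=2e-5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for inputs, labels in loader:
            optimizer.zero_grad()
            logits = model(inputs)          # model scores (question, candidate) pairs
            loss = loss_fn(logits, labels)  # 1 = correct answer sentence, 0 = not
            loss.backward()
            optimizer.step()
    return model

# Step 1 (transfer): a large, general answer-sentence-selection set (e.g. built from NQ).
# Step 2 (adapt): the small target-domain dataset (e.g. WikiQA or TREC-QA).
# model = fine_tune(model, general_loader)
# model = fine_tune(model, target_loader, lr=1e-5)
```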
Leveraging Dependency Forest for Neural Medical Relation Extraction | Medical relation extraction discovers relations between entity mentions in
text, such as research articles. For this task, dependency syntax has been
recognized as a crucial source of features. Yet in the medical domain, 1-best
parse trees suffer from relatively low accuracies, diminishing their
usefulness. We investigate a method to alleviate this problem by utilizing
dependency forests. Forests contain many possible decisions and therefore have
higher recall but more noise compared with 1-best outputs. A graph neural
network is used to represent the forests, automatically distinguishing the
useful syntactic information from parsing noise. Results on two biomedical
benchmarks show that our method outperforms the standard tree-based methods,
achieving state-of-the-art results in the literature.
| 2019 | Computation and Language |
A hybrid text normalization system using multi-head self-attention for
Mandarin | In this paper, we propose a hybrid text normalization system using multi-head
self-attention. The system combines the advantages of a rule-based model and a
neural model for text preprocessing tasks. Previous studies in Mandarin text
normalization usually use a set of hand-written rules, which are hard to improve
for general cases. The idea of our proposed system is motivated by the neural
models from recent studies and achieves better performance on our internal news
corpus. This paper also includes different attempts to deal with the imbalanced
pattern distribution of the dataset. Overall, the performance of the system is
improved by over 1.5% at the sentence level, and it has the potential to improve
further.
| 2020 | Computation and Language |
Meta Answering for Machine Reading | We investigate a framework for machine reading, inspired by real world
information-seeking problems, where a meta question answering system interacts
with a black box environment. The environment encapsulates a competitive
machine reader based on BERT, providing candidate answers to questions, and
possibly some context. To validate the realism of our formulation, we ask
humans to play the role of a meta-answerer. With just a small snippet of text
around an answer, humans can outperform the machine reader, improving recall.
Similarly, a simple machine meta-answerer outperforms the environment,
improving both precision and recall on the Natural Questions dataset. The
system relies on joint training of answer scoring and the selection of
conditioning information.
| 2020 | Computation and Language |
Keep it Consistent: Topic-Aware Storytelling from an Image Stream via
Iterative Multi-agent Communication | Visual storytelling aims to generate a narrative paragraph from a sequence of
images automatically. Existing approaches construct text description
independently for each image and roughly concatenate them as a story, which
leads to the problem of generating semantically incoherent content. In this
paper, we propose a new way for visual storytelling by introducing a topic
description task to detect the global semantic context of an image stream. A
story is then constructed with the guidance of the topic description. In order
to combine the two generation tasks, we propose a multi-agent communication
framework that regards the topic description generator and the story generator
as two agents and learns them simultaneously via an iterative updating mechanism.
We validate our approach on the VIST dataset, where quantitative results,
ablations, and human evaluation demonstrate that our method generates
higher-quality stories than state-of-the-art methods.
| 2020 | Computation and Language |
NegBERT: A Transfer Learning Approach for Negation Detection and Scope
Resolution | Negation is an important characteristic of language, and a major component of
information extraction from text. This subtask is of considerable importance to
the biomedical domain. Over the years, multiple approaches have been explored
to address this problem: Rule-based systems, Machine Learning classifiers,
Conditional Random Field Models, CNNs and more recently BiLSTMs. In this paper,
we look at applying Transfer Learning to this problem. First, we extensively
review previous literature addressing Negation Detection and Scope Resolution
across the 3 datasets that have gained popularity over the years: the BioScope
Corpus, the Sherlock dataset, and the SFU Review Corpus. We then explore the
decision choices involved with using BERT, a popular transfer learning model,
for this task, and report state-of-the-art results for scope resolution across
all 3 datasets. Our model, referred to as NegBERT, achieves a token level F1
score on scope resolution of 92.36 on the Sherlock dataset, 95.68 on the
BioScope Abstracts subcorpus, 91.24 on the BioScope Full Papers subcorpus, and
90.95 on the SFU Review Corpus, outperforming the previous state-of-the-art
systems by a significant margin. We also analyze the model's generalizability
to datasets on which it is not trained.
| 2020 | Computation and Language |
Data Efficient Direct Speech-to-Text Translation with Modality Agnostic
Meta-Learning | End-to-end Speech Translation (ST) models have several advantages such as
lower latency, smaller model size, and less error compounding over conventional
pipelines that combine Automatic Speech Recognition (ASR) and text Machine
Translation (MT) models. However, collecting large amounts of parallel data for
the ST task is more difficult than for the ASR and MT tasks. Previous studies
have proposed the use of transfer learning approaches to overcome this
difficulty. These approaches benefit from weakly supervised training data, such
as ASR speech-to-transcript or MT text-to-text translation pairs. However, the
parameters in these models are updated independently of each task, which may
lead to sub-optimal solutions. In this work, we adopt a meta-learning algorithm
to train a modality-agnostic multi-task model that transfers knowledge from the
source tasks (ASR and MT) to the target task (ST), which severely lacks data. In
the meta-learning phase, the parameters of the model are exposed to vast amounts
of speech transcripts (e.g., English ASR) and text translations (e.g.,
English-German MT). During this phase, the parameters are updated in such a way
that the model learns to understand speech and text representations and the
relation between them, and serves as a good initialization point for the target
ST task. We evaluate the
proposed meta-learning approach for ST tasks on English-German (En-De) and
English-French (En-Fr) language pairs from the Multilingual Speech Translation
Corpus (MuST-C). Our method outperforms the previous transfer learning
approaches and sets new state-of-the-art results for En-De and En-Fr ST tasks
by obtaining 9.18, and 11.76 BLEU point improvements, respectively.
| 2020 | Computation and Language |
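The abstract above does not spell out the meta-learning update, so the sketch below uses a Reptile-style first-order rule as a stand-in: adapt a copy of the shared model to a sampled source task (ASR or MT) for a few steps, then move the shared initialization toward the adapted weights. The `compute_loss` callback and the task batch format are assumptions.

```python
# A Reptile-style first-order meta-learning sketch (a stand-in, not the paper's
# exact algorithm). task_batches maps a task name ("asr" or "mt") to a list of
# training batches; compute_loss(model, task_name, batch) is an assumed callback.
import copy
import random
import torch

def reptile_step(model, task_batches, compute_loss, inner_lr=1e-3, meta_lr=0.1, inner_steps=3):
    task_name, batches = random.choice(list(task_batches.items()))
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for batch in batches[:inner_steps]:
        opt.zero_grad()
        compute_loss(adapted, task_name, batch).backward()
        opt.step()
    # Move the shared initialization a fraction of the way toward the adapted weights.
    with torch.no_grad():
        for p, q in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (q - p))
```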
Deep Contextualized Self-training for Low Resource Dependency Parsing | Neural dependency parsing has proven very effective, achieving
state-of-the-art results on numerous domains and languages. Unfortunately, it
requires large amounts of labeled data, which are costly and laborious to create.
In this paper we propose a self-training algorithm that alleviates this
annotation bottleneck by training a parser on its own output. Our Deep
Contextualized Self-training (DCST) algorithm utilizes representation models
trained on sequence labeling tasks that are derived from the parser's output
when applied to unlabeled data, and integrates these models with the base
parser through a gating mechanism. We conduct experiments across multiple
languages, both in low resource in-domain and in cross-domain setups, and
demonstrate that DCST substantially outperforms traditional self-training as
well as recent semi-supervised training methods.
| 2019 | Computation and Language |
Diversity by Phonetics and its Application in Neural Machine Translation | We introduce a powerful approach for Neural Machine Translation (NMT),
whereby, during training and testing, together with the input we provide its
phonetic encoding and the variants of such an encoding. This way we obtain very
significant improvements up to 4 BLEU points over the state-of-the-art
large-scale system. The phonetic encoding is the first part of our
contribution, with a second being a theory that aims to understand the reason
for this improvement. Our hypothesis states that the phonetic encoding helps
NMT because it encodes a procedure to emphasize the difference between
semantically diverse sentences. We conduct an empirical geometric validation of
our hypothesis in support of which we obtain overwhelming evidence.
Subsequently, as our third contribution and based on our theory, we develop
artificial mechanisms that leverage during learning the hypothesized (and
verified) effect of phonetics. We achieve significant and consistent improvements
over all language pairs and datasets: French-English, German-English, and
Chinese-English on the medium-scale IWSLT'17 task, and French-English on the
large-scale WMT'18 Bio task, with up to 4 BLEU points over the state-of-the-art.
Moreover, our
approaches are more robust than baselines when evaluated on unknown
out-of-domain test sets with up to a 5 BLEU point increase.
| 2019 | Computation and Language |
Attending to Entities for Better Text Understanding | Recent progress in NLP witnessed the development of large-scale pre-trained
language models (GPT, BERT, XLNet, etc.) based on Transformer (Vaswani et al.
2017), and in a range of end tasks, such models have achieved state-of-the-art
results, approaching human performance. This demonstrates the power of the
stacked self-attention architecture when paired with a sufficient number of
layers and a large amount of pre-training data. However, on tasks that require
complex and long-distance reasoning where surface-level cues are not enough,
there is still a large gap between the pre-trained models and human
performance. Strubell et al. (2018) recently showed that it is possible to
inject knowledge of syntactic structure into a model through supervised
self-attention. We conjecture that a similar injection of semantic knowledge,
in particular, coreference information, into an existing model would improve
performance on such complex problems. On the LAMBADA (Paperno et al. 2016)
task, we show that a model trained from scratch with coreference as auxiliary
supervision for self-attention outperforms the largest GPT-2 model, setting the
new state-of-the-art, while only containing a tiny fraction of parameters
compared to GPT-2. We also conduct a thorough analysis of different variants of
model architectures and supervision configurations, suggesting future
directions on applying similar techniques to other problems.
| 2019 | Computation and Language |
Sequence-to-Set Semantic Tagging: End-to-End Multi-label Prediction
using Neural Attention for Complex Query Reformulation and Automated Text
Categorization | Novel contexts may often arise in complex querying scenarios such as in
evidence-based medicine (EBM) involving biomedical literature, that may not
explicitly refer to entities or canonical concept forms occurring in any fact-
or rule-based knowledge source such as an ontology like the UMLS. Moreover,
hidden associations between candidate concepts meaningful in the current
context, may not exist within a single document, but within the collection, via
alternate lexical forms. Therefore, inspired by the recent success of
sequence-to-sequence neural models in delivering the state-of-the-art in a wide
range of NLP tasks, we develop a novel sequence-to-set framework with neural
attention for learning document representations that can effect term transfer
within the corpus, for semantically tagging a large collection of documents. We
demonstrate that our proposed method can be effective in both a supervised
multi-label classification setup for text categorization, as well as in a
unique unsupervised setting with no human-annotated document labels that uses
no external knowledge resources and only corpus-derived term statistics to
drive the training. Further, we show that semi-supervised training using our
architecture on large amounts of unlabeled data can augment performance on the
text categorization task when limited labeled data is available. Our approach to
generating document encodings, which employs our sequence-to-set models for
inference of semantic tags, gives, to the best of our knowledge, the state of the
art both for the unsupervised query expansion task on the TREC CDS 2016 challenge
dataset when evaluated with an Okapi BM25-based document retrieval system, and
over the MLTM baseline (Soleimani et al., 2016) for the supervised and
semi-supervised multi-label prediction tasks on the del.icio.us and Ohsumed
datasets. We will make our code and data publicly available.
| 2019 | Computation and Language |
TENER: Adapting Transformer Encoder for Named Entity Recognition | Bidirectional long short-term memory networks (BiLSTMs) have been widely used
as encoders in models solving the named entity recognition (NER) task. Recently,
the Transformer has been broadly adopted in various Natural Language Processing
(NLP) tasks owing to its parallelism and advantageous performance. Nevertheless,
the performance of the Transformer in NER is not as good as it is in other NLP
tasks. In this paper, we propose TENER, a NER architecture adopting an adapted
Transformer encoder to model character-level and word-level features. By
incorporating direction- and relative-distance-aware attention and un-scaled
attention, we show that a Transformer-like encoder is just as effective for NER
as for other NLP tasks.
| 2019 | Computation and Language |
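A simplified sketch of the two attention adaptations highlighted above: a learned bias indexed by signed relative distance (so direction is preserved) and no $1/\sqrt{d}$ scaling of the logits. It is single-headed and omits TENER's character encoder, so it is an illustration rather than the released implementation.

```python
# A minimal sketch (simplified, not the TENER implementation): relative-distance
# aware, un-scaled self-attention.
import torch
import torch.nn as nn

class RelativeUnscaledAttention(nn.Module):
    def __init__(self, d_model, max_len=512):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(d_model, d_model) for _ in range(3))
        # One learned bias per signed relative distance, so direction is preserved.
        self.rel_bias = nn.Parameter(torch.zeros(2 * max_len - 1))
        self.max_len = max_len

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = torch.matmul(q, k.transpose(-1, -2))          # NOTE: deliberately un-scaled
        pos = torch.arange(seq_len, device=x.device)
        rel = pos[None, :] - pos[:, None] + self.max_len - 1   # map signed distances to indices
        scores = scores + self.rel_bias[rel]                    # direction- and distance-aware bias
        attn = torch.softmax(scores, dim=-1)
        return torch.matmul(attn, v)
```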
Understanding BERT performance in propaganda analysis | In this paper, we describe our system used in the shared task for
fine-grained propaganda analysis at sentence level. Despite the challenging
nature of the task, our pretrained BERT model (team YMJA) fine tuned on the
training dataset provided by the shared task scored 0.62 F1 on the test set and
ranked third among 25 teams who participated in the contest. We present a set
of illustrative experiments to better understand the performance of our BERT
model on this shared task. Further, we explore beyond the given dataset for
false-positive cases that are likely to be produced by our system. We show that,
despite the high performance on the given test set, our system may have a
tendency to classify opinion pieces as propaganda and cannot distinguish
quotations of propaganda speech from actual usage of propaganda techniques.
| 2019 | Computation and Language |
Long-span language modeling for speech recognition | We explore neural language modeling for speech recognition where the context
spans multiple sentences. Rather than encode history beyond the current
sentence using a cache of words or document-level features, we focus our study
on the ability of LSTM and Transformer language models to implicitly learn to
carry over context across sentence boundaries. We introduce a new architecture
that incorporates an attention mechanism into LSTM to combine the benefits of
recurrent and attention architectures. We conduct language modeling and speech
recognition experiments on the publicly available LibriSpeech corpus. We show
that conventional training on a paragraph-level corpus results in significant
reductions in perplexity compared to training on a sentence-level corpus. We
also describe speech recognition experiments using long-span language models in
second-pass re-ranking, and provide insights into the ability of such models to
take advantage of context beyond the current sentence.
| 2019 | Computation and Language |
A Syntax-aware Multi-task Learning Framework for Chinese Semantic Role
Labeling | Semantic role labeling (SRL) aims to identify the predicate-argument
structure of a sentence. Inspired by the strong correlation between syntax and
semantics, previous works pay much attention to improving SRL performance by
exploiting syntactic knowledge, achieving significant results. Pipeline methods
based on automatic syntactic trees and multi-task learning (MTL) approaches
using standard syntactic trees are two common research orientations. In this
paper, we adopt a simple unified span-based model for both span-based and
word-based Chinese SRL as a strong baseline. Besides, we present an MTL
framework that includes the basic SRL module and a dependency parser module.
Different from the commonly used hard parameter sharing strategy in MTL, the
main idea is to extract implicit syntactic representations from the dependency
parser as external inputs for the basic SRL model. Experiments on the
benchmarks of Chinese Proposition Bank 1.0 and CoNLL-2009 Chinese datasets show
that our proposed framework can effectively improve the performance over the
strong baselines. With the external BERT representations, our framework
achieves new state-of-the-art 87.54 and 88.5 F1 scores on the two test data of
the two benchmarks, respectively. In-depth analyses are conducted to gain more
insights into the proposed framework and the effectiveness of syntax.
| 2019 | Computation and Language |
How to Evaluate Word Representations of Informal Domain? | Diverse word representations have surged in most state-of-the-art natural
language processing (NLP) applications. Nevertheless, how to efficiently
evaluate such word embeddings in the informal domain such as Twitter or forums,
remains an ongoing challenge due to the lack of sufficient evaluation datasets.
We derived a large list of variant spelling pairs from UrbanDictionary with the
automatic approaches of weakly-supervised pattern-based bootstrapping and
self-training linear-chain conditional random field (CRF). With these extracted
relation pairs, we improve the odds of eliding the text normalization step of
traditional NLP pipelines and of directly adopting representations of
non-standard words in the informal domain. Our code is available.
| 2019 | Computation and Language |
A Pre-training Based Personalized Dialogue Generation Model with
Persona-sparse Data | Endowing dialogue systems with personas is essential to deliver more
human-like conversations. However, this problem is still far from well explored
due to the difficulties of both embodying personalities in natural languages
and the persona sparsity issue observed in most dialogue corpora. This paper
proposes a pre-training based personalized dialogue model that can generate
coherent responses using persona-sparse dialogue data. In this method, a
pre-trained language model is used to initialize an encoder and decoder, and
personal attribute embeddings are devised to model richer dialogue contexts by
encoding speakers' personas together with dialogue histories. Further, to
incorporate the target persona in the decoding process and to balance its
contribution, an attention routing structure is devised in the decoder to merge
features extracted from the target persona and dialogue contexts using
dynamically predicted weights. Our model can utilize persona-sparse dialogues
in a unified manner during the training process, and can also control the
amount of persona-related features to exhibit during the inference process.
Both automatic and manual evaluations demonstrate that the proposed model
outperforms state-of-the-art methods in generating more coherent and
persona-consistent responses with persona-sparse data.
| 2019 | Computation and Language |
Prediction of Missing Semantic Relations in Lexical-Semantic Network
using Random Forest Classifier | This study focuses on the prediction of six missing semantic relation types
(such as is_a and has_part) between two given nodes in RezoJDM, a French
lexical-semantic network. The output of this prediction is a set of pairs in
which the first entries are semantic relations and the second entries are the
probabilities that such relations exist. Given the nature of the problem, we
adopt a random forest (RF) classifier to tackle it. As the training/test dataset,
we take the existing semantic relations, gathered and validated by crowdsourcing.
We describe how all of the mentioned ideas can be realized by using the node2vec
approach in the feature extraction phase, and we show that this approach leads to
acceptable results.
| 2019 | Computation and Language |
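A small end-to-end sketch of the pipeline described above, assuming node2vec-style node embeddings are already available as NumPy arrays; the edge-feature construction (concatenation plus element-wise product) and the toy data are illustrative, not the study's setup.

```python
# A minimal sketch (feature construction is an assumption): random forest over
# node-pair features derived from node2vec-style embeddings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def edge_features(emb_u, emb_v):
    # A common choice for edge features: concatenation plus element-wise product.
    return np.concatenate([emb_u, emb_v, emb_u * emb_v])

# Toy data: 100 candidate node pairs with 64-dimensional node embeddings and
# labels drawn from 6 relation types (e.g. is_a, has_part, ...).
rng = np.random.default_rng(0)
X = np.stack([edge_features(rng.normal(size=64), rng.normal(size=64)) for _ in range(100)])
y = rng.integers(0, 6, size=100)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict_proba(X[:1]))  # per-relation probabilities for one node pair
```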
A Survey on Why-Type Question Answering Systems | Search engines such as Google, Yahoo and Baidu yield information in the form
of a relevant set of web pages according to the need of the user. Question
Answering Systems reduce the time taken to get an answer, to a query asked in
natural language by providing the one most relevant answer. To the best of our
knowledge, major research on Why-type questions began in the early 2000s, and our
work on Why-type questions can help explore newer avenues for fact-finding and
analysis. The paper presents a survey of Why-type Question Answering Systems,
details their architectures and the processes involved, and suggests further
areas of research.
| 2019 | Computation and Language |
Orthogonal Relation Transforms with Graph Context Modeling for Knowledge
Graph Embedding | Translational distance-based knowledge graph embedding has shown progressive
improvements on the link prediction task, from TransE to the latest
state-of-the-art RotatE. However, N-1, 1-N and N-N predictions still remain
challenging. In this work, we propose a novel translational distance-based
approach for knowledge graph link prediction. The proposed method is two-fold:
first, we extend RotatE from the 2D complex domain to a high-dimensional space
with orthogonal transforms to model relations, for better modeling capacity.
Second, the graph context is explicitly modeled via two directed context
representations. These context representations are used as part of the distance
scoring function to measure the plausibility of the triples during training and
inference. The proposed approach effectively improves prediction accuracy on the
difficult N-1, 1-N and N-N cases of the knowledge graph link prediction task. The
experimental results show that it achieves better performance on two benchmark
datasets compared to the baseline RotatE, especially on the dataset (FB15k-237)
with many high in-degree nodes.
| 2020 | Computation and Language |
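A minimal sketch of a translational scorer with an orthogonal per-relation transform, parameterized here via the matrix exponential of a skew-symmetric matrix; the exact transform, scoring function and graph-context terms of the paper are not reproduced.

```python
# A minimal sketch (the exact score is an assumption, not the paper's formula):
# each relation owns an orthogonal transform applied to the head entity before a
# translational distance is measured.
import torch
import torch.nn as nn

class OrthogonalRelationScorer(nn.Module):
    def __init__(self, n_entities, n_relations, dim=32):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel_trans = nn.Embedding(n_relations, dim)                   # translation part
        self.rel_skew = nn.Parameter(torch.zeros(n_relations, dim, dim))  # generates the rotation

    def relation_matrix(self, r):
        a = self.rel_skew[r]
        skew = a - a.transpose(-1, -2)      # skew-symmetric => exp(skew) is orthogonal
        return torch.matrix_exp(skew)

    def score(self, head, relation, tail):
        h = self.ent(head).unsqueeze(-1)                                  # (batch, dim, 1)
        rotated = torch.matmul(self.relation_matrix(relation), h).squeeze(-1)
        return -torch.norm(rotated + self.rel_trans(relation) - self.ent(tail), dim=-1)
```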
Privacy-Preserving Adversarial Representation Learning in ASR: Reality
or Illusion? | Automatic speech recognition (ASR) is a key technology in many services and
applications. This typically requires user devices to send their speech data to
the cloud for ASR decoding. As the speech signal carries a lot of information
about the speaker, this raises serious privacy concerns. As a solution, an
encoder may reside on each user device which performs local computations to
anonymize the representation. In this paper, we focus on the protection of
speaker identity and study the extent to which users can be recognized based on
the encoded representation of their speech as obtained by a deep
encoder-decoder architecture trained for ASR. Through speaker identification
and verification experiments on the Librispeech corpus with open and closed
sets of speakers, we show that the representations obtained from a standard
architecture still carry a lot of information about speaker identity. We then
propose to use adversarial training to learn representations that perform well
in ASR while hiding speaker identity. Our results demonstrate that adversarial
training dramatically reduces the closed-set classification accuracy, but this
does not translate into increased open-set verification error, and hence not into
increased protection of the speaker identity in practice. We suggest several
possible reasons behind this negative result.
| 2019 | Computation and Language |
Morphological Segmentation Inside-Out | Morphological segmentation has traditionally been modeled with
non-hierarchical models, which yield flat segmentations as output. In many
cases, however, proper morphological analysis requires hierarchical structure
-- especially in the case of derivational morphology. In this work, we
introduce a discriminative, joint model of morphological segmentation along
with the orthographic changes that occur during word formation. To the best of
our knowledge, this is the first attempt to approach discriminative
segmentation with a context-free model. Additionally, we release an annotated
treebank of 7454 English words with constituency parses, encouraging future
research in this area.
| 2021 | Computation and Language |
RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL
Parsers | When translating natural language questions into SQL queries to answer
questions from a database, contemporary semantic parsing models struggle to
generalize to unseen database schemas. The generalization challenge lies in (a)
encoding the database relations in an accessible way for the semantic parser,
and (b) modeling alignment between database columns and their mentions in a
given query. We present a unified framework, based on the relation-aware
self-attention mechanism, to address schema encoding, schema linking, and
feature representation within a text-to-SQL encoder. On the challenging Spider
dataset this framework boosts the exact match accuracy to 57.2%, surpassing its
best counterparts by 8.7% absolute improvement. Further augmented with BERT, it
achieves the new state-of-the-art performance of 65.6% on the Spider
leaderboard. In addition, we observe qualitative improvements in the model's
understanding of schema linking and alignment. Our implementation will be
open-sourced at https://github.com/Microsoft/rat-sql.
| 2021 | Computation and Language |
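A simplified single-head sketch of relation-aware self-attention in the spirit of the framework above: a learned embedding for the relation between every pair of encoder items biases both the attention logits and the values. The relation inventory and the multi-head/text-to-SQL specifics are omitted.

```python
# A minimal sketch (simplified, not the RAT-SQL code): attention biased by
# pairwise relation embeddings, in the style of relative-position attention.
import torch
import torch.nn as nn

class RelationAwareAttention(nn.Module):
    def __init__(self, d_model, n_relation_types):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(d_model, d_model) for _ in range(3))
        self.rel_k = nn.Embedding(n_relation_types, d_model)  # relation bias for keys
        self.rel_v = nn.Embedding(n_relation_types, d_model)  # relation bias for values

    def forward(self, x, relations):
        # x: (batch, n, d_model); relations: (batch, n, n) long tensor of relation ids.
        q, k, v = self.q(x), self.k(x), self.v(x)
        rk, rv = self.rel_k(relations), self.rel_v(relations)   # (batch, n, n, d_model)
        logits = torch.matmul(q, k.transpose(-1, -2))            # content-content term
        logits = logits + torch.einsum("bid,bijd->bij", q, rk)   # content-relation term
        attn = torch.softmax(logits / x.size(-1) ** 0.5, dim=-1)
        out = torch.matmul(attn, v) + torch.einsum("bij,bijd->bid", attn, rv)
        return out
```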
CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB | We show that margin-based bitext mining in a multilingual sentence space can
be applied to monolingual corpora of billions of sentences. We are using ten
snapshots of a curated common crawl corpus (Wenzek et al., 2019) totalling 32.7
billion unique sentences. Using one unified approach for 38 languages, we were
able to mine 4.5 billion parallel sentences, out of which 661 million are
aligned with English. Twenty language pairs have more than 30 million parallel
sentences, 112 have more than 10 million, and most have more than one million, including
direct alignments between many European or Asian languages.
To evaluate the quality of the mined bitexts, we train NMT systems for most
of the language pairs and evaluate them on TED, WMT and WAT test sets. Using
our mined bitexts only and no human translated parallel data, we achieve a new
state-of-the-art for a single system on the WMT'19 test set for translation
between English and German, Russian and Chinese, as well as German/French. In
particular, our English/German system outperforms the best single one by close
to 4 BLEU points and is almost on par with the best WMT'19 evaluation system, which
uses system combination and back-translation. We also achieve excellent results
for distant language pairs like Russian/Japanese, outperforming the best
submission at the 2019 Workshop on Asian Translation (WAT).
| 2020 | Computation and Language |
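A small sketch of margin-based (ratio) scoring for bitext mining as used in this line of work; it computes the full cosine matrix in memory for clarity, whereas mining at the scale described above would rely on approximate nearest-neighbor indexes, which are omitted here.

```python
# A minimal sketch of ratio-margin scoring: a candidate pair is scored by its
# cosine similarity divided by the average similarity to each side's k nearest
# neighbors on the other side.
import numpy as np

def cosine_matrix(A, B):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def margin_scores(src_emb, tgt_emb, k=4):
    sims = cosine_matrix(src_emb, tgt_emb)                  # (n_src, n_tgt)
    knn_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)    # per source sentence
    knn_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)    # per target sentence
    denom = (knn_src[:, None] + knn_tgt[None, :]) / 2.0
    return sims / denom                                      # higher ratio = better pair
```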
Character-based NMT with Transformer | Character-based translation has several appealing advantages, but its
performance is in general worse than a carefully tuned BPE baseline. In this
paper we study the impact of character-based input and output with the
Transformer architecture. In particular, our experiments on EN-DE show that
character-based Transformer models are more robust than their BPE counterpart,
both when translating noisy text, and when translating text from a different
domain. To obtain comparable BLEU scores in clean, in-domain data and close the
gap with BPE-based models we use known techniques to train deeper Transformer
models.
| 2019 | Computation and Language |
Learning Sparse Sharing Architectures for Multiple Tasks | Most existing deep multi-task learning models are based on parameter sharing,
such as hard sharing, hierarchical sharing, and soft sharing. Choosing a
suitable sharing mechanism depends on the relations among the tasks, which is
not easy since it is difficult to understand the underlying shared factors
among these tasks. In this paper, we propose a novel parameter sharing
mechanism, named \emph{Sparse Sharing}. Given multiple tasks, our approach
automatically finds a sparse sharing structure. We start with an
over-parameterized base network, from which each task extracts a subnetwork.
The subnetworks of multiple tasks are partially overlapped and trained in
parallel. We show that both hard sharing and hierarchical sharing can be
formulated as particular instances of the sparse sharing framework. We conduct
extensive experiments on three sequence labeling tasks. Compared with
single-task models and three typical multi-task learning baselines, our
proposed approach achieves consistent improvement while requiring fewer
parameters.
| 2019 | Computation and Language |
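A toy sketch of the sparse-sharing idea above: each task owns a binary mask over one shared, over-parameterized layer, so tasks train partially overlapping subnetworks. Here the masks are random placeholders; in the paper they would come from a pruning-style extraction procedure.

```python
# A minimal sketch (random masks are placeholders for a learned extraction step).
import torch
import torch.nn as nn

class SparseSharedLinear(nn.Module):
    def __init__(self, in_dim, out_dim, task_names, keep_prob=0.5):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)  # shared, over-parameterized layer
        # Fixed binary masks per task; tasks overlap wherever their masks agree.
        self.masks = {
            t: (torch.rand(out_dim, in_dim) < keep_prob).float() for t in task_names
        }

    def forward(self, x, task):
        masked_weight = self.base.weight * self.masks[task]  # task-specific subnetwork
        return nn.functional.linear(x, masked_weight, self.base.bias)

layer = SparseSharedLinear(128, 64, task_names=["pos", "ner", "chunk"])
out = layer(torch.randn(8, 128), task="ner")
```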
Improving Robustness of Task Oriented Dialog Systems | Task oriented language understanding in dialog systems is often modeled using
intents (task of a query) and slots (parameters for that task). Intent
detection and slot tagging are, in turn, modeled using sentence classification
and word tagging techniques respectively. Similar to adversarial attack
problems with computer vision models discussed in existing literature, these
intent-slot tagging models are often over-sensitive to small variations in
input -- predicting different and often incorrect labels when small changes are
made to a query, thus reducing their accuracy and reliability. However,
evaluating a model's robustness to these changes is harder for language since
words are discrete and an automated change (e.g. adding `noise') to a query
sometimes changes the meaning and thus labels of a query. In this paper, we
first describe how to create an adversarial test set to measure the robustness
of these models. Furthermore, we introduce and adapt adversarial training
methods as well as data augmentation using back-translation to mitigate these
issues. Our experiments show that both techniques improve the robustness of the
system substantially and can be combined to yield the best results.
| 2019 | Computation and Language |
Creating Auxiliary Representations from Charge Definitions for Criminal
Charge Prediction | Charge prediction, determining charges for criminal cases by analyzing the
textual fact descriptions, is a promising technology in legal assistant
systems. In practice, the fact descriptions could exhibit a significant
intra-class variation due to factors like non-normative use of language, which
makes the prediction task very challenging, especially for charge classes with
too few samples to cover the expression variation. In this work, we explore to
use the charge definitions from criminal law to alleviate this issue. The key
idea is that the expressions in a fact description should have corresponding
formal terms in charge definitions, and those terms are shared across classes
and could account for the diversity in the fact descriptions. Thus, we propose
to create auxiliary fact representations from charge definitions to augment
fact descriptions representation. The generated auxiliary representations are
created through the interaction of the fact description with the relevant charge
definitions and the terms in those definitions, using an integrated sentence- and
word-level attention scheme. Experimental results on two datasets show that our
model achieves significant improvements over baselines, especially for classes
with few samples.
| 2019 | Computation and Language |
Robustness to Capitalization Errors in Named Entity Recognition | Robustness to capitalization errors is a highly desirable characteristic of
named entity recognizers, yet we find standard models for the task are
surprisingly brittle to such noise. Existing methods to improve robustness to
the noise completely discard the given orthographic information, which
significantly degrades their performance on well-formed text. We propose a
simple alternative approach based on data augmentation, which allows the model
to \emph{learn} to utilize or ignore orthographic information depending on its
usefulness in the context. It achieves competitive robustness to capitalization
errors while making negligible compromise to its performance on well-formed
text and significantly improving generalization power on noisy user-generated
text. Our experiments clearly and consistently validate our claim across
different types of machine learning models, languages, and dataset sizes.
| 2019 | Computation and Language |
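A minimal sketch of the data-augmentation idea above: pair each NER training sentence with a case-perturbed copy so the model learns when capitalization can be trusted. The perturbation mix and probabilities are assumptions.

```python
# A minimal sketch (the sampling scheme is an assumption, not the paper's recipe).
import random

def augment_casing(tokens, labels, p_lower=0.5, p_upper=0.25):
    """Return the original example plus one case-perturbed copy with the same labels."""
    r = random.random()
    if r < p_lower:
        noisy = [t.lower() for t in tokens]       # simulate all-lowercase input
    elif r < p_lower + p_upper:
        noisy = [t.upper() for t in tokens]       # simulate all-caps input
    else:
        noisy = [t.capitalize() for t in tokens]  # simulate title-cased input
    return [(tokens, labels), (noisy, labels)]

example = augment_casing(["Barack", "Obama", "visited", "Paris"],
                         ["B-PER", "I-PER", "O", "B-LOC"])
```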
LexiPers: An ontology based sentiment lexicon for Persian | Sentiment analysis refers to the use of natural language processing to
identify and extract subjective information from textual resources. One
approach for sentiment extraction is using a sentiment lexicon. A sentiment
lexicon is a set of words associated with the sentiment orientation that they
express. In this paper, we describe the process of generating a general purpose
sentiment lexicon for Persian. A new graph-based method is introduced for seed
selection and expansion based on an ontology. Sentiment lexicon generation is
then mapped to a document classification problem. We used the K-nearest
neighbors and nearest centroid methods for classification. These classifiers
have been evaluated based on a set of hand labeled synsets. The final sentiment
lexicon has been generated by the best classifier. The results show an
acceptable performance in terms of accuracy and F-measure in the generated
sentiment lexicon.
| 2019 | Computation and Language |
A Stable Variational Autoencoder for Text Modelling | Variational Autoencoder (VAE) is a powerful method for learning
representations of high-dimensional data. However, VAEs can suffer from an
issue known as latent variable collapse (or KL loss vanishing), where the
posterior collapses to the prior and the model will ignore the latent codes in
generative tasks. Such an issue is particularly prevalent when employing
VAE-RNN architectures for text modelling (Bowman et al., 2016). In this paper,
we present a simple architecture called holistic regularisation VAE (HR-VAE),
which can effectively avoid latent variable collapse. Compared to existing
VAE-RNN architectures, we show that our model achieves a much more stable
training process and generates text of significantly better quality.
| 2019 | Computation and Language |
Neural Duplicate Question Detection without Labeled Training Data | Supervised training of neural models for duplicate question detection in
community Question Answering (cQA) requires large amounts of labeled question
pairs, which are costly to obtain. To minimize this cost, recent works thus
often used alternative methods, e.g., adversarial domain adaptation. In this
work, we propose two novel methods: (1) the automatic generation of duplicate
questions, and (2) weak supervision using the title and body of a question. We
show that both can achieve improved performances even though they do not
require any labeled data. We provide comprehensive comparisons of popular
training strategies, which provide important insights into how to best train
models in different scenarios. We show that our proposed approaches are more
effective in many cases because they can utilize larger amounts of unlabeled
data from cQA forums. Finally, we also show that our proposed approach for weak
supervision with question title and body information is also an effective
method to train cQA answer selection models without direct answer supervision.
| 2020 | Computation and Language |
Adapting and evaluating a deep learning language model for clinical
why-question answering | Objectives: To adapt and evaluate a deep learning language model for
answering why-questions based on patient-specific clinical text. Materials and
Methods: Bidirectional encoder representations from transformers (BERT) models
were trained with varying data sources to perform SQuAD 2.0 style why-question
answering (why-QA) on clinical notes. The evaluation focused on: 1) comparing
the merits of different training data, and 2) error analysis. Results: The best
model achieved an accuracy of 0.707 (or 0.760 by partial match). Customizing the
training toward clinical language increased accuracy by 6%.
Discussion: The error analysis suggested that the model did not really perform
deep reasoning and that clinical why-QA might warrant more sophisticated
solutions. Conclusion: The BERT model achieved moderate accuracy in clinical
why-QA and should benefit from the rapidly evolving technology. Despite the
identified limitations, it could serve as a competent proxy for question-driven
clinical information extraction.
| 2020 | Computation and Language |
Prevalence of code mixing in semi-formal patient communication in low
resource languages of South Africa | In this paper we address the problem of code-mixing in resource-poor language
settings. We examine data consisting of 182k unique questions generated by
users of the MomConnect helpdesk, part of a national scale public health
platform in South Africa. We show evidence of code-switching at the level of
approximately 10% within this dataset -- a level that is likely to pose
challenges for future services. We use a natural language processing library
(Polyglot) that supports detection of 196 languages and attempt to evaluate its
performance at identifying English, isiZulu and code-mixed questions.
| 2019 | Computation and Language |
Relative contributions of Shakespeare and Fletcher in Henry VIII: An
Analysis Based on Most Frequent Words and Most Frequent Rhythmic Patterns | The versified play Henry VIII is nowadays widely recognized to be a
collaborative work not written solely by William Shakespeare. We employ
combined analysis of vocabulary and versification together with machine
learning techniques to determine which authors also took part in the writing of
the play and what were their relative contributions. Unlike most previous
studies, we go beyond the attribution of particular scenes and use the rolling
attribution approach to determine the probabilities of authorship of pieces of
texts, without respecting scene boundaries. Our results strongly support the
canonical division of the play between William Shakespeare and John Fletcher
proposed by James Spedding, but also bring new evidence supporting the
modifications proposed later by Thomas Merriam.
| 2020 | Computation and Language |
Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text | Modeling semantic plausibility requires commonsense knowledge about the world
and has been used as a testbed for exploring various knowledge representations.
Previous work has focused specifically on modeling physical plausibility and
shown that distributional methods fail when tested in a supervised setting. At
the same time, distributional models, namely large pretrained language models,
have led to improved results for many natural language understanding tasks. In
this work, we show that these pretrained language models are in fact effective
at modeling physical plausibility in the supervised setting. We therefore
present the more difficult problem of learning to model physical plausibility
directly from text. We create a training set by extracting attested events from
a large corpus, and we provide a baseline for training on these attested events
in a self-supervised manner and testing on a physical plausibility task. We
believe results could be further improved by injecting explicit commonsense
knowledge into a distributional model.
| 2,019 | Computation and Language |
Mark my Word: A Sequence-to-Sequence Approach to Definition Modeling | Defining words in a textual context is a useful task both for practical
purposes and for gaining insight into distributed word representations.
Building on the distributional hypothesis, we argue here that the most natural
formalization of definition modeling is to treat it as a sequence-to-sequence
task, rather than a word-to-sequence task: given an input sequence with a
highlighted word, generate a contextually appropriate definition for it. We
implement this approach in a Transformer-based sequence-to-sequence model. Our
proposal allows contextualization and definition generation to be trained in an
end-to-end fashion, which is a conceptual improvement over earlier works. We
achieve state-of-the-art results both in contextual and non-contextual
definition modeling.
| 2,019 | Computation and Language |
What do you mean, BERT? Assessing BERT as a Distributional Semantics
Model | Contextualized word embeddings, i.e. vector representations for words in
context, are naturally seen as an extension of previous noncontextual
distributional semantic models. In this work, we focus on BERT, a deep neural
network that produces contextualized embeddings and has set the
state-of-the-art in several semantic tasks, and study the semantic coherence of
its embedding space. While showing a tendency towards coherence, BERT does not
fully live up to the natural expectations for a semantic vector space. In
particular, we find that the position of the sentence in which a word occurs,
despite carrying no meaning of its own, leaves a noticeable trace on the word
embeddings and disturbs similarity relationships.
| 2,020 | Computation and Language |
FAQ-based Question Answering via Knowledge Anchors | Question answering (QA) aims to understand questions and find appropriate
answers. In real-world QA systems, Frequently Asked Question (FAQ) based QA is
usually a practical and effective solution, especially for some complicated
questions (e.g., How and Why). Recent years have witnessed the great successes
of knowledge graphs (KGs) in KBQA systems, while there are still few works
focusing on making full use of KGs in FAQ-based QA. In this paper, we propose a
novel Knowledge Anchor based Question Answering (KAQA) framework for FAQ-based
QA to better understand questions and retrieve more appropriate answers. More
specifically, KAQA mainly consists of three modules: knowledge graph
construction, query anchoring and query-document matching. We consider entities
and triples of KGs in texts as knowledge anchors to precisely capture the core
semantics, which brings in higher precision and better interpretability. The
multi-channel matching strategy also enables most sentence matching models to
be flexibly plugged into our KAQA framework to fit different real-world
computation limitations. In experiments, we evaluate our models on both offline
and online query-document matching tasks on a real-world FAQ-based QA system in
WeChat Search, with detailed analysis, ablation tests and case studies. The
significant improvements confirm the effectiveness and robustness of the KAQA
framework in real-world FAQ-based QA.
| 2,020 | Computation and Language |
Ethanos: Lightweight Bootstrapping for Ethereum | As ethereum blockchain has become popular, the number of users and
transactions has skyrocketed, causing an explosive increase of its data size.
As a result, ordinary clients using PCs or smartphones cannot easily bootstrap
as a full node, but rely on other full nodes such as the miners to run or
verify transactions. This may affect the security of ethereum, so light
bootstrapping techniques such as fast sync have been proposed to download only
parts of the full data, yet the space overhead is still too high. One of the
biggest space overheads that cannot easily be reduced comes from saving the
state of all accounts in the block's state trie. Fortunately, we found that
more than 90% of accounts are inactive and old transactions are hard to
manipulate. Based on these observations, this paper proposes a novel
optimization technique called ethanos that can reduce bootstrapping cost by
sweeping inactive accounts periodically and by not downloading old
transactions. If an inactive account becomes active, ethanos restores its state
by running a restoration transaction. Ethanos also gives incentives for
archive nodes to maintain the old transactions for possible re-verification. We
implemented ethanos by instrumenting the go-ethereum (geth) client and
evaluated it with 113 million real transactions from 14 million accounts
between the 7M-th and 8M-th blocks in ethereum. Our experimental results show that
ethanos can reduce the size of the account state by half, which, if combined
with removing old transactions, may reduce the storage size for bootstrapping
to around 1GB. This would be reasonable enough for ordinary clients to
bootstrap on their personal devices.
| 2,019 | Computation and Language |
Contextual Recurrent Units for Cloze-style Reading Comprehension | Recurrent Neural Networks (RNNs) are known as powerful models for handling
sequential data and are widely utilized in various natural language
processing tasks. In this paper, we propose Contextual Recurrent Units (CRU)
for enhancing local contextual representations in neural networks. The proposed
CRU injects convolutional neural networks (CNNs) into the recurrent units to
enhance the ability to model the local context and reduce word ambiguities
even in bi-directional RNNs. We tested our CRU model on sentence-level and
document-level modeling NLP tasks: sentiment classification and reading
comprehension. Experimental results show that the proposed CRU model could give
significant improvements over traditional CNN or RNN models, including their
bidirectional variants, as well as over various state-of-the-art systems on both
tasks, suggesting promising extensibility to other NLP tasks.
| 2,019 | Computation and Language |
Training a code-switching language model with monolingual data | A lack of code-switching data complicates the training of code-switching (CS)
language models. We propose an approach to train such CS language models on
monolingual data only. By constraining and normalizing the output projection
matrix in RNN-based language models, we bring embeddings of different languages
closer to each other. Numerical and visualization results show that the
proposed approaches remarkably improve the performance of CS language models
trained on monolingual data. The proposed approaches are comparable to, or even
better than, training CS language models with artificially generated CS data. We
additionally use unsupervised bilingual word translation to analyze whether
semantically equivalent words in different languages are mapped together.
| 2,020 | Computation and Language |
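The approach above hinges on constraining and normalizing the RNN language
model's output projection matrix so that embeddings of different languages
share one space. The abstract does not spell out the exact constraints, so the
PyTorch sketch below shows only one plausible reading: an output layer with
L2-normalized projection rows, giving cosine-similarity logits scaled by a
learnable temperature. The class name and the temperature parameterization are
illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedOutputProjection(nn.Module):
    """Output layer with L2-normalized projection rows (a hypothetical sketch
    of 'constraining and normalizing the output projection matrix')."""

    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(vocab_size, hidden_size) * 0.02)
        self.log_temperature = nn.Parameter(torch.zeros(()))  # assumed scaling

    def forward(self, hidden):                    # hidden: (batch, hidden_size)
        w = F.normalize(self.weight, dim=-1)      # unit-norm projection rows
        h = F.normalize(hidden, dim=-1)
        logits = (h @ w.t()) * self.log_temperature.exp()
        return logits                             # feed into cross-entropy as usual
```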
Instance-based Transfer Learning for Multilingual Deep Retrieval | We focus on the problem of search in the multilingual setting. Examining the
problems of next-sentence prediction and inverse cloze, we show that at large
scale, instance-based transfer learning is surprisingly effective in the
multilingual setting, leading to positive transfer on all of the 35 target
languages and two tasks tested. We analyze this improvement and argue that the
most natural explanation, namely direct vocabulary overlap between languages,
only partially explains the performance gains: in fact, we demonstrate
target-language improvement can occur after adding data from an auxiliary
language even with no vocabulary in common with the target. This surprising
result is due to the effect of transitive vocabulary overlaps between pairs of
auxiliary and target languages.
| 2,021 | Computation and Language |
Learning Multi-Sense Word Distributions using Approximate
Kullback-Leibler Divergence | Learning word representations has garnered greater attention in the recent
past due to its diverse text applications. Word embeddings encapsulate the
syntactic and semantic regularities of sentences. Modelling word embeddings as
multi-sense Gaussian mixture distributions additionally captures the
uncertainty and polysemy of words. We propose to learn the Gaussian mixture
representation of words using a Kullback-Leibler (KL) divergence based
objective function. The KL divergence based energy function provides a better
distance metric which can effectively capture entailment and distributional
similarity among words. Since the KL divergence between Gaussian mixtures is
intractable, we use an approximation of the KL divergence between them. We
perform qualitative and quantitative experiments on benchmark word similarity
and entailment datasets which demonstrate the effectiveness of the proposed
approach.
| 2,019 | Computation and Language |
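The abstract above relies on an approximation of the KL divergence between
Gaussian mixtures but does not name it. One widely used closed-form choice is
the variational approximation of Hershey and Olsen; the NumPy sketch below
implements that approximation for diagonal-covariance mixtures as an
illustration, not necessarily the paper's exact objective.

```python
import numpy as np

def kl_diag_gaussians(mu1, var1, mu2, var2):
    """Exact KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) )."""
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def approx_kl_mixtures(w_f, mu_f, var_f, w_g, mu_g, var_g):
    """Variational approximation to KL(f || g) for Gaussian mixtures f and g,
    each given as lists of weights, mean vectors, and variance vectors."""
    kl = 0.0
    for wa, ma, va in zip(w_f, mu_f, var_f):
        num = sum(wb * np.exp(-kl_diag_gaussians(ma, va, mb, vb))
                  for wb, mb, vb in zip(w_f, mu_f, var_f))
        den = sum(wb * np.exp(-kl_diag_gaussians(ma, va, mb, vb))
                  for wb, mb, vb in zip(w_g, mu_g, var_g))
        kl += wa * np.log(num / den)
    return kl

# Two toy 1-D mixtures; the approximation is exact when each has one component.
f = ([1.0], [np.array([0.0])], [np.array([1.0])])
g = ([1.0], [np.array([1.0])], [np.array([2.0])])
print(approx_kl_mixtures(*f, *g))  # equals kl_diag_gaussians for single components
```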
Towards Supervised Extractive Text Summarization via RNN-based Sequence
Classification | This article briefly explains our submitted approach to the DocEng'19
competition on extractive summarization. We implemented a recurrent neural
network based model that learns to classify whether an article's sentence
belongs to the corresponding extractive summary or not. We bypass the lack of
large annotated news corpora for extractive summarization by generating
extractive summaries from abstractive ones, which are available from the CNN
corpus.
| 2,019 | Computation and Language |
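The label-generation step above (turning abstractive summaries into extractive
training labels) is not detailed in the abstract. A common heuristic, shown
below purely as a hypothetical sketch, greedily marks the article sentences
that add the most unigram coverage of the abstractive summary; the tokenizer
and the sentence budget are assumptions.

```python
import re

def tokenize(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def extractive_labels(article_sentences, abstractive_summary, max_sentences=3):
    """Greedy labeling: repeatedly pick the sentence that covers the most
    not-yet-covered summary words (a hypothetical heuristic)."""
    target = tokenize(abstractive_summary)
    covered, selected = set(), set()
    for _ in range(max_sentences):
        gains = {i: len((tokenize(s) & target) - covered)
                 for i, s in enumerate(article_sentences) if i not in selected}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] == 0:      # no remaining sentence adds new summary words
            break
        selected.add(best)
        covered |= tokenize(article_sentences[best]) & target
    return [int(i in selected) for i in range(len(article_sentences))]
```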
KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language
Representation | Pre-trained language representation models (PLMs) struggle to capture factual
knowledge from text. In contrast, knowledge embedding (KE) methods can
effectively represent the relational facts in knowledge graphs (KGs) with
informative entity embeddings, but conventional KE models cannot take full
advantage of the abundant textual information. In this paper, we propose a
unified model for Knowledge Embedding and Pre-trained LanguagE Representation
(KEPLER), which can not only better integrate factual knowledge into PLMs but
also produce effective text-enhanced KE with the strong PLMs. In KEPLER, we
encode textual entity descriptions with a PLM as their embeddings, and then
jointly optimize the KE and language modeling objectives. Experimental results
show that KEPLER achieves state-of-the-art performances on various NLP tasks,
and also works remarkably well as an inductive KE model on KG link prediction.
Furthermore, for pre-training and evaluating KEPLER, we construct Wikidata5M, a
large-scale KG dataset with aligned entity descriptions, and benchmark
state-of-the-art KE methods on it. It shall serve as a new KE benchmark and
facilitate the research on large KG, inductive KE, and KG with text. The source
code can be obtained from https://github.com/THU-KEG/KEPLER.
| 2,020 | Computation and Language |
Unsupervised Domain Adaptation on Reading Comprehension | Reading comprehension (RC) has been studied in a variety of datasets with the
boosted performance brought by deep neural networks. However, the
generalization capability of these models across different domains remains
unclear. To alleviate this issue, we investigate unsupervised
domain adaptation on RC, wherein a model is trained on a labeled source domain
and applied to a target domain with only unlabeled samples. We first
show that even with the powerful BERT contextual representation, the
performance is still unsatisfactory when the model trained on one dataset is
directly applied to another target dataset. To solve this, we provide a novel
conditional adversarial self-training method (CASe). Specifically, our approach
leverages a BERT model fine-tuned on the source dataset along with confidence
filtering to generate reliable pseudo-labeled samples in the target
domain for self-training. On the other hand, it further reduces domain
distribution discrepancy through conditional adversarial learning across
domains. Extensive experiments show our approach achieves comparable accuracy
to supervised models on multiple large-scale benchmark datasets.
| 2,020 | Computation and Language |
Biomedical Evidence Generation Engine | With the rapid development of precision medicine, a large amount of health
data (such as electronic health records, gene sequencing, medical images, etc.)
has been produced. This encourages growing interest in data-driven insight
discovery from these data. Verifying the derived insights against the
biomedical literature is a reasonable approach. However, manual verification is
inefficient and not scalable, so an intelligent technique is necessary to solve
this problem. In this paper, we propose the task of biomedical evidence
generation, which is novel and distinct from existing NLP tasks.
Furthermore, we developed a biomedical evidence generation engine for this task
with a pipeline of three components: a literature retrieval module,
a skeleton information identification module, and a text summarization module.
| 2,019 | Computation and Language |
t-SS3: a text classifier with dynamic n-grams for early risk detection
over text streams | A recently introduced classifier, called SS3, has been shown to be well suited to
deal with early risk detection (ERD) problems on text streams. It obtained
state-of-the-art performance on early depression and anorexia detection on
Reddit in the CLEF's eRisk open tasks. SS3 was created to deal with ERD
problems naturally since: it supports incremental training and classification
over text streams, and it can visually explain its rationale. However, SS3
processes the input using a bag-of-words model, lacking the ability to recognize
important word sequences. This could negatively affect the
classification performance and also reduce the descriptiveness of visual
explanations. In the standard document classification field, it is very common
to use word n-grams to try to overcome some of these limitations.
Unfortunately, when working with text streams, using n-grams is not trivial
since the system must learn and recognize which n-grams are important "on the
fly". This paper introduces t-SS3, an extension of SS3 that allows it to
recognize useful patterns over text streams dynamically. We evaluated our model
in the eRisk 2017 and 2018 tasks on early depression and anorexia detection.
Experimental results suggest that t-SS3 is able to improve both current results
and the richness of visual explanations.
| 2,020 | Computation and Language |
CCAligned: A Massive Collection of Cross-Lingual Web-Document Pairs | Cross-lingual document alignment aims to identify pairs of documents in two
distinct languages that are of comparable content or translations of each
other. In this paper, we exploit the signals embedded in URLs to label web
documents at scale with an average precision of 94.5% across different language
pairs. We mine sixty-eight snapshots of the Common Crawl corpus and identify
web document pairs that are translations of each other. We release a new web
dataset consisting of over 392 million URL pairs from Common Crawl covering
documents in 8144 language pairs of which 137 pairs include English. In
addition to curating this massive dataset, we introduce baseline methods that
leverage cross-lingual representations to identify aligned documents based on
their textual content. Finally, we demonstrate the value of this parallel
documents dataset through a downstream task of mining parallel sentences and
measuring the quality of machine translations from models trained on this mined
data. Our objective in releasing this dataset is to foster new research in
cross-lingual NLP across a variety of low, medium, and high-resource languages.
| 2,020 | Computation and Language |
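The URL-based labeling described above uses a rule set that is more elaborate
than what can be shown here; the snippet below is a simplified, hypothetical
illustration of the core idea: strip language markers from a URL so that pages
that are translations of each other collapse to the same key. The language-code
list and the query-parameter names are assumptions.

```python
from urllib.parse import urlsplit, parse_qsl

LANG_CODES = {"en", "fr", "de", "es", "zh", "ar"}          # illustrative subset

def strip_language_signal(url):
    """Map a URL to a language-agnostic key (simplified, hypothetical rules)."""
    parts = urlsplit(url.lower())
    path = "/".join(seg for seg in parts.path.split("/") if seg not in LANG_CODES)
    query = "&".join(f"{k}={v}" for k, v in parse_qsl(parts.query)
                     if k not in {"lang", "locale"} and v not in LANG_CODES)
    return parts.netloc + path + (f"?{query}" if query else "")

# Two URLs differing only in a language marker map to the same key:
assert strip_language_signal("https://example.com/en/about") == \
       strip_language_signal("https://example.com/fr/about")
```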
RNN-Test: Towards Adversarial Testing for Recurrent Neural Network
Systems | While massive efforts have been devoted to adversarial testing of
convolutional neural networks (CNN), testing for recurrent neural networks
(RNN) is still limited, leaving threats unaddressed in many sequential application
domains. In this paper, we propose RNN-Test, an adversarial testing framework
for RNN systems, focusing on the main sequential domains, not only
classification tasks. First, we design a novel search methodology customized
for RNN models by maximizing the inconsistency of RNN states to produce
adversarial inputs. Next, we introduce two state-based coverage metrics
according to the distinctive structure of RNNs to explore more inference
logics. Finally, RNN-Test solves the joint optimization problem to maximize
state inconsistency and state coverage, and crafts adversarial inputs for
various tasks of different kinds of inputs.
For evaluations, we apply RNN-Test on three sequential models of common RNN
structures. On the tested models, the RNN-Test approach is demonstrated to be
competitive in generating adversarial inputs, outperforming FGSM-based and
DLFuzz-based methods by reducing model performance more sharply, with a 2.78% to
32.5% higher success (or generation) rate. RNN-Test also achieves a 52.65%
to 66.45% higher adversary rate on the MNIST-LSTM model than the related work testRNN.
Compared with neuron coverage, the proposed state coverage metrics used as
guidance excel, with a 4.17% to 97.22% higher success (or generation) rate.
| 2,021 | Computation and Language |
Syntax-Infused Transformer and BERT models for Machine Translation and
Natural Language Understanding | Attention-based models have shown significant improvement over traditional
algorithms in several NLP tasks. The Transformer, for instance, is an
illustrative example that generates abstract representations of tokens inputted
to an encoder based on their relationships to all tokens in a sequence. Recent
studies have shown that although such models are capable of learning syntactic
features purely by seeing examples, explicitly feeding this information to deep
learning models can significantly enhance their performance. Leveraging
syntactic information like part of speech (POS) may be particularly beneficial
in limited training data settings for complex models such as the Transformer.
We show that the syntax-infused Transformer with multiple features achieves an
improvement of 0.7 BLEU when trained on the full WMT 14 English to German
translation dataset and a maximum improvement of 1.99 BLEU points when trained
on a fraction of the dataset. In addition, we find that the incorporation of
syntax into BERT fine-tuning outperforms baseline on a number of downstream
tasks from the GLUE benchmark.
| 2,019 | Computation and Language |
Enhanced Meta-Learning for Cross-lingual Named Entity Recognition with
Minimal Resources | For languages with no annotated resources, transferring knowledge from
rich-resource languages is an effective solution for named entity recognition
(NER). While all existing methods directly transfer a source-learned model
to a target language, in this paper, we propose to fine-tune the learned model
with a few similar examples given a test case, which could benefit the
prediction by leveraging the structural and semantic information conveyed in
such similar examples. To this end, we present a meta-learning algorithm to
find a good model parameter initialization that could fast adapt to the given
test case and propose to construct multiple pseudo-NER tasks for meta-training
by computing sentence similarities. To further improve the model's
generalization ability across different languages, we introduce a masking
scheme and augment the loss function with an additional maximum term during
meta-training. We conduct extensive experiments on cross-lingual named entity
recognition with minimal resources over five target languages. The results show
that our approach significantly outperforms existing state-of-the-art methods
across the board.
| 2,020 | Computation and Language |
Unsupervised Pre-training for Natural Language Generation: A Literature
Review | Recently, unsupervised pre-training is gaining increasing popularity in the
realm of computational linguistics, thanks to its surprising success in
advancing natural language understanding (NLU) and the potential to effectively
exploit large-scale unlabelled corpus. However, regardless of the success in
NLU, the power of unsupervised pre-training is only partially excavated when it
comes to natural language generation (NLG). The major obstacle stems from an
idiosyncratic nature of NLG: Texts are usually generated based on certain
context, which may vary with the target applications. As a result, it is
intractable to design a universal architecture for pre-training as in NLU
scenarios. Moreover, retaining the knowledge learned from pre-training when
learning on the target task is also a non-trivial problem. This review
summarizes the recent efforts to enhance NLG systems with unsupervised
pre-training, with a special focus on the methods to catalyse the integration
of pre-trained models into downstream tasks. They are classified into
architecture-based methods and strategy-based methods, based on their way of
handling the above obstacle. Discussions are also provided to give further
insights into the relationship between these two lines of work, some
informative empirical phenomena, as well as some possible directions to which
future work can be devoted.
| 2,019 | Computation and Language |
Word-level Lexical Normalisation using Context-Dependent Embeddings | Lexical normalisation (LN) is the process of correcting each word in a
dataset to its canonical form so that it may be more easily and more accurately
analysed. Most lexical normalisation systems operate at the character-level,
while word-level models are seldom used. Recent language models offer solutions
to the drawbacks of word-level LN models, yet, to the best of our knowledge, no
research has investigated their effectiveness on LN. In this paper we introduce
a word-level GRU-based LN model and investigate the effectiveness of recent
embedding techniques on word-level LN. Our results show that our GRU-based
word-level model produces better results than character-level models, and
outperforms existing deep-learning based LN techniques on Twitter data. We also
find that randomly-initialised embeddings are capable of outperforming
pre-trained embedding models in certain scenarios. Finally, we release a
substantial lexical normalisation dataset to the community.
| 2,019 | Computation and Language |
MML: Maximal Multiverse Learning for Robust Fine-Tuning of Language
Models | Recent state-of-the-art language models utilize a two-phase training
procedure comprised of (i) unsupervised pre-training on unlabeled text, and
(ii) fine-tuning for a specific supervised task. More recently, many studies
have been focused on trying to improve these models by enhancing the
pre-training phase, either via better choice of hyperparameters or by
leveraging an improved formulation. However, the pre-training phase is
computationally expensive and often done on private datasets. In this work, we
present a method that leverages BERT's fine-tuning phase to its fullest, by
applying an extensive number of parallel classifier heads, which are enforced
to be orthogonal, while adaptively eliminating the weaker heads during
training. Our method allows the model to converge to an optimal number of
parallel classifiers, depending on the given dataset at hand.
We conduct extensive inter- and intra-dataset evaluations, showing that
our method improves the robustness of BERT, sometimes leading to a +9% gain in
accuracy. These results highlight the importance of a proper fine-tuning
procedure, especially for relatively smaller-sized datasets. Our code is
attached as supplementary and our models will be made completely public.
| 2,019 | Computation and Language |
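The abstract above describes many parallel classifier heads that are enforced
to be orthogonal, with weaker heads eliminated adaptively during training. The
PyTorch sketch below shows only the parallel heads plus one plausible
orthogonality regularizer (penalizing off-diagonal entries of the Gram matrix
of flattened head weights); the adaptive elimination schedule and all
hyperparameters are not reproduced, and the class name is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiverseHeads(nn.Module):
    """K parallel classification heads over a shared [CLS] representation."""

    def __init__(self, hidden_size, num_labels, num_heads=8):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_size, num_labels) for _ in range(num_heads)])

    def forward(self, cls_embedding):              # (batch, hidden_size)
        # Stack per-head logits: (num_heads, batch, num_labels)
        return torch.stack([head(cls_embedding) for head in self.heads])

    def orthogonality_penalty(self):
        w = torch.stack([h.weight.flatten() for h in self.heads])  # (K, H*L)
        w = F.normalize(w, dim=1)
        gram = w @ w.t()
        off_diag = gram - torch.eye(gram.size(0), device=gram.device)
        return off_diag.pow(2).sum()   # added to the task loss with some weight
```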
Microsoft Research Asia's Systems for WMT19 | Microsoft Research Asia made submissions to 11 language directions in the
WMT19 news translation tasks. We won the first place for 8 of the 11 directions
and the second place for the other three. Our basic systems are built on
Transformer, back translation and knowledge distillation. We integrate several
of our recent techniques to enhance the baseline systems: multi-agent dual
learning (MADL), masked sequence-to-sequence pre-training (MASS), neural
architecture optimization (NAO), and soft contextual data augmentation (SCA).
| 2,019 | Computation and Language |
Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced
Question Answering | Multi-domain dialogue state tracking (DST) is a critical component for
conversational AI systems. The domain ontology (i.e., specification of domains,
slots, and values) of a conversational AI system is generally incomplete,
making the capability for DST models to generalize to new slots, values, and
domains during inference imperative. In this paper, we propose to model
multi-domain DST as a question answering problem, referred to as Dialogue State
Tracking via Question Answering (DSTQA). Within DSTQA, each turn generates a
question asking for the value of a (domain, slot) pair, thus making it
naturally extensible to unseen domains, slots, and values. Additionally, we use
a dynamically-evolving knowledge graph to explicitly learn relationships
between (domain, slot) pairs. Our model has a 5.80% and 12.21% relative
improvement over the current state-of-the-art model on MultiWOZ 2.0 and
MultiWOZ 2.1 datasets, respectively. Additionally, our model consistently
outperforms the state-of-the-art model in domain adaptation settings. (Code is
released at https://github.com/alexa/dstqa )
| 2,020 | Computation and Language |
Towards Hierarchical Importance Attribution: Explaining Compositional
Semantics for Neural Sequence Models | The impressive performance of neural networks on natural language processing
tasks is attributed to their ability to model complicated word and phrase
compositions. To explain how the model handles semantic compositions, we study
hierarchical explanation of neural network predictions. We identify
non-additivity and context-independent importance attributions within
hierarchies as two desirable properties for highlighting word and phrase
compositions. We show some prior efforts on hierarchical explanations, e.g.
contextual decomposition, do not satisfy the desired properties mathematically,
leading to inconsistent explanation quality in different models. In this paper,
we start by proposing a formal and general way to quantify the importance of
each word and phrase. Following the formulation, we propose Sampling and
Contextual Decomposition (SCD) algorithm and Sampling and Occlusion (SOC)
algorithm. Human and metrics evaluation on both LSTM models and BERT
Transformer models on multiple datasets show that our algorithms outperform
prior hierarchical explanation algorithms. Our algorithms help to visualize
semantic composition captured by models, extract classification rules and
improve human trust of models. Project page: https://inklab.usc.edu/hiexpl/
| 2,020 | Computation and Language |
Towards automatic extractive text summarization of A-133 Single Audit
reports with machine learning | The rapid growth of text data has motivated the development of
machine-learning based automatic text summarization strategies that concisely
capture the essential ideas in a larger text. This study aimed to devise an
extractive summarization method for A-133 Single Audits, which assess if
recipients of federal grants are compliant with program requirements for use of
federal funding. Currently, these voluminous audits must be manually analyzed
by officials for oversight, risk management, and prioritization purposes.
Automated summarization has the potential to streamline these processes.
Analysis focused on the "Findings" section of ~20,000 Single Audits spanning
2016-2018. Following text preprocessing and GloVe embedding, sentence-level
k-means clustering was performed to partition sentences by topic and to
establish the importance of each sentence. For each audit, key summary
sentences were extracted by proximity to cluster centroids. Summaries were
judged by non-expert human evaluation and compared to human-generated summaries
using the ROUGE metric. Though the goal was to fully automate summarization of
A-133 audits, human input was required at various stages due to large
variability in audit writing style, content, and context. Examples of human
inputs include the number of clusters, the choice to keep or discard certain
clusters based on their content relevance, and the definition of a top
sentence. Overall, this approach made progress towards automated extractive
summaries of A-133 audits, with future work to focus on full automation and
improving summary consistency. This work highlights the inherent difficulty and
subjective nature of automated summarization in a real-world application.
| 2,019 | Computation and Language |
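The core extraction step described above (cluster sentence embeddings, then
take the sentence nearest each centroid) can be sketched in a few lines of
scikit-learn. The embedding function, the cluster count, and any discarding of
irrelevant clusters are exactly the human inputs the study mentions, so they
are left as assumptions here.

```python
import numpy as np
from sklearn.cluster import KMeans

def centroid_summary(sentences, embed, n_clusters=5):
    """Return one key sentence per topic cluster (sketch of the method above).

    `embed` maps a sentence to a vector, e.g. the mean of its GloVe word
    vectors; it is treated as given."""
    X = np.vstack([embed(s) for s in sentences])
    km = KMeans(n_clusters=min(n_clusters, len(sentences)),
                n_init=10, random_state=0).fit(X)
    picked = []
    for c, centroid in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - centroid, axis=1)
        picked.append(members[int(np.argmin(dists))])   # closest to the centroid
    return [sentences[i] for i in sorted(picked)]
```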
BERT-CNN: a Hierarchical Patent Classifier Based on a Pre-Trained
Language Model | Automatic classification is the process of automatically assigning text
documents to predefined categories. An accurate automatic patent classifier is
crucial to patent inventors and patent examiners in terms of intellectual
property protection, patent management, and patent information retrieval. We
present BERT-CNN, a hierarchical patent classifier based on a pre-trained
language model, trained on national patent application documents collected
from the State Information Center, China. The experimental results show that
BERT-CNN achieves 84.3% accuracy, which is far better than the two compared
baseline methods, Convolutional Neural Networks and Recurrent Neural Networks.
We did not apply our model to the third and fourth hierarchical levels of the
International Patent Classification, "subclass" and "group". The visualization
of the Attention Mechanism shows that BERT-CNN obtains new state-of-the-art
results in representing vocabularies and semantics. This article demonstrates
the practicality and effectiveness of BERT-CNN in the field of automatic patent
classification.
| 2,019 | Computation and Language |
The Eighth Dialog System Technology Challenge | This paper introduces the Eighth Dialog System Technology Challenge. In line
with recent challenges, the eighth edition focuses on applying end-to-end
dialog technologies in a pragmatic way for multi-domain task-completion, noetic
response selection, audio visual scene-aware dialog, and schema-guided dialog
state tracking tasks. This paper describes the task definition, provided
datasets, and evaluation set-up for each track. We also summarize the results
of the submitted systems to highlight the overall trends of the
state-of-the-art technologies for the tasks.
| 2,019 | Computation and Language |
Sparse associative memory based on contextual code learning for
disambiguating word senses | In recent literature, contextual pretrained Language Models (LMs)
demonstrated their potential in generalizing the knowledge to several Natural
Language Processing (NLP) tasks including supervised Word Sense Disambiguation
(WSD), a challenging problem in the field of Natural Language Understanding
(NLU). However, word representations from these models are still very dense,
costly in terms of memory footprint, as well as minimally interpretable. In
order to address such issues, we propose a new supervised biologically inspired
technique for transferring large pre-trained language model representations
into a compressed representation, for the case of WSD. The resulting
representation contributes to increasing the general interpretability of the
framework and to decreasing the memory footprint, while enhancing performance.
| 2,019 | Computation and Language |
Using natural language processing to extract health-related causality
from Twitter messages | Twitter messages (tweets) contain various types of information, which include
health-related information. Analysis of health-related tweets would help us
understand health conditions and concerns encountered in our daily life. In
this work, we evaluated an approach to extracting causal relations from tweets
using natural language processing (NLP) techniques. We focused on three
health-related topics: "stress", "insomnia", and "headache". We proposed a set
of lexico-syntactic patterns based on dependency parser outputs to extract
causal information. A large dataset consisting of 24 million tweets was used.
The results show that our approach achieved an average precision between 74.59%
and 92.27%. Analysis of the extracted relations revealed interesting findings
about health-related causality on Twitter.
| 2,018 | Computation and Language |
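The lexico-syntactic patterns above operate on dependency-parser output. The
paper's pattern set is not reproduced here; the snippet below is a single,
hypothetical rule implemented with spaCy (one possible parser, not necessarily
the one the authors used): for a causal verb, take the nominal subject as the
cause and the direct object as the effect.

```python
import spacy

nlp = spacy.load("en_core_web_sm")           # any English dependency parser works
CAUSAL_VERBS = {"cause", "trigger"}          # illustrative, not the paper's lexicon

def extract_causal_pairs(text):
    """Toy pattern: (nsubj of a causal verb) -> cause, (dobj) -> effect."""
    pairs = []
    for token in nlp(text):
        if token.pos_ == "VERB" and token.lemma_ in CAUSAL_VERBS:
            subjects = [c for c in token.children if c.dep_ in {"nsubj", "nsubjpass"}]
            objects = [c for c in token.children if c.dep_ in {"dobj", "obj"}]
            pairs.extend((s.text, o.text) for s in subjects for o in objects)
    return pairs

print(extract_causal_pairs("Stress causes insomnia in many people."))
# expected with a standard English model: [('Stress', 'insomnia')]
```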
Improving Distant Supervised Relation Extraction by Dynamic Neural
Network | Distant Supervised Relation Extraction (DSRE) is usually formulated as a
problem of classifying a bag of sentences that contain two query entities, into
the predefined relation classes. Most existing methods consider those relation
classes as distinct semantic categories while ignoring their potential
connection to query entities. In this paper, we propose to leverage this
connection to improve the relation extraction accuracy. Our key ideas are
twofold: (1) For sentences belonging to the same relation class, the expression
style, i.e. words choice, can vary according to the query entities. To account
for this style shift, the model should adjust its parameters in accordance with
entity types. (2) Some relation classes are semantically similar, and the
entity types appear in one relation may also appear in others. Therefore, it
can be trained cross different relation classes and further enhance those
classes with few samples, i.e., long-tail classes. To unify these two
arguments, we developed a novel Dynamic Neural Network for Relation Extraction
(DNNRE). The network adopts a novel dynamic parameter generator that
dynamically generates the network parameters according to the query entity
types and relation classes. By using this mechanism, the network can
simultaneously handle the style shift problem and enhance the prediction
accuracy for long-tail classes. Through our experimental study, we demonstrate
the effectiveness of the proposed method and show that it can achieve superior
performance over the state-of-the-art methods.
| 2,019 | Computation and Language |
CatGAN: Category-aware Generative Adversarial Networks with Hierarchical
Evolutionary Learning for Category Text Generation | Generating multiple categories of texts is a challenging task and draws more
and more attention. Since generative adversarial nets (GANs) have shown
competitive results on general text generation, they are extended for category
text generation in some previous works. However, the complicated model
structures and learning strategies limit their performance and exacerbate the
training instability. This paper proposes a category-aware GAN (CatGAN) which
consists of an efficient category-aware model for category text generation and
a hierarchical evolutionary learning algorithm for training our model. The
category-aware model directly measures the gap between real samples and
generated samples on each category, and reducing this gap guides the model
to generate high-quality category samples. The Gumbel-Softmax relaxation
further frees our model from complicated learning strategies for updating
CatGAN on discrete data. Moreover, focusing only on sample quality normally
leads to the mode collapse problem, so a hierarchical evolutionary learning
algorithm is introduced to stabilize the training procedure and obtain the
trade-off between quality and diversity while training CatGAN. Experimental
results demonstrate that CatGAN outperforms most of the existing
state-of-the-art methods.
| 2,019 | Computation and Language |
Bootstrapping NLU Models with Multi-task Learning | Bootstrapping natural language understanding (NLU) systems with minimal
training data is a fundamental challenge of extending digital assistants like
Alexa and Siri to a new language. A common approach adopted in digital
assistants when responding to a user query is to process the input in a
pipeline manner where the first task is to predict the domain, followed by the
inference of intent and slots. However, this cascaded approach instigates error
propagation and prevents information sharing among these tasks. Further, the
use of words as the atomic units of meaning as done in many studies might lead
to coverage problems for morphologically rich languages such as German and
French when data is limited. We address these issues by introducing a
character-level unified neural architecture for joint modeling of the domain,
intent, and slot classification. We compose word-embeddings from characters and
jointly optimize all classification tasks via multi-task learning. In our
results, we show that the proposed architecture is an optimal choice for
bootstrapping NLU systems in low-resource settings thus saving time, cost and
human effort.
| 2,019 | Computation and Language |
Towards Personalized Dialog Policies for Conversational Skill Discovery | Many businesses and consumers are extending the capabilities of voice-based
services such as Amazon Alexa, Google Home, Microsoft Cortana, and Apple Siri
to create custom voice experiences (also known as skills). As the number of
these experiences increases, a key problem is the discovery of skills that can
be used to address a user's request. In this paper, we focus on conversational
skill discovery and present a conversational agent which engages in a dialog
with users to help them find the skills that fulfill their needs. To this end,
we start with a rule-based agent and improve it by using reinforcement
learning. In this way, we enable the agent to adapt to different user
attributes and conversational styles as it interacts with users. We evaluate
our approach in a real production setting by deploying the agent to interact
with real users, and show the effectiveness of the conversational agent in
helping users find the skills that serve their request.
| 2,019 | Computation and Language |
Experiments in Detecting Persuasion Techniques in the News | Many recent political events, like the 2016 US Presidential elections or the
2018 Brazilian elections have drawn the attention of institutions and of the
general public to the role of the Internet and social media in influencing the
outcome of these events. We argue that a safe democracy is one in which
citizens have tools to make them aware of propaganda campaigns. We propose a
novel task: performing fine-grained analysis of texts by detecting all
fragments that contain propaganda techniques as well as their type. We further
design a novel multi-granularity neural network, and we show that it
outperforms several strong BERT-based baselines.
| 2,019 | Computation and Language |
Assigning Medical Codes at the Encounter Level by Paying Attention to
Documents | The vast majority of research in computer assisted medical coding focuses on
coding at the document level, but a substantial proportion of medical coding in
the real world involves coding at the level of clinical encounters, each of
which is typically represented by a potentially large set of documents. We
introduce encounter-level document attention networks, which use hierarchical
attention to explicitly take the hierarchical structure of encounter
documentation into account. Experimental evaluation demonstrates improvements
in coding accuracy as well as facilitation of human reviewers in their ability
to identify which documents within an encounter play a role in determining the
encounter level codes.
| 2,019 | Computation and Language |
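The hierarchical attention idea above can be illustrated with a single
attention-pooling layer that collapses per-document vectors into an encounter
vector; the attention weights are what lets reviewers see which documents drove
the predicted codes. This is a deliberately minimal, hypothetical sketch: only
the document-to-encounter level is shown, and the document encoder and label
space are assumed to exist elsewhere.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Soft attention that collapses a set of vectors into one vector."""

    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, x):                               # x: (num_items, dim)
        weights = torch.softmax(self.scorer(x), dim=0)  # (num_items, 1)
        return (weights * x).sum(dim=0), weights.squeeze(-1)

class EncounterCoder(nn.Module):
    """Document vectors -> attention-pooled encounter vector -> code logits."""

    def __init__(self, doc_dim, num_codes):
        super().__init__()
        self.pool = AttentionPool(doc_dim)
        self.classifier = nn.Linear(doc_dim, num_codes)

    def forward(self, doc_vectors):                     # (num_docs, doc_dim)
        encounter_vec, doc_weights = self.pool(doc_vectors)
        return self.classifier(encounter_vec), doc_weights  # weights aid review
```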
CNN-based Dual-Chain Models for Knowledge Graph Learning | Knowledge graph learning plays a critical role in integrating domain specific
knowledge bases when deploying machine learning and data mining models in
practice. Existing methods on knowledge graph learning primarily focus on
modeling the relations among entities as translations among the relations and
entities, and many of these methods are not able to handle zero-shot problems,
when new entities emerge. In this paper, we present a new convolutional neural
network (CNN)-based dual-chain model. Different from translation based methods,
in our model, interactions among relations and entities are directly captured
via CNN over their embeddings. Moreover, a secondary chain of learning is
conducted simultaneously to incorporate additional information and to enable
better performance. We also present an extension of this model, which
incorporates descriptions of entities and learns a second set of entity
embeddings from the descriptions. As a result, the extended model is able to
effectively handle zero-shot problems. We conducted comprehensive experiments,
comparing our methods with 15 methods on 8 benchmark datasets. Extensive
experimental results demonstrate that our proposed methods achieve or
outperform the state-of-the-art results on knowledge graph learning, and
outperform other methods on zero-shot problems. In addition, our methods
applied to real-world biomedical data are able to produce results that conform
to expert domain knowledge.
| 2,019 | Computation and Language |
Evaluating robustness of language models for chief complaint extraction
from patient-generated text | Automated classification of chief complaints from patient-generated text is a
critical first step in developing scalable platforms to triage patients without
human intervention. In this work, we evaluate several approaches to chief
complaint classification using a novel Chief Complaint (CC) Dataset that
contains ~200,000 patient-generated reasons-for-visit entries mapped to a set
of 795 discrete chief complaints. We examine the use of several fine-tuned
bidirectional transformer (BERT) models trained on both unrelated texts as well
as on the CC dataset. We contrast this performance with a TF-IDF baseline. Our
evaluation has three components: (1) a random test hold-out from the original
dataset; (2) a "misspelling set," consisting of a hand-selected subset of the
test set, where every entry has at least one misspelling; (3) a separate
experimenter-generated free-text set. We find that the TF-IDF model performs
significantly better than the strongest BERT-based model on the test set (best BERT
PR-AUC $0.3597 \pm 0.0041$ vs TF-IDF PR-AUC $0.3878 \pm 0.0148$, $p=7\cdot
10^{-5}$), and that the two are statistically comparable on the misspelling set (best BERT
PR-AUC $0.2579 \pm 0.0079$ vs TF-IDF PR-AUC $0.2733 \pm 0.0130$, $p=0.06$).
However, when examining model predictions on experimenter-generated queries,
some concerns arise about the TF-IDF baseline's robustness. Our results suggest
that in certain tasks, simple language embedding baselines may be very
performant; however, truly understanding their robustness requires further
analysis.
| 2,019 | Computation and Language |
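Given how competitive the TF-IDF baseline above turned out to be, it is worth
noting that such a baseline takes only a few lines of scikit-learn. The toy
data and the choice of logistic regression as the classifier are assumptions,
since the abstract does not specify the baseline's classification head.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical stand-ins for (reason-for-visit text, chief complaint) pairs.
texts = ["my head hurts badly", "sharp pain in my chest",
         "throbbing headache since morning", "chest feels tight"]
labels = ["Headache", "Chest pain", "Headache", "Chest pain"]

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),   # unigram + bigram features
    LogisticRegression(max_iter=1000),
)
baseline.fit(texts, labels)
print(baseline.predict(["tightness in my chest"]))
```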
Improved Document Modelling with a Neural Discourse Parser | Despite the success of attention-based neural models for natural language
generation and classification tasks, they are unable to capture the discourse
structure of larger documents. We hypothesize that explicit discourse
representations have utility for NLP tasks over longer documents or document
sequences, which sequence-to-sequence models are unable to capture. For
abstractive summarization, for instance, conventional neural models simply
match source documents and the summary in a latent space without explicit
representation of text structure or relations. In this paper, we propose to use
neural discourse representations obtained from a rhetorical structure theory
(RST) parser to enhance document representations. Specifically, document
representations are generated for discourse spans, known as the elementary
discourse units (EDUs). We empirically investigate the benefit of the proposed
approach on two different tasks: abstractive summarization and popularity
prediction of online petitions. We find that the proposed approach leads to
improvements in all cases.
| 2,019 | Computation and Language |
Utterance-to-Utterance Interactive Matching Network for Multi-Turn
Response Selection in Retrieval-Based Chatbots | This paper proposes an utterance-to-utterance interactive matching network
(U2U-IMN) for multi-turn response selection in retrieval-based chatbots.
Different from previous methods following context-to-response matching or
utterance-to-response matching frameworks, this model treats both contexts and
responses as sequences of utterances when calculating the matching degrees
between them. For a context-response pair, the U2U-IMN model first encodes each
utterance separately using recurrent and self-attention layers. Then, a global
and bidirectional interaction between the context and the response is conducted
using the attention mechanism to collect the matching information between them.
The distances between context and response utterances are employed as a prior
component when calculating the attention weights. Finally, sentence-level
aggregation and context-response-level aggregation are executed in turn to
obtain the feature vector for matching degree prediction. Experiments on four
public datasets showed that our proposed method outperformed baseline methods
on all metrics, achieving a new state-of-the-art performance and demonstrating
compatibility across domains for multi-turn response selection.
| 2,019 | Computation and Language |
Robust Reading Comprehension with Linguistic Constraints via Posterior
Regularization | In spite of great advancements of machine reading comprehension (RC),
existing RC models are still vulnerable and not robust to different types of
adversarial examples. Neural models over-confidently predict wrong answers to
semantic different adversarial examples, while over-sensitively predict wrong
answers to semantic equivalent adversarial examples. Existing methods which
improve the robustness of such neural models merely mitigate one of the two
issues but ignore the other. In this paper, we address the over-confidence
issue and the over-sensitivity issue existing in current RC models
simultaneously with the help of external linguistic knowledge. We first
incorporate external knowledge to impose different linguistic constraints
(entity constraint, lexical constraint, and predicate constraint), and then
regularize RC models through posterior regularization. Linguistic constraints
induce more reasonable predictions for both semantically different and
semantically equivalent adversarial examples, and posterior regularization provides an
effective mechanism to incorporate these constraints. Our method can be applied
to any existing neural RC models including state-of-the-art BERT models.
Extensive experiments show that our method remarkably improves the robustness
of base RC models and is better able to cope with these two issues simultaneously.
| 2,019 | Computation and Language |
Learning Autocomplete Systems as a Communication Game | We study textual autocomplete---the task of predicting a full sentence from a
partial sentence---as a human-machine communication game. Specifically, we
consider three competing goals for effective communication: use as few tokens
as possible (efficiency), transmit sentences faithfully (accuracy), and be
learnable to humans (interpretability). We propose an unsupervised approach
which tackles all three desiderata by constraining the communication scheme to
keywords extracted from a source sentence for interpretability and optimizing
the efficiency-accuracy tradeoff. Our experiments show that this approach
results in an autocomplete system that is 52% more accurate at a given
efficiency level compared to baselines, is robust to user variations, and saves
time by nearly 50% compared to typing full sentences.
| 2,019 | Computation and Language |
Contribution au Niveau de l'Approche Indirecte à Base de Transfert
dans la Traduction Automatique | In this thesis, we address several important issues concerning the
morphological analysis of Arabic language applied to textual data and machine
translation. First, we provided an overview on machine translation, its history
and its development, then we exposed human translation techniques for eventual
inspiration in machine translation, and we exposed linguistic approaches and
particularly indirect transfer approaches. Finally, we presented our
contributions to the resolution of morphosyntactic problems in computational
linguistics, such as multilingual information retrieval and machine translation. As a
first contribution, we developed a morphological analyzer for Arabic and
exploited it in bilingual information retrieval, as a computer
application for multilingual documentation. Validation of the results showed
statistically significant performance. As a second contribution, we proposed a
list of morphosyntactic transfer rules from English to Arabic for translation
in three phases: analysis, transfer, generation. We focused on the transfer
phase without semantic distortion for an abstraction of English in a sufficient
subset of Arabic.
| 2,015 | Computation and Language |
AttaCut: A Fast and Accurate Neural Thai Word Segmenter | Word segmentation is a fundamental pre-processing step for Thai Natural
Language Processing. The current off-the-shelf solutions are not benchmarked
consistently, so it is difficult to compare their trade-offs. We conducted a
speed and accuracy comparison of the popular systems on three different domains
and found that the state-of-the-art deep learning system is slow and moreover
does not use sub-word structures to guide the model. Here, we propose a fast
and accurate neural Thai Word Segmenter that uses dilated CNN filters to
capture the environment of each character and uses syllable embeddings as
features. Our system runs at least 5.6x faster and outperforms the previous
state-of-the-art system on some domains. In addition, we develop the first
ML-based Thai orthographical syllable segmenter, which yields syllable
embeddings to be used as features by the word segmenter.
| 2,019 | Computation and Language |
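The segmenter above combines dilated CNN filters over characters with
syllable-embedding features. The PyTorch sketch below shows the
dilated-convolution backbone as a per-character boundary tagger; layer sizes
are arbitrary and the syllable channel is omitted, so this should be read as a
sketch of the architecture family rather than the released AttaCut
configuration.

```python
import torch
import torch.nn as nn

class DilatedCharSegmenter(nn.Module):
    """Per-character word-boundary tagger built from dilated 1-D convolutions."""

    def __init__(self, num_chars, emb_dim=32, hidden=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.embed = nn.Embedding(num_chars, emb_dim)
        layers, in_ch = [], emb_dim
        for d in dilations:  # growing dilation widens each character's context
            layers += [nn.Conv1d(in_ch, hidden, kernel_size=3, padding=d, dilation=d),
                       nn.ReLU()]
            in_ch = hidden
        self.convs = nn.Sequential(*layers)
        self.out = nn.Conv1d(hidden, 1, kernel_size=1)  # boundary score per character

    def forward(self, char_ids):                        # (batch, seq_len)
        x = self.embed(char_ids).transpose(1, 2)        # (batch, emb_dim, seq_len)
        return self.out(self.convs(x)).squeeze(1)       # (batch, seq_len) logits

model = DilatedCharSegmenter(num_chars=200)
boundary_logits = model(torch.randint(0, 200, (1, 40)))  # one 40-character string
```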
Quick and (not so) Dirty: Unsupervised Selection of Justification
Sentences for Multi-hop Question Answering | We propose an unsupervised strategy for the selection of justification
sentences for multi-hop question answering (QA) that (a) maximizes the
relevance of the selected sentences, (b) minimizes the overlap between the
selected facts, and (c) maximizes the coverage of both question and answer.
This unsupervised sentence selection method can be coupled with any supervised
QA approach. We show that the sentences selected by our method improve the
performance of a state-of-the-art supervised QA model on two multi-hop QA
datasets: AI2's Reasoning Challenge (ARC) and Multi-Sentence Reading
Comprehension (MultiRC). We obtain new state-of-the-art performance on both
datasets among approaches that do not use external resources for training the
QA system: 56.82% F1 on ARC (41.24% on Challenge and 64.49% on Easy) and 26.1%
EM0 on MultiRC. Our justification sentences have higher quality than the
justifications selected by a strong information retrieval baseline, e.g., by
5.4% F1 in MultiRC. We also show that our unsupervised selection of
justification sentences is more stable across domains than a state-of-the-art
supervised sentence selection method.
| 2,019 | Computation and Language |
Multi-Zone Unit for Recurrent Neural Networks | Recurrent neural networks (RNNs) have been widely used to deal with sequence
learning problems. The input-dependent transition function, which folds new
observations into hidden states to sequentially construct fixed-length
representations of arbitrary-length sequences, plays a critical role in RNNs.
Based on single space composition, transition functions in existing RNNs often
have difficulty in capturing complicated long-range dependencies. In this
paper, we introduce a new Multi-zone Unit (MZU) for RNNs. The key idea is to
design a transition function that is capable of modeling multiple space
composition. The MZU consists of three components: zone generation, zone
composition, and zone aggregation. Experimental results on multiple datasets of
the character-level language modeling task and the aspect-based sentiment
analysis task demonstrate the superiority of the MZU.
| 2,019 | Computation and Language |
Deep Learning versus Traditional Classifiers on Vietnamese Students'
Feedback Corpus | Student feedback is an important source of students' opinions for
improving the quality of training activities. By applying sentiment analysis
to student feedback data, we can determine sentiment polarities that
reveal problems in the institution, so that the necessary changes can be applied
to improve the quality of teaching and learning. This study focused on machine
learning and natural language processing techniques (Naive Bayes, Maximum
Entropy, Long Short-Term Memory, Bi-Directional Long Short-Term Memory) on the
Vietnamese Students' Feedback Corpus collected from a university. The final
results were compared and evaluated to find the most effective model based on
different evaluation criteria. The experimental results show that the
Bi-Directional Long Short-Term Memory algorithm outperformed the three other
algorithms in terms of the F1-score measurement, with 92.0% on the sentiment
classification task and 89.6% on the topic classification task. In addition, we
developed a sentiment analysis application analyzing student feedback. The
application will help the institution to recognize students' opinions about a
problem and identify shortcomings that still exist. With the use of this
application, the institution can propose an appropriate method to improve the
quality of training activities in the future.
| 2,018 | Computation and Language |
Error Analysis for Vietnamese Named Entity Recognition on Deep Neural
Network Models | In recent years, Vietnamese Named Entity Recognition (NER) systems have had a
great breakthrough when using Deep Neural Network methods. This paper describes
the primary errors of the state-of-the-art NER systems on Vietnamese language.
We conducted experiments on BLSTM-CNN-CRF and BLSTM-CRF models with
different word embeddings on the Vietnamese NER dataset, which was
provided by VLSP in 2016 and is used to evaluate most current Vietnamese
NER systems. We noticed that BLSTM-CNN-CRF gives better results; therefore, we
analyze the errors of this model in detail. Our error-analysis results provide
thorough insights that can help increase the performance of NER for the
Vietnamese language and improve the quality of the corpus in future work.
| 2,018 | Computation and Language |
Using Error Decay Prediction to Overcome Practical Issues of Deep Active
Learning for Named Entity Recognition | Existing deep active learning algorithms achieve impressive sampling
efficiency on natural language processing tasks. However, they exhibit several
weaknesses in practice, including (a) inability to use uncertainty sampling
with black-box models, (b) lack of robustness to labeling noise, and (c) lack
of transparency. In response, we propose a transparent batch active sampling
framework by estimating the error decay curves of multiple feature-defined
subsets of the data. Experiments on four named entity recognition (NER) tasks
demonstrate that the proposed methods significantly outperform
diversification-based methods for black-box NER taggers, and can make the
sampling process more robust to labeling noise when combined with
uncertainty-based methods. Furthermore, the analysis of experimental results
sheds light on the weaknesses of different active sampling strategies, and when
traditional uncertainty-based or diversification-based methods can be expected
to work well.
| 2,020 | Computation and Language |
Multi-task Sentence Encoding Model for Semantic Retrieval in Question
Answering Systems | Question Answering (QA) systems are used to provide proper responses to
users' questions automatically. Sentence matching is an essential task in the
QA systems and is usually reformulated as a Paraphrase Identification (PI)
problem. Given a question, the aim of the task is to find the most similar
question from a QA knowledge base. In this paper, we propose a Multi-task
Sentence Encoding Model (MSEM) for the PI problem, wherein a connected graph is
employed to depict the relation between sentences, and a multi-task learning
model is applied to address both the sentence matching and sentence intent
classification problem. In addition, we implement a general semantic retrieval
framework that combines our proposed model and the Approximate Nearest Neighbor
(ANN) technology, which enables us to find the most similar question from all
available candidates very quickly during online serving. The experiments show
the superiority of our proposed method as compared with the existing sentence
matching models.
| 2,019 | Computation and Language |
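The online serving step above pairs the sentence encoder with approximate
nearest neighbor (ANN) search. The snippet below shows the retrieval logic with
exact brute-force cosine search in NumPy; a production system would swap this
for an ANN index, and the random embeddings stand in for vectors produced by
the MSEM encoder.

```python
import numpy as np

def build_index(question_embeddings):
    """L2-normalize the knowledge-base question embeddings once, offline."""
    emb = np.asarray(question_embeddings, dtype=np.float32)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def retrieve(index, query_embedding, top_k=5):
    """Return indices of the most similar KB questions by cosine similarity.

    Exact brute-force search; swap in an ANN library for large-scale serving."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = index @ q
    top = np.argpartition(-scores, min(top_k, len(scores) - 1))[:top_k]
    return top[np.argsort(-scores[top])]

# Usage with random stand-in embeddings (the real ones come from the encoder):
index = build_index(np.random.randn(1000, 256))
print(retrieve(index, np.random.randn(256), top_k=3))
```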
Graph Transformer for Graph-to-Sequence Learning | The dominant graph-to-sequence transduction models employ graph neural
networks for graph representation learning, where the structural information is
reflected by the receptive field of neurons. Unlike graph neural networks that
restrict information exchange to the immediate neighborhood, we propose a
new model, known as Graph Transformer, that uses explicit relation encoding and
allows direct communication between two distant nodes. It provides a more
efficient way for global graph structure modeling. Experiments on the
applications of text generation from Abstract Meaning Representation (AMR) and
syntax-based neural machine translation show the superiority of our proposed
model. Specifically, our model achieves 27.4 BLEU on LDC2015E86 and 29.7 BLEU
on LDC2017T10 for AMR-to-text generation, outperforming the state-of-the-art
results by up to 2.2 points. On the syntax-based translation tasks, our model
establishes new single-model state-of-the-art BLEU scores, 21.3 for
English-to-German and 14.1 for English-to-Czech, improving over the existing
best results, including ensembles, by over 1 BLEU.
| 2,019 | Computation and Language |
Deep and Dense Sarcasm Detection | Recent work in automated sarcasm detection has placed a heavy focus on
context and meta-data. Whilst certain utterances indeed require background
knowledge and commonsense reasoning, previous works have only explored shallow
models for capturing the lexical, syntactic and semantic cues present within a
text. In this paper, we propose a deep 56-layer network, implemented with dense
connectivity to model the isolated utterance and extract richer features
therein. We compare our approach against recent state-of-the-art architectures
which make considerable use of extrinsic information, and demonstrate
competitive results whilst using only the local features of the text. Further,
we provide an analysis of the dependency of prior convolution outputs in
generating the final feature maps. Finally a case study is presented,
supporting that our approach accurately classifies additional uses of clear
sarcasm, which a standard CNN misclassifies.
| 2,019 | Computation and Language |