Titles | Abstracts | Years | Categories
---|---|---|---|
Efficient Sentence Embedding via Semantic Subspace Analysis | A novel sentence embedding method built upon semantic subspace analysis,
called semantic subspace sentence embedding (S3E), is proposed in this work.
Given that word embeddings capture semantic relationships and semantically
similar words tend to form semantic groups in a high-dimensional embedding
space, we develop a sentence representation scheme by analyzing the semantic
subspaces spanned by a sentence's constituent words. Specifically, we construct a
sentence model from two aspects. First, we represent words that lie in the same
semantic group using the intra-group descriptor. Second, we characterize the
interaction between multiple semantic groups with the inter-group descriptor.
The proposed S3E method is evaluated on both textual similarity tasks and
supervised tasks. Experimental results show that it offers performance
comparable to or better than the state of the art. The complexity of our S3E
method is also much lower than that of other parameterized models.
| 2,020 | Computation and Language |
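As a rough illustration of the intra-/inter-group idea described in this abstract, the sketch below clusters word vectors into semantic groups with k-means and builds an intra-group descriptor (mean residual to the group centroid) plus an inter-group descriptor (interactions between group descriptors). The clustering choice, weighting, and descriptor shapes are assumptions, not the paper's exact S3E formulation.

```python
# Hedged sketch of an S3E-like embedding; not the authors' exact method.
import numpy as np
from sklearn.cluster import KMeans

def s3e_like_embedding(word_vectors, n_groups=10, seed=0):
    """word_vectors: dict word -> np.ndarray, all of the same dimension d."""
    words = list(word_vectors)
    X = np.stack([word_vectors[w] for w in words])              # (V, d)
    d = X.shape[1]
    km = KMeans(n_clusters=n_groups, random_state=seed, n_init=10).fit(X)
    out_dim = n_groups * d + n_groups * (n_groups + 1) // 2

    def embed(sentence_tokens):
        known = [t for t in sentence_tokens if t in word_vectors]
        if not known:
            return np.zeros(out_dim)
        vecs = np.stack([word_vectors[t] for t in known])
        groups = km.predict(vecs)
        # Intra-group descriptor: mean residual of member words w.r.t. the centroid.
        intra = np.zeros((n_groups, d))
        for g in range(n_groups):
            members = vecs[groups == g]
            if len(members):
                intra[g] = (members - km.cluster_centers_[g]).mean(axis=0)
        # Inter-group descriptor: pairwise interactions between group descriptors.
        inter = intra @ intra.T
        return np.concatenate([intra.ravel(), inter[np.triu_indices(n_groups)]])

    return embed
```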
Data Augmentation for Copy-Mechanism in Dialogue State Tracking | While several state-of-the-art approaches to dialogue state tracking (DST)
have shown promising performance on several benchmarks, there is still a
significant performance gap between seen slot values (i.e., values that occur
in both the training set and the test set) and unseen ones (values that occur
in the test set but not in the training set). Recently, the copy-mechanism has been widely
used in DST models to handle unseen slot values, which copies slot values from
user utterance directly. In this paper, we aim to find out the factors that
influence the generalization ability of a common copy-mechanism model for DST.
Our key observations include: 1) the copy-mechanism tends to memorize values
rather than infer them from contexts, which is the primary reason for
unsatisfactory generalization performance; 2) greater diversity of slot values
in the training set increases performance on unseen values but slightly
decreases performance on seen values. Moreover, we propose a simple but
effective data augmentation algorithm for training copy-mechanism models, which
augments the input dataset by copying user utterances and replacing the real
slot values with randomly generated strings. Users could use two
hyper-parameters to realize a trade-off between the performances on seen values
and unseen ones, as well as a trade-off between overall performance and
computational cost. Experimental results on three widely used datasets (WoZ
2.0, DSTC2, and Multi-WoZ 2.0) show the effectiveness of our approach.
| 2,020 | Computation and Language |
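The augmentation step described above is easy to picture in code: copy a user utterance and overwrite the real slot values with random strings in both the text and its label. The helper below is a minimal sketch; the data layout (`utterance`, `slots`) and the random-string length are assumptions rather than the paper's exact procedure.

```python
# Hedged sketch of the copy-and-replace augmentation idea.
import random
import string

def random_value(length: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=length))

def augment_turn(utterance: str, slots: dict) -> tuple[str, dict]:
    """slots maps slot names to the value strings that appear in `utterance`."""
    new_utt, new_slots = utterance, {}
    for slot, value in slots.items():
        fake = random_value()
        new_utt = new_utt.replace(value, fake)   # copy the utterance, swap the value
        new_slots[slot] = fake                   # keep the label consistent
    return new_utt, new_slots

# Example
utt, labels = augment_turn("book a table at nandos for two", {"restaurant": "nandos"})
```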
Markov Chain Monte-Carlo Phylogenetic Inference Construction in
Computational Historical Linguistics | As more and more of the world's languages come under study, the traditional
workflow of historical linguistics is facing new challenges. For example,
comparative research across languages requires manual annotation, which becomes
increasingly infeasible as the amount of language data from around the world
grows. Although automatic computational methods can hardly replace the work of
linguists, they have attracted attention because they can reduce the manual
workload. One of the most important tasks in historical linguistics is comparing
words across languages and identifying their cognates, which helps determine
whether two languages are related. In this paper, I use computational methods to
cluster the languages and apply a Markov Chain Monte Carlo (MCMC) method to
build a language typology relationship tree based on the clusters.
| 2,020 | Computation and Language |
Machine Translation System Selection from Bandit Feedback | Adapting machine translation systems in the real world is a difficult
problem. In contrast to offline training, users cannot provide the type of
fine-grained feedback (such as correct translations) typically used for
improving the system. Moreover, different users have different translation
needs, and even a single user's needs may change over time.
In this work we take a different approach, treating the problem of adaptation
as one of selection. Instead of adapting a single system, we train many
translation systems using different architectures, datasets, and optimization
methods. Using bandit learning techniques on simulated user feedback, we learn
a policy to choose which system to use for a particular translation task. We
show that our approach can (1) quickly adapt to address domain changes in
translation tasks, (2) outperform the single best system in mixed-domain
translation tasks, and (3) make effective instance-specific decisions when
using contextual bandit strategies.
| 2,020 | Computation and Language |
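A minimal sketch of the selection idea: treat each pre-trained translation system as a bandit arm and pick one per request from simulated feedback. The epsilon-greedy policy and the reward stand-in below are assumptions; the paper uses its own (including contextual) bandit strategies.

```python
# Hedged sketch: adaptation-as-selection with a simple epsilon-greedy bandit.
import random

class EpsilonGreedySelector:
    def __init__(self, n_systems: int, epsilon: float = 0.1):
        self.counts = [0] * n_systems
        self.values = [0.0] * n_systems      # running mean reward per system
        self.epsilon = epsilon

    def select(self) -> int:
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda i: self.values[i])

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Usage with simulated feedback (e.g., a sentence-level quality score in [0, 1]).
selector = EpsilonGreedySelector(n_systems=5)
for _ in range(1000):
    arm = selector.select()
    reward = random.random()                 # stand-in for user/simulated feedback
    selector.update(arm, reward)
```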
Incorporating Effective Global Information via Adaptive Gate Attention
for Text Classification | The dominant text classification studies focus on training classifiers using
textual instances only or introducing external knowledge (e.g., hand-crafted
features and domain expert knowledge). In contrast, some corpus-level
statistical features, like word frequency and distribution, are not well
exploited. Our work shows that such simple statistical information can enhance
classification performance both efficiently and significantly compared with
several baseline models. In this paper, we propose a classifier with gate
mechanism named Adaptive Gate Attention model with Global Information (AGA+GI),
in which the adaptive gate mechanism incorporates global statistical features
into latent semantic features and the attention layer captures dependency
relationships within the sentence. To alleviate overfitting, we
propose a novel Leaky Dropout mechanism to improve generalization ability and
performance stability. Our experiments show that the proposed method can
achieve better accuracy than CNN-based and RNN-based approaches without global
information on several benchmarks.
| 2,020 | Computation and Language |
Investigating Typed Syntactic Dependencies for Targeted Sentiment
Classification Using Graph Attention Neural Network | Targeted sentiment classification predicts the sentiment polarity of given
target mentions in input texts. Dominant methods employ neural networks for
encoding the input sentence and extracting relations between target mentions
and their contexts. Recently, graph neural networks have been investigated for
integrating dependency syntax into the task, achieving state-of-the-art
results. However, existing methods do not consider dependency label
information, which can be intuitively useful. To solve the problem, we
investigate a novel relational graph attention network that integrates typed
syntactic dependency information. Results on standard benchmarks show that our
method can effectively leverage label information to improve targeted
sentiment classification performance. Our final model significantly
outperforms state-of-the-art syntax-based approaches.
| 2,020 | Computation and Language |
Unsupervised Question Decomposition for Question Answering | We aim to improve question answering (QA) by decomposing hard questions into
simpler sub-questions that existing QA systems are capable of answering. Since
labeling questions with decompositions is cumbersome, we take an unsupervised
approach to produce sub-questions, also enabling us to leverage millions of
questions from the internet. Specifically, we propose an algorithm for One-to-N
Unsupervised Sequence transduction (ONUS) that learns to map one hard,
multi-hop question to many simpler, single-hop sub-questions. We answer
sub-questions with an off-the-shelf QA model and give the resulting answers to
a recomposition model that combines them into a final answer. We show large QA
improvements on HotpotQA over a strong baseline on the original, out-of-domain,
and multi-hop dev sets. ONUS automatically learns to decompose different kinds
of questions, while matching the utility of supervised and heuristic
decomposition methods for QA and exceeding those methods in fluency.
Qualitatively, we find that using sub-questions is promising for shedding light
on why a QA system makes a prediction.
| 2,020 | Computation and Language |
Fill in the BLANC: Human-free quality estimation of document summaries | We present BLANC, a new approach to the automatic estimation of document
summary quality. Our goal is to measure the functional performance of a summary
with an objective, reproducible, and fully automated method. Our approach
achieves this by measuring the performance boost gained by a pre-trained
language model with access to a document summary while carrying out its
language understanding task on the document's text. We present evidence that
BLANC scores have as good correlation with human evaluations as do the ROUGE
family of summary quality measurements. And unlike ROUGE, the BLANC method does
not require human-written reference summaries, allowing for fully human-free
summary quality estimation.
| 2,020 | Computation and Language |
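The measurement idea can be sketched as follows: mask tokens in the document, ask a masked language model to fill them in with and without the summary prepended, and report the accuracy boost. The `fill_mask` callable, the token-length filter, and the scoring below are assumptions standing in for the published procedure.

```python
# Hedged sketch of a BLANC-like score; simplifications of the published method.
from typing import Callable, List

def blanc_like_score(summary: str,
                     doc_sentences: List[str],
                     fill_mask: Callable[[str], str],
                     mask_token: str = "[MASK]") -> float:
    """fill_mask is an assumed helper wrapping a masked language model:
    it takes text containing one mask token and returns the predicted word."""
    base_hits = help_hits = total = 0
    for sent in doc_sentences:
        tokens = sent.split()
        for i, tok in enumerate(tokens):
            if len(tok) < 4:                     # skip very short tokens
                continue
            masked = " ".join(tokens[:i] + [mask_token] + tokens[i + 1:])
            base_hits += fill_mask(masked) == tok
            help_hits += fill_mask(summary + " " + masked) == tok
            total += 1
    return (help_hits - base_hits) / max(total, 1)   # boost from seeing the summary
```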
A Nepali Rule Based Stemmer and its performance on different NLP
applications | Stemming is an integral part of Natural Language Processing (NLP). It's a
preprocessing step in almost every NLP application. Arguably, the most
important usage of stemming is in Information Retrieval (IR). While a great deal
of work has been done on stemming for languages like English, only a few works
exist for Nepali. This study focuses on creating a rule-based stemmer for Nepali
text. Specifically, it is an affix-stripping system that identifies two
different classes of suffixes in Nepali grammar and strips them separately. Only
a single negation prefix (Na) is identified and stripped. The study also employs
a number of techniques, such as exception word identification, morphological
normalization, and word transformation, to increase stemming performance. The
stemmer is tested intrinsically using Paice's method and extrinsically on a
basic tf-idf based IR system and an elementary news topic classifier using
Multinomial Naive Bayes Classifier. The difference in performance of these
systems with and without using the stemmer is analysed.
| 2,018 | Computation and Language |
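A toy sketch of the affix-stripping flow described above: check an exception list, strip the single negation prefix, then strip suffixes from two rule classes by longest match. The romanized affix lists and thresholds are placeholders, not the stemmer's actual Nepali rules.

```python
# Hedged sketch of a rule-based affix stripper; affix lists are illustrative only.
EXCEPTIONS = {"nepal"}                       # words that must never be stripped
PREFIXES = ["na"]                            # single negation prefix
SUFFIX_CLASS_1 = ["haru", "ko", "le"]        # placeholder class-1 endings
SUFFIX_CLASS_2 = ["eko", "ne"]               # placeholder class-2 endings

def strip_suffixes(word: str, suffixes: list[str]) -> str:
    for suf in sorted(suffixes, key=len, reverse=True):   # longest match first
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def stem(word: str) -> str:
    word = word.lower()
    if word in EXCEPTIONS:
        return word
    for pre in PREFIXES:
        if word.startswith(pre) and len(word) > len(pre) + 2:
            word = word[len(pre):]
            break
    word = strip_suffixes(word, SUFFIX_CLASS_1)   # class-1 suffixes stripped first
    return strip_suffixes(word, SUFFIX_CLASS_2)   # then class-2 suffixes
```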
Do Multi-Hop Question Answering Systems Know How to Answer the
Single-Hop Sub-Questions? | Multi-hop question answering (QA) requires a model to retrieve and integrate
information from different parts of a long text to answer a question. Humans
answer this kind of complex question via a divide-and-conquer approach. In
this paper, we investigate whether top-performing models for multi-hop
questions understand the underlying sub-questions like humans. We adopt a
neural decomposition model to generate sub-questions for a multi-hop complex
question, followed by extracting the corresponding sub-answers. We show that
multiple state-of-the-art multi-hop QA models fail to correctly answer a large
portion of sub-questions, although their corresponding multi-hop questions are
correctly answered. This indicates that these models manage to answer the
multi-hop questions using some partial clues, instead of truly understanding
the reasoning paths. We also propose a new model which significantly improves
the performance on answering the sub-questions. Our work takes a step towards
building a more explainable multi-hop QA system.
| 2,021 | Computation and Language |
GRET: Global Representation Enhanced Transformer | Transformer, based on the encoder-decoder framework, has achieved
state-of-the-art performance on several natural language generation tasks. The
encoder maps the words in the input sentence into a sequence of hidden states,
which are then fed into the decoder to generate the output sentence. These
hidden states usually correspond to the input words and focus on capturing
local information. However, the global (sentence level) information is seldom
explored, leaving room for the improvement of generation quality. In this
paper, we propose a novel global representation enhanced Transformer (GRET) to
explicitly model global representation in the Transformer network.
Specifically, in the proposed model, an external state is generated for the
global representation from the encoder. The global representation is then fused
into the decoder during the decoding process to improve generation quality. We
conduct experiments in two text generation tasks: machine translation and text
summarization. Experimental results on four WMT machine translation tasks and
LCSTS text summarization task demonstrate the effectiveness of the proposed
approach on natural language generation.
| 2,020 | Computation and Language |
Predicting Subjective Features of Questions of QA Websites using BERT | Community Question-Answering websites, such as StackOverflow and Quora,
expect users to follow specific guidelines in order to maintain content
quality. These systems mainly rely on community reports for assessing content,
which has serious problems such as the slow handling of violations, the loss of
normal and experienced users' time, the low quality of some reports, and
discouraging feedback to new users. Therefore, with the overall goal of
providing solutions for automating moderation actions in Q&A websites, we aim
to provide a model to predict 20 quality or subjective aspects of questions in
QA websites. To this end, we used data gathered by the CrowdSource team at
Google Research in 2019 and fine-tuned a pre-trained BERT model on our problem.
Evaluated with Mean Squared Error (MSE), the model achieved a value of 0.046
after 2 epochs of training, which did not improve substantially in subsequent
epochs. The results confirm that with simple fine-tuning we can obtain accurate
models in little time and with a small amount of data.
| 2,020 | Computation and Language |
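A minimal sketch of this kind of fine-tuning: a pre-trained BERT encoder with a 20-output regression head trained with MSE. The model name, sigmoid output range, and hyperparameters are assumptions rather than the authors' exact setup.

```python
# Hedged sketch of BERT fine-tuning for 20 regression targets with an MSE loss.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class QuestionQualityRegressor(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased", n_targets: int = 20):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, n_targets)

    def forward(self, **encoded):
        hidden = self.encoder(**encoded).last_hidden_state[:, 0]   # [CLS] vector
        return torch.sigmoid(self.head(hidden))                    # targets in [0, 1]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = QuestionQualityRegressor()
loss_fn = nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def train_step(questions: list[str], targets: torch.Tensor) -> float:
    encoded = tokenizer(questions, padding=True, truncation=True, return_tensors="pt")
    preds = model(**encoded)
    loss = loss_fn(preds, targets)        # Mean Squared Error, as in the abstract
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```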
A Hybrid Approach to Dependency Parsing: Combining Rules and Morphology
with Deep Learning | Fully data-driven, deep learning-based models are usually designed as
language-independent and have been shown to be successful for many natural
language processing tasks. However, when the studied language is low-resourced
and the amount of training data is insufficient, these models can benefit from
the integration of natural language grammar-based information. We propose two
approaches to dependency parsing, especially for languages with a restricted
amount of training data. Our first approach combines a state-of-the-art deep
learning-based parser with a rule-based approach and the second one
incorporates morphological information into the parser. In the rule-based
approach, the parsing decisions made by the rules are encoded and concatenated
with the vector representations of the input words as additional information to
the deep network. The morphology-based approach proposes different methods to
include the morphological structure of words into the parser network.
Experiments are conducted on the IMST-UD Treebank and the results suggest that
integration of explicit knowledge about the target language into a neural parser
through a rule-based parsing system and morphological analysis leads to more
accurate annotations and hence increases parsing performance in terms of
attachment scores. The proposed methods are developed for Turkish, but can be
adapted to other languages as well.
| 2,022 | Computation and Language |
Learning to Select Bi-Aspect Information for Document-Scale Text Content
Manipulation | In this paper, we focus on a new practical task, document-scale text content
manipulation, which is the opposite of text style transfer and aims to preserve
text styles while altering the content. In detail, the input is a set of
structured records and a reference text for describing another recordset. The
output is a summary that accurately describes the partial content in the source
recordset in the same writing style as the reference. The task is unsupervised
due to the lack of parallel data, and it is challenging to select suitable
records and style words from the two aspects of the input and to generate a
high-fidelity long document. To tackle these problems, we first build a dataset
based on a basketball game report corpus as our testbed, and present an
unsupervised neural model with an interactive attention mechanism, which learns
the semantic relationship between records and reference texts to achieve better
content transfer and style preservation. In addition, we also explore the
effectiveness of back-translation for constructing pseudo-training pairs in our
task. Empirical results show the superiority of our approaches over competitive methods,
and the models also yield a new state-of-the-art result on a sentence-level
dataset.
| 2,020 | Computation and Language |
Fixed Encoder Self-Attention Patterns in Transformer-Based Machine
Translation | Transformer-based models have brought a radical change to neural machine
translation. A key feature of the Transformer architecture is the so-called
multi-head attention mechanism, which allows the model to focus simultaneously
on different parts of the input. However, recent works have shown that most
attention heads learn simple, and often redundant, positional patterns. In this
paper, we propose to replace all but one attention head of each encoder layer
with simple fixed -- non-learnable -- attentive patterns that are solely based
on position and do not require any external knowledge. Our experiments with
different data sizes and multiple language pairs show that fixing the attention
heads on the encoder side of the Transformer at training time does not impact
the translation quality and even increases BLEU scores by up to 3 points in
low-resource scenarios.
| 2,020 | Computation and Language |
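The fixed patterns can be built directly from positions, for example heads that always attend to the previous, current, or next token. The sketch below constructs such non-learnable attention maps; the exact pattern set used in the paper may differ.

```python
# Hedged sketch of fixed, position-only encoder attention patterns.
import torch

def fixed_attention_patterns(seq_len: int) -> torch.Tensor:
    """Returns a (3, seq_len, seq_len) tensor of row-stochastic attention maps."""
    eye = torch.eye(seq_len)
    prev_tok = torch.roll(eye, shifts=-1, dims=1)   # head attends to the previous token
    prev_tok[0] = eye[0]                            # first token falls back to itself
    next_tok = torch.roll(eye, shifts=1, dims=1)    # head attends to the next token
    next_tok[-1] = eye[-1]                          # last token falls back to itself
    return torch.stack([prev_tok, eye, next_tok])

# These maps can stand in for softmax(QK^T / sqrt(d)) in selected heads:
# output_h = fixed_pattern_h @ V_h, with no attention parameters to learn.
patterns = fixed_attention_patterns(seq_len=6)
```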
Word Embeddings Inherently Recover the Conceptual Organization of the
Human Mind | Machine learning is a means to uncover deep patterns from rich sources of
data. Here, we find that machine learning can recover the conceptual
organization of the human mind when applied to the natural language use of
millions of people. Utilizing text from billions of webpages, we recover most
of the concepts contained in English, Dutch, and Japanese, as represented in
large scale Word Association networks. Our results justify machine learning as
a means to probe the human mind, at a depth and scale that has been
unattainable using self-report and observational methods. Beyond direct
psychological applications, our methods may prove useful for projects concerned
with defining, assessing, relating, or uncovering concepts in any scientific
field.
| 2,020 | Computation and Language |
Semi-Supervised Speech Recognition via Local Prior Matching | For sequence transduction tasks like speech recognition, a strong structured
prior model encodes rich information about the target space, implicitly ruling
out invalid sequences by assigning them low probability. In this work, we
propose local prior matching (LPM), a semi-supervised objective that distills
knowledge from a strong prior (e.g. a language model) to provide learning
signal to a discriminative model trained on unlabeled speech. We demonstrate
that LPM is theoretically well-motivated, simple to implement, and superior to
existing knowledge distillation techniques under comparable settings. Starting
from a baseline trained on 100 hours of labeled speech, with an additional 360
hours of unlabeled data, LPM recovers 54% and 73% of the word error rate on
clean and noisy test sets relative to a fully supervised model on the same
data.
| 2,020 | Computation and Language |
Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation | Fine-tuning pre-trained language models like BERT has become an effective
approach in NLP and yields state-of-the-art results on many downstream tasks. Recent
studies on adapting BERT to new tasks mainly focus on modifying the model
structure, re-designing the pre-training tasks, and leveraging external data and
knowledge. The fine-tuning strategy itself has yet to be fully explored. In
this paper, we improve the fine-tuning of BERT with two effective mechanisms:
self-ensemble and self-distillation. The experiments on text classification and
natural language inference tasks show our proposed methods can significantly
improve the adaptation of BERT without any external data or knowledge.
| 2,020 | Computation and Language |
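One common reading of these two mechanisms is sketched below: a teacher formed by averaging the student's own recent weights (self-ensemble) and an extra loss pulling student logits toward that teacher (self-distillation). The averaging rate and loss weighting are assumptions, not the paper's exact recipe.

```python
# Hedged sketch of self-ensemble (parameter averaging) plus self-distillation.
import copy
import torch
from torch import nn

def make_teacher(student: nn.Module) -> nn.Module:
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def update_teacher(teacher: nn.Module, student: nn.Module, decay: float = 0.999):
    # "Self-ensemble": moving average over the student's recent parameter states.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(decay).add_(ps, alpha=1 - decay)

def self_distill_loss(student_logits, teacher_logits, labels, alpha: float = 1.0):
    ce = nn.functional.cross_entropy(student_logits, labels)
    distill = nn.functional.mse_loss(student_logits, teacher_logits.detach())
    return ce + alpha * distill
```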
Low-Resource Knowledge-Grounded Dialogue Generation | Responding with knowledge has been recognized as an important capability for
an intelligent conversational agent. Yet knowledge-grounded dialogues, as
training data for learning such a response generation model, are difficult to
obtain. Motivated by the challenge in practice, we consider knowledge-grounded
dialogue generation under a natural assumption that only limited training
examples are available. In such a low-resource setting, we devise a
disentangled response decoder in order to isolate parameters that depend on
knowledge-grounded dialogues from the entire generation model. By this means,
the major part of the model can be learned from a large number of ungrounded
dialogues and unstructured documents, while the remaining small set of
parameters can be fitted well using the limited training examples. Evaluation results on two
benchmarks indicate that with only 1/8 training data, our model can achieve the
state-of-the-art performance and generalize well on out-of-domain knowledge.
| 2,020 | Computation and Language |
Multilingual Twitter Corpus and Baselines for Evaluating Demographic
Bias in Hate Speech Recognition | Existing research on fairness evaluation of document classification models
mainly uses synthetic monolingual data without ground truth for author
demographic attributes. In this work, we assemble and publish a multilingual
Twitter corpus for the task of hate speech detection with four inferred author
demographic factors: age, country, gender and race/ethnicity. The corpus covers
five languages: English, Italian, Polish, Portuguese and Spanish. We evaluate
the inferred demographic labels with a crowdsourcing platform, Figure Eight. To
examine factors that can cause biases, we conduct an empirical analysis of
demographic predictability on the English corpus. We measure the performance of
four popular document classifiers and evaluate the fairness and bias of the
baseline classifiers on the author-level demographic attributes.
| 2,020 | Computation and Language |
Discriminative Adversarial Search for Abstractive Summarization | We introduce a novel approach for sequence decoding, Discriminative
Adversarial Search (DAS), which has the desirable properties of alleviating the
effects of exposure bias without requiring external metrics. Inspired by
Generative Adversarial Networks (GANs), wherein a discriminator is used to
improve the generator, our method differs from GANs in that the generator
parameters are not updated at training time and the discriminator is only used
to drive sequence generation at inference time.
We investigate the effectiveness of the proposed approach on the task of
Abstractive Summarization: the results obtained show that a naive application
of DAS improves over the state-of-the-art methods, with further gains obtained
via discriminator retraining. Moreover, we show how DAS can be effective for
cross-domain adaptation. Finally, all results reported are obtained without
additional rule-based filtering strategies, commonly used by the best
performing systems available: this indicates that DAS can effectively be
deployed without relying on post-hoc modifications of the generated outputs.
| 2,020 | Computation and Language |
Resources for Turkish Dependency Parsing: Introducing the BOUN Treebank
and the BoAT Annotation Tool | In this paper, we introduce the resources that we developed for Turkish
dependency parsing, which include a novel manually annotated treebank (BOUN
Treebank), along with the guidelines we adopted, and a new annotation tool
(BoAT). The manual annotation process we employed was shaped and implemented by
a team of four linguists and five Natural Language Processing (NLP)
specialists. Decisions regarding the annotation of the BOUN Treebank were made
in line with the Universal Dependencies (UD) framework as well as our recent
efforts for unifying the Turkish UD treebanks through manual re-annotation. To
the best of our knowledge, BOUN Treebank is the largest Turkish treebank. It
contains a total of 9,761 sentences from various topics including biographical
texts, national newspapers, instructional texts, popular culture articles, and
essays. In addition, we report the parsing results of a state-of-the-art
dependency parser obtained over the BOUN Treebank as well as two other
treebanks in Turkish. Our results demonstrate that the unification of the
Turkish annotation scheme and the introduction of a more comprehensive treebank
lead to improved performance with regard to dependency parsing.
| 2,021 | Computation and Language |
Parsing Early Modern English for Linguistic Search | We investigate the question of whether advances in NLP over the last few
years make it possible to vastly increase the size of data usable for research
in historical syntax. This brings together many of the usual tools in NLP -
word embeddings, tagging, and parsing - in the service of linguistic queries
over automatically annotated corpora. We train a part-of-speech (POS) tagger
and parser on a corpus of historical English, using ELMo embeddings trained
over a billion words of similar text. The evaluation is based on the standard
metrics, as well as on the accuracy of the query searches using the parsed
data.
| 2,020 | Computation and Language |
Differentiable Reasoning over a Virtual Knowledge Base | We consider the task of answering complex multi-hop questions using a corpus
as a virtual knowledge base (KB). In particular, we describe a neural module,
DrKIT, that traverses textual data like a KB, softly following paths of
relations between mentions of entities in the corpus. At each step the module
uses a combination of sparse-matrix TFIDF indices and a maximum inner product
search (MIPS) on a special index of contextual representations of the mentions.
This module is differentiable, so the full system can be trained end-to-end
using gradient based methods, starting from natural language inputs. We also
describe a pretraining scheme for the contextual representation encoder by
generating hard negative examples using existing knowledge bases. We show that
DrKIT improves accuracy by 9 points on 3-hop questions in the MetaQA dataset,
cutting the gap between text-based and KB-based state-of-the-art by 70%. On
HotpotQA, DrKIT leads to a 10% improvement over a BERT-based re-ranking
approach to retrieving the relevant passages required to answer a question.
DrKIT is also very efficient, processing 10-100x more queries per second than
existing multi-hop systems.
| 2,020 | Computation and Language |
Exploring BERT Parameter Efficiency on the Stanford Question Answering
Dataset v2.0 | In this paper we explore the parameter efficiency of BERT arXiv:1810.04805 on
version 2.0 of the Stanford Question Answering dataset (SQuAD2.0). We evaluate
the parameter efficiency of BERT while freezing a varying number of final
transformer layers as well as including the adapter layers proposed in
arXiv:1902.00751. Additionally, we experiment with the use of context-aware
convolutional (CACNN) filters, as described in arXiv:1709.08294v3, as a final
augmentation layer for the SQuAD2.0 tasks.
This exploration is motivated in part by arXiv:1907.10597, which made a
compelling case for broadening the evaluation criteria of artificial
intelligence models to include various measures of resource efficiency. While
we do not evaluate these models based on their floating point operation
efficiency as proposed in arXiv:1907.10597, we examine efficiency with respect
to training time, inference time, and total number of model parameters. Our
results largely corroborate those of arXiv:1902.00751 for adapter modules,
while also demonstrating that gains in F1 score from adding context-aware
convolutional filters are not practical due to the increase in training and
inference time.
| 2,020 | Computation and Language |
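The layer-freezing experiment can be reproduced in spirit with a few lines: load a BERT QA model and turn off gradients for the last N encoder layers before fine-tuning. The attribute path and the choice of N below are assumptions tied to the Hugging Face BERT implementation.

```python
# Hedged sketch: freeze the final N transformer layers of a BERT QA model.
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

def freeze_final_layers(model, n_frozen: int) -> int:
    layers = model.bert.encoder.layer            # 12 layers for bert-base
    for layer in layers[len(layers) - n_frozen:]:
        for param in layer.parameters():
            param.requires_grad = False          # exclude from fine-tuning
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

trainable = freeze_final_layers(model, n_frozen=6)
print(f"trainable parameters: {trainable:,}")
```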
Multimodal Transformer with Pointer Network for the DSTC8 AVSD Challenge | Audio-Visual Scene-Aware Dialog (AVSD) is an extension from Video Question
Answering (QA) whereby the dialogue agent is required to generate natural
language responses to address user queries and carry on conversations. This is
a challenging task as it involves video features from multiple modalities,
including text, visual, and audio features. The agent also needs to learn
semantic dependencies among user utterances and system responses to make
coherent conversations with humans. In this work, we describe our submission to
the AVSD track of the 8th Dialogue System Technology Challenge. We adopt
dot-product attention to combine text and non-text features of input video. We
further enhance the generation capability of the dialogue agent by adopting
pointer networks to point to tokens from multiple source sequences in each
generation step. Our systems achieve high performance in automatic metrics and
obtain 5th and 6th place in human evaluation among all submissions.
| 2,020 | Computation and Language |
End-to-end Emotion-Cause Pair Extraction via Learning to Link | Emotion-cause pair extraction (ECPE), as an emergent natural language
processing task, aims at jointly investigating emotions and their underlying
causes in documents. It extends the previous emotion cause extraction (ECE)
task, yet without requiring a set of pre-given emotion clauses as in ECE.
Existing approaches to ECPE generally adopt a two-stage method, i.e., (1)
emotion and cause detection, and then (2) pairing the detected emotions and
causes. Such a pipeline method, while intuitive, suffers from two critical
issues: error propagation across stages, which may hinder effectiveness, and
high computational cost, which limits the method's practical application. To
tackle these issues, we propose a multi-task
learning model that can extract emotions, causes and emotion-cause pairs
simultaneously in an end-to-end manner. Specifically, our model regards pair
extraction as a link prediction task, and learns to link from emotion clauses
to cause clauses, i.e., the links are directional. Emotion extraction and cause
extraction are incorporated into the model as auxiliary tasks, which further
boost the pair extraction. Experiments are conducted on an ECPE benchmarking
dataset. The results show that our proposed model outperforms a range of
state-of-the-art approaches.
| 2,022 | Computation and Language |
Edge-Enhanced Graph Convolution Networks for Event Detection with
Syntactic Relation | Event detection (ED), a key subtask of information extraction, aims to
recognize instances of specific event types in text. Previous studies on the
task have verified the effectiveness of integrating syntactic dependency into
graph convolutional networks. However, these methods usually ignore dependency
label information, which conveys rich and useful linguistic knowledge for ED.
In this paper, we propose a novel architecture named Edge-Enhanced Graph
Convolution Networks (EE-GCN), which simultaneously exploits syntactic
structure and typed dependency label information to perform ED. Specifically,
an edge-aware node update module is designed to generate expressive word
representations by aggregating syntactically-connected words through specific
dependency types. Furthermore, to fully explore clues hidden in dependency
edges, a node-aware edge update module is introduced, which refines the
relation representations with contextual information. These two modules are
complementary to each other and mutually reinforcing. We conduct
experiments on the widely used ACE2005 dataset and the results show significant
improvement over competitive baseline methods.
| 2,020 | Computation and Language |
Label-guided Learning for Text Classification | Text classification is one of the most important and fundamental tasks in
natural language processing. The performance of this task depends mainly on text
representation learning. Currently, most existing learning frameworks focus on
encoding local contextual information between words and neglect global clues,
such as label information, when encoding text. In this study, we propose a
label-guided learning framework, LguidedLearn, for text representation and
classification. Our method is novel yet simple: we only insert a label-guided
encoding layer into commonly used text representation learning schemas. The
label-guided layer performs
label-based attentive encoding to map the universal text embedding (encoded by
a contextual information learner) into different label spaces, resulting in
label-wise embeddings. In our proposed framework, the label-guided layer can be
easily and directly applied with a contextual encoding method to perform
jointly learning. Text information is encoded based on both the local
contextual information and the global label clues. Therefore, the obtained text
embeddings are more robust and discriminative for text classification.
Extensive experiments are conducted on benchmark datasets to illustrate the
effectiveness of our proposed method.
| 2,020 | Computation and Language |
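A minimal sketch of a label-guided encoding layer: each label embedding attends over the contextual token states, yielding one label-wise text embedding per label. Dimensions and the single-head dot-product attention are illustrative assumptions, not the exact LguidedLearn layer.

```python
# Hedged sketch of label-based attentive encoding into label-wise embeddings.
import torch
from torch import nn

class LabelGuidedLayer(nn.Module):
    def __init__(self, hidden_dim: int, n_labels: int):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels, hidden_dim)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        """token_states: (batch, seq_len, hidden) from any contextual encoder."""
        labels = self.label_emb.weight                      # (n_labels, hidden)
        scores = torch.einsum("bsh,lh->bls", token_states, labels)
        attn = scores.softmax(dim=-1)                       # attention over tokens
        # One text embedding per label space.
        return torch.einsum("bls,bsh->blh", attn, token_states)

layer = LabelGuidedLayer(hidden_dim=768, n_labels=5)
out = layer(torch.randn(2, 16, 768))        # -> (2, 5, 768) label-wise embeddings
```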
MuST-Cinema: a Speech-to-Subtitles corpus | Growing needs in localising audiovisual content in multiple languages through
subtitles call for the development of automatic solutions for human subtitling.
Neural Machine Translation (NMT) can contribute to the automatisation of
subtitling, facilitating the work of human subtitlers and reducing turn-around
times and related costs. NMT requires high-quality, large, task-specific
training data. The existing subtitling corpora, however, are missing both
alignments to the source language audio and important information about
subtitle breaks. This poses a significant limitation for developing efficient
automatic approaches for subtitling, since the length and form of a subtitle
directly depends on the duration of the utterance. In this work, we present
MuST-Cinema, a multilingual speech translation corpus built from TED subtitles.
The corpus is composed of (audio, transcription, translation) triplets.
Subtitle breaks are preserved by inserting special symbols. We show that the
corpus can be used to build models that efficiently segment sentences into
subtitles and propose a method for annotating existing subtitling corpora with
subtitle breaks, conforming to length constraints.
| 2,020 | Computation and Language |
What BERT Sees: Cross-Modal Transfer for Visual Question Generation | Pre-trained language models have recently contributed to significant advances
in NLP tasks. Recently, multi-modal versions of BERT have been developed, using
heavy pre-training relying on vast corpora of aligned textual and image data,
primarily applied to classification tasks such as VQA. In this paper, we are
interested in evaluating the visual capabilities of BERT out-of-the-box, by
avoiding pre-training made on supplementary data. We choose to study Visual
Question Generation, a task of great interest for grounded dialog, which enables
us to study the impact of each modality (as the input can be visual and/or textual).
Moreover, the generation aspect of the task requires an adaptation since BERT
is primarily designed as an encoder. We introduce BERT-gen, a BERT-based
architecture for text generation, able to leverage either mono- or multi-modal
representations. The results reported under different configurations indicate
an innate capacity for BERT-gen to adapt to multi-modal data and text
generation, even with little data available, avoiding expensive pre-training. The
proposed model obtains substantial improvements over the state-of-the-art on
two established VQG datasets.
| 2,020 | Computation and Language |
Small-Footprint Open-Vocabulary Keyword Spotting with Quantized LSTM
Networks | We explore a keyword-based spoken language understanding system, in which the
intent of the user can directly be derived from the detection of a sequence of
keywords in the query. In this paper, we focus on an open-vocabulary keyword
spotting method, allowing the user to define their own keywords without having
to retrain the whole model. We describe the different design choices leading to
a fast and small-footprint system, able to run on tiny devices, for any
arbitrary set of user-defined keywords, without training data specific to those
keywords. The model, based on a quantized long short-term memory (LSTM) neural
network, trained with connectionist temporal classification (CTC), weighs less
than 500KB. Our approach takes advantage of some properties of the predictions
of CTC-trained networks to calibrate the confidence scores and implement a fast
detection algorithm. The proposed system outperforms a standard keyword-filler
model approach.
| 2,020 | Computation and Language |
KEML: A Knowledge-Enriched Meta-Learning Framework for Lexical Relation
Classification | Lexical relations describe how concepts are semantically related, in the form
of relation triples. The accurate prediction of lexical relations between
concepts is challenging, due to the sparsity of patterns indicating the
existence of such relations. We propose the Knowledge-Enriched Meta-Learning
(KEML) framework to address the task of lexical relation classification. In
KEML, the LKB-BERT (Lexical Knowledge Base-BERT) model is presented to learn
concept representations from massive text corpora, with rich lexical knowledge
injected by distant supervision. A probabilistic distribution of auxiliary
tasks is defined to increase the model's ability to recognize different types
of lexical relations. We further combine a meta-learning process over the
auxiliary task distribution and supervised learning to train the neural lexical
relation classifier. Experiments over multiple datasets show that KEML
outperforms state-of-the-art methods.
| 2,020 | Computation and Language |
Detecting Asks in SE attacks: Impact of Linguistic and Structural
Knowledge | Social engineers attempt to manipulate users into undertaking actions such as
downloading malware by clicking links or providing access to money or sensitive
information. Natural language processing, computational sociolinguistics, and
media-specific structural clues provide a means for detecting both the ask
(e.g., buy gift card) and the risk/reward implied by the ask, which we call
framing (e.g., lose your job, get a raise). We apply linguistic resources such
as Lexical Conceptual Structure to tackle ask detection and also leverage
structural clues such as links and their proximity to identified asks to
improve confidence in our results. Our experiments indicate that the
performance of ask detection, framing detection, and identification of the top
ask is improved by linguistically motivated classes coupled with structural
clues such as links. Our approach is implemented in a system that informs users
about social engineering risk situations.
| 2,020 | Computation and Language |
MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression
of Pre-Trained Transformers | Pre-trained language models (e.g., BERT (Devlin et al., 2018) and its
variants) have achieved remarkable success in varieties of NLP tasks. However,
these models usually consist of hundreds of millions of parameters which brings
challenges for fine-tuning and online serving in real-life applications due to
latency and capacity constraints. In this work, we present a simple and
effective approach to compress large Transformer (Vaswani et al., 2017) based
pre-trained models, termed as deep self-attention distillation. The small model
(student) is trained by deeply mimicking the self-attention module, which plays
a vital role in Transformer networks, of the large model (teacher).
Specifically, we propose distilling the self-attention module of the last
Transformer layer of the teacher, which is effective and flexible for the
student. Furthermore, we introduce the scaled dot-product between values in the
self-attention module as the new deep self-attention knowledge, in addition to
the attention distributions (i.e., the scaled dot-product of queries and keys)
that have been used in existing works. Moreover, we show that introducing a
teacher assistant (Mirzadeh et al., 2019) also helps the distillation of large
pre-trained Transformer models. Experimental results demonstrate that our
monolingual model outperforms state-of-the-art baselines across different
student model sizes. In particular, it retains more than 99% accuracy on
SQuAD 2.0 and several GLUE benchmark tasks using 50% of the Transformer
parameters and computations of the teacher model. We also obtain competitive
results in applying deep self-attention distillation to multilingual
pre-trained models.
| 2,020 | Computation and Language |
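The two transfer terms can be sketched as KL divergences: one between teacher and student attention distributions of the last layer, and one between their value-relation matrices (softmax of scaled value-value dot-products). Shapes, reductions, and the omission of head-splitting details are assumptions.

```python
# Hedged sketch of deep self-attention distillation losses (MiniLM-style).
import math
import torch
import torch.nn.functional as F

def relation(x: torch.Tensor) -> torch.Tensor:
    """x: (batch, heads, seq, d_head) -> row-wise log relation distribution."""
    scores = x @ x.transpose(-1, -2) / math.sqrt(x.size(-1))
    return F.log_softmax(scores, dim=-1)

def minilm_like_loss(t_attn_logits, s_attn_logits, t_values, s_values):
    # Attention-distribution transfer (queries x keys of the last layer).
    attn_loss = F.kl_div(F.log_softmax(s_attn_logits, dim=-1),
                         F.softmax(t_attn_logits, dim=-1),
                         reduction="batchmean")
    # Value-relation transfer (values x values of the last layer).
    val_loss = F.kl_div(relation(s_values),
                        relation(t_values).exp(),
                        reduction="batchmean")
    return attn_loss + val_loss
```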
A more abstractive summarization model | The pointer-generator network is an extremely popular method of text
summarization. More recent works in this domain still build on top of the
baseline pointer generator by augmenting a content selection phase, or by
decomposing the decoder into a contextual network and a language model.
However, all such models that are based on the pointer-generator base
architecture cannot generate novel words in the summary and mostly copy words
from the source text. In our work, we first thoroughly investigate why the
pointer-generator network is unable to generate novel words, and then address
that by adding an Out-of-vocabulary (OOV) penalty. This enables us to improve
the amount of novelty/abstraction significantly. We use normalized n-gram
novelty scores as a metric for determining the level of abstraction. Moreover,
we also report ROUGE scores for our model, since most summarization models are
evaluated with R-1, R-2, and R-L scores.
| 2,020 | Computation and Language |
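The novelty metric mentioned above amounts to counting summary n-grams that never appear in the source. A simple whitespace-tokenized version is sketched below; the paper's normalization may differ.

```python
# Hedged sketch of an n-gram novelty score for a generated summary.
def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_novelty(summary: str, source: str, n: int = 2) -> float:
    summ_ngrams = ngrams(summary.lower().split(), n)
    src_ngrams = ngrams(source.lower().split(), n)
    if not summ_ngrams:
        return 0.0
    # Fraction of summary n-grams that do not appear in the source text.
    return len(summ_ngrams - src_ngrams) / len(summ_ngrams)

print(ngram_novelty("the cat sat quietly", "the cat sat on the mat", n=2))
```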
Language-Independent Tokenisation Rivals Language-Specific Tokenisation
for Word Similarity Prediction | Language-independent tokenisation (LIT) methods that do not require labelled
language resources or lexicons have recently gained popularity because of their
applicability in resource-poor languages. Moreover, they compactly represent a
language using a fixed size vocabulary and can efficiently handle unseen or
rare words. On the other hand, language-specific tokenisation (LST) methods
have a long and established history, and are developed using carefully created
lexicons and training resources. Unlike subtokens produced by LIT methods, LST
methods produce valid morphological subwords. Despite the contrasting
trade-offs between LIT and LST methods, their relative performance on downstream
NLP tasks remains unclear. In this paper, we empirically compare the two approaches
using semantic similarity measurement as an evaluation task across a diverse
set of languages. Our experimental results covering eight languages show that
LST consistently outperforms LIT when the vocabulary size is large, but LIT can
produce comparable or better results than LST in many languages with
comparatively smaller (i.e. less than 100K words) vocabulary sizes, encouraging
the use of LIT when language-specific resources are unavailable, incomplete or
a smaller model is required. Moreover, we find smoothed inverse frequency (SIF)
to be an accurate method for creating word embeddings from subword embeddings
for multilingual semantic similarity prediction tasks. Further analysis of the
nearest neighbours of tokens shows that semantically and syntactically related
tokens are embedded close together in subword embedding spaces.
| 2,020 | Computation and Language |
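Smoothed inverse frequency can be sketched with the common recipe: weight each token vector by a/(a + p(token)), average, and remove the first principal component. The constant a and the single-component removal follow the usual SIF formulation and may differ in detail from the paper's setup.

```python
# Hedged sketch of SIF embeddings built from subword (or word) vectors.
import numpy as np

def sif_embeddings(token_lists, vectors, token_freq, a: float = 1e-3) -> np.ndarray:
    """token_lists: list of token sequences; vectors: token -> np.ndarray;
    token_freq: token -> relative frequency in the corpus."""
    dim = len(next(iter(vectors.values())))
    emb = np.zeros((len(token_lists), dim))
    for i, tokens in enumerate(token_lists):
        known = [t for t in tokens if t in vectors]
        if not known:
            continue
        weights = np.array([a / (a + token_freq.get(t, 0.0)) for t in known])
        emb[i] = (weights[:, None] * np.stack([vectors[t] for t in known])).mean(axis=0)
    # Remove the projection onto the first principal component.
    u = np.linalg.svd(emb, full_matrices=False)[2][0]
    return emb - emb @ np.outer(u, u)
```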
Semantic Relatedness for Keyword Disambiguation: Exploiting Different
Embeddings | Understanding the meaning of words is crucial for many tasks that involve
human-machine interaction. This has been tackled by research in Word Sense
Disambiguation (WSD) in the Natural Language Processing (NLP) field. Recently,
WSD and many other NLP tasks have taken advantage of embeddings-based
representation of words, sentences, and documents. However, when it comes to
WSD, most embeddings models suffer from ambiguity as they do not capture the
different possible meanings of the words. Even when they do, the list of
possible meanings for a word (sense inventory) has to be known in advance at
training time to be included in the embeddings space. Unfortunately, there are
situations in which such a sense inventory is not known in advance (e.g., an
ontology selected at run-time), or it evolves with time and its status diverges
from the one at training time. This hampers the use of embeddings models for
WSD. Furthermore, traditional WSD techniques do not perform well in situations
in which the available linguistic information is very scarce, such as the case
of keyword-based queries. In this paper, we propose an approach to keyword
disambiguation grounded in the semantic relatedness between words and senses
provided by an external inventory (ontology) that is not known at training
time. Building on previous works, we present a semantic relatedness measure
that uses word embeddings, and explore different disambiguation algorithms to
also exploit both word and sentence representations. Experimental results show
that this approach achieves results comparable with the state of the art when
applied for WSD, without training for a particular domain.
| 2,020 | Computation and Language |
End-to-End Entity Linking and Disambiguation leveraging Word and
Knowledge Graph Embeddings | Entity linking - connecting entity mentions in a natural language utterance
to knowledge graph (KG) entities - is a crucial step for question answering over
KGs. It is often based on measuring the string similarity between the entity
label and its mention in the question. The relation referred to in the question
can help to disambiguate between entities with the same label. This can be
misleading if an incorrect relation has been identified in the relation linking
step. However, an incorrect relation may still be semantically similar to the
relation in which the correct entity forms a triple within the KG, which could
be captured by the similarity of their KG embeddings. Based on this idea, we
propose the first end-to-end neural network approach that employs KG as well as
word embeddings to perform joint relation and entity classification of simple
questions while implicitly performing entity disambiguation with the help of a
novel gating mechanism. An empirical evaluation shows that the proposed
approach achieves a performance comparable to state-of-the-art entity linking
while requiring less post-processing.
| 2,020 | Computation and Language |
Speech2Phone: A Novel and Efficient Method for Training Speaker
Recognition Models | In this paper we present an efficient method for training models for speaker
recognition using small or under-resourced datasets. This method requires less
data than other SOTA (State-Of-The-Art) methods, e.g. the Angular Prototypical
and GE2E loss functions, while achieving similar results to those methods. This
is done using the knowledge of the reconstruction of a phoneme in the speaker's
voice. For this purpose, a new dataset was built, composed of 40 male speakers,
who read sentences in Portuguese, totaling approximately 3h. We compare the
three best architectures trained using our method and select the best one, which
has a shallow architecture. Then, we compared this model with the
SOTA method for the speaker recognition task: the Fast ResNet-34 trained with
approximately 2,000 hours, using the loss functions Angular Prototypical and
GE2E. Three experiments were carried out with datasets in different languages.
Among these three experiments, our model achieved the second best result in two
experiments and the best result in one of them. This highlights the importance
of our method, which proved to be a great competitor to SOTA speaker
recognition models, with 500x less data and a simpler approach.
| 2,021 | Computation and Language |
Detecting Potential Topics In News Using BERT, CRF and Wikipedia | For a news content distribution platform like Dailyhunt, Named Entity
Recognition is a pivotal task for building better user recommendation and
notification algorithms. Apart from identifying names, locations, and
organisations in the news for 13+ Indian languages and using them in algorithms,
we also need to identify n-grams which do not necessarily fit the definition of
a Named Entity yet are still important, for example, "me too movement", "beef
ban", "alwar mob lynching". In this exercise, given an English language text,
we are trying to detect case-less n-grams which convey important information
and can be used as topics and/or hashtags for a news item. The model is built using
Wikipedia titles data, private English news corpus and BERT-Multilingual
pre-trained model, Bi-GRU and CRF architecture. It shows promising results when
compared with industry best Flair, Spacy and Stanford-caseless-NER in terms of
F1 and especially Recall.
| 2,020 | Computation and Language |
Using Distributional Thesaurus Embedding for Co-hyponymy Detection | Discriminating lexical relations among distributionally similar words has
always been a challenge for the natural language processing (NLP) community. In
this paper, we investigate whether the network embedding of distributional
thesaurus can be effectively utilized to detect co-hyponymy relations. By
extensive experiments over three benchmark datasets, we show that the vector
representation obtained by applying node2vec on distributional thesaurus
outperforms the state-of-the-art models for binary classification of
co-hyponymy vs. hypernymy, as well as co-hyponymy vs. meronymy, by huge
margins.
| 2,020 | Computation and Language |
Marathi To English Neural Machine Translation With Near Perfect Corpus
And Transformers | There have been very few attempts to benchmark the performance of
state-of-the-art algorithms for the Neural Machine Translation task on Indian
languages. Google, Bing, Facebook and Yandex are some of the very few companies
which have built translation systems for a few of the Indian languages. Among
them, translation results from Google are considered better, based on general
inspection. Bing Translator does not even support Marathi, a language which has
around 95 million speakers and ranks 15th in the world in terms of combined
primary and secondary speakers. In this exercise, we trained and compared a
variety of neural Marathi-to-English translators, using the BERT tokenizer from
Hugging Face and various Transformer-based architectures on Facebook's Fairseq
platform, with a limited but almost correct parallel corpus, achieving better
BLEU scores than Google on the Tatoeba and Wikimedia open datasets.
| 2,020 | Computation and Language |
Towards Zero-shot Learning for Automatic Phonemic Transcription | Automatic phonemic transcription tools are useful for low-resource language
documentation. However, due to the lack of training sets, only a tiny fraction
of languages have phonemic transcription tools. Fortunately, multilingual
acoustic modeling provides a solution given limited audio training data. A more
challenging problem is to build phonemic transcribers for languages with zero
training data. The difficulty of this task is that phoneme inventories often
differ between the training languages and the target language, making it
infeasible to recognize unseen phonemes. In this work, we address this problem
by adopting the idea of zero-shot learning. Our model is able to recognize
unseen phonemes in the target language without any training data. In our model,
we decompose phonemes into corresponding articulatory attributes such as vowel
and consonant. Instead of predicting phonemes directly, we first predict
distributions over articulatory attributes, and then compute phoneme
distributions with a customized acoustic model. We evaluate our model by
training it using 13 languages and testing it using 7 unseen languages. We find
that it achieves 7.7% better phoneme error rate on average over a standard
multilingual model.
| 2,020 | Computation and Language |
Train Large, Then Compress: Rethinking Model Size for Efficient Training
and Inference of Transformers | Since hardware resources are limited, the objective of training deep learning
models is typically to maximize accuracy subject to the time and memory
constraints of training and inference. We study the impact of model size in
this setting, focusing on Transformer models for NLP tasks that are limited by
compute: self-supervised pretraining and high-resource machine translation. We
first show that even though smaller Transformer models execute faster per
iteration, wider and deeper models converge in significantly fewer steps.
Moreover, this acceleration in convergence typically outpaces the additional
computational overhead of using larger models. Therefore, the most
compute-efficient training strategy is to counterintuitively train extremely
large models but stop after a small number of iterations.
This leads to an apparent trade-off between the training efficiency of large
Transformer models and the inference efficiency of small Transformer models.
However, we show that large models are more robust to compression techniques
such as quantization and pruning than small models. Consequently, one can get
the best of both worlds: heavily compressed, large models achieve higher
accuracy than lightly compressed, small models.
| 2,020 | Computation and Language |
Universal Phone Recognition with a Multilingual Allophone System | Multilingual models can improve language processing, particularly for low
resource situations, by sharing parameters across languages. Multilingual
acoustic models, however, generally ignore the difference between phonemes
(sounds that can support lexical contrasts in a particular language) and their
corresponding phones (the sounds that are actually spoken, which are language
independent). This can lead to performance degradation when combining a variety
of training languages, as identically annotated phonemes can actually
correspond to several different underlying phonetic realizations. In this work,
we propose a joint model of both language-independent phone and
language-dependent phoneme distributions. In multilingual ASR experiments over
11 languages, we find that this model improves testing performance by 2%
phoneme error rate absolute in low-resource conditions. Additionally, because
we are explicitly modeling language-independent phones, we can build a
(nearly-)universal phone recognizer that, when combined with the PHOIBLE large,
manually curated database of phone inventories, can be customized into 2,000
language dependent recognizers. Experiments on two low-resourced indigenous
languages, Inuktitut and Tusom, show that our recognizer achieves phone
accuracy improvements of more than 17%, moving a step closer to speech
recognition for all languages in the world.
| 2,020 | Computation and Language |
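The phone-to-phoneme step can be illustrated with an allophone matrix: a language-dependent phoneme's score is pooled over the language-independent phones that realize it. The max-pooling choice and the toy inventories below are illustrative assumptions.

```python
# Hedged sketch of mapping phone posteriors to phoneme posteriors via allophones.
import numpy as np

def phone_to_phoneme(phone_posteriors: np.ndarray, allophone: np.ndarray) -> np.ndarray:
    """phone_posteriors: (frames, n_phones); allophone: (n_phonemes, n_phones)
    binary matrix, 1 if the phone is an allophone of the phoneme."""
    masked = phone_posteriors[:, None, :] * allophone[None, :, :]   # (T, P, phones)
    phoneme_scores = masked.max(axis=-1)                            # pool over allophones
    return phoneme_scores / phoneme_scores.sum(axis=-1, keepdims=True)

# Toy example: 3 phones, 2 phonemes; phoneme 0 is realized by phones {0, 1}.
allophone = np.array([[1, 1, 0],
                      [0, 0, 1]], dtype=float)
posteriors = np.array([[0.6, 0.3, 0.1]])
print(phone_to_phoneme(posteriors, allophone))   # phoneme 0 dominates this frame
```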
Echo State Neural Machine Translation | We present neural machine translation (NMT) models inspired by echo state
network (ESN), named Echo State NMT (ESNMT), in which the encoder and decoder
layer weights are randomly generated and then fixed throughout training. We show
that even with this extremely simple model construction and training procedure,
ESNMT can already reach 70-80% quality of fully trainable baselines. We examine
how spectral radius of the reservoir, a key quantity that characterizes the
model, determines the model behavior. Our findings indicate that randomized
networks can work well even for complicated sequence-to-sequence prediction NLP
tasks.
| 2,020 | Computation and Language |
Analysis of diversity-accuracy tradeoff in image captioning | We investigate the effect of different model architectures, training
objectives, hyperparameter settings and decoding procedures on the diversity of
automatically generated image captions. Our results show that 1) simple
decoding by naive sampling, coupled with a low temperature, is a competitive and
fast method to produce diverse and accurate caption sets; 2) training with a
CIDEr-based reward using reinforcement learning harms the diversity properties
of the resulting generator, and this cannot be mitigated by manipulating decoding
parameters. In addition, we propose a new metric, AllSPICE, for evaluating both
the accuracy and diversity of a set of captions with a single value.
| 2,020 | Computation and Language |
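The "naive sampling with low temperature" decoding highlighted in the first finding above can be written in a few lines; the logits below are a toy stand-in for any captioning model's output distribution.

```python
import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 0.5) -> int:
    # Dividing logits by a temperature below 1.0 sharpens the distribution,
    # trading some diversity for accuracy before sampling.
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

logits = torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0])   # fake 5-word vocabulary
caption_ids = [sample_next_token(logits, temperature=0.5) for _ in range(3)]
print(caption_ids)
```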
CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue
Dataset | To advance multi-domain (cross-domain) dialogue modeling as well as alleviate
the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first
large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It
contains 6K dialogue sessions and 102K utterances for 5 domains, including
hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains
rich annotation of dialogue states and dialogue acts at both user and system
sides. About 60% of the dialogues have cross-domain user goals that favor
inter-domain dependency and encourage natural transition across domains in
conversation. We also provide a user simulator and several benchmark models for
pipelined task-oriented dialogue systems, which will help researchers
compare and evaluate their models on this corpus. The large size and rich
annotation of CrossWOZ make it suitable to investigate a variety of tasks in
cross-domain dialogue modeling, such as dialogue state tracking, policy
learning, user simulation, etc.
| 2,020 | Computation and Language |
Integrating Boundary Assembling into a DNN Framework for Named Entity
Recognition in Chinese Social Media Text | Named entity recognition is a challenging task in Natural Language
Processing, especially for informal and noisy social media text. Chinese word
boundaries are also entity boundaries; therefore, named entity recognition for
Chinese text can benefit from word boundary detection, as produced by Chinese
word segmentation. Yet Chinese word segmentation poses its own difficulty
because it is influenced by several factors, e.g., segmentation criteria and the
employed algorithm. If handled improperly, segmentation errors may cascade and
degrade the quality of the subsequent named entity recognition. In this paper we integrate
a boundary assembling method with the state-of-the-art deep neural network
model, and incorporate the updated word boundary information into a conditional
random field model for named entity recognition. Our method shows a 2% absolute
improvement over previous state-of-the-art results.
| 2,020 | Computation and Language |
Squashed Shifted PMI Matrix: Bridging Word Embeddings and Hyperbolic
Spaces | We show that removing sigmoid transformation in the skip-gram with negative
sampling (SGNS) objective does not harm the quality of word vectors
significantly and at the same time is related to factorizing a squashed shifted
PMI matrix which, in turn, can be treated as a connection probabilities matrix
of a random graph. Empirically, such a graph is a complex network, i.e., it has
strong clustering and scale-free degree distribution, and is tightly connected
with hyperbolic spaces. In short, we show the connection between static word
embeddings and hyperbolic spaces through the squashed shifted PMI matrix using
analytical and empirical methods.
| 2,020 | Computation and Language |
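A small sketch of how a squashed shifted PMI matrix could be computed from a co-occurrence matrix, assuming the "squashing" is a sigmoid applied to PMI shifted by log k (the negative-sampling constant); this is one reading of the abstract above, not code from the paper.

```python
import numpy as np

def squashed_shifted_pmi(counts: np.ndarray, k: float = 5.0) -> np.ndarray:
    """counts[i, j] = co-occurrence count of word i and context j."""
    total = counts.sum()
    p_wc = counts / total
    p_w = counts.sum(axis=1, keepdims=True) / total
    p_c = counts.sum(axis=0, keepdims=True) / total
    with np.errstate(divide="ignore"):
        pmi = np.log(p_wc / (p_w * p_c))      # -inf where counts are zero
    shifted = pmi - np.log(k)                 # shift by the negative-sampling constant
    return 1.0 / (1.0 + np.exp(-shifted))     # sigmoid "squash" into (0, 1)

counts = np.array([[10, 2, 0], [3, 8, 1], [0, 1, 6]], dtype=float)
print(squashed_shifted_pmi(counts).round(3))
```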
Improving cross-lingual model transfer by chunking | We present a shallow-parser-guided cross-lingual model transfer approach that
addresses the syntactic differences between source and target languages more
effectively. In this work, we treat the chunks or phrases in a sentence as
transfer units, separately handling the differences in the ordering of words
within phrases and in the ordering of phrases within a sentence.
| 2,020 | Computation and Language |
Annotation of Emotion Carriers in Personal Narratives | We are interested in the problem of understanding personal narratives (PN) -
spoken or written - recollections of facts, events, and thoughts. In PN,
emotion carriers are the speech or text segments that best explain the
emotional state of the user. Such segments may include entities and verb or noun
phrases. Advanced automatic understanding of PNs requires not only predicting
the user's emotional state but also identifying which events (e.g., "the loss of
a relative" or "the visit of grandpa") or people (e.g., "the old group of high
school mates") carry the emotion manifested during the personal
recollection. This work proposes and evaluates an annotation model for
identifying emotion carriers in spoken personal narratives. Compared to other
text genres such as news and microblogs, spoken PNs are particularly
challenging because a narrative is usually unstructured, involving multiple
sub-events and characters as well as thoughts and associated emotions perceived
by the narrator. In this work, we experiment with annotating emotion carriers
from speech transcriptions in the Ulm State-of-Mind in Speech (USoMS) corpus, a
dataset of German PNs. We believe this resource could be used for experiments
in the automatic extraction of emotion carriers from PN, a task that could
provide further advancements in narrative understanding.
| 2,020 | Computation and Language |
A Primer in BERTology: What we know about how BERT works | Transformer-based models have pushed the state of the art in many areas of NLP,
but our understanding of what is behind their success is still limited. This
paper is the first survey of over 150 studies of the popular BERT model. We
review the current state of knowledge about how BERT works, what kind of
information it learns and how it is represented, common modifications to its
training objectives and architecture, the overparameterization issue and
approaches to compression. We then outline directions for future research.
| 2,020 | Computation and Language |
Few-shot Natural Language Generation for Task-Oriented Dialog | As a crucial component in task-oriented dialog systems, the Natural Language
Generation (NLG) module converts a dialog act represented in a semantic form
into a response in natural language. The success of traditional template-based
or statistical models typically relies on heavily annotated data, which is
infeasible for new domains. Therefore, it is pivotal for an NLG system to
generalize well with limited labelled data in real applications. To this end,
we present FewShotWoz, the first NLG benchmark to simulate the few-shot
learning setting in task-oriented dialog systems. Further, we develop the
SC-GPT model. It is pre-trained on a large annotated NLG corpus to
acquire controllable generation ability, and fine-tuned with only a few
domain-specific labels to adapt to new domains. Experiments on FewShotWoz and
the large Multi-Domain-WOZ datasets show that the proposed SC-GPT significantly
outperforms existing methods, measured by various automatic metrics and human
evaluations.
| 2,020 | Computation and Language |
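To make the "dialog act in, natural-language response out" setting above concrete, here is a rough fine-tuning step with the Hugging Face transformers library on one linearized act-response pair; the act syntax, the "&" separator, and the example pair are invented for illustration and do not reproduce SC-GPT's pre-training corpus or hyperparameters.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One (dialog act, response) pair linearized into a single string (format assumed).
text = "inform ( name = Blue Spice ; food = Italian ) & Blue Spice serves Italian food."
ids = tok(text, return_tensors="pt").input_ids

model.train()
out = model(ids, labels=ids)   # causal LM loss over act + response tokens
out.loss.backward()
opt.step()                     # one fine-tuning step
print(float(out.loss))
```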
Generating Followup Questions for Interpretable Multi-hop Question
Answering | We propose a framework for answering open domain multi-hop questions in which
partial information is read and used to generate followup questions, to finally
be answered by a pretrained single-hop answer extractor. This framework makes
each hop interpretable, and makes the retrieval associated with later hops as
flexible and specific as for the first hop. As a first instantiation of this
framework, we train a pointer-generator network to predict followup questions
based on the question and partial information. This provides a novel
application of a neural question generation network, which is applied to give
weak ground truth single-hop followup questions based on the final answers and
their supporting facts. Learning to generate followup questions that select the
relevant answer spans against downstream supporting facts, while avoiding
distracting premises, poses an exciting semantic challenge for text generation.
We present an evaluation using the two-hop bridge questions of HotpotQA.
| 2,020 | Computation and Language |
Temporal Convolutional Attention-based Network For Sequence Modeling | With the development of feed-forward models, the default choice for sequence
modeling has gradually shifted away from recurrent networks. Many powerful
feed-forward models based on convolutional networks and attention mechanisms
have been proposed and show great potential for sequence modeling tasks. We ask
whether there is an architecture that can not only serve as an approximate
substitute for recurrent networks, but also absorb the advantages of
feed-forward models. We therefore propose an exploratory architecture, referred
to as the Temporal Convolutional Attention-based Network (TCAN), which combines
a temporal convolutional network with an attention mechanism. TCAN includes two
parts: Temporal Attention (TA), which captures relevant features inside the
sequence, and Enhanced Residual (ER), which extracts important information from
shallow layers and transfers it to deep layers. We improve the state-of-the-art
bpc/perplexity results to 30.28 on word-level PTB, 1.092 on character-level
PTB, and 9.20 on WikiText-2.
| 2,023 | Computation and Language |
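A loose sketch of combining a causal temporal convolution with masked self-attention, in the spirit of the abstract above; it is not the authors' exact TA/ER design, and all sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAttnBlock(nn.Module):
    """Causal temporal convolution followed by causally masked self-attention."""
    def __init__(self, dim=128, kernel=3, heads=4):
        super().__init__()
        self.pad = kernel - 1
        self.conv = nn.Conv1d(dim, dim, kernel)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                       # x: (batch, time, dim)
        h = x.transpose(1, 2)                   # -> (batch, dim, time)
        h = self.conv(F.pad(h, (self.pad, 0)))  # left-pad only: causal convolution
        h = h.transpose(1, 2)
        t = h.size(1)
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        a, _ = self.attn(h, h, h, attn_mask=mask)   # causal self-attention
        return self.norm(x + a)                 # residual connection

print(ConvAttnBlock()(torch.randn(2, 10, 128)).shape)
```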
UKARA 1.0 Challenge Track 1: Automatic Short-Answer Scoring in Bahasa
Indonesia | We describe our third-place solution to the UKARA 1.0 challenge on automated
essay scoring. The task consists of a binary classification problem on two
datasets: answers from two different questions. We ended up using two
different models for the two datasets. For task A, we applied a random forest
algorithm on features extracted using unigram with latent semantic analysis
(LSA). On the other hand, for task B, we only used logistic regression on
TF-IDF features. Our model results in an F1 score of 0.812.
| 2,020 | Computation and Language |
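The task-B pipeline described above (TF-IDF features plus logistic regression) is simple enough to sketch with scikit-learn; the toy answers and 0/1 scores below are invented stand-ins for the challenge data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

answers = ["they would migrate to safer areas",
           "i do not know",
           "people would lose their homes and move away",
           "no idea at all"]
labels = [1, 0, 1, 0]   # invented binary scores

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(answers, labels)
print(clf.predict(["many people would move to another city"]))
```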
Robust Unsupervised Neural Machine Translation with Adversarial
Denoising Training | Unsupervised neural machine translation (UNMT) has recently attracted great
interest in the machine translation community. The main advantage of UNMT lies
in the easy collection of the large amount of monolingual training sentences it
requires, while achieving only slightly worse performance than supervised neural
machine translation, which requires expensive annotated translation pairs, on
some translation tasks. In most studies, UNMT is trained with clean data without
considering its robustness to noisy data. However, in real-world scenarios, the
collected input sentences usually contain noise, which degrades the performance
of the translation system since UNMT is sensitive to small perturbations of the
input sentences. In this paper, we explicitly take noisy data into consideration
for the first time to improve the robustness of UNMT-based systems. First, we
clearly define two types of noise in training sentences, i.e., word noise and
word order noise, and empirically investigate their effects on UNMT; we then
propose adversarial training methods with a denoising process for UNMT.
Experimental results on several language pairs show that our proposed methods
substantially improve the robustness of conventional UNMT systems in noisy
scenarios.
| 2,020 | Computation and Language |
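The two noise types named above, word noise and word order noise, can be illustrated with simple corruption functions; the exact noise definitions and rates used in the paper may differ.

```python
import random

def word_noise(tokens, p=0.1, vocab=("foo", "bar", "baz")):
    # Randomly replace tokens with other (here placeholder) words.
    return [random.choice(vocab) if random.random() < p else t for t in tokens]

def word_order_noise(tokens, max_shift=3):
    # Locally shuffle: each token may move at most `max_shift` positions.
    keys = [i + random.uniform(0, max_shift) for i in range(len(tokens))]
    return [t for _, t in sorted(zip(keys, tokens))]

sent = "the quick brown fox jumps over the lazy dog".split()
print(word_noise(sent))
print(word_order_noise(sent))
```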
Modeling Future Cost for Neural Machine Translation | Existing neural machine translation (NMT) systems utilize
sequence-to-sequence neural networks to generate target translation word by
word, and then make the generated word at each time-step and the counterpart in
the references as consistent as possible. However, the trained translation
model tends to focus on ensuring the accuracy of the generated target word at
the current time-step and does not consider its future cost, i.e., the expected
cost of generating the subsequent target translation (the next target word). To
address this issue, we propose a simple and effective
method to model the future cost of each target word for NMT systems. In detail,
a time-dependent future cost is estimated based on the current generated target
word and its contextual information to boost the training of the NMT model.
Furthermore, the learned future context representation at the current time-step
is used to help the generation of the next target word in the decoding.
Experimental results on three widely-used translation datasets, including the
WMT14 German-to-English, WMT14 English-to-French, and WMT17 Chinese-to-English,
show that the proposed approach achieves significant improvements over a strong
Transformer-based NMT baseline.
| 2,020 | Computation and Language |
DC-BERT: Decoupling Question and Document for Efficient Contextual
Encoding | Recent studies on open-domain question answering have achieved prominent
performance improvement using pre-trained language models such as BERT.
State-of-the-art approaches typically follow the "retrieve and read" pipeline
and employ BERT-based reranker to filter retrieved documents before feeding
them into the reader module. The BERT retriever takes as input the
concatenation of question and each retrieved document. Despite the success of
these approaches in terms of QA accuracy, due to the concatenation they can
hardly handle a high throughput of incoming questions, each with a large
collection of retrieved documents. To address the efficiency problem, we
propose DC-BERT, a decoupled contextual encoding framework that has dual BERT
models: an online BERT which encodes the question only once, and an offline
BERT which pre-encodes all the documents and caches their encodings. On SQuAD
Open and Natural Questions Open datasets, DC-BERT achieves 10x speedup on
document retrieval, while retaining most (about 98%) of the QA performance
compared to state-of-the-art approaches for open-domain question answering.
| 2,020 | Computation and Language |
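A simplified sketch of the decoupled encoding idea above: documents are pre-encoded once and cached offline, and only the question is encoded online. DC-BERT uses two BERT models and a trainable Transformer interaction layer; here a single BERT and a dot-product score keep the sketch short.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def encode(text):
    with torch.no_grad():
        return bert(**tok(text, return_tensors="pt")).last_hidden_state[:, 0]  # [CLS]

# Offline: pre-encode and cache every document once.
docs = ["Paris is the capital of France.", "The Nile is a river in Africa."]
doc_cache = {d: encode(d) for d in docs}

# Online: encode the incoming question once, then score cached documents.
q = encode("What is the capital of France?")
scores = {d: float(q @ v.T) for d, v in doc_cache.items()}
print(max(scores, key=scores.get))
```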
TextBrewer: An Open-Source Knowledge Distillation Toolkit for Natural
Language Processing | In this paper, we introduce TextBrewer, an open-source knowledge distillation
toolkit designed for natural language processing. It works with different
neural network models and supports various kinds of supervised learning tasks,
such as text classification, reading comprehension, and sequence labeling.
TextBrewer provides a simple and uniform workflow that enables quick setup
of distillation experiments with highly flexible configurations. It offers a
set of predefined distillation methods and can be extended with custom code. As
a case study, we use TextBrewer to distill BERT on several typical NLP tasks.
With simple configurations, we achieve results that are comparable with or even
higher than the public distilled BERT models with similar numbers of
parameters. Our toolkit is available through: http://textbrewer.hfl-rc.com
| 2,020 | Computation and Language |
Comparison of Speech Representations for Automatic Quality Estimation in
Multi-Speaker Text-to-Speech Synthesis | We aim to characterize how different speakers contribute to the perceived
output quality of multi-speaker Text-to-Speech (TTS) synthesis. We
automatically rate the quality of TTS using a neural network (NN) trained on
human mean opinion score (MOS) ratings. First, we train and evaluate our NN
model on 13 different TTS and voice conversion (VC) systems from the ASVSpoof
2019 Logical Access (LA) Dataset. Since it is not known how best to represent
speech for this task, we compare 8 different representations alongside MOSNet
frame-based features. Our representations include image-based spectrogram
features and x-vector embeddings that explicitly model different types of noise
such as T60 reverberation time. Our NN predicts MOS with a high correlation to
human judgments. We report prediction correlation and error. A key finding is
that the quality achieved for certain speakers seems consistent, regardless of the
TTS or VC system. It is widely accepted that some speakers give higher quality
than others for building a TTS system: our method provides an automatic way to
identify such speakers. Finally, to see if our quality prediction models
generalize, we predict quality scores for synthetic speech using a separate
multi-speaker TTS system that was trained on LibriTTS data, and conduct our own
MOS listening test to compare human ratings with our NN predictions.
| 2,020 | Computation and Language |
Automatic Section Recognition in Obituaries | Obituaries contain information about people's values across times and
cultures, which makes them a useful resource for exploring cultural history.
They are typically structured similarly, with sections corresponding to
Personal Information, Biographical Sketch, Characteristics, Family, Gratitude,
Tribute, Funeral Information and Other aspects of the person. To make this
information available for further studies, we propose a statistical model which
recognizes these sections. To achieve that, we collect a corpus of 20058
English obituaries from The Daily Item, Remembering.CA and The London Free
Press. The evaluation of our annotation guidelines with three annotators on
1008 obituaries shows a substantial agreement of Fleiss k = 0.87. Formulated as
an automatic segmentation task, a convolutional neural network outperforms
bag-of-words and embedding-based BiLSTMs and BiLSTM-CRFs with a micro F1 =
0.81.
| 2,020 | Computation and Language |
UniLMv2: Pseudo-Masked Language Models for Unified Language Model
Pre-Training | We propose to pre-train a unified language model for both autoencoding and
partially autoregressive language modeling tasks using a novel training
procedure, referred to as a pseudo-masked language model (PMLM). Given an input
text with masked tokens, we rely on conventional masks to learn inter-relations
between corrupted tokens and context via autoencoding, and pseudo masks to
learn intra-relations between masked spans via partially autoregressive
modeling. With well-designed position embeddings and self-attention masks, the
context encodings are reused to avoid redundant computation. Moreover,
conventional masks used for autoencoding provide global masking information, so
that all the position embeddings are accessible in partially autoregressive
language modeling. In addition, the two tasks pre-train a unified language
model as a bidirectional encoder and a sequence-to-sequence decoder,
respectively. Our experiments show that the unified language models pre-trained
using PMLM achieve new state-of-the-art results on a wide range of natural
language understanding and generation tasks across several widely used
benchmarks.
| 2,020 | Computation and Language |
Metaphoric Paraphrase Generation | This work describes the task of metaphoric paraphrase generation, in which we
are given a literal sentence and are charged with generating a metaphoric
paraphrase. We propose two different models for this task: a lexical
replacement baseline and a novel sequence to sequence model, 'metaphor
masking', that generates free metaphoric paraphrases. We use crowdsourcing to
evaluate our results, and we also develop an automatic metric for evaluating
metaphoric paraphrases. We show that while the lexical replacement baseline is
capable of producing accurate paraphrases, its outputs often lack metaphoricity,
whereas our metaphor masking model excels at generating metaphoric sentences
while performing nearly as well with regard to fluency and paraphrase quality.
| 2,020 | Computation and Language |
Do all Roads Lead to Rome? Understanding the Role of Initialization in
Iterative Back-Translation | Back-translation provides a simple yet effective approach to exploit
monolingual corpora in Neural Machine Translation (NMT). Its iterative variant,
where two opposite NMT models are jointly trained by alternately using a
synthetic parallel corpus generated by the reverse model, plays a central role
in unsupervised machine translation. In order to start producing sound
translations and provide a meaningful training signal to each other, existing
approaches rely on either a separate machine translation system to warm up the
iterative procedure, or some form of pre-training to initialize the weights of
the model. In this paper, we analyze the role that such initialization plays in
iterative back-translation. Is the behavior of the final system heavily
dependent on it? Or does iterative back-translation converge to a similar
solution given any reasonable initialization? Through a series of empirical
experiments over a diverse set of warmup systems, we show that, although the
quality of the initial system does affect final performance, its effect is
relatively small, as iterative back-translation has a strong tendency to
converge to a similar solution. As such, the margin of improvement left for
the initialization method is narrow, suggesting that future research should
focus more on improving the iterative mechanism itself.
| 2,021 | Computation and Language |
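A pseudocode-level sketch of the iterative back-translation loop analyzed above; `translate` and `train_step` are placeholders for a real NMT system's decoding and update routines, and the warmup initialization discussed in the paper is assumed to have happened already.

```python
def iterative_back_translation(model_fwd, model_bwd, mono_src, mono_tgt,
                               translate, train_step, rounds=3):
    for _ in range(rounds):
        # Backward model synthesizes sources for target-side monolingual data;
        # the forward model trains on (synthetic source, real target) pairs.
        synth_src = [translate(model_bwd, t) for t in mono_tgt]
        model_fwd = train_step(model_fwd, list(zip(synth_src, mono_tgt)))

        # Symmetric step in the opposite direction.
        synth_tgt = [translate(model_fwd, s) for s in mono_src]
        model_bwd = train_step(model_bwd, list(zip(synth_tgt, mono_src)))
    return model_fwd, model_bwd
```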
AraBERT: Transformer-based Model for Arabic Language Understanding | The Arabic language is a morphologically rich language with relatively few
resources and a less explored syntax compared to English. Given these
limitations, Arabic Natural Language Processing (NLP) tasks like Sentiment
Analysis (SA), Named Entity Recognition (NER), and Question Answering (QA),
have proven very challenging to tackle. Recently, with the surge of
transformer-based models, language-specific BERT-based models have proven to
be very effective at language understanding, provided they are pre-trained on a
very large corpus. Such models were able to set new standards and achieve
state-of-the-art results for most NLP tasks. In this paper, we pre-trained BERT
specifically for the Arabic language in the pursuit of achieving the same
success that BERT did for the English language. The performance of AraBERT is
compared to multilingual BERT from Google and other state-of-the-art
approaches. The results showed that the newly developed AraBERT achieved
state-of-the-art performance on most tested Arabic NLP tasks. The pretrained
AraBERT models are publicly available on https://github.com/aub-mind/arabert
hoping to encourage research and applications for Arabic NLP.
| 2,021 | Computation and Language |
Depth-Adaptive Graph Recurrent Network for Text Classification | The Sentence-State LSTM (S-LSTM) is a powerful and highly efficient graph
recurrent network, which views words as nodes and performs layer-wise recurrent
steps between them simultaneously. Despite its successes on text
representations, the S-LSTM still suffers from two drawbacks. Firstly, given a
sentence, certain words are usually more ambiguous than others, and thus more
computation steps need to be taken for these difficult words and vice versa.
However, the S-LSTM takes fixed computation steps for all words, irrespective
of their hardness. The second drawback is the lack of sequential
information (e.g., word order) that is inherently important for natural
language. In this paper, we try to address these issues and propose a
depth-adaptive mechanism for the S-LSTM, which allows the model to learn how
many computational steps to conduct for different words as required. In
addition, we integrate an extra RNN layer to inject sequential information,
which also serves as an input feature for the decision of adaptive depths.
Results on the classic text classification task (24 datasets in various sizes
and domains) show that our model brings significant improvements against the
conventional S-LSTM and other high-performance models (e.g., the Transformer),
meanwhile achieving a good accuracy-speed trade-off.
| 2,020 | Computation and Language |
Voice trigger detection from LVCSR hypothesis lattices using
bidirectional lattice recurrent neural networks | We propose a method to reduce false voice triggers of a speech-enabled
personal assistant by post-processing the hypothesis lattice of a server-side
large-vocabulary continuous speech recognizer (LVCSR) via a neural network. We
first discuss how an estimate of the posterior probability of the trigger
phrase can be obtained from the hypothesis lattice using known techniques to
perform detection, then investigate a statistical model that processes the
lattice in a more explicitly data-driven, discriminative manner. We propose
using a Bidirectional Lattice Recurrent Neural Network (LatticeRNN) for the
task, and show that it can significantly improve detection accuracy over using
the 1-best result or the posterior.
| 2,019 | Computation and Language |
Clinical Text Summarization with Syntax-Based Negation and Semantic
Concept Identification | In the era of clinical information explosion, a good strategy for clinical
text summarization is helpful to improve the clinical workflow. The ideal
summarization strategy can preserve important information in the informative
but less organized, ill-structured clinical narrative texts. Instead of using
pure statistical learning approaches, which are difficult to interpret and
explain, we utilized knowledge of computational linguistics with human
experts-curated biomedical knowledge base to achieve the interpretable and
meaningful clinical text summarization. Our research objective is to use the
biomedical ontology with semantic information, and take advantage of the
hierarchical structure of language, the constituency tree, in order to identify
the correct clinical concepts and the corresponding negation information, which
is critical for summarizing clinical concepts from narrative text. We achieved
the clinically acceptable performance for both negation detection and concept
identification, and the clinical concepts with common negated patterns can be
identified and negated by the proposed method.
| 2,020 | Computation and Language |
StructSum: Summarization via Structured Representations | Abstractive text summarization aims at compressing the information of a long
source document into a rephrased, condensed summary. Despite advances in
modeling techniques, abstractive summarization models still suffer from several
key challenges: (i) layout bias: they overfit to the style of training corpora;
(ii) limited abstractiveness: they are optimized to copying n-grams from the
source rather than generating novel abstractive summaries; (iii) lack of
transparency: they are not interpretable. In this work, we propose a framework
based on document-level structure induction for summarization to address these
challenges. To this end, we propose incorporating latent and explicit
dependencies across sentences in the source document into end-to-end
single-document summarization models. Our framework complements standard
encoder-decoder summarization models by augmenting them with rich
structure-aware document representations based on implicitly learned (latent)
structures and externally-derived linguistic (explicit) structures. We show
that our summarization framework, trained on the CNN/DM dataset, improves the
coverage of content in the source documents, generates more abstractive
summaries by generating more novel n-grams, and incorporates interpretable
sentence-level structures, while performing on par with standard baselines.
| 2,021 | Computation and Language |
Learning from Easy to Complex: Adaptive Multi-curricula Learning for
Neural Dialogue Generation | Current state-of-the-art neural dialogue systems are mainly data-driven and
are trained on human-generated responses. However, due to the subjectivity and
open-ended nature of human conversations, the complexity of training dialogues
varies greatly. The noise and uneven complexity of query-response pairs impede
the learning efficiency and effects of the neural dialogue generation models.
Moreover, there is so far no unified measurement of dialogue complexity, which
embodies multiple attributes: specificity, repetitiveness, relevance, etc.
Inspired by human
behaviors of learning to converse, where children learn from easy dialogues to
complex ones and dynamically adjust their learning progress, in this paper, we
first analyze five dialogue attributes to measure the dialogue complexity in
multiple perspectives on three publicly available corpora. Then, we propose an
adaptive multi-curricula learning framework to schedule a committee of the
organized curricula. The framework is established upon the reinforcement
learning paradigm, which automatically chooses different curricula at the
evolving learning process according to the learning status of the neural
dialogue generation model. Extensive experiments conducted on five
state-of-the-art models demonstrate its learning efficiency and effectiveness
with respect to 13 automatic evaluation metrics and human judgments.
| 2,020 | Computation and Language |
Style Example-Guided Text Generation using Generative Adversarial
Transformers | We introduce a language generative model framework for generating a styled
paragraph based on a context sentence and a style reference example. The
framework consists of a style encoder and a text decoder. The style encoder
extracts a style code from the reference example, and the text decoder
generates texts based on the style code and the context. We propose a novel
objective function to train our framework. We also investigate different
network design choices. We conduct extensive experimental validation with
comparison to strong baselines to validate the effectiveness of the proposed
framework using a newly collected dataset with diverse text styles. Both code
and dataset will be released upon publication.
| 2,020 | Computation and Language |
PhoBERT: Pre-trained language models for Vietnamese | We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the
first public large-scale monolingual language models pre-trained for
Vietnamese. Experimental results show that PhoBERT consistently outperforms the
recent best pre-trained multilingual model XLM-R (Conneau et al., 2020) and
improves the state-of-the-art in multiple Vietnamese-specific NLP tasks
including Part-of-speech tagging, Dependency parsing, Named-entity recognition
and Natural language inference. We release PhoBERT to facilitate future
research and downstream applications for Vietnamese NLP. Our PhoBERT models are
available at https://github.com/VinAIResearch/PhoBERT
| 2,020 | Computation and Language |
Multi-View Learning for Vision-and-Language Navigation | Learning to navigate in a visual environment following natural language
instructions is a challenging task because natural language instructions are
highly variable, ambiguous, and under-specified. In this paper, we present a
novel training paradigm, Learn from EveryOne (LEO), which leverages multiple
instructions (as different views) for the same trajectory to resolve language
ambiguity and improve generalization. By sharing parameters across
instructions, our approach learns more effectively from limited training data
and generalizes better in unseen environments. On the recent Room-to-Room (R2R)
benchmark dataset, LEO achieves 16% improvement (absolute) over a greedy agent
as the base agent (25.3% $\rightarrow$ 41.4%) in Success Rate weighted by Path
Length (SPL). Further, LEO is complementary to most existing models for
vision-and-language navigation, allowing for easy integration with the existing
techniques, leading to LEO+, which creates the new state of the art, pushing
the R2R benchmark to 62% (9% absolute improvement).
| 2,020 | Computation and Language |
Identification of primary and collateral tracks in stuttered speech | Disfluent speech has been previously addressed from two main perspectives:
the clinical perspective focusing on diagnosis, and the Natural Language
Processing (NLP) perspective aiming at modeling these events and detecting them
for downstream tasks. In addition, previous works often used different metrics
depending on whether the input features are text or speech, making it difficult
to compare the different contributions. Here, we introduce a new evaluation
framework for disfluency detection inspired by the clinical and NLP perspective
together with the theory of performance from \cite{clark1996using} which
distinguishes between primary and collateral tracks. We introduce a novel
forced-aligned disfluency dataset from a corpus of semi-directed interviews,
and present baseline results directly comparing the performance of text-based
features (word and span information) and speech-based (acoustic-prosodic
information). Finally, we introduce new audio features inspired by the
word-based span features. We show experimentally that using these features
outperformed the baselines for speech-based predictions on the present dataset.
| 2,020 | Computation and Language |
Gated Mechanism for Attention Based Multimodal Sentiment Analysis | Multimodal sentiment analysis has recently gained popularity because of its
relevance to social media posts, customer service calls and video blogs. In
this paper, we address three aspects of multimodal sentiment analysis; 1. Cross
modal interaction learning, i.e. how multiple modalities contribute to the
sentiment, 2. Learning long-term dependencies in multimodal interactions and 3.
Fusion of unimodal and cross modal cues. Out of these three, we find that
learning cross modal interactions is beneficial for this problem. We perform
experiments on two benchmark datasets, CMU Multimodal Opinion level Sentiment
Intensity (CMU-MOSI) and CMU Multimodal Opinion Sentiment and Emotion Intensity
(CMU-MOSEI) corpus. Our approach on both these tasks yields accuracies of 83.9%
and 81.1% respectively, which are 1.6% and 1.34% absolute improvements over
the current state-of-the-art.
| 2,020 | Computation and Language |
Natural Language Processing Advancements By Deep Learning: A Survey | Natural Language Processing (NLP) helps empower intelligent machines by
enabling a better understanding of human language for language-based
human-computer communication. Recent developments in computational power and
the advent of large amounts of linguistic data have heightened the need and
demand for automating semantic analysis using data-driven approaches. The
utilization of data-driven strategies is pervasive now due to the significant
improvements demonstrated through the usage of deep learning methods in areas
such as Computer Vision, Automatic Speech Recognition, and in particular, NLP.
This survey categorizes and addresses the different aspects and applications of
NLP that have benefited from deep learning. It covers core NLP tasks and
applications and describes how deep learning methods and models advance these
areas. We further analyze and compare different approaches and state-of-the-art
models.
| 2,021 | Computation and Language |
Med7: a transferable clinical natural language processing model for
electronic health records | The field of clinical natural language processing has been advanced
significantly since the introduction of deep learning models. The
self-supervised representation learning and the transfer learning paradigm
became the methods of choice in many natural language processing applications,
in particular in settings with a dearth of high-quality manually
annotated data. Electronic health record systems are ubiquitous and the
majority of patients' data are now being collected electronically, in
particular in the form of free text. Identification of medical concepts and
information extraction is a challenging task, yet an important ingredient for
parsing unstructured data into a structured and tabulated format for downstream
analytical tasks. In this work we introduce a named-entity recognition model
for clinical natural language processing. The model is trained to recognise
seven categories: drug names, route, frequency, dosage, strength, form, and
duration. The model was first pre-trained in a self-supervised manner by predicting the
next word, using a collection of 2 million free-text patients' records from
MIMIC-III corpora and then fine-tuned on the named-entity recognition task. The
model achieved a lenient (strict) micro-averaged F1 score of 0.957 (0.893)
across all seven categories. Additionally, we evaluated the transferability of
the developed model using the data from the Intensive Care Unit in the US to
secondary care mental health records (CRIS) in the UK. A direct application of
the trained NER model to CRIS data resulted in reduced performance of F1=0.762;
however, after fine-tuning on a small sample from CRIS, the model achieved a
reasonable performance of F1=0.944. This demonstrated that despite a close
similarity between the data sets and the NER tasks, it is essential to
fine-tune on the target domain data in order to achieve more accurate results.
| 2,020 | Computation and Language |
Transfer Learning for Context-Aware Spoken Language Understanding | Spoken language understanding (SLU) is a key component of task-oriented
dialogue systems. SLU parses natural language user utterances into semantic
frames. Previous work has shown that incorporating context information
significantly improves SLU performance for multi-turn dialogues. However,
collecting a large-scale human-labeled multi-turn dialogue corpus for the
target domains is complex and costly. To reduce dependency on the collection
and annotation effort, we propose a Context Encoding Language Transformer
(CELT) model facilitating exploiting various context information for SLU. We
explore different transfer learning approaches to reduce dependency on data
collection and annotation. In addition to unsupervised pre-training using
large-scale general purpose unlabeled corpora, such as Wikipedia, we explore
unsupervised and supervised adaptive training approaches for transfer learning
to benefit from other in-domain and out-of-domain dialogue corpora.
Experimental results demonstrate that the proposed model with the proposed
transfer learning approaches achieves significant improvement on the SLU
performance over state-of-the-art models on two large-scale single-turn
dialogue benchmarks and one large-scale multi-turn dialogue benchmark.
| 2,019 | Computation and Language |
Controllable Time-Delay Transformer for Real-Time Punctuation Prediction
and Disfluency Detection | With the increased applications of automatic speech recognition (ASR) in
recent years, it is essential to automatically insert punctuation marks and
remove disfluencies in transcripts, to improve the readability of the
transcripts as well as the performance of subsequent applications, such as
machine translation, dialogue systems, and so forth. In this paper, we propose
a Controllable Time-delay Transformer (CT-Transformer) model that jointly
completes the punctuation prediction and disfluency detection tasks in real
time. The CT-Transformer model facilitates freezing partial outputs with
controllable time delay to fulfill the real-time constraints in partial
decoding required by subsequent applications. We further propose a fast
decoding strategy to minimize latency while maintaining competitive
performance. Experimental results on the IWSLT2011 benchmark dataset and an
in-house Chinese annotated dataset demonstrate that the proposed approach
outperforms the previous state-of-the-art models on F-scores and achieves a
competitive inference speed.
| 2,020 | Computation and Language |
Improving Candidate Generation for Low-resource Cross-lingual Entity
Linking | Cross-lingual entity linking (XEL) is the task of finding referents in a
target-language knowledge base (KB) for mentions extracted from source-language
texts. The first step of (X)EL is candidate generation, which retrieves a list
of plausible candidate entities from the target-language KB for each mention.
Approaches based on resources from Wikipedia have proven successful in the
realm of relatively high-resource languages (HRL), but these do not extend well
to low-resource languages (LRL) with few, if any, Wikipedia pages. Recently,
transfer learning methods have been shown to reduce the demand for resources in
the LRL by utilizing resources in closely-related languages, but the
performance still lags far behind their high-resource counterparts. In this
paper, we first assess the problems faced by current entity candidate
generation methods for low-resource XEL, then propose three improvements that
(1) reduce the disconnect between entity mentions and KB entries, and (2)
improve the robustness of the model to low-resource scenarios. The methods are
simple, but effective: we experiment with our approach on seven XEL datasets
and find that they yield an average gain of 16.9% in Top-30 gold candidate
recall, compared to state-of-the-art baselines. Our improved model also yields
an average gain of 7.9% in in-KB accuracy of end-to-end XEL.
| 2,020 | Computation and Language |
Benchmark Performance of Machine And Deep Learning Based Methodologies
for Urdu Text Document Classification | In order to provide benchmark performance for Urdu text document
classification, the contribution of this paper is manifold. First, it provides
a publicly available benchmark dataset manually tagged against 6 classes.
Second, it investigates the performance impact of traditional machine learning
based Urdu text document classification methodologies by embedding 10
filter-based feature selection algorithms which have been widely used for other
languages. Third, for the very first time, it assesses the performance of
various deep learning based methodologies for Urdu text document
classification. In this regard, for experimentation, we adapt 10 deep learning
classification methodologies which have produced the best performance figures
for English text classification. Fourth, it also investigates the performance
impact of transfer learning by utilizing the Bidirectional Encoder
Representations from Transformers approach for the Urdu language. Fifth, it
evaluates the integrity of a hybrid approach which combines traditional machine
learning based feature engineering and deep learning based automated feature
engineering. Experimental results show that the feature selection approach
named Normalised Difference Measure, along with a Support Vector Machine,
surpasses state-of-the-art performance on two closed-source benchmark
datasets, CLE Urdu Digest 1000k and CLE Urdu Digest 1Million, by significant
margins of 32% and 13%, respectively. Across all three datasets, Normalised
Difference Measure outperforms other filter-based feature selection algorithms
as it significantly uplifts the performance of all adopted machine learning,
deep learning, and hybrid approaches. The source code and presented dataset are
available at a Github repository.
| 2,020 | Computation and Language |
CLUECorpus2020: A Large-scale Chinese Corpus for Pre-training Language
Model | In this paper, we introduce the Chinese corpus from CLUE organization,
CLUECorpus2020, a large-scale corpus that can be used directly for
self-supervised learning such as pre-training of a language model, or language
generation. It contains a 100 GB raw corpus with 35 billion Chinese characters, which is
retrieved from Common Crawl. To better understand this corpus, we conduct
language understanding experiments on both small and large scale, and results
show that the models trained on this corpus can achieve excellent performance
on Chinese. We release a new Chinese vocabulary with a size of 8K, which is
only one-third of the vocabulary size used in Chinese Bert released by Google.
It saves computational cost and memory while working as well as the original
vocabulary. We also release both large and tiny versions of the pre-trained
model on this corpus. The former achieves the state-of-the-art result, and the
latter retains most of the precision while accelerating training and prediction
speed by eight times compared to BERT-base. To facilitate future work on
self-supervised learning on Chinese, we release our dataset, new vocabulary,
codes, and pre-trained models on Github.
| 2,020 | Computation and Language |
Meta-Embeddings Based On Self-Attention | Creating meta-embeddings for better performance in language modelling has
received attention lately, and methods based on concatenating, or merely
averaging, more than one separately trained embedding have been shown to be
beneficial. In this paper, we devise a new meta-embedding model based on the
self-attention mechanism, namely the Duo. With less than 0.4M parameters, the
Duo mechanism achieves state-of-the-art accuracy in text classification tasks
such as 20NG. Additionally, we propose a new meta-embedding
sequence-to-sequence model for machine translation, which, to the best of our
knowledge, is the first machine translation model based on more than one word
embedding. Furthermore, our model outperforms the Transformer not only by
achieving better results, but also through faster convergence on recognized
benchmarks, such as the WMT 2014 English-to-French translation task.
| 2,020 | Computation and Language |
Seshat: A tool for managing and verifying annotation campaigns of audio
data | We introduce Seshat, a new, simple and open-source software to efficiently
manage annotations of speech corpora. The Seshat software allows users to
easily customise and manage annotations of large audio corpora while ensuring
compliance with the formatting and naming conventions of the annotated output
files. In addition, it includes procedures for checking the content of
annotations following specific rules that can be implemented in personalised
parsers. Finally, we propose a double-annotation mode, for which Seshat
computes automatically an associated inter-annotator agreement with the
$\gamma$ measure taking into account the categorisation and segmentation
discrepancies.
| 2,020 | Computation and Language |
XGPT: Cross-modal Generative Pre-Training for Image Captioning | While many BERT-based cross-modal pre-trained models produce excellent
results on downstream understanding tasks like image-text retrieval and VQA,
they cannot be applied to generation tasks directly. In this paper, we propose
XGPT, a new method of Cross-modal Generative Pre-Training for Image Captioning
that is designed to pre-train text-to-image caption generators through three
novel generation tasks, including Image-conditioned Masked Language Modeling
(IMLM), Image-conditioned Denoising Autoencoding (IDA), and Text-conditioned
Image Feature Generation (TIFG). As a result, the pre-trained XGPT can be
fine-tuned without any task-specific architecture modifications to create
state-of-the-art models for image captioning. Experiments show that XGPT
obtains new state-of-the-art results on the benchmark datasets, including COCO
Captions and Flickr30k Captions. We also use XGPT to generate new image
captions as data augmentation for the image retrieval task and achieve
significant improvement on all recall metrics.
| 2,020 | Computation and Language |
Multi-Task Learning with Auxiliary Speaker Identification for
Conversational Emotion Recognition | Conversational emotion recognition (CER) has attracted increasing interest
in the natural language processing (NLP) community. Different from the vanilla
emotion recognition, effective speaker-sensitive utterance representation is
one major challenge for CER. In this paper, we exploit speaker identification
(SI) as an auxiliary task to enhance the utterance representation in
conversations. By this method, we can learn better speaker-aware contextual
representations from the additional SI corpus. Experiments on two benchmark
datasets demonstrate that the proposed architecture is highly effective for
CER, obtaining new state-of-the-art results on two datasets.
| 2,020 | Computation and Language |
Improving Uyghur ASR systems with decoders using morpheme-based language
models | Uyghur is a minority language, and its resources for Automatic Speech
Recognition (ASR) research are always insufficient. THUYG-20 is currently the
only open-sourced dataset of Uyghur speeches. State-of-the-art results of its
clean and noiseless speech test task haven't been updated since the first
release, which shows a big gap in the development of ASR between mainstream
languages and Uyghur. In this paper, we try to bridge the gap by ultimately
optimizing the ASR systems, and by developing a morpheme-based decoder,
MLDG-Decoder (Morpheme Lattice Dynamically Generating Decoder for Uyghur
DNN-HMM systems), which has long been missing. We have open-sourced the
decoder. The MLDG-Decoder employs an algorithm, named as "on-the-fly
composition with FEBABOS", to allow the back-off states and transitions to play
the role of a relay station in on-the-fly composition. The algorithm empowers
the dynamically generated graph to constrain the morpheme sequences in the
lattices as effectively as the static and fully composed graph does when a
4-Gram morpheme-based Language Model (LM) is used. We have trained deeper and
wider neural network acoustic models, and experimented with three kinds of
decoding schemes. The experimental results show that the decoding based on the
static and fully composed graph reduces state-of-the-art Word Error Rate (WER)
on the clean and noiseless speech test task in THUYG-20 to 14.24%. The
MLDG-Decoder reduces the WER to 14.54% while keeping the memory consumption
reasonable. Based on the open-sourced MLDG-Decoder, readers can easily
reproduce the experimental results in this paper.
| 2,020 | Computation and Language |
Hybrid Generative-Retrieval Transformers for Dialogue Domain Adaptation | Domain adaptation has recently become a key problem in dialogue systems
research. Deep learning, while being the preferred technique for modeling such
systems, works best given massive training data. However, in the real-world
scenario, such resources aren't available for every new domain, so the ability
to train with a few dialogue examples can be considered essential. Pre-training
on large data sources and adapting to the target data has become the standard
method for few-shot problems within the deep learning framework. In this paper,
we present the winning entry at the fast domain adaptation task of DSTC8, a
hybrid generative-retrieval model based on GPT-2 fine-tuned to the multi-domain
MetaLWOz dataset. Robust and diverse in response generation, our model uses
retrieval logic as a fallback, being SoTA on MetaLWOz in human evaluation (>4%
improvement over the 2nd place system) and attaining competitive generalization
performance in adaptation to the unseen MultiWOZ dataset.
| 2,020 | Computation and Language |
HyperEmbed: Tradeoffs Between Resources and Performance in NLP Tasks
with Hyperdimensional Computing enabled Embedding of n-gram Statistics | Recent advances in Deep Learning have led to a significant performance
increase on several NLP tasks, however, the models become more and more
computationally demanding. Therefore, this paper tackles the domain of
computationally efficient algorithms for NLP tasks. In particular, it
investigates distributed representations of n-gram statistics of texts. The
representations are formed using hyperdimensional computing enabled embedding.
These representations then serve as features, which are used as input to
standard classifiers. We investigate the applicability of the embedding on one
large and three small standard datasets for classification tasks using nine
classifiers. The embedding achieved on-par F1 scores while decreasing the time
and memory requirements several-fold compared to conventional n-gram
statistics; e.g., for one of the classifiers on a small dataset, the memory
reduction was 6.18 times; while train and test speed-ups were 4.62 and 3.84
times, respectively. For many classifiers on the large dataset, memory
reduction was ca. 100 times and train and test speed-ups were over 100 times.
Importantly, the usage of distributed representations formed via
hyperdimensional computing allows decoupling the strict dependency between the
dimensionality of the representation and the n-gram size, thus opening room for
tradeoffs.
| 2,021 | Computation and Language |
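One common way to embed n-gram statistics with hyperdimensional computing, in the spirit of the abstract above: random bipolar item vectors for characters, position binding by cyclic shift, and bundling by summation. The paper's exact mapping and dimensionality may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 1000
item = {c: rng.choice([-1, 1], size=DIM) for c in "abcdefghijklmnopqrstuvwxyz "}

def embed_ngrams(text, n=3):
    # Bind the characters of each n-gram by position (cyclic shift), then
    # bundle all n-gram vectors by summation into one fixed-size vector.
    vec = np.zeros(DIM)
    for i in range(len(text) - n + 1):
        gram = np.ones(DIM)
        for pos, ch in enumerate(text[i:i + n]):
            gram = gram * np.roll(item[ch], pos)   # elementwise binding
        vec += gram                                # bundling
    return vec

print(embed_ngrams("hyperdimensional computing for nlp")[:5])
```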
SeMemNN: A Semantic Matrix-Based Memory Neural Network for Text
Classification | Text categorization is the task of assigning labels to documents written in a
natural language, and it has numerous real-world applications including
sentiment analysis as well as traditional topic assignment tasks. In this
paper, we propose 5 different configurations for the semantic matrix-based
memory neural network, trained in an end-to-end manner, and evaluate our
proposed method on two corpora of news articles (AG news, Sogou news). At its
best, our proposed method outperforms the baseline VDCNN models on the
text classification task and learns semantics faster.
Moreover, we also evaluate our model on small scale datasets. The results show
that our proposed method can still achieve better results in comparison to
VDCNN on the small scale dataset. This paper is to appear in the Proceedings of
the 2020 IEEE 14th International Conference on Semantic Computing (ICSC 2020),
San Diego, California, 2020.
| 2,020 | Computation and Language |
Restoration of Fragmentary Babylonian Texts Using Recurrent Neural
Networks | The main source of information regarding ancient Mesopotamian history and
culture are clay cuneiform tablets. Despite being an invaluable resource, many
tablets are fragmented leading to missing information. Currently these missing
parts are manually completed by experts. In this work we investigate the
possibility of assisting scholars and even automatically completing the breaks
in ancient Akkadian texts from Achaemenid period Babylonia by modelling the
language using recurrent neural networks.
| 2,022 | Computation and Language |
Posterior-GAN: Towards Informative and Coherent Response Generation with
Posterior Generative Adversarial Network | Neural conversational models learn to generate responses by taking into
account the dialog history. These models are typically optimized over the
query-response pairs with a maximum likelihood estimation objective. However,
the query-response tuples are naturally loosely coupled, and there exist
multiple responses that can respond to a given query, which makes learning
burdensome for the conversational model. Besides, the general dull response
problem is worsened further when the model is confronted with meaningless
response training instances. Intuitively, a high-quality response not only
responds to the given query but also links up to the future conversation. In
this paper, we therefore leverage query-response-future-turn triples to induce
generated responses that consider both the given context and the future
conversation. To
facilitate the modeling of these triples, we further propose a novel
encoder-decoder based generative adversarial learning framework, Posterior
Generative Adversarial Network (Posterior-GAN), which consists of a forward and
a backward generative discriminator to cooperatively encourage the generated
response to be informative and coherent by two complementary assessment
perspectives. Experimental results demonstrate that our method effectively
boosts the informativeness and coherence of the generated response on both
automatic and human evaluation, which verifies the advantages of considering
two assessment perspectives.
| 2,020 | Computation and Language |
Sequential Neural Networks for Noetic End-to-End Response Selection | The noetic end-to-end response selection challenge as one track in the 7th
Dialog System Technology Challenges (DSTC7) aims to push the state of the art
of utterance classification for real world goal-oriented dialog systems, for
which participants need to select the correct next utterances from a set of
candidates for the multi-turn context. This paper presents our systems that are
ranked top 1 on both datasets under this challenge, one focused and small
(Advising) and the other more diverse and large (Ubuntu). Previous
state-of-the-art models use hierarchy-based (utterance-level and token-level)
neural networks to explicitly model the interactions among different turns'
utterances for context modeling. In this paper, we investigate a sequential
matching model based only on chain sequence for multi-turn response selection.
Our results demonstrate that the potentials of sequential matching approaches
have not yet been fully exploited in the past for multi-turn response
selection. In addition to ranking top 1 in the challenge, the proposed model
outperforms all previous models, including state-of-the-art hierarchy-based
models, on two large-scale public multi-turn response selection benchmark
datasets.
| 2,020 | Computation and Language |
Evaluating Low-Resource Machine Translation between Chinese and
Vietnamese with Back-Translation | Back translation (BT) has been widely used and has become one of the standard
techniques for data augmentation in Neural Machine Translation (NMT). BT has
proven helpful for effectively improving translation performance, especially in
low-resource scenarios. While most works related to BT mainly focus on European
languages, few of them study languages in other areas around the world. In this
paper, we investigate the impact of BT on Asian language translation for the
extremely low-resource Chinese-Vietnamese language
pair. We evaluate and compare the effects of different sizes of synthetic data
on both NMT and Statistical Machine Translation (SMT) models for Chinese to
Vietnamese and Vietnamese to Chinese, with character-based and word-based
settings. Some conclusions from previous works are partially confirmed and we
also draw some other interesting findings and conclusions, which are beneficial
for understanding BT further.
| 2,020 | Computation and Language |
Unsupervised Adversarial Domain Adaptation for Implicit Discourse
Relation Classification | Implicit discourse relations are not only more challenging to classify, but
also to annotate, than their explicit counterparts. We tackle situations where
training data for implicit relations are lacking, and exploit domain adaptation
from explicit relations (Ji et al., 2015). We present an unsupervised
adversarial domain adaptive network equipped with a reconstruction component.
Our system outperforms prior works and other adversarial benchmarks for
unsupervised domain adaptation. Additionally, we extend our system to take
advantage of labeled data if some are available.
| 2,020 | Computation and Language |
Data Augmentation using Pre-trained Transformer Models | Language model based pre-trained models such as BERT have provided
significant gains across different NLP tasks. In this paper, we study different
types of transformer based pre-trained models such as auto-regressive models
(GPT-2), auto-encoder models (BERT), and seq2seq models (BART) for conditional
data augmentation. We show that prepending the class labels to text sequences
provides a simple yet effective way to condition the pre-trained models for
data augmentation. Additionally, on three classification benchmarks, the
pre-trained seq2seq model outperforms other data augmentation methods in a
low-resource setting. Further, we explore how data augmentation based on
different pre-trained models differs in terms of data diversity, and how well
such methods preserve the class-label information.
| 2,021 | Computation and Language |
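The label-prepending trick above can be sketched with GPT-2 from the Hugging Face transformers library; the "label:" prompt format is an illustrative assumption rather than the paper's exact conditioning scheme, and the base (not fine-tuned) model is used here only to show the mechanics.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Condition generation on the class label by prepending it to the prompt.
prompt = "positive: the movie was"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    gen = model.generate(ids, max_length=30, do_sample=True, top_p=0.9,
                         pad_token_id=tok.eos_token_id)
print(tok.decode(gen[0], skip_special_tokens=True))
```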
jiant: A Software Toolkit for Research on General-Purpose Text
Understanding Models | We introduce jiant, an open source toolkit for conducting multitask and
transfer learning experiments on English NLU tasks. jiant enables modular and
configuration-driven experimentation with state-of-the-art models and
implements a broad set of tasks for probing, transfer learning, and multitask
training experiments. jiant implements over 50 NLU tasks, including all GLUE
and SuperGLUE benchmark tasks. We demonstrate that jiant reproduces published
performance on a variety of tasks and models, including BERT and RoBERTa. jiant
is available at https://jiant.info.
| 2,020 | Computation and Language |