Titles (string, length 6-220) | Abstracts (string, length 37-3.26k) | Years (int64, 1.99k-2.02k) | Categories (string, 1 class)
---|---|---|---|
Visual Agreement Regularized Training for Multi-Modal Machine
Translation | Multi-modal machine translation aims at translating the source sentence into
a different language in the presence of a paired image. Previous work
suggests that the additional visual information provides only dispensable help for
translation, needed in just a few special cases such as translating
ambiguous words. To make better use of visual information, this work presents
visual agreement regularized training. The proposed approach jointly trains the
source-to-target and target-to-source translation models and encourages them to
share the same focus on the visual information when generating semantically
equivalent visual words (e.g. "ball" in English and "ballon" in French).
Besides, a simple yet effective multi-head co-attention model is also
introduced to capture interactions between visual and textual features. The
results show that our approaches can outperform competitive baselines by a
large margin on the Multi30k dataset. Further analysis demonstrates that the
proposed regularized training can effectively improve the agreement of
attention on the image, leading to better use of visual information.
| 2019 | Computation and Language |
A Multi-cascaded Model with Data Augmentation for Enhanced Paraphrase
Detection in Short Texts | Paraphrase detection is an important task in text analytics with numerous
applications such as plagiarism detection, duplicate question identification,
and enhanced customer support helpdesks. Deep models have been proposed for
representing and classifying paraphrases. These models, however, require large
quantities of human-labeled data, which is expensive to obtain. In this work,
we present a data augmentation strategy and a multi-cascaded model for improved
paraphrase detection in short texts. Our data augmentation strategy considers
the notions of paraphrases and non-paraphrases as binary relations over the set
of texts. Subsequently, it uses graph theoretic concepts to efficiently
generate additional paraphrase and non-paraphrase pairs in a sound manner. Our
multi-cascaded model employs three supervised feature learners (cascades) based
on CNN and LSTM networks with and without soft-attention. The learned features,
together with hand-crafted linguistic features, are then forwarded to a
discriminator network for final classification. Our model is both wide and deep
and provides greater robustness across clean and noisy short texts. We evaluate
our approach on three benchmark datasets and show that it achieves comparable
or state-of-the-art performance on all three.
| 2020 | Computation and Language |
Job Prediction: From Deep Neural Network Models to Applications | Determining which job is suitable for a student or a person looking for work,
based on job descriptions specifying the required knowledge and skills, is
difficult, just as employers must find ways to choose the candidates
that match the jobs they offer. In this paper, we focus on studying job
prediction using different deep neural network models including TextCNN,
Bi-GRU-LSTM-CNN, and Bi-GRU-CNN with various pre-trained word embeddings on the
IT Job dataset. In addition, we also propose a simple and effective ensemble
model combining different deep neural network models. The experimental results
show that our proposed ensemble model achieves the highest result, with
an F1 score of 72.71%. Moreover, we analyze these experimental results to gain
insights into this problem and find better solutions in the future.
| 2020 | Computation and Language |
Encoding word order in complex embeddings | Sequential word order is important when processing text. Currently, neural
networks (NNs) address this by modeling word position using position
embeddings. The problem is that position embeddings capture the position of
individual words, but not the ordered relationship (e.g., adjacency or
precedence) between individual word positions. We present a novel and
principled solution for modeling both the global absolute positions of words
and their order relationships. Our solution generalizes word embeddings,
previously defined as independent vectors, to continuous word functions over a
variable (position). The benefit of continuous functions over variable
positions is that word representations shift smoothly with increasing
positions. Hence, word representations in different positions can correlate
with each other in a continuous function. The general solution of these
functions is extended to the complex-valued domain for richer representations.
We extend CNN, RNN and Transformer NNs to complex-valued versions to
incorporate our complex embedding (we make all code available). Experiments on
text classification, machine translation and language modeling show gains over
both classical word embeddings and position-enriched word embeddings. To our
knowledge, this is the first work in NLP to link imaginary numbers in
complex-valued representations to concrete meanings (i.e., word order).
| 2020 | Computation and Language |
Knowledge-guided Text Structuring in Clinical Trials | Clinical trial records are valuable resources for the analysis of patients and
diseases. Information extraction from free text such as eligibility criteria
and summary of results and conclusions in clinical trials would better support
computer-based eligibility query formulation and electronic patient screening.
Previous research has focused on extracting information from eligibility
criteria, usually with a single pair of medical entity and attribute, but
seldom considering other kinds of free text with multiple entities, attributes
and relations that are more complex for parsing. In this paper, we propose a
knowledge-guided text structuring framework with an automatically generated
knowledge base as training corpus and word dependency relations as context
information to transform free text into formal, computer-interpretable
representations. Experimental results show that our method can achieve overall
high precision and recall, demonstrating the effectiveness and efficiency of
the proposed method.
| 2019 | Computation and Language |
All-in-One Image-Grounded Conversational Agents | As single-task accuracy on individual language and image tasks has improved
substantially in the last few years, the long-term goal of a generally skilled
agent that can both see and talk becomes more feasible to explore. In this
work, we focus on leveraging individual language and image tasks, along with
resources that incorporate both vision and language towards that objective. We
design an architecture that combines state-of-the-art Transformer and ResNeXt
modules fed into a novel attentive multimodal module to produce a combined
model trained on many tasks. We provide a thorough analysis of the components
of the model, and transfer performance when training on one, some, or all of
the tasks. Our final models provide a single system that obtains good results
on all vision and language tasks considered, and improves the state-of-the-art
in image-grounded conversational applications.
| 2020 | Computation and Language |
Natural language processing of MIMIC-III clinical notes for identifying
diagnosis and procedures with neural networks | Coding diagnosis and procedures in medical records is a crucial process in
the healthcare industry, which includes the creation of accurate billings,
receiving reimbursements from payers, and creating standardized patient care
records. In the United States, billing- and insurance-related activities cost
around $471 billion in 2012, which constituted about 25% of all U.S. hospital
spending. In this paper, we report the performance of a natural language
processing model that can map clinical notes to medical codes, and predict
final diagnosis from unstructured entries of history of present illness,
symptoms at the time of admission, etc. Previous studies have demonstrated that
deep learning models perform better at such mapping when compared to
conventional machine learning models. Therefore, we employed a state-of-the-art
deep learning method, ULMFiT, on MIMIC-III, the largest emergency department
clinical notes dataset, which has 1.2M clinical notes, to select the top-10 and
top-50 diagnosis and procedure codes. Our models were able to predict the
top-10 diagnoses and procedures with 80.3% and 80.5% accuracy, whereas the
top-50 ICD-9 codes of diagnosis and procedures are predicted with 70.7% and
63.9% accuracy. Predicting diagnoses and procedures from unstructured
clinical notes helps human coders save time, eliminate errors, and minimize
costs. With promising scores from our present model, the next step would be to
deploy this on a small-scale real-world scenario and compare it with human
coders as the gold standard. We believe that further research of this approach
can create highly accurate predictions that can ease the workflow in a clinical
setting.
| 2020 | Computation and Language |
Robust Cross-lingual Embeddings from Parallel Sentences | Recent advances in cross-lingual word embeddings have primarily relied on
mapping-based methods, which project pretrained word embeddings from different
languages into a shared space through a linear transformation. However, these
approaches assume word embedding spaces are isomorphic between different
languages, which has been shown not to hold in practice (S{\o}gaard et al.,
2018), and fundamentally limits their performance. This motivates investigating
joint learning methods which can overcome this impediment, by simultaneously
learning embeddings across languages via a cross-lingual term in the training
objective. We propose a bilingual extension of the CBOW method which leverages
sentence-aligned corpora to obtain robust cross-lingual word and sentence
representations. Our approach significantly improves cross-lingual sentence
retrieval performance over all other approaches while maintaining parity with
the current state-of-the-art methods on word-translation. It also achieves
parity with a deep RNN method on a zero-shot cross-lingual document
classification task, requiring far fewer computational resources for training
and inference. As an additional advantage, our bilingual method leads to a much
more pronounced improvement in the quality of monolingual word vectors
compared to other competing methods.
| 2020 | Computation and Language |
Tha3aroon at NSURL-2019 Task 8: Semantic Question Similarity in Arabic | In this paper, we describe our team's effort on the semantic text question
similarity task of NSURL 2019. Our top performing system utilizes several
innovative data augmentation techniques to enlarge the training data. Then, it
takes ELMo pre-trained contextual embeddings of the data and feeds them into an
ON-LSTM network with self-attention. This results in sequence representation
vectors that are used to predict the relation between the question pairs. The
model ranked 1st with a 96.499 F1-score (the same as the second-place
F1-score) on the public leaderboard and 2nd with a 94.848 F1-score (1.076 behind
first place) on the private leaderboard.
| 2020 | Computation and Language |
ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine
Reading Comprehension | Reading comprehension is one of the crucial tasks for furthering research in
natural language understanding. A lot of diverse reading comprehension datasets
have recently been introduced to study various phenomena in natural language,
ranging from simple paraphrase matching and entity typing to entity tracking
and understanding the implications of the context. Given the availability of
many such datasets, comprehensive and reliable evaluation is tedious and
time-consuming for researchers working on this problem. We present an
evaluation server, ORB, that reports performance on seven diverse reading
comprehension datasets, encouraging and facilitating testing a single model's
capability in understanding a wide variety of reading phenomena. The evaluation
server places no restrictions on how models are trained, so it is a suitable
test bed for exploring training paradigms and representation learning for
general reading facility. As more suitable datasets are released, they will be
added to the evaluation server. We also collect and include synthetic
augmentations for these datasets, testing how well models can handle
out-of-domain questions.
| 2020 | Computation and Language |
\AE THEL: Automatically Extracted Typelogical Derivations for Dutch | We present {\AE}THEL, a semantic compositionality dataset for written Dutch.
{\AE}THEL consists of two parts. First, it contains a lexicon of supertags for
about 900 000 words in context. The supertags correspond to types of the simply
typed linear lambda-calculus, enhanced with dependency decorations that capture
grammatical roles supplementary to function-argument structures. On the basis
of these types, {\AE}THEL further provides 72 192 validated derivations,
presented in four formats: natural-deduction and sequent-style proofs, linear
logic proofnets and the associated programs (lambda terms) for meaning
composition. {\AE}THEL's types and derivations are obtained by means of an
extraction algorithm applied to the syntactic analyses of LASSY Small, the gold
standard corpus of written Dutch. We discuss the extraction algorithm and show
how `virtual elements' in the original LASSY annotation of unbounded
dependencies and coordination phenomena give rise to higher-order types. We
suggest some example use cases highlighting the benefits of a type-driven
approach at the syntax-semantics interface. The following resources are
open-sourced with {\AE}THEL: the lexical mappings between words and types, a
subset of the dataset consisting of 7 924 semantic parses, and the Python code
that implements the extraction algorithm.
| 2020 | Computation and Language |
Likelihood Ratios and Generative Classifiers for Unsupervised
Out-of-Domain Detection In Task Oriented Dialog | The task of identifying out-of-domain (OOD) input examples directly at
test-time has seen renewed interest recently due to increased real world
deployment of models. In this work, we focus on OOD detection for natural
language sentence inputs to task-based dialog systems. Our findings are
three-fold: First, we curate and release ROSTD (Real Out-of-Domain Sentences
From Task-oriented Dialog) - a dataset of 4K OOD examples for the publicly
available dataset from (Schuster et al. 2019). In contrast to existing settings
which synthesize OOD examples by holding out a subset of classes, our examples
were authored by annotators with a priori instructions to be out-of-domain with
respect to the sentences in an existing dataset. Second, we explore likelihood
ratio based approaches as an alternative to currently prevalent paradigms.
Specifically, we reformulate and apply these approaches to natural language
inputs. We find that they match or outperform the latter on all datasets, with
larger improvements on non-artificial OOD benchmarks such as our dataset. Our
ablations validate that specifically using likelihood ratios rather than plain
likelihood is necessary to discriminate well between OOD and in-domain data.
Third, we propose learning a generative classifier and computing a marginal
likelihood (ratio) for OOD detection. This allows us to use a principled
likelihood while at the same time exploiting training-time labels. We find that
this approach outperforms both simple likelihood (ratio) based and other prior
approaches. To the best of our knowledge, we are the first to investigate the use of generative
classifiers for OOD detection at test-time.
| 2020 | Computation and Language |
AraNet: A Deep Learning Toolkit for Arabic Social Media | We describe AraNet, a collection of deep learning Arabic social media
processing tools. Namely, we exploit an extensive host of publicly available
and novel social media datasets to train bidirectional encoders from
transformer models (BERT) to predict age, dialect, gender, emotion, irony, and
sentiment. AraNet delivers state-of-the-art performance on a number of the
cited tasks and competitively on others. In addition, AraNet has the advantage
of being exclusively based on a deep learning framework and hence feature
engineering free. To the best of our knowledge, AraNet is the first to perform
predictions across such a wide range of tasks for Arabic NLP and thus meets a
critical need. We publicly release AraNet to accelerate research and
facilitate comparisons across the different tasks.
| 2020 | Computation and Language |
The Shmoop Corpus: A Dataset of Stories with Loosely Aligned Summaries | Understanding stories is a challenging reading comprehension problem for
machines as it requires reading a large volume of text and following long-range
dependencies. In this paper, we introduce the Shmoop Corpus: a dataset of 231
stories that are paired with detailed multi-paragraph summaries for each
individual chapter (7,234 chapters), where the summary is chronologically
aligned with respect to the story chapter. From the corpus, we construct a set
of common NLP tasks, including Cloze-form question answering and a simplified
form of abstractive summarization, as benchmarks for reading comprehension on
stories. We then show that the chronological alignment provides a strong
supervisory signal that learning-based methods can exploit, leading to
significant improvements on these tasks. We believe that the unique structure
of this corpus provides an important foothold towards making machine story
comprehension more approachable.
| 2020 | Computation and Language |
An Empirical Study of Factors Affecting Language-Independent Models | Scaling existing applications and solutions to multiple human languages has
traditionally proven to be difficult, mainly due to the language-dependent
nature of preprocessing and feature engineering techniques employed in
traditional approaches. In this work, we empirically investigate the factors
affecting language-independent models built with multilingual representations,
including task type, language set, and data resources. On two of the most
representative NLP tasks -- sentence classification and sequence labeling -- we
show that language-independent models can be comparable to or even outperform
models trained using monolingual data, and that they are generally more effective
on sentence classification. We experiment with language-independent models on many
different languages and show that they are more suitable for typologically
similar languages. We also explore the effects of different data sizes when
training and testing language-independent models, and demonstrate that they are
not only suitable for high-resource languages, but also very effective in
low-resource languages.
| 2020 | Computation and Language |
"Hinglish" Language -- Modeling a Messy Code-Mixed Language | With a sharp rise in fluency and users of "Hinglish" in linguistically
diverse country, India, it has increasingly become important to analyze social
content written in this language in platforms such as Twitter, Reddit,
Facebook. This project focuses on using deep learning techniques to tackle a
classification problem in categorizing social content written in Hindi-English
into Abusive, Hate-Inducing and Not offensive categories. We utilize
bi-directional sequence models with easy text augmentation techniques such as
synonym replacement, random insertion, random swap, and random deletion to
produce a state-of-the-art classifier that outperforms the previous work done
on analyzing this dataset.
| 2020 | Computation and Language |
Revisiting Paraphrase Question Generator using Pairwise Discriminator | In this paper, we propose a novel method for obtaining sentence-level embeddings.
While the problem of obtaining word-level embeddings is very well studied,
sentence-level embeddings have received less attention. Our embeddings are
obtained by a simple method in the context of solving the paraphrase generation
task. If we use a sequential encoder-decoder model for generating paraphrases,
we would like the generated paraphrase to be semantically close to the original
sentence. One way to ensure this is by adding constraints for true paraphrase
embeddings to be close and unrelated paraphrase candidate sentence embeddings
to be far. This is ensured by using a sequential pair-wise discriminator that
shares weights with the encoder that is trained with a suitable loss function.
Our loss function penalizes paraphrase sentence embedding distances from being
too large. This loss is used in combination with a sequential encoder-decoder
network. We also validated our method by evaluating the obtained embeddings for
a sentiment analysis task. The proposed method results in semantic embeddings
and outperforms the state-of-the-art on the paraphrase generation and sentiment
analysis task on standard datasets. These results are also shown to be
statistically significant.
| 2020 | Computation and Language |
Amharic-Arabic Neural Machine Translation | Much automatic translation work has addressed major European language
pairs by taking advantage of large-scale parallel corpora, but very little
research has been conducted on the Amharic-Arabic language pair due to the
scarcity of parallel data. Two Long Short-Term Memory (LSTM) and Gated Recurrent
Units (GRU) based Neural Machine Translation (NMT) models are developed using
Attention-based Encoder-Decoder architecture which is adapted from the
open-source OpenNMT system. In order to perform the experiment, a small
parallel Quranic text corpus is constructed by modifying the existing
monolingual Arabic text and its equivalent translation of Amharic language text
corpora available on Tanzile. The LSTM- and GRU-based NMT models and the Google
Translate system are compared, and the LSTM-based OpenNMT model is found to
outperform the GRU-based OpenNMT model and Google Translate, with BLEU scores of
12%, 11%, and 6%, respectively.
| 2020 | Computation and Language |
CASE: Context-Aware Semantic Expansion | In this paper, we define and study a new task called Context-Aware Semantic
Expansion (CASE). Given a seed term in a sentential context, we aim to suggest
other terms that fit the context as well as the seed does. CASE has many interesting
applications such as query suggestion, computer-assisted writing, and word
sense disambiguation, to name a few. Previous explorations, if any, only
involve some similar tasks, and all require human annotations for evaluation.
In this study, we demonstrate that annotations for this task can be harvested
at scale from existing corpora, in a fully automatic manner. On a dataset of
1.8 million sentences thus derived, we propose a network architecture that
encodes the context and seed term separately before suggesting alternative
terms. The context encoder in this architecture can be easily extended by
incorporating seed-aware attention. Our experiments demonstrate that
competitive results are achieved with appropriate choices of context encoder
and attention scoring function.
| 2020 | Computation and Language |
oLMpics -- On what Language Model Pre-training Captures | Recent success of pre-trained language models (LMs) has spurred widespread
interest in the language capabilities that they possess. However, efforts to
understand whether LM representations are useful for symbolic reasoning tasks
have been limited and scattered. In this work, we propose eight reasoning
tasks, which conceptually require operations such as comparison, conjunction,
and composition. A fundamental challenge is to understand whether the
performance of an LM on a task should be attributed to the pre-trained
representations or to the process of fine-tuning on the task data. To address
this, we propose an evaluation protocol that includes both zero-shot evaluation
(no fine-tuning), as well as comparing the learning curve of a fine-tuned LM to
the learning curve of multiple controls, which paints a rich picture of the LM
capabilities. Our main findings are that: (a) different LMs exhibit
qualitatively different reasoning abilities, e.g., RoBERTa succeeds in
reasoning tasks where BERT fails completely; (b) LMs do not reason in an
abstract manner and are context-dependent, e.g., while RoBERTa can compare
ages, it can do so only when the ages are in the typical range of human ages;
(c) on half of our reasoning tasks, all models fail completely. Our findings and
infrastructure can help future work on designing new datasets, models and
objective functions for pre-training.
| 2020 | Computation and Language |
LayoutLM: Pre-training of Text and Layout for Document Image
Understanding | Pre-training techniques have been verified successfully in a variety of NLP
tasks in recent years. Despite the widespread use of pre-training models for
NLP applications, they almost exclusively focus on text-level manipulation,
while neglecting layout and style information that is vital for document image
understanding. In this paper, we propose the \textbf{LayoutLM} to jointly model
interactions between text and layout information across scanned document
images, which is beneficial for a great number of real-world document image
understanding tasks such as information extraction from scanned documents.
Furthermore, we also leverage image features to incorporate words' visual
information into LayoutLM. To the best of our knowledge, this is the first time
that text and layout are jointly learned in a single framework for
document-level pre-training. It achieves new state-of-the-art results in
several downstream tasks, including form understanding (from 70.72 to 79.27),
receipt understanding (from 94.02 to 95.24) and document image classification
(from 93.07 to 94.42). The code and pre-trained LayoutLM models are publicly
available at \url{https://aka.ms/layoutlm}.
| 2020 | Computation and Language |
OTEANN: Estimating the Transparency of Orthographies with an Artificial
Neural Network | To transcribe spoken language to a written medium, most alphabets enable an
unambiguous sound-to-letter rule. However, some writing systems have distanced
themselves from this simple concept and little work exists in Natural Language
Processing (NLP) on measuring such distance. In this study, we use an
Artificial Neural Network (ANN) model to evaluate the transparency between
written words and their pronunciation, hence its name Orthographic Transparency
Estimation with an ANN (OTEANN). Based on datasets derived from Wikimedia
dictionaries, we trained and tested this model to score the percentage of
correct predictions in phoneme-to-grapheme and grapheme-to-phoneme translation
tasks. The scores obtained on 17 orthographies were in line with the
estimations of other studies. Interestingly, the model also provided insight
into typical mistakes made by learners who only consider the phonemic rule in
reading and writing.
| 2021 | Computation and Language |
What Does My QA Model Know? Devising Controlled Probes using Expert
Knowledge | Open-domain question answering (QA) is known to involve several underlying
knowledge and reasoning challenges, but are models actually learning such
knowledge when trained on benchmark tasks? To investigate this, we introduce
several new challenge tasks that probe whether state-of-the-art QA models have
general knowledge about word definitions and general taxonomic reasoning, both
of which are fundamental to more complex forms of reasoning and are widespread
in benchmark datasets. As an alternative to expensive crowd-sourcing, we
introduce a methodology for automatically building datasets from various types
of expert knowledge (e.g., knowledge graphs and lexical taxonomies), allowing
for systematic control over the resulting probes and for a more comprehensive
evaluation. We find automatically constructing probes to be vulnerable to
annotation artifacts, which we carefully control for. Our evaluation confirms
that transformer-based QA models are already predisposed to recognize certain
types of structural lexical knowledge. However, it also reveals a more nuanced
picture: their performance degrades substantially with even a slight increase
in the number of hops in the underlying taxonomic hierarchy, or as more
challenging distractor candidate answers are introduced. Further, even when
these models succeed at the standard instance-level evaluation, they leave much
room for improvement when assessed at the level of clusters of semantically
connected probes (e.g., all Isa questions about a concept).
| 2020 | Computation and Language |
Text Classification for Azerbaijani Language Using Machine Learning and
Embedding | Text classification systems will help to solve the text clustering problem in
the Azerbaijani language. There are some text-classification applications for
foreign languages, but we built a new system to solve this
problem for the Azerbaijani language. Firstly, we identified potential
application areas. The system will be useful in many areas. It will be mostly
used in news feed categorization. News websites can automatically categorize
news into classes such as sports, business, education, science, etc. The system
is also used in sentiment analysis for product reviews. For example, the
company shares a photo of a new product on Facebook and receives a
thousand comments about it. The system classifies these comments into
categories such as positive or negative. The system can also be applied to
recommender systems, spam filtering, etc. Various machine learning techniques
such as Naive Bayes, SVM, and Decision Trees have been applied to solve the text
classification problem in the Azerbaijani language.
| 2020 | Computation and Language |
Semantics- and Syntax-related Subvectors in the Skip-gram Embeddings | We show that the skip-gram embedding of any word can be decomposed into two
subvectors which roughly correspond to semantic and syntactic roles of the
word.
| 2020 | Computation and Language |
End-to-end Named Entity Recognition and Relation Extraction using
Pre-trained Language Models | Named entity recognition (NER) and relation extraction (RE) are two important
tasks in information extraction and retrieval (IE \& IR). Recent work has
demonstrated that it is beneficial to learn these tasks jointly, which avoids
the propagation of error inherent in pipeline-based systems and improves
performance. However, state-of-the-art joint models typically rely on external
natural language processing (NLP) tools, such as dependency parsers, limiting
their usefulness to domains (e.g. news) where those tools perform well. The few
neural, end-to-end models that have been proposed are trained almost completely
from scratch. In this paper, we propose a neural, end-to-end model for jointly
extracting entities and their relations which does not rely on external NLP
tools and which integrates a large, pre-trained language model. Because the
bulk of our model's parameters are pre-trained and we eschew recurrence for
self-attention, our model is fast to train. On 5 datasets across 3 domains, our
model matches or exceeds state-of-the-art performance, sometimes by a large
margin.
| 2020 | Computation and Language |
Learning Numeral Embeddings | Word embedding is an essential building block for deep learning methods for
natural language processing. Although word embedding has been extensively
studied over the years, the problem of how to effectively embed numerals, a
special subset of words, is still underexplored. Existing word embedding
methods do not learn numeral embeddings well because there are an infinite
number of numerals and their individual appearances in training corpora are
highly scarce. In this paper, we propose two novel numeral embedding methods
that can handle the out-of-vocabulary (OOV) problem for numerals. We first
induce a finite set of prototype numerals using either a self-organizing map or
a Gaussian mixture model. We then represent the embedding of a numeral as a
weighted average of the prototype number embeddings. Numeral embeddings
represented in this manner can be plugged into existing word embedding learning
approaches such as skip-gram for training. We evaluated our methods and showed
their effectiveness on four intrinsic and extrinsic tasks: word similarity,
embedding numeracy, numeral prediction, and sequence labeling.
| 2020 | Computation and Language |
Deep Reinforced Self-Attention Masks for Abstractive Summarization
(DR.SAS) | We present a novel architectural scheme that fuses Reinforcement
Learning (RL) with UniLM, a pre-trained deep learning model for various natural
language tasks, to tackle abstractive summarization on the CNN/DM
dataset. We have tested the limits of learning
fine-grained attention in Transformers to improve the summarization quality.
UniLM applies attention to the entire token space in a global fashion. We
propose DR.SAS which applies the Actor-Critic (AC) algorithm to learn a dynamic
self-attention distribution over the tokens to reduce redundancy and generate
factual and coherent summaries to improve the quality of summarization. After
performing hyperparameter tuning, we achieved better ROUGE results compared to
the baseline. Our model tends to be more extractive/factual yet coherent in
detail because of optimization over ROUGE rewards. We present detailed error
analysis with examples of the strengths and limitations of our model. Our
codebase will be publicly available on our GitHub.
| 2020 | Computation and Language |
Simultaneous Identification of Tweet Purpose and Position | Tweet classification has attracted considerable attention recently. Most of
the existing work on tweet classification focuses on topic classification,
which classifies tweets into several predefined categories, and sentiment
classification, which classifies tweets into positive, negative and neutral.
Since tweets differ from conventional text in that they are generally of
limited length and contain informal, irregular, or new words, it is difficult
to determine a user's intention in publishing a tweet and their attitude towards a
certain topic. In this paper, we aim to simultaneously classify tweet purpose,
i.e., the user's intention in publishing a tweet, and position, i.e.,
supporting, opposing or being neutral to a given topic. By transforming this
problem to a multi-label classification problem, a multi-label classification
method with post-processing is proposed. Experiments on real-world data sets
demonstrate the effectiveness of this method and the results outperform the
individual classification methods.
| 2020 | Computation and Language |
Deep Attentive Ranking Networks for Learning to Order Sentences | We present an attention-based ranking framework for learning to order
sentences given a paragraph. Our framework is built on a bidirectional sentence
encoder and a self-attention based transformer network to obtain an input order
invariant representation of paragraphs. Moreover, it allows seamless training
using a variety of ranking based loss functions, such as pointwise, pairwise,
and listwise ranking. We apply our framework on two tasks: Sentence Ordering
and Order Discrimination. Our framework outperforms various state-of-the-art
methods on these tasks on a variety of evaluation metrics. We also show that it
achieves better results when using pairwise and listwise ranking losses, rather
than the pointwise ranking loss, which suggests that incorporating relative
positions of two or more sentences in the loss function contributes to better
learning.
| 2020 | Computation and Language |
Building chatbots from large scale domain-specific knowledge bases:
challenges and opportunities | Popular conversational agents frameworks such as Alexa Skills Kit (ASK) and
Google Actions (gActions) offer unprecedented opportunities for facilitating
the development and deployment of voice-enabled AI solutions in various
verticals. Nevertheless, understanding user utterances with high accuracy
remains a challenging task with these frameworks, particularly when building
chatbots with a large volume of domain-specific entities. In this paper, we
describe the challenges and lessons learned from building a large scale virtual
assistant for understanding and responding to equipment-related complaints. In
the process, we describe an alternative scalable framework for: 1) extracting
the knowledge about equipment components and their associated problem entities
from short texts, and 2) learning to identify such entities in user utterances.
We show through evaluation on a real dataset that the proposed framework,
compared to popular off-the-shelf ones, scales better with a large volume of
entities, being up to 30% more accurate, and is more effective in understanding
user utterances with domain-specific entities.
| 2020 | Computation and Language |
Stacked DeBERT: All Attention in Incomplete Data for Text Classification | In this paper, we propose Stacked DeBERT, short for Stacked Denoising
Bidirectional Encoder Representations from Transformers. This novel model
improves robustness in incomplete data, when compared to existing systems, by
designing a novel encoding scheme in BERT, a powerful language representation
model solely based on attention mechanisms. Incomplete data in natural language
processing refer to text with missing or incorrect words, and its presence can
hinder the performance of current models that were not implemented to withstand
such noise but must still perform well even under duress. This is due to the
fact that current approaches are built for and trained with clean and complete
data, and thus are not able to extract features that can adequately represent
incomplete data. Our proposed approach consists of obtaining intermediate input
representations by applying an embedding layer to the input tokens followed by
vanilla transformers. These intermediate features are given as input to novel
denoising transformers which are responsible for obtaining richer input
representations. The proposed approach takes advantage of stacks of multilayer
perceptrons for the reconstruction of missing words' embeddings by extracting
more abstract and meaningful hidden feature vectors, and bidirectional
transformers for improved embedding representation. We consider two datasets
for training and evaluation: the Chatbot Natural Language Understanding
Evaluation Corpus and Kaggle's Twitter Sentiment Corpus. Our model shows
improved F1-scores and better robustness in informal/incorrect texts present in
tweets and in texts with Speech-to-Text errors in the sentiment and intent
classification tasks.
| 2021 | Computation and Language |
Chemical-induced Disease Relation Extraction with Dependency Information
and Prior Knowledge | Chemical-disease relation (CDR) extraction is of significant importance to
various areas of biomedical research and health care. Nowadays, many
large-scale biomedical knowledge bases (KBs) containing triples about entity
pairs and their relations have been built. KBs are important resources for
biomedical relation extraction. However, previous research pays little
attention to prior knowledge. In addition, the dependency tree contains
important syntactic and semantic information, which helps to improve relation
extraction. Thus, how to use it effectively is also worth studying. In this paper,
we propose a novel convolutional attention network (CAN) for CDR extraction.
Firstly, we extract the shortest dependency path (SDP) between chemical and
disease pairs in a sentence, which includes a sequence of words, dependency
directions, and dependency relation tags. Then the convolution operations are
performed on the SDP to produce deep semantic dependency features. After that,
an attention mechanism is employed to learn the importance/weight of each
semantic dependency vector related to knowledge representations learned from
KBs. Finally, in order to combine dependency information and prior knowledge,
the concatenation of weighted semantic dependency representations and knowledge
representations is fed to the softmax layer for classification. Experiments on
the BioCreative V CDR dataset show that our method achieves comparable
performance with the state-of-the-art systems, and both dependency information
and prior knowledge play important roles in the CDR extraction task.
| 2018 | Computation and Language |
Question Type Classification Methods Comparison | The paper presents a comparative study of state-of-the-art approaches for
question classification task: Logistic Regression, Convolutional Neural
Networks (CNN), Long Short-Term Memory Network (LSTM) and Quasi-Recurrent
Neural Networks (QRNN). All models use pre-trained GloVe word embeddings and are
trained on human-labeled data. The best accuracy is achieved using a CNN model
with five convolutional layers with various kernel sizes stacked in parallel,
followed by one fully connected layer. The model reached 90.7% accuracy on the
TREC-10 test set. All the model architectures in this paper were developed from
scratch in PyTorch, in a few cases based on reliable open-source implementations.
| 2020 | Computation and Language |
Read Beyond the Lines: Understanding the Implied Textual Meaning via a
Skim and Intensive Reading Model | The nonliteral interpretation of a text is hard for machine models to
understand due to its high context-sensitivity and heavy usage of figurative
language. In this study, inspired by human reading comprehension, we propose a
novel, simple, and effective deep neural framework, called Skim and Intensive
Reading Model (SIRM), for figuring out implied textual meaning. The proposed
SIRM consists of two main components, namely the skim reading component and
intensive reading component. N-gram features are quickly extracted from the
skim reading component, which is a combination of several convolutional neural
networks, as skim (entire) information. An intensive reading component enables
a hierarchical investigation for both local (sentence) and global (paragraph)
representation, which encapsulates the current embedding and the contextual
information with a dense connection. More specifically, the contextual
information includes the near-neighbor information and the skim information
mentioned above. Finally, besides the normal training loss function, we employ
an adversarial loss function as a penalty over the skim reading component to
eliminate noisy information arising from special figurative words in the
training data. To verify the effectiveness, robustness, and efficiency of the
proposed architecture, we conduct extensive comparative experiments on several
sarcasm benchmarks and an industrial spam dataset with metaphors. Experimental
results indicate that (1) the proposed model, which benefits from context
modeling and consideration of figurative language, outperforms existing
state-of-the-art solutions, with comparable parameter scale and training speed;
(2) the SIRM yields superior robustness in terms of parameter size sensitivity;
(3) compared with ablation and addition variants of the SIRM, the final
framework is efficient enough.
| 2020 | Computation and Language |
TED: A Pretrained Unsupervised Summarization Model with Theme Modeling
and Denoising | Text summarization aims to extract essential information from a piece of text
and transform the text into a concise version. Existing unsupervised
abstractive summarization models leverage the recurrent neural network framework,
while the recently proposed transformer exhibits much greater capability.
Moreover, most previous summarization models ignore the abundant unlabeled
corpora resources available for pretraining. In order to address these issues,
we propose TED, a transformer-based unsupervised abstractive summarization
system with pretraining on large-scale data. We first leverage the lead bias in
news articles to pretrain the model on millions of unlabeled corpora. Next, we
finetune TED on target domains through theme modeling and a denoising
autoencoder to enhance the quality of generated summaries. Notably, TED
outperforms all unsupervised abstractive baselines on NYT, CNN/DM and English
Gigaword datasets with various document styles. Further analysis shows that the
summaries generated by TED are highly abstractive, and each component in the
objective function of TED is highly effective.
| 2020 | Computation and Language |
"Love is as Complex as Math": Metaphor Generation System for Social
Chatbot | With the wide adoption of intelligent chatbots in daily life, user demands
for such systems have evolved from basic task-solving conversations to more casual
and friend-like communication. To meet these needs and build an emotional bond
with users, it is essential for social chatbots to incorporate more human-like
and advanced linguistic features. In this paper, we investigate the use of a
rhetorical device commonly used by humans -- metaphor -- for social chatbots. Our
work first designs a metaphor generation framework, which generates topic-aware
and novel figurative sentences. By embedding the framework into a chatbot
system, we then enable the chatbot to communicate with users using figurative
language. Human annotators validate the novelty and appropriateness of the generated
metaphors. More importantly, we evaluate the effects of employing metaphors in
human-chatbot conversations. Experiments indicate that our system effectively
arouses user interest in communicating with our chatbot, resulting in
significantly longer human-chatbot conversations.
| 2020 | Computation and Language |
On the comparability of Pre-trained Language Models | Recent developments in unsupervised representation learning have successfully
established the concept of transfer learning in NLP. Mainly three forces are
driving the improvements in this area of research: more elaborate
architectures are making better use of contextual information. Instead of
simply plugging in static pre-trained representations, these are learned based
on surrounding context in end-to-end trainable models with more intelligently
designed language modelling objectives. Along with this, larger corpora are
used as resources for pre-training large language models in a self-supervised
fashion, which are afterwards fine-tuned on supervised tasks. Advances in
parallel computing as well as in cloud computing made it possible to train
these models with growing capacities in the same or even in shorter time than
previously established models. These three developments agglomerate in new
state-of-the-art (SOTA) results being revealed in a higher and higher
frequency. It is not always obvious where these improvements originate from, as
it is not possible to completely disentangle the contributions of the three
driving forces. We set out to provide a clear and concise overview of
several large pre-trained language models, which achieved SOTA results in the
last two years, with respect to their use of new architectures and resources.
We want to clarify for the reader where the differences between the models are
and we furthermore attempt to gain some insight into the single contributions
of lexical/computational improvements as well as of architectural changes. We
explicitly do not intend to quantify these contributions, but rather see our
work as an overview in order to identify potential starting points for
benchmark comparisons. Furthermore, we tentatively want to point out potential
avenues for improvement in the field of open-sourcing and reproducible
research.
| 2020 | Computation and Language |
Two-Level Transformer and Auxiliary Coherence Modeling for Improved Text
Segmentation | Breaking down the structure of long texts into semantically coherent segments
makes the texts more readable and supports downstream applications like
summarization and retrieval. Starting from an apparent link between text
coherence and segmentation, we introduce a novel supervised model for text
segmentation with simple but explicit coherence modeling. Our model -- a neural
architecture consisting of two hierarchically connected Transformer networks --
is a multi-task learning model that couples the sentence-level segmentation
objective with the coherence objective that differentiates correct sequences of
sentences from corrupt ones. The proposed model, dubbed Coherence-Aware Text
Segmentation (CATS), yields state-of-the-art segmentation performance on a
collection of benchmark datasets. Furthermore, by coupling CATS with
cross-lingual word embeddings, we demonstrate its effectiveness in zero-shot
language transfer: it can successfully segment texts in languages unseen in
training.
| 2020 | Computation and Language |
Adapting Deep Learning for Sentiment Classification of Code-Switched
Informal Short Text | Nowadays, an abundance of short text is being generated that uses nonstandard
writing styles influenced by regional languages. Such informal and
code-switched content is under-resourced in terms of labeled datasets and
language models even for popular tasks like sentiment classification. In this
work, we (1) present a labeled dataset called MultiSenti for sentiment
classification of code-switched informal short text, (2) explore the
feasibility of adapting resources from a resource-rich language for an informal
one, and (3) propose a deep learning-based model for sentiment classification
of code-switched informal short text. We aim to achieve this without any
lexical normalization, language translation, or code-switching indication. The
performance of the proposed models is compared with three existing multilingual
sentiment classification models. The results show that the proposed model
performs better in general and that adapting character-based embeddings yields
equivalent performance while being computationally more efficient than training
word-based domain-specific embeddings.
| 2020 | Computation and Language |
A Comprehensive Survey of Multilingual Neural Machine Translation | We present a survey on multilingual neural machine translation (MNMT), which
has gained a lot of traction in recent years. MNMT has been useful in
improving translation quality as a result of translation knowledge transfer
(transfer learning). MNMT is more promising and interesting than its
statistical machine translation counterpart because end-to-end modeling and
distributed representations open new avenues for research on machine
translation. Many approaches have been proposed in order to exploit
multilingual parallel corpora for improving translation quality. However, the
lack of a comprehensive survey makes it difficult to determine which approaches
are promising and hence deserve further exploration. In this paper, we present
an in-depth survey of existing literature on MNMT. We first categorize various
approaches based on their central use-case and then further categorize them
based on resource scenarios, underlying modeling principles, core issues, and
challenges. Wherever possible we address the strengths and weaknesses of
several techniques by comparing them with each other. We also discuss the
future directions that MNMT research might take. This paper is aimed at
both beginners and experts in NMT. We hope this paper will serve as a starting
point as well as a source of new ideas for researchers and engineers interested
in MNMT.
| 2020 | Computation and Language |
Transformer-based language modeling and decoding for conversational
speech recognition | We propose a way to use a transformer-based language model in conversational
speech recognition. Specifically, we focus on decoding efficiently in a
weighted finite-state transducer framework. We showcase an approach to lattice
re-scoring that allows for longer-range history captured by a transformer-based
language model and takes advantage of a transformer's ability to avoid
computing sequentially.
| 2020 | Computation and Language |
Computationally Efficient NER Taggers with Combined Embeddings and
Constrained Decoding | Current State-of-the-Art models in Named Entity Recognition (NER) are neural
models with a Conditional Random Field (CRF) as the final network layer, and
pre-trained "contextual embeddings". The CRF layer is used to facilitate global
coherence between labels, and the contextual embeddings provide a better
representation of words in context. However, both of these improvements come at
a high computational cost. In this work, we explore two simple techniques that
substantially improve NER performance over a strong baseline with negligible
cost. First, we use multiple pre-trained embeddings as word representations via
concatenation. Second, we constrain the tagger, trained using a cross-entropy
loss, during decoding to eliminate illegal transitions. While training a tagger
on CoNLL 2003 we find a $786$\% speed-up over a contextual embeddings-based
tagger without sacrificing strong performance. We also show that the
concatenation technique works across multiple tasks and datasets. We analyze
aspects of similarity and coverage between pre-trained embeddings and the
dynamics of tag co-occurrence to explain why these techniques work. We provide
an open source implementation of our tagger using these techniques in three
popular deep learning frameworks --- TensorFlow, PyTorch, and DyNet.
| 2021 | Computation and Language |
Automatic Business Process Structure Discovery using Ordered Neurons
LSTM: A Preliminary Study | Automatic process discovery from textual process documentation is highly
desirable to reduce the time and cost of Business Process Management (BPM)
implementation in organizations. However, existing automatic process discovery
approaches mainly focus on identifying activities from the documentation.
Deriving the structural relationships between activities, which is important in
the whole process discovery scope, is still a challenge. In fact, a business
process has latent semantic hierarchical structure which defines different
levels of detail to reflect the complex business logic. Recent findings in the
neural machine learning area show that meaningful linguistic structure can
be induced by joint language modeling and structure learning. Inspired by these
findings, we propose to retrieve the latent hierarchical structure present in
the textual business process documents by building a neural network that
leverages a novel recurrent architecture, Ordered Neurons LSTM (ON-LSTM), with
a process-level language model objective. We tested the proposed approach on a
dataset of Process Description Documents (PDDs) from our practical Robotic Process
Automation (RPA) projects. Preliminary experiments showed promising results.
| 2020 | Computation and Language |
Generating Word and Document Embeddings for Sentiment Analysis | Sentiments of words differ from one corpus to another. Inducing general
sentiment lexicons for languages and using them cannot, in general, produce
meaningful results for different domains. In this paper, we combine contextual
and supervised information with the general semantic representations of words
occurring in the dictionary. Contexts of words help us capture the
domain-specific information and supervised scores of words are indicative of
the polarities of those words. When we combine supervised features of words
with the features extracted from their dictionary definitions, we observe an
increase in the success rates. We try out the combinations of contextual,
supervised, and dictionary-based approaches, and generate original vectors. We
also combine the word2vec approach with hand-crafted features. We induce
domain-specific sentimental vectors for two corpora, which are the movie domain
and the Twitter datasets in Turkish. When we thereafter generate document
vectors and employ the support vector machines method utilising those vectors,
our approaches perform better than the baseline studies for Turkish by a
significant margin. We evaluated our models on two English corpora as well, and
these also outperformed the word2vec approach. This shows that our approaches are
cross-domain and portable to other languages.
| 2019 | Computation and Language |
Improving Entity Linking by Modeling Latent Entity Type Information | Existing state of the art neural entity linking models employ attention-based
bag-of-words context model and pre-trained entity embeddings bootstrapped from
word embeddings to assess topic level context compatibility. However, the
latent entity type information in the immediate context of the mention is
neglected, which often causes the models to link mentions to incorrect entities
of the wrong type. To tackle this problem, we propose to inject latent entity
type information into the entity embeddings based on pre-trained BERT. In
addition, we integrate a BERT-based entity similarity score into the local
context model of a state-of-the-art model to better capture latent entity type
information. Our model significantly outperforms the state-of-the-art entity
linking models on standard benchmark (AIDA-CoNLL). Detailed experiment analysis
demonstrates that our model corrects most of the type errors produced by the
direct baseline.
| 2,020 | Computation and Language |
Speaker-aware speech-transformer | Recently, end-to-end (E2E) models have become a competitive alternative to the
conventional hybrid automatic speech recognition (ASR) systems. However, they
still suffer from speaker mismatch between training and testing conditions. In this
paper, we use Speech-Transformer (ST) as the study platform to investigate
speaker aware training of E2E models. We propose a model called Speaker-Aware
Speech-Transformer (SAST), which is a standard ST equipped with a speaker
attention module (SAM). The SAM has a static speaker knowledge block (SKB) that
is made of i-vectors. At each time step, the encoder output attends to the
i-vectors in the block, and generates a weighted combined speaker embedding
vector, which helps the model to normalize the speaker variations. The SAST
model trained in this way becomes independent of specific training speakers and
thus generalizes better to unseen testing speakers. We investigate different
factors of SAM. Experimental results on the AISHELL-1 task show that SAST
achieves a relative 6.5% CER reduction (CERR) over the speaker-independent (SI)
baseline. Moreover, we demonstrate that SAST still works quite well even if the
i-vectors in the SKB all come from a data source other than the acoustic
training set.
| 2,020 | Computation and Language |
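A minimal sketch of the speaker attention module described above: each encoder frame attends over a static block of i-vectors, and the weighted speaker embedding is fused back into the acoustic features. The dimensions, the scaling, and the concatenation-based fusion are our assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

class SpeakerAttentionModule(torch.nn.Module):
    """Sketch of a speaker attention module over a static i-vector block."""
    def __init__(self, d_model, ivector_dim, num_speakers):
        super().__init__()
        # Static knowledge block of i-vectors (would be precomputed in practice).
        self.skb = torch.nn.Parameter(torch.randn(num_speakers, ivector_dim),
                                      requires_grad=False)
        self.query = torch.nn.Linear(d_model, ivector_dim)

    def forward(self, enc_out):                       # (batch, time, d_model)
        q = self.query(enc_out)                       # (batch, time, ivector_dim)
        scores = q @ self.skb.t() / self.skb.size(-1) ** 0.5
        weights = F.softmax(scores, dim=-1)           # attention over speakers
        spk_emb = weights @ self.skb                  # weighted speaker embedding
        return torch.cat([enc_out, spk_emb], dim=-1)  # fuse with acoustic features
```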
Stance Detection Benchmark: How Robust Is Your Stance Detection? | Stance Detection (StD) aims to detect an author's stance towards a certain
topic or claim and has become a key component in applications like fake news
detection, claim validation, and argument search. However, while stance is
easily detected by humans, machine learning models are clearly falling short of
this task. Given the major differences in dataset sizes and framing of StD
(e.g. number of classes and inputs), we introduce a StD benchmark that learns
from ten StD datasets of various domains in a multi-dataset learning (MDL)
setting, as well as from related tasks via transfer learning. Within this
benchmark setup, we are able to present new state-of-the-art results on five of
the datasets. Yet, the models still perform well below human capabilities and
even simple adversarial attacks severely hurt the performance of MDL models.
Deeper investigation into this phenomenon suggests the existence of biases
inherited from multiple datasets by design. Our analysis emphasizes the need to
focus on robustness and de-biasing strategies in multi-task learning
approaches. The benchmark dataset and code are made available.
| 2,020 | Computation and Language |
A Survey on Machine Reading Comprehension Systems | Machine reading comprehension is a challenging task and a hot topic in natural
language processing. Its goal is to develop systems to answer the questions
regarding a given context. In this paper, we present a comprehensive survey on
different aspects of machine reading comprehension systems, including their
approaches, structures, input/outputs, and research novelties. We illustrate
the recent trends in this field based on 241 reviewed papers from 2016 to 2020.
Our investigations demonstrate that the focus of research has changed in recent
years from answer extraction to answer generation, from single to
multi-document reading comprehension, and from learning from scratch to using
pre-trained embeddings. We also discuss the popular datasets and the evaluation
metrics in this field. The paper ends with investigating the most cited papers
and their contributions.
| 2,020 | Computation and Language |
Information Extraction based on Named Entity for Tourism Corpus | Tourism information is scattered across many sources nowadays. To search for the
information, it is usually time-consuming to browse through the results from a
search engine and to select and view the details of each accommodation. In this
paper, we present a methodology to extract particular information from full
text returned from the search engine to assist the users. The users can then
look specifically at the desired relevant information. The approach can be
used for the same task in other domains. The main steps are 1) building
training data and 2) building recognition model. First, the tourism data is
gathered and the vocabularies are built. The raw corpus is used to train for
creating vocabulary embedding. Also, it is used for creating annotated data.
The process of creating named entity annotation is presented. Then, the
recognition model of a given entity type can be built. From the experiments,
given a hotel description, the model can extract the desired entities, i.e., name,
location, and facility. The extracted data can further be stored as structured
information, e.g., in the ontology format, for future querying and inference.
The model for automatic named entity identification, based on machine learning,
yields an error rate ranging from 8% to 25%.
| 2,019 | Computation and Language |
Morphological Word Segmentation on Agglutinative Languages for Neural
Machine Translation | Neural machine translation (NMT) has achieved impressive performance on
machine translation task in recent years. However, in consideration of
efficiency, a limited-size vocabulary that only contains the top-N highest
frequency words is employed for model training, which leads to many rare and
unknown words. This is especially problematic when translating from low-resource,
morphologically rich agglutinative languages, which have complex morphology
and large vocabulary. In this paper, we propose a morphological word
segmentation method on the source-side for NMT that incorporates morphology
knowledge to preserve the linguistic and semantic information in the word
structure while reducing the vocabulary size at training time. It can be
utilized as a preprocessing tool to segment the words in agglutinative
languages for other natural language processing (NLP) tasks. Experimental
results show that our morphologically motivated word segmentation method is
better suited to the NMT model, which achieves significant improvements on
Turkish-English and Uyghur-Chinese machine translation tasks on account of
reducing data sparseness and language complexity.
| 2,020 | Computation and Language |
Why Moli\`ere most likely did write his plays | As for Shakespeare, a hard-fought debate has emerged about Moli\`ere, a
supposedly uneducated actor who, according to some, could not have written the
masterpieces attributed to him. In the past decades, the century-old thesis
according to which Pierre Corneille would be their actual author has become
popular, mostly because of new works in computational linguistics. These
results are reassessed here through state-of-the-art attribution methods. We
study a corpus of comedies in verse by major authors of Moli\`ere and
Corneille's time. Analysis of lexicon, rhymes, word forms, affixes,
morphosyntactic sequences, and function words gives no clue that another
author among the major playwrights of the time would have written the plays
signed under the name Moli\`ere.
| 2,019 | Computation and Language |
Exploring Benefits of Transfer Learning in Neural Machine Translation | Neural machine translation is known to require large numbers of parallel
training sentences, which generally prevents it from excelling on low-resource
language pairs. This thesis explores the use of cross-lingual transfer learning
on neural networks as a way of addressing the lack of resources.
We propose several transfer learning approaches to reuse a model pretrained on
a high-resource language pair. We pay particular attention to the simplicity of
the techniques. We study two scenarios: (a) when we reuse the high-resource
model without any prior modifications to its training process and (b) when we
can prepare the first-stage high-resource model for transfer learning in
advance. For the former scenario, we present a proof-of-concept method by
reusing a model trained by other researchers. In the latter scenario, we
present a method which reaches even larger improvements in translation
performance. Apart from proposed techniques, we focus on an in-depth analysis
of transfer learning techniques and try to shed some light on transfer learning
improvements. We show how our techniques address specific problems of
low-resource languages and are suitable even in high-resource transfer
learning. We evaluate the potential drawbacks and behavior by studying transfer
learning in various situations, for example, under artificially damaged
training corpora, or with various model parts kept fixed.
| 2,020 | Computation and Language |
RECAST: Interactive Auditing of Automatic Toxicity Detection Models | As toxic language becomes nearly pervasive online, there has been increasing
interest in leveraging advances in natural language processing (NLP), such as
very large transformer models, to automatically detect and remove toxic
comments. Despite the fairness concerns, lack of adversarial robustness,
and limited prediction explainability for deep learning systems, there is
currently little work for auditing these systems and understanding how they
work for both developers and users. We present our ongoing work, RECAST, an
interactive tool for examining toxicity detection models by visualizing
explanations for predictions and providing alternative wordings for detected
toxic speech.
| 2,020 | Computation and Language |
Text Complexity Classification Based on Linguistic Information:
Application to Intelligent Tutoring of ESL | The goal of this work is to build a classifier that can identify text
complexity within the context of teaching reading to English as a Second
Language (ESL) learners. To present language learners with texts that are
suitable to their level of English, a set of features that can describe the
phonological, morphological, lexical, syntactic, discursive, and psychological
complexity of a given text were identified. Using a corpus of 6171 texts, which
had already been classified into three different levels of difficulty by ESL
experts, different experiments were conducted with five machine learning
algorithms. The results showed that the adopted linguistic features provide a
good overall classification performance (F-Score = 0.97). A scalability
evaluation was conducted to test if such a classifier could be used within real
applications, where it can be, for example, plugged into a search engine or a
web-scraping module. In this evaluation, the texts in the test set are not only
different from those in the training set but also of different types (ESL
texts vs. children reading texts). Although the overall performance of the
classifier decreased significantly (F-Score = 0.65), the confusion matrix shows
that most of the classification errors are between the classes two and three
(the middle-level classes) and that the system has a robust performance in
categorizing texts of class one and four. This behavior can be explained by the
difference in classification criteria between the two corpora. Hence, the
observed results confirm the usability of such a classifier within a real-world
application.
| 2,020 | Computation and Language |
Attention over Parameters for Dialogue Systems | Dialogue systems require a great deal of different but complementary
expertise to assist, inform, and entertain humans. For example, different
domains (e.g., restaurant reservation, train ticket booking) of goal-oriented
dialogue systems can be viewed as different skills, and so can the ordinary
chatting abilities of chit-chat dialogue systems. In this paper, we propose to
learn a dialogue system that independently parameterizes different dialogue
skills, and learns to select and combine each of them through Attention over
Parameters (AoP). The experimental results show that this approach achieves
competitive performance on a combined dataset of MultiWOZ, In-Car Assistant,
and Persona-Chat. Finally, we demonstrate that each dialogue skill is
effectively learned and can be combined with other skills to produce selective
responses.
| 2,020 | Computation and Language |
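To make the "Attention over Parameters" idea concrete, the toy sketch below mixes the parameters of several skill-specific linear layers with attention weights computed from a dialogue representation; the actual model applies this to full decoder blocks, so treat this only as an illustration of the mechanism, with names of our own choosing.

```python
import torch
import torch.nn.functional as F

class AttentionOverParameters(torch.nn.Module):
    """Toy sketch: combine skill-specific parameters with attention weights."""
    def __init__(self, d_model, n_skills):
        super().__init__()
        self.skill_weights = torch.nn.Parameter(torch.randn(n_skills, d_model, d_model))
        self.skill_bias = torch.nn.Parameter(torch.zeros(n_skills, d_model))
        self.selector = torch.nn.Linear(d_model, n_skills)

    def forward(self, dialogue_repr, x):
        # Attention over skills, computed from the dialogue representation.
        alpha = F.softmax(self.selector(dialogue_repr), dim=-1)      # (batch, n_skills)
        # Mix the per-skill parameters before applying them to the input.
        W = torch.einsum("bk,kio->bio", alpha, self.skill_weights)   # (batch, d, d)
        b = alpha @ self.skill_bias                                  # (batch, d)
        return torch.einsum("bi,bio->bo", x, W) + b
```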
Paraphrase Generation with Latent Bag of Words | Paraphrase generation is a long-standing and important problem in natural language
processing.
In addition, recent progress in deep generative models has shown promising
results on discrete latent variables for text generation.
Inspired by variational autoencoders with discrete latent structures, in this
work, we propose a latent bag of words (BOW) model for paraphrase generation.
We ground the semantics of a discrete latent variable by the BOW from the
target sentences.
We use this latent variable to build a fully differentiable content planning
and surface realization model.
Specifically, we use source words to predict their neighbors and model the
target BOW with a mixture of softmax.
We use Gumbel top-k reparameterization to perform differentiable subset
sampling from the predicted BOW distribution.
We retrieve the sampled word embeddings and use them to augment the decoder
and guide its generation search space.
Our latent BOW model not only enhances the decoder, but also exhibits clear
interpretability.
We show the model interpretability with regard to \emph{(i)} unsupervised
learning of word neighbors \emph{(ii)} the step-by-step generation procedure.
Extensive experiments demonstrate the transparent and effective generation
process of this model.\footnote{Our code can be found at
\url{https://github.com/FranxYao/dgm_latent_bow}}
| 2,020 | Computation and Language |
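The Gumbel top-k reparameterization mentioned above can be pictured as follows: perturb the bag-of-words logits with Gumbel noise, keep the k largest entries, and use their softmaxed scores to weight the retrieved word embeddings. The exact relaxation used by the authors may differ; this is only an assumed, simplified version.

```python
import torch

def gumbel_topk_sample(logits, k, tau=1.0):
    """Gumbel top-k subset sampling over a predicted BOW distribution."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    perturbed = (logits + gumbel) / tau
    topk_vals, topk_idx = perturbed.topk(k, dim=-1)
    weights = torch.softmax(topk_vals, dim=-1)  # soft weights over the sampled subset
    return topk_idx, weights

# The sampled word embeddings could then augment the decoder input, e.g.:
# emb = embedding(topk_idx) * weights.unsqueeze(-1)
```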
Learning Speaker Embedding with Momentum Contrast | Speaker verification can be formulated as a representation learning task,
where speaker-discriminative embeddings are extracted from utterances of
variable lengths. Momentum Contrast (MoCo) is a recently proposed unsupervised
representation learning framework, and has shown its effectiveness for learning
good feature representation for downstream vision tasks. In this work, we apply
MoCo to learn speaker embedding from speech segments. We explore MoCo for both
unsupervised learning and pretraining settings. In the unsupervised scenario,
embedding is learned by MoCo from audio data without using any speaker specific
information. On a large-scale dataset with $2,500$ speakers, MoCo achieves an
EER of $4.275\%$ when trained without supervision, and the EER decreases further to
$3.58\%$ if extra unlabelled data are used. In the pretraining scenario, the
encoder trained by MoCo is used to initialize the downstream supervised
training. With finetuning of the MoCo-trained model, the equal error rate (EER)
is reduced by $13.7\%$ relative ($1.44\%$ to $1.242\%$) compared to a carefully
tuned baseline trained from scratch. A comparative study confirms the
effectiveness of MoCo in learning good speaker embeddings.
| 2,020 | Computation and Language |
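For readers unfamiliar with MoCo, the two core pieces it brings to speaker embedding learning are the momentum-updated key encoder and the InfoNCE loss against a queue of negatives. The sketch below shows both in isolation; the choice of positives (e.g. two segments of the same utterance) is our assumption about the speaker setting, not a detail from the paper.

```python
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    """MoCo-style momentum update of the key encoder (He et al., 2020)."""
    for q_param, k_param in zip(query_encoder.parameters(), key_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    """Contrastive loss between a query embedding, its positive key, and a
    queue of (already normalized) negative keys."""
    q = torch.nn.functional.normalize(q, dim=-1)
    k_pos = torch.nn.functional.normalize(k_pos, dim=-1)
    l_pos = (q * k_pos).sum(-1, keepdim=True)          # (batch, 1)
    l_neg = q @ queue.t()                               # (batch, queue_size)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return torch.nn.functional.cross_entropy(logits, labels)
```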
Latent Opinions Transfer Network for Target-Oriented Opinion Words
Extraction | Target-oriented opinion words extraction (TOWE) is a new subtask of ABSA,
which aims to extract the corresponding opinion words for a given opinion
target in a sentence. Recently, neural network methods have been applied to
this task and achieve promising results. However, the difficulty of annotation
causes the datasets of TOWE to be insufficient, which heavily limits the
performance of neural models. By contrast, abundant review sentiment
classification data are easily available at online review sites. These reviews
contain substantial latent opinion information and semantic patterns. In this
paper, we propose a novel model to transfer this opinion knowledge from
resource-rich review sentiment classification datasets to the low-resource task
of TOWE. To address the challenges in the transfer process, we design an effective
transformation method to obtain latent opinions, then integrate them into TOWE.
Extensive experimental results show that our model achieves better performance
compared to other state-of-the-art methods and significantly outperforms the
base model without transferring opinion knowledge. Further analysis validates
the effectiveness of our model.
| 2,020 | Computation and Language |
Knowledge-aware Attention Network for Protein-Protein Interaction
Extraction | Protein-protein interaction (PPI) extraction from published scientific
literature provides additional support for precision medicine efforts. However,
many of the current PPI extraction methods need extensive feature engineering
and cannot make full use of the prior knowledge in knowledge bases (KBs). KBs
contain huge amounts of structured information about entities and
relationships and therefore play a pivotal role in PPI extraction. This paper
proposes a knowledge-aware attention network (KAN) to fuse prior knowledge
about protein-protein pairs and context information for PPI extraction. The
proposed model first adopts a diagonal-disabled multi-head attention mechanism
to encode context sequence along with knowledge representations learned from
KB. Then a novel multi-dimensional attention mechanism is used to select the
features that can best describe the encoded context. Experiment results on the
BioCreative VI PPI dataset show that the proposed approach could acquire
knowledge-aware dependencies between different words in a sequence and lead to
a new state-of-the-art performance.
| 2,019 | Computation and Language |
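The "diagonal-disabled" multi-head attention mentioned above masks out self-connections, so each token must be described by its context and the injected knowledge representations rather than by itself. A single-head sketch of that masking, with our own variable names:

```python
import torch
import torch.nn.functional as F

def diagonal_disabled_attention(q, k, v):
    """Self-attention with the diagonal masked out: a token cannot attend to itself."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5                 # (batch, len, len)
    seq_len = scores.size(-1)
    eye = torch.eye(seq_len, dtype=torch.bool, device=scores.device)
    scores = scores.masked_fill(eye, float("-inf"))             # disable the diagonal
    return F.softmax(scores, dim=-1) @ v
```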
Leveraging Prior Knowledge for Protein-Protein Interaction Extraction
with Memory Network | Automatically extracting Protein-Protein Interactions (PPI) from biomedical
literature provides additional support for precision medicine efforts. This
paper proposes a novel memory network-based model (MNM) for PPI extraction,
which leverages prior knowledge about protein-protein pairs with memory
networks. The proposed MNM captures important context clues related to
knowledge representations learned from knowledge bases. Both entity embeddings
and relation embeddings of prior knowledge are effective in improving the PPI
extraction model, leading to a new state-of-the-art performance on the
BioCreative VI PPI dataset. The paper also shows that multiple computational
layers over an external memory are superior to long short-term memory networks
with local memories.
| 2,018 | Computation and Language |
Heaps' law and Heaps functions in tagged texts: Evidences of their
linguistic relevance | We study the relationship between vocabulary size and text length in a corpus
of $75$ literary works in English, authored by six writers, distinguishing
between the contributions of three grammatical classes (or ``tags,'' namely,
{\it nouns}, {\it verbs}, and {\it others}), and analyze the progressive
appearance of new words of each tag along each individual text. While the
power-law relation prescribed by Heaps' law is satisfactorily fulfilled by
total vocabulary sizes and text lengths, the appearance of new words in each
text is on the whole well described by the average of random shufflings of the
text, which does not obey a power law. Deviations from this average, however,
are statistically significant and show a systematic trend across the corpus.
Specifically, they reveal that the appearance of new words along each text is
predominantly retarded with respect to the average of random shufflings.
Moreover, different tags are shown to add systematically distinct contributions
to this tendency, with {\it verbs} and {\it others} being respectively more and
less retarded than the mean trend, and {\it nouns} following instead this
overall mean. These statistical systematicities are likely to point to the
existence of linguistically relevant information stored in the different
variants of Heaps' law, a feature that is still in need of extensive
assessment.
| 2,020 | Computation and Language |
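For reference, Heaps' law states that vocabulary size grows as a power of text length, $V(N) = K N^{\beta}$ with $\beta < 1$. The sketch below shows how a growth curve, a Heaps fit, and the random-shuffling baseline used in the study can be computed; the exact estimation procedure in the paper may differ.

```python
import numpy as np

def vocabulary_growth(tokens):
    """Number of distinct words seen after each of the first n tokens."""
    seen, growth = set(), []
    for tok in tokens:
        seen.add(tok)
        growth.append(len(seen))
    return np.array(growth)

def fit_heaps(lengths, vocab_sizes):
    """Fit Heaps' law V(N) = K * N**beta by linear regression in log space."""
    beta, log_k = np.polyfit(np.log(lengths), np.log(vocab_sizes), 1)
    return np.exp(log_k), beta

def shuffled_baseline(tokens, n_shuffles=100, seed=0):
    """Average growth curve over random shufflings of the text."""
    rng = np.random.default_rng(seed)
    curves = [vocabulary_growth(rng.permutation(tokens)) for _ in range(n_shuffles)]
    return np.mean(curves, axis=0)
```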
Multipurpose Intelligent Process Automation via Conversational Assistant | Intelligent Process Automation (IPA) is an emerging technology with a primary
goal to assist the knowledge worker by taking care of repetitive, routine and
low-cognitive tasks. Conversational agents that can interact with users in
natural language are a potential application for IPA systems. Such intelligent
agents can assist the user by answering specific questions and executing
routine tasks that are ordinarily performed in a natural language (i.e.,
customer support). In this work, we tackle the challenge of implementing an IPA
conversational assistant in a real-world industrial setting with a lack of
structured training data. Our proposed system brings two significant benefits:
First, it reduces repetitive and time-consuming activities and, therefore,
allows workers to focus on more intelligent processes. Second, by interacting
with users, it augments the resources with structured and to some extent
labeled training data. We showcase the usage of the latter by re-implementing
several components of our system with Transfer Learning (TL) methods.
| 2,020 | Computation and Language |
Generative Adversarial Zero-Shot Relational Learning for Knowledge
Graphs | Large-scale knowledge graphs (KGs) have become increasingly important in
current information systems. To expand the coverage of KGs, previous studies on
knowledge graph completion need to collect adequate training instances for
newly-added relations. In this paper, we consider a novel formulation,
zero-shot learning, to free this cumbersome curation. For newly-added
relations, we attempt to learn their semantic features from their text
descriptions and hence recognize the facts of unseen relations with no examples
being seen. For this purpose, we leverage Generative Adversarial Networks
(GANs) to establish the connection between text and knowledge graph domain: The
generator learns to generate the reasonable relation embeddings merely with
noisy text descriptions. Under this setting, zero-shot learning is naturally
converted to a traditional supervised classification task. Empirically, our
method is model-agnostic and could potentially be applied to any version of KG
embeddings, and it consistently yields performance improvements on the NELL and
Wiki datasets.
| 2,020 | Computation and Language |
A Neural Approach to Discourse Relation Signal Detection | Previous data-driven work investigating the types and distributions of
discourse relation signals, including discourse markers such as 'however' or
phrases such as 'as a result' has focused on the relative frequencies of signal
words within and outside text from each discourse relation. Such approaches do
not allow us to quantify the signaling strength of individual instances of a
signal on a scale (e.g. more or less discourse-relevant instances of 'and'), to
assess the distribution of ambiguity for signals, or to identify words that
hinder discourse relation identification in context ('anti-signals' or
'distractors'). In this paper we present a data-driven approach to signal
detection using a distantly supervised neural network and develop a metric,
Delta s (or 'delta-softmax'), to quantify signaling strength. Ranging between
-1 and 1 and relying on recent advances in contextualized word embeddings, the
metric represents each word's positive or negative contribution to the
identifiability of a relation in specific instances in context. Based on an
English corpus annotated for discourse relations using Rhetorical Structure
Theory and signal type annotations anchored to specific tokens, our analysis
examines the reliability of the metric, the places where it overlaps with and
differs from human judgments, and the implications for identifying features
that neural models may need in order to perform better on automatic discourse
relation classification.
| 2,020 | Computation and Language |
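One plausible reading of the delta-softmax (Delta s) metric described above is the drop in the classifier's softmax probability for the gold relation when the token of interest is masked; positive values mark signals, negative values mark distractors. The paper's exact formulation may differ, so the sketch below is only an assumption-labeled illustration (`model` and `mask_token` are hypothetical).

```python
import torch

def delta_softmax(model, tokens, relation_id, target_index, mask_token="<MASK>"):
    """Change in the gold-relation probability when the target token is masked.

    `model` is assumed to map a list of tokens to relation logits (hypothetical API).
    """
    with torch.no_grad():
        p_full = torch.softmax(model(tokens), dim=-1)[relation_id]
        masked = list(tokens)
        masked[target_index] = mask_token
        p_masked = torch.softmax(model(masked), dim=-1)[relation_id]
    # In [-1, 1]: positive values suggest a signal, negative ones a distractor.
    return (p_full - p_masked).item()
```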
LTP: A New Active Learning Strategy for CRF-Based Named Entity
Recognition | In recent years, deep learning has achieved great success in many natural
language processing tasks including named entity recognition. The shortcoming
is that a large amount of manually-annotated data is usually required. Previous
studies have demonstrated that active learning could elaborately reduce the
cost of data annotation, but there is still plenty of room for improvement. In
real applications we found that existing uncertainty-based active learning
strategies have two shortcomings. Firstly, these strategies prefer to choose
long sequences, explicitly or implicitly, which increases the annotation burden
on annotators. Secondly, some strategies need to invade the model and modify it
to generate additional information for sample selection, which increases the
workload of the developer and the training/prediction time of the model. In
this paper, we first examine traditional active learning strategies in the
specific case of BiLSTM-CRF, which has been widely used for named entity
recognition, on several typical datasets. Then we propose an uncertainty-based
active learning strategy called Lowest Token Probability (LTP), which combines
the input and output of the CRF to select informative instances. LTP is a simple
and powerful strategy that does not favor long sequences and does not need to
invade the model. We test LTP on multiple datasets, and the experiments show
that LTP performs slightly better than traditional strategies with noticeably
fewer annotated tokens on both sentence-level accuracy and entity-level
F1-score. Related code has been released at https://github.com/HIT-ICES/AL-NER
| 2,020 | Computation and Language |
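A simple way to picture the Lowest Token Probability strategy: score each unlabeled sentence by the smallest probability the tagger assigns to its own best tag at any token, then send the lowest-scoring sentences for annotation. The sketch assumes per-token tag probabilities (e.g. CRF marginals) are available; the paper's exact scoring over the CRF input and output may differ.

```python
import numpy as np

def lowest_token_probability(marginals):
    """LTP score for one sequence.

    `marginals` is a (seq_len, n_tags) array of per-token tag probabilities;
    the score is the smallest probability of the model's own best tag.
    """
    return float(np.min(np.max(marginals, axis=1)))

def select_for_annotation(batch_marginals, budget):
    """Pick the `budget` sequences the model is least certain about."""
    scores = [lowest_token_probability(m) for m in batch_marginals]
    return np.argsort(scores)[:budget]
```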
REST: A Thread Embedding Approach for Identifying and Classifying
User-specified Information in Security Forums | How can we extract useful information from a security forum? We focus on
identifying threads of interest to a security professional: (a) alerts of
worrisome events, such as attacks, (b) offering of malicious services and
products, (c) hacking information to perform malicious acts, and (d) useful
security-related experiences. The analysis of security forums is in its infancy
despite several promising recent works. Novel approaches are needed to address
the challenges in this domain: (a) the difficulty in specifying the "topics" of
interest efficiently, and (b) the unstructured and informal nature of the text.
We propose, REST, a systematic methodology to: (a) identify threads of interest
based on a, possibly incomplete, bag of words, and (b) classify them into one
of the four classes above. The key novelty of the work is a multi-step weighted
embedding approach: we project words, threads and classes in appropriate
embedding spaces and establish relevance and similarity there. We evaluate our
method with real data from three security forums with a total of 164k posts and
21K threads. First, REST is robust to the initial keyword selection: it can extend the
user-provided keyword set and thus recover from missing keywords.
Second, REST categorizes the threads into the classes of interest with superior
accuracy compared to five other methods: REST exhibits an accuracy between
63.3-76.9%. We see our approach as a first step for harnessing the wealth of
information of online forums in a user-friendly way, since the user can loosely
specify her keywords of interest.
| 2,020 | Computation and Language |
Multiplex Word Embeddings for Selectional Preference Acquisition | Conventional word embeddings represent words with fixed vectors, which are
usually trained based on co-occurrence patterns among words. In doing so,
however, the power of such representations is limited, since the same word
might function differently under different syntactic relations. To
address this limitation, one solution is to incorporate relational dependencies
of different words into their embeddings. Therefore, in this paper, we propose
a multiplex word embedding model, which can be easily extended according to
various relations among words. As a result, each word has a center embedding to
represent its overall semantics, and several relational embeddings to represent
its relational dependencies. Compared to existing models, our model can
effectively distinguish words with respect to different relations without
introducing unnecessary sparseness. Moreover, to accommodate various relations,
we use a small dimension for the relational embeddings, and our model is able to
preserve their effectiveness. Experiments on selectional preference acquisition and
word similarity demonstrate the effectiveness of the proposed model, and a
further study of scalability also proves that our embeddings only need 1/20 of
the original embedding size to achieve better performance.
| 2,020 | Computation and Language |
Resolving the Scope of Speculation and Negation using Transformer-Based
Architectures | Speculation is a naturally occurring phenomenon in textual data, forming an
integral component of many systems, especially in the biomedical information
retrieval domain. Previous work addressing cue detection and scope resolution
(the two subtasks of speculation detection) have ranged from rule-based systems
to deep learning-based approaches. In this paper, we apply three popular
transformer-based architectures, BERT, XLNet and RoBERTa to this task, on two
publicly available datasets, BioScope Corpus and SFU Review Corpus, reporting
substantial improvements over previously reported results (by at least 0.29 F1
points on cue detection and 4.27 F1 points on scope resolution). We also
experiment with joint training of the model on multiple datasets, which
outperforms the single dataset training approach by a good margin. We observe
that XLNet consistently outperforms BERT and RoBERTa, contrary to results on
other benchmark datasets. To confirm this observation, we apply XLNet and
RoBERTa to negation detection and scope resolution, reporting state-of-the-art
results on negation scope resolution for the BioScope Corpus (increase of 3.16
F1 points on the BioScope Full Papers, 0.06 F1 points on the BioScope
Abstracts) and the SFU Review Corpus (increase of 0.3 F1 points).
| 2,020 | Computation and Language |
Binary and Multitask Classification Model for Dutch Anaphora Resolution:
Die/Dat Prediction | The correct use of Dutch pronouns 'die' and 'dat' is a stumbling block for
both native and non-native speakers of Dutch due to the multiplicity of
syntactic functions and the dependency on the antecedent's gender and number.
Drawing on previous research conducted on neural context-dependent dt-mistake
correction models (Heyman et al. 2018), this study constructs the first neural
network model for Dutch demonstrative and relative pronoun resolution that
specifically focuses on the correction and part-of-speech prediction of these
two pronouns. Two separate datasets are built with sentences obtained from,
respectively, the Dutch Europarl corpus (Koehn 2015) - which contains the
proceedings of the European Parliament from 1996 to the present - and the SoNaR
corpus (Oostdijk et al. 2013) - which contains Dutch texts from a variety of
domains such as newspapers, blogs and legal texts. Firstly, a binary
classification model solely predicts the correct 'die' or 'dat'. The classifier
with a bidirectional long short-term memory architecture achieves 84.56%
accuracy. Secondly, a multitask classification model simultaneously predicts
the correct 'die' or 'dat' and its part-of-speech tag. The model, which combines
a sentence encoder and a context encoder, both with a bidirectional long
short-term memory architecture, achieves 88.63% accuracy for die/dat
prediction and 87.73% accuracy for part-of-speech prediction. More
evenly-balanced data, larger word embeddings, an extra bidirectional long
short-term memory layer, and integrated part-of-speech knowledge positively
affect die/dat prediction performance, while a context encoder architecture
raises part-of-speech prediction performance. This study shows promising
results and can serve as a starting point for future research on machine
learning models for Dutch anaphora resolution.
| 2,020 | Computation and Language |
Open Challenge for Correcting Errors of Speech Recognition Systems | The paper announces a new long-term challenge for improving the performance
of automatic speech recognition systems. The goal of the challenge is to
investigate methods of correcting the recognition results on the basis of
previously made errors by the speech processing system. The dataset prepared
for the task is described and evaluation criteria are presented.
| 2,019 | Computation and Language |
Offensive Language Detection: A Comparative Analysis | Offensive behaviour has become pervasive in the Internet community.
Individuals take advantage of anonymity in the cyber world and indulge in
offensive communications which they may not consider in real life.
Governments, online communities, companies, etc. are investing in the prevention
of offensive content on social media. One of the most effective solutions
for tackling this enigmatic problem is the use of computational techniques to
identify offensive content and take action. The current work focuses on
detecting offensive language in English tweets. The dataset used for the
experiment is obtained from SemEval-2019 Task 6 on Identifying and Categorizing
Offensive Language in Social Media (OffensEval). The dataset contains 14,460
annotated English tweets. The present paper provides a comparative analysis and a
Random Kitchen Sink (RKS) based approach for offensive language detection. We
explore the effectiveness of the Google sentence encoder, fastText, Dynamic Mode
Decomposition (DMD) based features, and the Random Kitchen Sink (RKS) method for
offensive language detection. From the experiments and evaluation we observed
that RKS with fastText achieved competitive results. The evaluation measures used
are accuracy, precision, recall, and F1-score.
| 2,020 | Computation and Language |
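The Random Kitchen Sink method approximates a kernel machine with random Fourier features followed by a linear classifier. A minimal scikit-learn sketch, assuming the tweet representations (fastText or sentence-encoder vectors) are precomputed in `X_train`/`y_train` (hypothetical names):

```python
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Random Fourier features (the "random kitchen sinks") + a linear classifier.
rks_clf = make_pipeline(
    RBFSampler(gamma=1.0, n_components=1024, random_state=0),
    SGDClassifier(loss="hinge", max_iter=1000),
)
# rks_clf.fit(X_train, y_train)
# predictions = rks_clf.predict(X_test)
```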
Simulating Lexical Semantic Change from Sense-Annotated Data | We present a novel procedure to simulate lexical semantic change from
synchronic sense-annotated data, and demonstrate its usefulness for assessing
lexical semantic change detection models. The induced dataset represents a
stronger correspondence to empirically observed lexical semantic change than
previous synthetic datasets, because it exploits the intimate relationship
between synchronic polysemy and diachronic change. We publish the data and
provide the first large-scale evaluation gold standard for LSC detection
models.
| 2,020 | Computation and Language |
A Scalable Chatbot Platform Leveraging Online Community Posts: A
Proof-of-Concept Study | The development of natural language processing algorithms and the explosive
growth of conversational data are encouraging research on human-computer
conversation. Still, getting high-quality conversational data on a large scale is
difficult and expensive. In this paper, we verify the feasibility of
constructing a data-driven chatbot with processed online community posts by
using them as pseudo-conversational data. We argue that chatbots for various
purposes can be built extensively through the pipeline exploiting the common
structure of community posts. Our experiment demonstrates that chatbots created
along the pipeline can yield the proper responses.
| 2,020 | Computation and Language |
Learning to Multi-Task Learn for Better Neural Machine Translation | Scarcity of parallel sentence pairs is a major challenge for training high
quality neural machine translation (NMT) models in bilingually low-resource
scenarios, as NMT is data-hungry. Multi-task learning is an elegant approach to
inject linguistic-related inductive biases into NMT, using auxiliary syntactic
and semantic tasks, to improve generalisation. The challenge, however, is to
devise effective training schedules, prescribing when to make use of the
auxiliary tasks during the training process to fill the knowledge gaps of the
main translation task, a setting referred to as biased-MTL. Current approaches
for the training schedule are based on hand-engineered heuristics, whose
effectiveness varies across MTL settings. We propose a novel framework for
learning the training schedule, i.e., learning to multi-task learn, for the MTL
setting of interest. We formulate the training schedule as a Markov decision
process which paves the way to employ policy learning methods to learn the
scheduling policy. We effectively and efficiently learn the training schedule
policy within the imitation learning framework using an oracle policy algorithm
that dynamically sets the importance weights of auxiliary tasks based on their
contributions to the generalisability of the main NMT task. Experiments on
low-resource NMT settings show the resulting automatically learned training
schedulers are competitive with the best heuristics, and lead to up to +1.1
BLEU score improvements.
| 2,020 | Computation and Language |
Machine Learning Approaches for Amharic Parts-of-speech Tagging | Part-of-speech (POS) tagging is considered one of the basic but necessary
tools which are required for many Natural Language Processing (NLP)
applications such as word sense disambiguation, information retrieval,
information processing, parsing, question answering, and machine translation.
Performance of the current POS taggers in Amharic is not as good as that of the
contemporary POS taggers available for English and other European languages.
The aim of this work is to improve POS tagging performance for the Amharic
language, which had never exceeded 91%. The use of morphological knowledge, an
extension of the existing annotated data, feature extraction, parameter tuning
via grid search, and different tagging algorithms have been examined, yielding a
significant performance difference from previous works. We have
used three different datasets for POS experiments.
| 2,020 | Computation and Language |
Co-evolution of language and agents in referential games | Referential games offer a grounded learning environment for neural agents
which accounts for the fact that language is functionally used to communicate.
However, they do not take into account a second constraint considered to be
fundamental for the shape of human language: that it must be learnable by new
language learners.
Cogswell et al. (2019) introduced cultural transmission within referential
games through a changing population of agents to constrain the emerging
language to be learnable. However, the resulting languages remain inherently
biased by the agents' underlying capabilities.
In this work, we introduce the Language Transmission Engine to model both
cultural and architectural evolution in a population of agents. As our core
contribution, we empirically show that the best results are obtained when the
learning biases of the language learners are also taken into account, letting
language and agents co-evolve. When we allow the agent population to evolve through
architectural evolution, we achieve across the board improvements on all
considered metrics and surpass the gains made with cultural transmission. These
results stress the importance of studying the underlying agent architecture and
pave the way to investigate the co-evolution of language and agent in language
emergence studies.
| 2,021 | Computation and Language |
Towards Minimal Supervision BERT-based Grammar Error Correction | Current grammatical error correction (GEC) models typically consider the task
as sequence generation, which requires large amounts of annotated data and
limits their applicability in data-limited settings. We try to incorporate
contextual information from a pre-trained language model to better leverage
annotations and benefit multilingual scenarios. Results show the strong potential of
Bidirectional Encoder Representations from Transformers (BERT) in grammatical
error correction task.
| 2,020 | Computation and Language |
Does syntax need to grow on trees? Sources of hierarchical inductive
bias in sequence-to-sequence networks | Learners that are exposed to the same training data might generalize
differently due to differing inductive biases. In neural network models,
inductive biases could in theory arise from any aspect of the model
architecture. We investigate which architectural factors affect the
generalization behavior of neural sequence-to-sequence models trained on two
syntactic tasks, English question formation and English tense reinflection. For
both tasks, the training set is consistent with a generalization based on
hierarchical structure and a generalization based on linear order. All
architectural factors that we investigated qualitatively affected how models
generalized, including factors with no clear connection to hierarchical
structure. For example, LSTMs and GRUs displayed qualitatively different
inductive biases. However, the only factor that consistently contributed a
hierarchical bias across tasks was the use of a tree-structured model rather
than a model with sequential recurrence, suggesting that human-like syntactic
generalization requires architectural syntactic structure.
| 2,020 | Computation and Language |
PatentTransformer-2: Controlling Patent Text Generation by Structural
Metadata | PatentTransformer is our codename for patent text generation based on
Transformer-based models. Our goal is "Augmented Inventing." In this second
version, we leverage more of the structural metadata in patents. The structural
metadata includes the patent title, abstract, and dependent claims, in addition
to the independent claim used previously. The metadata controls what kind of patent
text the model generates. Also, we leverage the relations between metadata to build
a text-to-text generation flow, for example, from a few words to a title, the
title to an abstract, the abstract to an independent claim, and the independent
claim to multiple dependent claims. The text flow can go backward because the
relation is trained bidirectionally. We release our GPT-2 models trained from
scratch and our code for inference so that readers can verify and generate
patent text on their own. As for generation quality, we measure it by both
ROUGE and Google Universal Sentence Encoder.
| 2,020 | Computation and Language |
Learning Cross-Context Entity Representations from Text | Language modeling tasks, in which words, or word-pieces, are predicted on the
basis of a local context, have been very effective for learning word embeddings
and context dependent representations of phrases. Motivated by the observation
that efforts to code world knowledge into machine readable knowledge bases or
human readable encyclopedias tend to be entity-centric, we investigate the use
of a fill-in-the-blank task to learn context independent representations of
entities from the text contexts in which those entities were mentioned. We show
that large scale training of neural models allows us to learn high quality
entity representations, and we demonstrate successful results on four domains:
(1) existing entity-level typing benchmarks, including a 64% error reduction
over previous work on TypeNet (Murty et al., 2018); (2) a novel few-shot
category reconstruction task; (3) existing entity linking benchmarks, where we
match the state-of-the-art on CoNLL-Aida without linking-specific features and
obtain a score of 89.8% on TAC-KBP 2010 without using any alias table, external
knowledge base or in domain training data and (4) answering trivia questions,
which uniquely identify entities. Our global entity representations encode
fine-grained type categories, such as Scottish footballers, and can answer
trivia questions such as: Who was the last inmate of Spandau jail in Berlin?
| 2,020 | Computation and Language |
Revisiting Challenges in Data-to-Text Generation with Fact Grounding | Data-to-text generation models face challenges in ensuring data fidelity by
referring to the correct input source. To inspire studies in this area, Wiseman
et al. (2017) introduced the RotoWire corpus on generating NBA game summaries
from the box- and line-score tables. However, limited attempts have been made
in this direction and the challenges remain. We observe a prominent bottleneck
in the corpus where only about 60% of the summary contents can be grounded to
the boxscore records. Such information deficiency tends to misguide a
conditioned language model to produce unconditioned random facts and thus leads
to factual hallucinations. In this work, we restore the information balance and
revamp this task to focus on fact-grounded data-to-text generation. We
introduce a purified and larger-scale dataset, RotoWire-FG (Fact-Grounding),
with 50% more data from the years 2017-19 and enriched input tables, hoping to
attract more research focus in this direction. Moreover, we achieve improved
data fidelity over the state-of-the-art models by integrating a new form of
table reconstruction as an auxiliary task to boost the generation quality.
| 2,020 | Computation and Language |
Rethinking Generalization of Neural Models: A Named Entity Recognition
Case Study | While neural network-based models have achieved impressive performance on a
large body of NLP tasks, the generalization behavior of different models
remains poorly understood: Does this excellent performance imply a perfect
generalization model, or are there still some limitations? In this paper, we
take the NER task as a testbed to analyze the generalization behavior of
existing models from different perspectives and characterize the differences of
their generalization abilities through the lens of our proposed measures, which
guides us to better design models and training methods. Experiments with
in-depth analyses diagnose the bottleneck of existing neural NER models in
terms of breakdown performance analysis, annotation errors, dataset bias, and
category relationships, which suggest directions for improvement. We have
released the datasets: (ReCoNLL, PLONER) for the future research at our project
page: http://pfliu.com/InterpretNER/. As a by-product of this paper, we have
open-sourced a project that involves a comprehensive summary of recent NER
papers and classifies them into different research topics:
https://github.com/pfliu-nlp/Named-Entity-Recognition-NER-Papers.
| 2,020 | Computation and Language |
Stochastic Natural Language Generation Using Dependency Information | This article presents a stochastic corpus-based model for generating natural
language text. Our model first encodes dependency relations from training data
through a feature set, then concatenates these features to produce a new
dependency tree for a given meaning representation, and finally generates a
natural language utterance from the produced dependency tree. We test our model
on nine domains in tabular, dialogue act, and RDF formats. Our model
outperforms the corpus-based state-of-the-art methods trained on tabular
datasets and also achieves comparable results with neural network-based
approaches trained on dialogue act, E2E and WebNLG datasets for BLEU and ERR
evaluation metrics. Also, by reporting Human Evaluation results, we show that
our model produces high-quality utterances in aspects of informativeness and
naturalness as well as quality.
| 2,020 | Computation and Language |
ProphetNet: Predicting Future N-gram for Sequence-to-Sequence
Pre-training | This paper presents a new sequence-to-sequence pre-training model called
ProphetNet, which introduces a novel self-supervised objective named future
n-gram prediction and the proposed n-stream self-attention mechanism. Instead
of optimizing one-step-ahead prediction in the traditional sequence-to-sequence
model, the ProphetNet is optimized by n-step ahead prediction that predicts the
next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for
the future tokens and prevent overfitting on strong local correlations. We
pre-train ProphetNet using a base scale dataset (16GB) and a large-scale
dataset (160GB), respectively. Then we conduct experiments on CNN/DailyMail,
Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question
generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the
same scale pre-training corpus.
| 2,020 | Computation and Language |
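The future n-gram objective can be pictured as n prediction heads, one per look-ahead offset, whose cross-entropy losses are averaged. The sketch below shows only this loss side and ignores the n-stream self-attention that makes the prediction efficient in ProphetNet, so it is an assumed simplification rather than the paper's implementation; all names are ours.

```python
import torch.nn.functional as F

def future_ngram_loss(hidden, heads, labels, alphas=None):
    """Simplified future n-gram objective.

    hidden: (batch, seq_len, d_model) causal decoder states,
    heads:  list of n projection layers to the vocabulary (one per offset),
    labels: (batch, seq_len) token ids of the target sequence.
    """
    n = len(heads)
    alphas = alphas or [1.0 / n] * n
    loss = 0.0
    for i, head in enumerate(heads, start=1):
        logits = head(hidden[:, :-i])      # state at position t predicts token t+i
        target = labels[:, i:]
        loss = loss + alphas[i - 1] * F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), target.reshape(-1))
    return loss
```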
Joint Reasoning for Multi-Faceted Commonsense Knowledge | Commonsense knowledge (CSK) supports a variety of AI applications, from
visual understanding to chatbots. Prior works on acquiring CSK, such as
ConceptNet, have compiled statements that associate concepts, like everyday
objects or activities, with properties that hold for most or some instances of
the concept. Each concept is treated in isolation from other concepts, and the
only quantitative measure (or ranking) of properties is a confidence score that
the statement is valid. This paper aims to overcome these limitations by
introducing a multi-faceted model of CSK statements and methods for joint
reasoning over sets of inter-related statements. Our model captures four
different dimensions of CSK statements: plausibility, typicality, remarkability
and salience, with scoring and ranking along each dimension. For example,
hyenas drinking water is typical but not salient, whereas hyenas eating
carcasses is salient. For reasoning and ranking, we develop a method with soft
constraints, to couple the inference over concepts that are related in a
taxonomic hierarchy. The reasoning is cast into an integer linear programming
(ILP), and we leverage the theory of reduction costs of a relaxed LP to compute
informative rankings. This methodology is applied to several large CSK
collections. Our evaluation shows that we can consolidate these inputs into
much cleaner and more expressive knowledge. Results are available at
https://dice.mpi-inf.mpg.de.
| 2,020 | Computation and Language |
Mining customer product reviews for product development: A summarization
process | This research set out to identify and structure from online reviews the words
and expressions related to customers' likes and dislikes to guide product
development. Previous methods were mainly focused on product features. However,
reviewers express their preference not only on product features. In this paper,
based on an extensive literature review in design science, the authors propose
a summarization model containing multiple aspects of user preference, such as
product affordances, emotions, and usage conditions. Meanwhile, the linguistic
patterns describing these aspects of preference are discovered and drafted as
annotation guidelines. A case study demonstrates that with the proposed model
and the annotation guidelines, human annotators can structure the online
reviews with high inter-annotator agreement. As high-agreement human annotation
results are essential for automating the online review summarization process
with natural language processing, this study provides materials for the
future study of automation.
| 2,019 | Computation and Language |
AdaBERT: Task-Adaptive BERT Compression with Differentiable Neural
Architecture Search | Large pre-trained language models such as BERT have shown their effectiveness
in various natural language processing tasks. However, the huge parameter size
makes them difficult to be deployed in real-time applications that require
quick inference with limited resources. Existing methods compress BERT into
small models while such compression is task-independent, i.e., the same
compressed BERT for all different downstream tasks. Motivated by the necessity
and benefits of task-oriented BERT compression, we propose a novel compression
method, AdaBERT, that leverages differentiable Neural Architecture Search to
automatically compress BERT into task-adaptive small models for specific tasks.
We incorporate a task-oriented knowledge distillation loss to provide search
hints and an efficiency-aware loss as search constraints, which enables a good
trade-off between efficiency and effectiveness for task-adaptive BERT
compression. We evaluate AdaBERT on several NLP tasks, and the results
demonstrate that those task-adaptive compressed models are 12.7x to 29.3x
faster than BERT in inference time and 11.5x to 17.0x smaller in terms of
parameter size, while comparable performance is maintained.
| 2,021 | Computation and Language |
CLUENER2020: Fine-grained Named Entity Recognition Dataset and Benchmark
for Chinese | In this paper, we introduce the NER dataset from CLUE organization
(CLUENER2020), a well-defined fine-grained dataset for named entity recognition
in Chinese. CLUENER2020 contains 10 categories. Apart from common labels like
person, organization, and location, it contains more diverse categories. It is
more challenging than other current Chinese NER datasets and could better
reflect real-world applications. For comparison, we implement several
state-of-the-art baselines as sequence labeling tasks and report human
performance, as well as its analysis. To facilitate future work on fine-grained
NER for Chinese, we release our dataset, baselines, and leader-board.
| 2,020 | Computation and Language |
Multi-Source Domain Adaptation for Text Classification via
DistanceNet-Bandits | Domain adaptation performance of a learning algorithm on a target domain is a
function of its source domain error and a divergence measure between the data
distribution of these two domains. We present a study of various distance-based
measures in the context of NLP tasks, that characterize the dissimilarity
between domains based on sample estimates. We first conduct analysis
experiments to show which of these distance measures can best differentiate
samples from same versus different domains, and are correlated with empirical
results. Next, we develop a DistanceNet model which uses these distance
measures, or a mixture of these distance measures, as an additional loss
function to be minimized jointly with the task's loss function, so as to
achieve better unsupervised domain adaptation. Finally, we extend this model to
a novel DistanceNet-Bandit model, which employs a multi-armed bandit controller
to dynamically switch between multiple source domains and allow the model to
learn an optimal trajectory and mixture of domains for transfer to the
low-resource target domain. We conduct experiments on popular sentiment
analysis datasets with several diverse domains and show that our DistanceNet
model, as well as its dynamic bandit variant, can outperform competitive
baselines in the context of unsupervised domain adaptation.
| 2,020 | Computation and Language |
On the Replicability of Combining Word Embeddings and Retrieval Models | We replicate recent experiments attempting to demonstrate an attractive
hypothesis about the use of the Fisher kernel framework and mixture models for
aggregating word embeddings towards document representations and the use of
these representations in document classification, clustering, and retrieval.
Specifically, the hypothesis was that the use of a mixture model of von
Mises-Fisher (VMF) distributions instead of Gaussian distributions would be
beneficial because of the focus on cosine distances of both VMF and the vector
space model traditionally used in information retrieval. Previous experiments
had validated this hypothesis. Our replication was not able to validate it,
despite a large parameter scan space.
| 2,020 | Computation and Language |
Bi-Decoder Augmented Network for Neural Machine Translation | Neural Machine Translation (NMT) has become a popular technology in recent
years, and the encoder-decoder framework is the mainstream among all the
methods. The quality of the semantic representations produced by the encoder is
crucial and can significantly affect the performance of the
model. However, existing unidirectional source-to-target architectures may
hardly produce a language-independent representation of the text because they
rely heavily on the specific relations of the given language pairs. To
alleviate this problem, in this paper, we propose a novel Bi-Decoder Augmented
Network (BiDAN) for the neural machine translation task. Besides the original
decoder which generates the target language sequence, we add an auxiliary
decoder to generate back the source language sequence at the training time.
Since each decoder transforms the representations of the input text into its
corresponding language, jointly training with two target ends gives the
shared encoder the potential to produce a language-independent semantic
space. We conduct extensive experiments on several NMT benchmark datasets and
the results demonstrate the effectiveness of our proposed approach.
| 2,020 | Computation and Language |
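A minimal sketch of the joint training objective described in the BiDAN abstract above: the main decoder's translation loss plus an auxiliary loss for reconstructing the source sequence from the shared encoder. The function name, the `aux_weight` parameter, and the tensor layout are assumptions, not the authors' code.

```python
# Sketch only: logits are assumed to have shape (batch, length, vocab) and ids (batch, length).
import torch
import torch.nn.functional as F

def bidirectional_decoder_loss(tgt_logits, tgt_ids, src_logits, src_ids, aux_weight=0.5, pad_id=0):
    """Main target-side translation loss plus an auxiliary source-reconstruction loss."""
    main = F.cross_entropy(tgt_logits.transpose(1, 2), tgt_ids, ignore_index=pad_id)
    aux = F.cross_entropy(src_logits.transpose(1, 2), src_ids, ignore_index=pad_id)
    return main + aux_weight * aux
```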
Balancing the composition of word embeddings across heterogenous data
sets | Word embeddings capture semantic relationships based on contextual
information and are the basis for a wide variety of natural language processing
applications. Notably, these relationships are learned solely from the data,
so the data composition affects the semantics of the embeddings, which
arguably can lead to biased word vectors. Given qualitatively different data
subsets, we aim to align the influence of single subsets on the resulting word
vectors, while retaining their quality. In this regard we propose a criterion to
measure the shift towards a single data subset and develop approaches to meet
both objectives. We find that a weighted average of the two subset embeddings
balances the influence of those subsets, although word similarity performance
decreases. We further propose a promising optimization approach to balance
influences and quality of word embeddings.
| 2,020 | Computation and Language |
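One concrete reading of the "weighted average of the two subset embeddings" mentioned above is sketched below. It assumes the two embedding matrices have rows aligned to the same vocabulary and live in comparable spaces (e.g., trained jointly or aligned beforehand); the function name and `alpha` are illustrative.

```python
# Minimal sketch, assuming row-aligned vocabularies and comparable embedding spaces.
import numpy as np

def blend_embeddings(emb_a: np.ndarray, emb_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Weighted average of two embedding matrices; alpha controls subset A's influence."""
    return alpha * emb_a + (1.0 - alpha) * emb_b
```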
Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning | Word embeddings, i.e., low-dimensional vector representations such as GloVe
and SGNS, encode word "meaning" in the sense that distances between words'
vectors correspond to their semantic proximity. This enables transfer learning
of semantics for a variety of natural language processing tasks.
Word embeddings are typically trained on large public corpora such as
Wikipedia or Twitter. We demonstrate that an attacker who can modify the corpus
on which the embedding is trained can control the "meaning" of new and existing
words by changing their locations in the embedding space. We develop an
explicit expression over corpus features that serves as a proxy for distance
between words and establish a causative relationship between its values and
embedding distances. We then show how to use this relationship for two
adversarial objectives: (1) make a word a top-ranked neighbor of another word,
and (2) move a word from one semantic cluster to another.
An attack on the embedding can affect diverse downstream tasks, demonstrating
for the first time the power of data poisoning in transfer learning scenarios.
We use this attack to manipulate query expansion in information retrieval
systems such as resume search, make certain names more or less visible to named
entity recognition models, and cause new words to be translated to a particular
target word regardless of the language. Finally, we show how the attacker can
generate linguistically likely corpus modifications, thus fooling defenses that
attempt to filter implausible sentences from the corpus using a language model.
| 2,020 | Computation and Language |
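The corpus-poisoning attack itself is not sketched here, but a simple check of adversarial objective (1), whether one word has become a top-ranked neighbor of another, could look like the following. The function name and the dictionary-of-vectors input format are assumptions for illustration.

```python
# Minimal evaluation sketch, not the attack: rank of `target` among cosine neighbors of `query`.
import numpy as np

def neighbor_rank(emb: dict, query: str, target: str) -> int:
    """Return the 1-based rank of `target` among the nearest neighbors of `query`."""
    q = emb[query] / np.linalg.norm(emb[query])
    sims = {w: float(q @ (v / np.linalg.norm(v))) for w, v in emb.items() if w != query}
    ranked = sorted(sims, key=sims.get, reverse=True)
    return ranked.index(target) + 1
```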
Robust Speaker Recognition Using Speech Enhancement And Attention Model | In this paper, a novel architecture for speaker recognition is proposed by
cascading speech enhancement and speaker processing. Its aim is to improve
speaker recognition performance when speech signals are corrupted by noise.
Instead of individually processing speech enhancement and speaker recognition,
the two modules are integrated into one framework by a joint optimisation using
deep neural networks. Furthermore, to increase robustness against noise, a
multi-stage attention mechanism is employed to highlight the speaker-related
features learned from context information in the time and frequency domains. To
evaluate the speaker identification and verification performance of the proposed
approach, we test it on VoxCeleb1, one of the most widely used benchmark
datasets. Moreover, the robustness of our approach is also tested on
VoxCeleb1 data corrupted by three types of interference (general noise, music,
and babble) at different signal-to-noise ratio (SNR) levels. The results show
that the proposed approach using speech enhancement and multi-stage attention
models outperforms two strong baselines that do not use them in most of the
acoustic conditions in our experiments.
| 2,020 | Computation and Language |
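A rough sketch of the joint optimisation described above, combining an enhancement objective with a speaker-classification objective in one loss. The specific losses (MSE on spectrograms, cross-entropy on speaker logits), the weighting `lam`, and the function name are assumptions; the paper's exact objectives may differ.

```python
# Minimal sketch under the stated assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def joint_enhancement_speaker_loss(enhanced_spec, clean_spec, speaker_logits, speaker_labels, lam=1.0):
    """Speaker classification loss plus a weighted enhancement reconstruction loss."""
    enh_loss = F.mse_loss(enhanced_spec, clean_spec)
    spk_loss = F.cross_entropy(speaker_logits, speaker_labels)
    return spk_loss + lam * enh_loss
```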
Non-Autoregressive Machine Translation with Disentangled Context
Transformer | State-of-the-art neural machine translation models generate a translation
from left to right and every step is conditioned on the previously generated
tokens. The sequential nature of this generation process causes fundamental
latency in inference since we cannot generate multiple tokens in each sentence
in parallel. We propose an attention-masking based model, called Disentangled
Context (DisCo) transformer, that simultaneously generates all tokens given
different contexts. The DisCo transformer is trained to predict every output
token given an arbitrary subset of the other reference tokens. We also develop
the parallel easy-first inference algorithm, which iteratively refines every
token in parallel and reduces the number of required iterations. Our extensive
experiments on 7 translation directions with varying data sizes demonstrate
that our model achieves competitive, if not better, performance compared to the
state of the art in non-autoregressive machine translation while significantly
reducing decoding time on average. Our code is available at
https://github.com/facebookresearch/DisCo.
| 2,020 | Computation and Language |
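To illustrate the "predict every token given an arbitrary subset of the other reference tokens" idea from the DisCo abstract above, the sketch below samples one such random visibility mask per position. The sampling distribution here (a per-row keep probability) is an assumption; the paper's actual masking scheme may differ.

```python
# Minimal sketch: True means "position i may condition on position j".
import torch

def sample_disentangled_context_mask(seq_len: int) -> torch.Tensor:
    """Sample, for each position, a random subset of the other positions it may observe."""
    keep_prob = torch.rand(seq_len, 1)                       # one keep probability per row
    mask = torch.rand(seq_len, seq_len) < keep_prob          # random subset per position
    mask.fill_diagonal_(False)                               # a token never observes itself
    return mask
```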
A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation | Story generation, namely generating a reasonable story from a leading
context, is an important but challenging task. In spite of the success in
modeling fluency and local coherence, existing neural language generation
models (e.g., GPT-2) still suffer from repetition, logic conflicts, and lack of
long-range coherence in generated stories. We conjecture that this is because
of the difficulty of associating relevant commonsense knowledge, understanding
the causal relationships, and planning entities and events with proper temporal
order. In this paper, we devise a knowledge-enhanced pretraining model for
commonsense story generation. We propose to utilize commonsense knowledge from
external knowledge bases to generate reasonable stories. To further capture the
causal and temporal dependencies between the sentences in a reasonable story,
we employ multi-task learning, which adds a discriminative objective to
distinguish true from fake stories during fine-tuning. Automatic and manual
evaluation shows that our model can generate more reasonable stories than
state-of-the-art baselines, particularly in terms of logic and global
coherence.
| 2,020 | Computation and Language |
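A minimal sketch of the multi-task objective described above: a language-modeling loss plus a discriminative true-versus-fake story loss. Tensor shapes, the weighting `lam`, and the function name are assumptions made for illustration.

```python
# Sketch only: lm_logits (batch, length, vocab), lm_labels (batch, length),
# cls_logits (batch,), is_true_story (batch,) with 1 = true story.
import torch
import torch.nn.functional as F

def multitask_story_loss(lm_logits, lm_labels, cls_logits, is_true_story, lam=1.0, pad_id=-100):
    """Language-modeling loss plus a binary true-vs-fake discrimination loss."""
    lm_loss = F.cross_entropy(lm_logits.transpose(1, 2), lm_labels, ignore_index=pad_id)
    cls_loss = F.binary_cross_entropy_with_logits(cls_logits, is_true_story.float())
    return lm_loss + lam * cls_loss
```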
FGN: Fusion Glyph Network for Chinese Named Entity Recognition | Chinese NER is a challenging task. As pictographs, Chinese characters contain
latent glyph information, which is often overlooked. In this paper, we propose
the FGN, Fusion Glyph Network, for Chinese NER. In addition to adding glyph
information, this method can also capture extra interactive information through
its fusion mechanism. The major innovations of FGN include: (1) a novel CNN
structure called CGS-CNN is proposed to capture both glyph information and
interactive information between glyphs from neighboring characters. (2) we
provide a method with a sliding window and Slice-Attention to fuse the BERT
representation and glyph representation for a character, which may capture
potential interactive knowledge between context and glyph. Experiments are
conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger
achieves new state-of-the-art performance for Chinese NER. Furthermore,
additional experiments are conducted to investigate the influence of various components
and settings in FGN.
| 2,020 | Computation and Language |
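The exact Slice-Attention mechanism is not specified in the abstract above, so the following is only a speculative sketch of slice-wise fusion of a BERT vector and a glyph vector for one character. It assumes both vectors have the same dimensionality, divisible by `n_slices`; all names and the dot-product weighting are assumptions.

```python
# Speculative sketch, not the FGN implementation.
import torch
import torch.nn.functional as F

def slice_fusion(bert_vec: torch.Tensor, glyph_vec: torch.Tensor, n_slices: int = 4) -> torch.Tensor:
    """Cut both character representations into slices, weight each slice by a softmax
    over slice-wise dot products, and fuse the weighted slices by concatenation."""
    b = bert_vec.view(n_slices, -1)                      # (n_slices, d / n_slices)
    g = glyph_vec.view(n_slices, -1)                     # assumes equal dimensionality
    attn = F.softmax((b * g).sum(dim=-1), dim=0)         # one weight per slice
    fused = torch.cat([attn.unsqueeze(-1) * b, attn.unsqueeze(-1) * g], dim=-1)
    return fused.flatten()
```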
Improving Spoken Language Understanding By Exploiting ASR N-best
Hypotheses | In a modern spoken language understanding (SLU) system, the natural language
understanding (NLU) module takes interpretations of a speech from the automatic
speech recognition (ASR) module as the input. The NLU module usually uses the
first-best interpretation of a given utterance in downstream tasks such as domain
and intent classification. However, the ASR module might misrecognize some
utterances, and the first-best interpretation can be erroneous and noisy.
Relying solely on the first-best interpretation can make the performance of
downstream tasks suboptimal. To address this issue, we introduce a series of
simple yet efficient models for improving the understanding of the semantics of
the input utterances by collectively exploiting the n-best speech interpretations
from the ASR module.
| 2,020 | Computation and Language |
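One simple way to expose all n-best interpretations to a downstream domain or intent classifier, in the spirit of the abstract above, is to join them into a single input string. The separator token and function name below are assumptions; the paper explores several model variants beyond this.

```python
# Minimal sketch: concatenate the top-n ASR hypotheses for an NLU classifier.
def combine_nbest(hypotheses, sep=" [SEP] ", n=5):
    """Join the top-n ASR hypotheses into one NLU input string."""
    return sep.join(hypotheses[:n])

# Example:
# combine_nbest(["play jazz", "play chess", "lay jazz"])
# -> "play jazz [SEP] play chess [SEP] lay jazz"
```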
Detecting New Word Meanings: A Comparison of Word Embedding Models in
Spanish | Semantic neologisms (SN) are defined as words that acquire a new word meaning
while maintaining their form. Given the nature of this kind of neologism, the
task of identifying these new word meanings is currently performed manually by
specialists at observatories of neology. To detect SN in a semi-automatic way,
we developed a system that implements a combination of the following
strategies: topic modeling, keyword extraction, and word sense disambiguation.
The role of topic modeling is to detect the themes that are treated in the
input text. Themes within a text give clues about the particular meaning of the
words that are used; for example, "viral" has one meaning in the context of
computer science (CS) and another when talking about health. To extract
keywords, we used TextRank with POS tag filtering. With this method, we can
obtain relevant words that are already part of the Spanish lexicon. We use a
deep learning model to determine if a given keyword could have a new meaning.
Embeddings that are different from all the known meanings (or topics) indicate
that a word might be a valid SN candidate. In this study, we examine the
following word embedding models: Word2Vec, Sense2Vec, and FastText. The models
were trained with equivalent parameters using the Spanish Wikipedia as the corpus.
Then we used a list of words and their concordances (obtained from our database
of neologisms) to show the different embeddings that each model yields.
Finally, we present a comparison of these outcomes with the concordances of
each word to show how we can determine if a word could be a valid candidate for
SN.
| 2,019 | Computation and Language |
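Following the idea in the abstract above that embeddings differing from all known meanings indicate a candidate semantic neologism, a minimal check could compare a word's embedding in the new contexts against its known sense or topic vectors. The threshold value and all names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: flag a candidate SN when no known sense vector is sufficiently similar.
import numpy as np

def is_sn_candidate(context_vec, known_sense_vecs, threshold=0.4):
    """Return True if the contextual embedding is dissimilar from every known sense vector."""
    context_vec = context_vec / np.linalg.norm(context_vec)
    sims = [float(context_vec @ (v / np.linalg.norm(v))) for v in known_sense_vecs]
    return max(sims) < threshold
```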