Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64, 1.99k-2.02k) | Categories (string, 1 class)
---|---|---|---|
On the Unintended Social Bias of Training Language Generation Models
with Data from Local Media | There are concerns that neural language models may preserve some of the
stereotypes of the underlying societies that generate the large corpora needed
to train these models. For example, gender bias is a significant problem when
generating text, and its unintended memorization could impact the user
experience of many applications (e.g., the smart-compose feature in Gmail).
In this paper, we introduce a novel architecture that decouples the
representation learning of a neural model from its memory management role. This
architecture allows us to update a memory module with an equal ratio across
gender types, addressing biased correlations directly in the latent space. We
experimentally show that our approach can mitigate the gender bias
amplification in the automatic generation of news articles while providing
similar perplexity values when extending the Sequence2Sequence architecture.
| 2,019 | Computation and Language |
BERT Goes to Law School: Quantifying the Competitive Advantage of Access
to Large Legal Corpora in Contract Understanding | Fine-tuning language models, such as BERT, on domain-specific corpora has
proven to be valuable in domains like scientific papers and biomedical text. In
this paper, we show that fine-tuning BERT on legal documents similarly provides
valuable improvements on NLP tasks in the legal domain. Demonstrating this
outcome is significant for analyzing commercial agreements, because obtaining
large legal corpora is challenging due to their confidential nature. As such,
we show that having access to large legal corpora is a competitive advantage
for commercial applications and for academic research on analyzing contracts.
| 2,019 | Computation and Language |
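As an illustrative aside, the sketch below shows one common way to carry out the kind of domain-adaptive fine-tuning the abstract describes, continuing BERT's masked-LM training on an in-domain text file with the Hugging Face libraries; the corpus path and hyperparameters are placeholders, not the paper's setup.

```python
# Minimal sketch: continue BERT's masked-LM pre-training on an in-domain
# (e.g., legal) text file before fine-tuning on downstream tasks.
# The file path and hyperparameters are illustrative placeholders.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Assumes a plain-text corpus with one passage per line.
dataset = load_dataset("text", data_files={"train": "legal_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-legal", num_train_epochs=1,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=dataset,
        data_collator=collator).train()
```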
Select, Answer and Explain: Interpretable Multi-hop Reading
Comprehension over Multiple Documents | Interpretable multi-hop reading comprehension (RC) over multiple documents is
a challenging problem because it demands reasoning over multiple information
sources and explaining the answer prediction by providing supporting evidence.
In this paper, we propose an effective and interpretable Select, Answer and
Explain (SAE) system to solve the multi-document RC problem. Our system first
filters out answer-unrelated documents and thus reduces the amount of
distracting information. This is achieved by a document classifier trained with
a novel pairwise learning-to-rank loss. The selected answer-related documents
are then input to a model to jointly predict the answer and supporting
sentences. The model is optimized with a multi-task learning objective on both
token level for answer prediction and sentence level for supporting sentences
prediction, together with an attention-based interaction between these two
tasks. Evaluated on HotpotQA, a challenging multi-hop RC data set, the proposed
SAE system achieves top competitive performance in the distractor setting compared
to other existing systems on the leaderboard.
| 2,020 | Computation and Language |
What Gets Echoed? Understanding the "Pointers" in Explanations of
Persuasive Arguments | Explanations are central to everyday life, and are a topic of growing
interest in the AI community. To investigate the process of providing natural
language explanations, we leverage the dynamics of the /r/ChangeMyView
subreddit to build a dataset with 36K naturally occurring explanations of why
an argument is persuasive. We propose a novel word-level prediction task to
investigate how explanations selectively reuse, or echo, information from what
is being explained (henceforth, explanandum). We develop features to capture
the properties of a word in the explanandum, and show that our proposed
features not only have relatively strong predictive power on the echoing of a
word in an explanation, but also enhance neural methods of generating
explanations. In particular, while the non-contextual properties of a word
itself are more valuable for stopwords, the interaction between the constituent
parts of an explanandum is crucial in predicting the echoing of content words.
We also find intriguing patterns in how words are echoed. For example, although
nouns are generally less likely to be echoed, subjects and objects can,
depending on their source, be more likely to be echoed in the explanations.
| 2,019 | Computation and Language |
DialoGPT: Large-Scale Generative Pre-training for Conversational
Response Generation | We present a large, tunable neural conversational response generation model,
DialoGPT (dialogue generative pre-trained transformer). Trained on 147M
conversation-like exchanges extracted from Reddit comment chains over a period
spanning from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch
transformer to attain performance close to human level in both automatic
and human evaluation in single-turn dialogue settings. We show that
conversational systems that leverage DialoGPT generate more relevant,
contentful and context-consistent responses than strong baseline systems. The
pre-trained model and training pipeline are publicly released to facilitate
research into neural response generation and the development of more
intelligent open-domain dialogue systems.
| 2,020 | Computation and Language |
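For reference, a minimal single-turn usage sketch of the publicly released DialoGPT checkpoint via the Hugging Face transformers library; the decoding settings are illustrative rather than the paper's exact configuration.

```python
# Minimal single-turn response generation with the released DialoGPT weights.
# Decoding settings below are illustrative, not the paper's exact configuration.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

prompt = "Does money buy happiness?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

output_ids = model.generate(
    input_ids,
    max_length=128,
    do_sample=True,
    top_p=0.9,                              # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,
)
# Keep only the newly generated tokens after the prompt.
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```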
Uncover Sexual Harassment Patterns from Personal Stories by Joint Key
Element Extraction and Categorization | The number of personal stories about sexual harassment shared online has
increased exponentially in recent years. This is in part inspired by the
#MeToo and #TimesUp movements. Safecity is an online forum for people who
experienced or witnessed sexual harassment to share their personal experiences.
It has collected more than 10,000 stories so far. Sexual harassment occurs
in a variety of situations, and categorizing the stories and extracting
their key elements can greatly help the related parties to
understand and address sexual harassment. In this study, we manually annotated
those stories with labels in the dimensions of location, time, and harassers'
characteristics, and marked the key elements related to these dimensions.
Furthermore, we applied natural language processing technologies with joint
learning schemes to automatically categorize these stories in those dimensions
and extract key elements at the same time. We also uncovered significant
patterns from the categorized sexual harassment stories. We believe our
annotated data set, proposed algorithms, and analysis will help people who have
been harassed, authorities, researchers and other related parties in various
ways, such as automatically filling out reports, enlightening the public in order
to prevent future harassment, and enabling more effective, faster action to be
taken.
| 2,019 | Computation and Language |
Sentence-Level BERT and Multi-Task Learning of Age and Gender in Social
Media | Social media currently provide a window on our lives, making it possible to
learn how people from different places, with different backgrounds, ages, and
genders use language. In this work we exploit a newly-created Arabic dataset
with ground truth age and gender labels to learn these attributes both
individually and in a multi-task setting at the sentence level. Our models are
based on variations of deep bidirectional neural networks. More specifically,
we build models with gated recurrent units and bidirectional encoder
representations from transformers (BERT). We show the utility of multi-task
learning (MTL) on the two tasks and identify task-specific attention as a
superior choice in this context. We also find that a single-task BERT model
outperforms our best MTL models on the two tasks. We report tweet-level accuracy
of 51.43% for the age task (three-way) and 65.30% on the gender task (binary),
both of which outperform our baselines by a large margin. Our models are
language-agnostic, and so can be applied to other languages.
| 2,019 | Computation and Language |
Credibility-based Fake News Detection | Fake news can significantly misinform people who often rely on online sources
and social media for their information. Current research on fake news detection
has mostly focused on analyzing fake news content and how it propagates on a
network of users. In this paper, we emphasize the detection of fake news by
assessing its credibility. By analyzing public fake news data, we show that
information on news sources (and authors) can be a strong indicator of
credibility. Our findings suggest that an author's history of association with
fake news, and the number of authors of a news article, can play a significant
role in detecting fake news. Our approach can help improve traditional fake
news detection methods, wherein content features are often used to detect fake
news.
| 2,019 | Computation and Language |
Automatic Detection of Generated Text is Easiest when Humans are Fooled | Recent advancements in neural language modelling make it possible to rapidly
generate vast amounts of human-sounding text. The capabilities of humans and
automatic discriminators to detect machine-generated text have been a large
source of research interest, but humans and machines rely on different cues to
make their decisions. Here, we perform careful benchmarking and analysis of
three popular sampling-based decoding strategies---top-$k$, nucleus sampling,
and untruncated random sampling---and show that improvements in decoding
methods have primarily optimized for fooling humans. This comes at the expense
of introducing statistical abnormalities that make detection easy for automatic
systems. We also show that though both human and automatic detector performance
improve with longer excerpt length, even multi-sentence excerpts can fool
expert human raters over 30% of the time. Our findings reveal the importance of
using both human and automatic detectors to assess the humanness of text
generation systems.
| 2,020 | Computation and Language |
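For context, the sketch below illustrates the two truncated decoding strategies benchmarked above, top-$k$ and nucleus (top-$p$) filtering of next-token logits, in PyTorch; untruncated random sampling corresponds to applying no filter. This is a generic illustration, not the authors' code.

```python
# Sketch of top-k and nucleus (top-p) truncation applied to one step of
# next-token logits. Untruncated random sampling applies no filter at all.
import torch

def top_k_filter(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Keep only the k highest-scoring tokens; set the rest to -inf."""
    kth = torch.topk(logits, k).values[..., -1, None]
    return logits.masked_fill(logits < kth, float("-inf"))

def nucleus_filter(logits: torch.Tensor, p: float) -> torch.Tensor:
    """Keep the smallest set of tokens whose cumulative probability exceeds p."""
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    cumprobs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
    remove = cumprobs > p
    remove[..., 1:] = remove[..., :-1].clone()   # shift so the boundary token is kept
    remove[..., 0] = False
    sorted_logits = sorted_logits.masked_fill(remove, float("-inf"))
    return sorted_logits.scatter(-1, sorted_idx, sorted_logits)

logits = torch.randn(50257)                      # e.g., a GPT-2-sized vocabulary
topk_probs = torch.softmax(top_k_filter(logits, 40), dim=-1)
next_token = torch.multinomial(torch.softmax(nucleus_filter(logits, 0.95), dim=-1), 1)
```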
Machine Translation Evaluation using Bi-directional Entailment | In this paper, we propose a new metric for Machine Translation (MT)
evaluation, based on bi-directional entailment. We show that machine generated
translation can be evaluated by determining paraphrasing with a reference
translation provided by a human translator. We hypothesize, and show through
experiments, that paraphrasing can be detected by evaluating entailment
relationships in the forward and backward directions. Unlike conventional
metrics such as BLEU or METEOR, our approach uses deep learning to determine the
semantic similarity between the candidate and reference translations for generating
scores rather than relying on simple n-gram overlap. We use a pre-trained BERT
transformer network, fine-tuned on the MNLI corpus, for natural language inference.
We apply our evaluation metric to the WMT'14 and WMT'17 datasets to evaluate
systems participating in the translation task and find that our metric correlates
better with the human-annotated scores than the traditional metrics at the system level.
| 2,019 | Computation and Language |
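As an illustration of the bidirectional-entailment idea, the sketch below scores a candidate/reference pair with an off-the-shelf NLI model in both directions and takes the weaker direction; `roberta-large-mnli` is used only as a readily available stand-in for the paper's BERT model fine-tuned on MNLI.

```python
# Sketch of the bidirectional-entailment check: a candidate and a reference are
# treated as paraphrases only if each entails the other.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "roberta-large-mnli"                      # stand-in NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

def entailment_prob(premise: str, hypothesis: str) -> float:
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    label_id = model.config.label2id.get("ENTAILMENT", int(probs.argmax()))
    return float(probs[label_id])

def bidirectional_entailment_score(candidate: str, reference: str) -> float:
    # Both directions must hold for a paraphrase; take the weaker direction.
    return min(entailment_prob(candidate, reference),
               entailment_prob(reference, candidate))

print(bidirectional_entailment_score("The cat sat on the mat.",
                                     "A cat was sitting on the mat."))
```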
How to Pre-Train Your Model? Comparison of Different Pre-Training Models
for Biomedical Question Answering | Using deep learning models on small-scale datasets often results in
overfitting. To overcome this problem, the process of pre-training a model and
fine-tuning it to the small scale dataset has been used extensively in domains
such as image processing. Similarly for question answering, pre-training and
fine-tuning can be done in several ways. Commonly, reading comprehension models
are used for pre-training, but we show that other types of pre-training can
work better. We compare two pre-training approaches, based on reading comprehension
models and on open-domain question answering models, and determine their performance
when fine-tuned and tested on the BioASQ question answering dataset. We find the
open-domain question answering model to be a better fit for this task than the
reading comprehension model.
| 2,019 | Computation and Language |
ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram
Representations | The pre-training of text encoders normally processes text as a sequence of
tokens corresponding to small text units, such as word pieces in English and
characters in Chinese. It omits information carried by larger text granularity,
and thus the encoders cannot easily adapt to certain combinations of
characters. This leads to a loss of important semantic information, which is
especially problematic for Chinese because the language does not have explicit
word boundaries. In this paper, we propose ZEN, a BERT-based Chinese (Z) text
encoder Enhanced by N-gram representations, where different combinations of
characters are considered during training. As a result, potential word or phrase
boundaries are explicitly pre-trained and fine-tuned with the character encoder
(BERT). Therefore ZEN incorporates the comprehensive information of both the
character sequence and the words or phrases it contains. Experimental results
illustrate the effectiveness of ZEN on a series of Chinese NLP tasks. We show
that ZEN, using fewer resources than other published encoders, can achieve
state-of-the-art performance on most tasks. Moreover, it is shown that
reasonable performance can be obtained when ZEN is trained on a small corpus,
which is important for applying pre-training techniques to scenarios with
limited data. The code and pre-trained models of ZEN are available at
https://github.com/sinovation/zen.
| 2,019 | Computation and Language |
Design and Challenges of Cloze-Style Reading Comprehension Tasks on
Multiparty Dialogue | This paper analyzes challenges in cloze-style reading comprehension on
multiparty dialogue and suggests two new tasks for more comprehensive
predictions of personal entities in daily conversations. We first demonstrate
that there are substantial limitations to the evaluation methods of previous
work, namely that randomized assignment of samples to training and test data
substantially decreases the complexity of cloze-style reading comprehension.
According to our analysis, replacing the random data split with a chronological
data split reduces test accuracy on the previous single-variable passage completion
task from 72% to 34%, leaving much more room for improvement. Our proposed
tasks extend the previous single-variable passage completion task by replacing
more character mentions with variables. Several deep learning models are
developed to validate these three tasks. A thorough error analysis is provided
to understand the challenges and guide the future direction of this research.
| 2,021 | Computation and Language |
Posing Fair Generalization Tasks for Natural Language Inference | Deep learning models for semantics are generally evaluated using naturalistic
corpora. Adversarial methods, in which models are evaluated on new examples
with known semantic properties, have begun to reveal that good performance at
these naturalistic tasks can hide serious shortcomings. However, we should
insist that these evaluations be fair: that the models are given data
sufficient to support the requisite kinds of generalization. In this paper, we
define and motivate a formal notion of fairness in this sense. We then apply
these ideas to natural language inference by constructing very challenging but
provably fair artificial datasets and showing that standard neural models fail
to generalize in the required ways; only task-specific models that jointly
compose the premise and hypothesis are able to achieve high performance, and
even these models do not solve the task perfectly.
| 2,019 | Computation and Language |
Controlling Text Complexity in Neural Machine Translation | This work introduces a machine translation task where the output is aimed at
audiences of different levels of target language proficiency. We collect a
high-quality dataset of news articles available in English and Spanish, written for
diverse grade levels, and propose a method to align segments across comparable
bilingual articles. The resulting dataset makes it possible to train multi-task
sequence-to-sequence models that translate Spanish into English targeted at an
easier reading grade level than the original Spanish. We show that these
multi-task models outperform pipeline approaches that translate and simplify
text independently.
| 2,019 | Computation and Language |
Question Answering for Privacy Policies: Combining Computational and
Legal Perspectives | Privacy policies are long and complex documents that are difficult for users
to read and understand, and yet, they have legal effects on how user data is
collected, managed and used. Ideally, we would like to empower users to inform
themselves about issues that matter to them, and enable them to selectively
explore those issues. We present PrivacyQA, a corpus consisting of 1750
questions about the privacy policies of mobile applications, and over 3500
expert annotations of relevant answers. We observe that a strong neural
baseline underperforms human performance by almost 0.3 F1 on PrivacyQA,
suggesting considerable room for improvement for future systems. Further, we
use this dataset to shed light on challenges to question answerability, with
domain-general implications for any question answering system. PrivacyQA offers
a challenging corpus for question answering, with genuine real-world utility.
| 2,019 | Computation and Language |
Low-dimensional Semantic Space: from Text to Word Embedding | This article focuses on the study of Word Embedding, a feature-learning
technique in Natural Language Processing that maps words or phrases to
low-dimensional vectors. Beginning with the linguistic theories concerning
contextual similarities - "Distributional Hypothesis" and "Context of
Situation", this article introduces two ways of numerical representation of
text: One-hot and Distributed Representation. In addition, this article
presents statistical-based Language Models (such as Co-occurrence Matrix and
Singular Value Decomposition) as well as Neural Network Language Models (NNLM,
such as Continuous Bag-of-Words and Skip-Gram). This article also analyzes how
Word Embedding can be applied to the study of word-sense disambiguation and
diachronic linguistics.
| 2,019 | Computation and Language |
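A small worked example of the count-based route the article surveys: build a word-word co-occurrence matrix and reduce it with truncated SVD to obtain low-dimensional word vectors (toy corpus and dimensions chosen for illustration).

```python
# Worked example of the count-based route described above: a word-word
# co-occurrence matrix reduced with truncated SVD to low-dimensional vectors.
import numpy as np
from sklearn.decomposition import TruncatedSVD

corpus = ["the cat sat on the mat", "the dog sat on the log", "cats and dogs"]
window = 2

vocab = sorted({w for sent in corpus for w in sent.split()})
index = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))

for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                cooc[index[w], index[words[j]]] += 1.0

svd = TruncatedSVD(n_components=2, random_state=0)
embeddings = svd.fit_transform(cooc)        # one 2-d vector per vocabulary word
print(dict(zip(vocab, np.round(embeddings, 2))))
```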
Interpreting Verbal Irony: Linguistic Strategies and the Connection to
the Type of Semantic Incongruity | Human communication often involves the use of verbal irony or sarcasm, where
the speakers usually mean the opposite of what they say. To better understand
how verbal irony is expressed by the speaker and interpreted by the hearer we
conduct a crowdsourcing task: given an utterance expressing verbal irony, users
are asked to verbalize their interpretation of the speaker's ironic message. We
propose a typology of linguistic strategies for verbal irony interpretation and
link it to various theoretical linguistic frameworks. We design computational
models to capture these strategies and present empirical studies aimed at
answering three questions: (1) what is the distribution of linguistic strategies
used by hearers to interpret ironic messages?; (2) do hearers adopt similar
strategies for interpreting the speaker's ironic intent?; and (3) does the type
of semantic incongruity in the ironic message (explicit vs. implicit) influence
the choice of interpretation strategies by the hearers?
| 2,020 | Computation and Language |
Machine Translation in Pronunciation Space | Research in the machine translation community focuses on translation in text
space. However, humans are in fact also good at direct translation in
pronunciation space. Some translation systems, such as simultaneous machine
translation, could be inherently more natural and thus potentially more robust by
translating directly in pronunciation space. In this paper, we
conduct large scale experiments on a self-built dataset with about $20$M En-Zh
pairs of text sentences and corresponding pronunciation sentences. We propose
three new categories of translations: $1)$ translating a pronunciation sentence
in source language into a pronunciation sentence in target language (P2P-Tran),
$2)$ translating a text sentence in source language into a pronunciation
sentence in target language (T2P-Tran), and $3)$ translating a pronunciation
sentence in source language into a text sentence in target language (P2T-Tran),
and compare them with traditional text translation (T2T-Tran). Our experiments
clearly show that all $4$ categories of translations have comparable
performance, with small and sometimes negligible differences.
| 2,019 | Computation and Language |
Sentiment analysis model for Twitter data in Polish language | We present a text mining analysis of tweets gathered during the Polish presidential
election on May 10th, 2015. The project included implementation of an engine to retrieve
information from Twitter, building document corpora, corpora cleaning, and
creating Term-Document Matrix. Each tweet from the text corpora was assigned a
category based on its sentiment score. The score was calculated using the
number of positive and/or negative emoticons and Polish words in each document.
The result data set was used to train and test four machine learning
classifiers, to select those providing the most accurate automatic tweet
classification results. The Naive Bayes and Maximum Entropy algorithms achieved
the best accuracy of respectively 71.76% and 77.32%. All implementation tasks
were completed using the R programming language.
| 2,019 | Computation and Language |
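The paper's pipeline was implemented in R; the toy Python sketch below only illustrates the emoticon/lexicon counting rule described above, with placeholder word lists.

```python
# Toy illustration of the scoring rule described above (the original pipeline
# was written in R): a tweet's score is positive cues minus negative cues,
# where cues are emoticons plus entries from small sentiment word lists.
POSITIVE = {":)", ":-)", ":D", "super", "dobrze"}     # illustrative lexicon
NEGATIVE = {":(", ":-(", ";(", "zle", "fatalnie"}     # illustrative lexicon

def sentiment_label(tweet: str) -> str:
    tokens = tweet.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment_label("debata byla super :)"))        # -> positive
```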
On the Effectiveness of the Pooling Methods for Biomedical Relation
Extraction with Deep Learning | Deep learning models have achieved state-of-the-art performances on many
relation extraction datasets. A common element in these deep learning models
involves the pooling mechanisms where a sequence of hidden vectors is
aggregated to generate a single representation vector, serving as the features
to perform prediction for RE. Unfortunately, the models in the literature tend
to employ different strategies to perform pooling for RE, making it challenging
to determine the best pooling mechanism for this problem, especially
in the biomedical domain. In order to answer this question, in this work, we
conduct a comprehensive study to evaluate the effectiveness of different
pooling mechanisms for the deep learning models in biomedical RE. The
experimental results suggest that dependency-based pooling is the best pooling
strategy for RE in the biomedical domain, yielding the state-of-the-art
performance on two benchmark datasets for this problem.
| 2,019 | Computation and Language |
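For intuition, the sketch below contrasts two of the pooling strategies such a study compares, sentence-level max pooling versus pooling restricted to the entity mentions; dependency-based pooling would instead restrict the pool to tokens on the dependency path between the entities. The tensor shapes and indices are illustrative.

```python
# Sketch of two pooling strategies for relation extraction over a sequence of
# hidden vectors H (seq_len x dim): sentence-level max pooling vs. pooling
# restricted to the two entity mentions. Dependency-based pooling would replace
# the mention indices with the tokens on the dependency path between entities.
import torch

def sentence_max_pool(H: torch.Tensor) -> torch.Tensor:
    return H.max(dim=0).values

def entity_pool(H: torch.Tensor, e1_idx: list, e2_idx: list) -> torch.Tensor:
    e1 = H[e1_idx].max(dim=0).values
    e2 = H[e2_idx].max(dim=0).values
    return torch.cat([e1, e2], dim=-1)       # features fed to the RE classifier

H = torch.randn(12, 768)                     # e.g., BERT hidden states for 12 tokens
features = entity_pool(H, e1_idx=[2, 3], e2_idx=[8])
```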
Emergence of Numeric Concepts in Multi-Agent Autonomous Communication | With the rapid development of deep learning, most current state-of-the-art
techniques in natural language processing are based on deep learning models
trained with large-scale static textual corpora. However, we human beings learn
and understand in a different way. Thus, grounded language learning argues that
models need to learn and understand language through the experience and perceptions
obtained by interacting with environments, as humans do. With the help of
deep reinforcement learning techniques, there are already lots of works
focusing on facilitating the emergence of communication protocols that have
compositionality like natural languages among populations of computational
agents. Unlike these works, we, on the other hand, focus on the numeric
concepts which correspond to abstractions in cognition and function words in
natural language. Based on a specifically designed language game, we verify
that computational agents are capable of transmitting numeric concepts during
autonomous communication, and the emergent communication protocols can reflect
the underlying structure of the meaning space. Although their encoding method is
not compositional like natural languages from a human perspective, the emergent
languages can be generalised to unseen inputs and, more importantly, are easier
for models to learn. Besides, iterated learning can help further improve the
compositionality of the emergent languages, as measured by topological similarity.
Furthermore, we experiment with another representation method, i.e., directly
encoding numerals into concatenations of one-hot vectors, and find that the
emergent languages become compositional like human natural languages. Thus, we
argue that there are two important factors for the emergence of compositional languages.
| 2,019 | Computation and Language |
What does a network layer hear? Analyzing hidden representations of
end-to-end ASR through speech synthesis | End-to-end speech recognition systems have achieved competitive results
compared to traditional systems. However, the complex transformations involved
between layers given highly variable acoustic signals are hard to analyze. In
this paper, we present our ASR probing model, which synthesizes speech from
hidden representations of end-to-end ASR to examine the information maintained
after each layer's computation. Listening to the synthesized speech, we observe
a gradual removal of speaker variability and noise as the layers go deeper,
which aligns with previous studies on how deep networks function in speech
recognition. This paper is the first study to analyze an end-to-end speech
recognition model by demonstrating what each layer hears. Speaker verification
and speech enhancement measurements on synthesized speech are also conducted to
confirm our observation further.
| 2,019 | Computation and Language |
Analysing Coreference in Transformer Outputs | We analyse coreference phenomena in three neural machine translation systems
trained with different data settings with or without access to explicit intra-
and cross-sentential anaphoric information. We compare system performance on
two different genres: news and TED talks. To do this, we manually annotate (the
possibly incorrect) coreference chains in the MT outputs and evaluate the
coreference chain translations. We define an error typology that aims to go
further than pronoun translation adequacy and includes types such as incorrect
word selection or missing words. The features of coreference chains in
automatic translations are also compared to those of the source texts and human
translations. The analysis shows stronger potential translationese effects in
machine translated outputs than in human translations.
| 2,019 | Computation and Language |
Spherical Text Embedding | Unsupervised text embedding has shown great power in a wide range of NLP
tasks. While text embeddings are typically learned in the Euclidean space,
directional similarity is often more effective in tasks such as word similarity
and document clustering, which creates a gap between the training stage and
usage stage of text embedding. To close this gap, we propose a spherical
generative model based on which unsupervised word and paragraph embeddings are
jointly learned. To learn text embeddings in the spherical space, we develop an
efficient optimization algorithm with a convergence guarantee based on Riemannian
optimization. Our model enjoys high efficiency and achieves state-of-the-art
performance on various text embedding tasks including word similarity and
document clustering.
| 2,019 | Computation and Language |
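As a generic illustration of optimization on the sphere (not the paper's exact algorithm), the sketch below performs one Riemannian gradient step: project the Euclidean gradient onto the tangent space at the current point, step, and retract back to unit norm.

```python
# Generic sketch of one Riemannian gradient step on the unit sphere: project the
# Euclidean gradient onto the tangent space at x, take a step, then retract
# (renormalize) back onto the sphere.
import numpy as np

def riemannian_sphere_step(x: np.ndarray, euclid_grad: np.ndarray, lr: float) -> np.ndarray:
    riem_grad = euclid_grad - np.dot(euclid_grad, x) * x   # tangent-space projection
    x_new = x - lr * riem_grad
    return x_new / np.linalg.norm(x_new)                   # retraction to the sphere

x = np.random.randn(50); x /= np.linalg.norm(x)            # a unit-norm embedding
grad = np.random.randn(50)                                  # gradient of some loss at x
x = riemannian_sphere_step(x, grad, lr=0.05)
assert abs(np.linalg.norm(x) - 1.0) < 1e-9
```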
Understand customer reviews with less data and in short time: pretrained
language representation and active learning | In this paper, we address customer review understanding problems by using
supervised machine learning approaches, in order to achieve a fully automatic
review aspects categorisation and sentiment analysis. In general, such
supervised learning algorithms require domain-specific expert knowledge for
generating high quality labeled training data, and the cost of labeling can be
very high. To achieve an in-production, machine-learning-enabled customer review
analysis tool with only a limited amount of data and within a reasonable
training data collection time, we propose to use pre-trained language
representations to boost model performance and an active learning framework to
accelerate the iterative training process. The results show that with
integration of both components, the fully automatic review analysis can be
achieved at a much faster pace.
| 2,019 | Computation and Language |
Higher Criticism for Discriminating Word-Frequency Tables and Testing
Authorship | We adapt the Higher Criticism (HC) goodness-of-fit test to measure the
closeness between word-frequency tables. We apply this measure to authorship
attribution challenges, where the goal is to identify the author of a document
using other documents whose authorship is known. The method is simple yet
performs well without handcrafting and tuning, reporting accuracy at the
state-of-the-art level in various current challenges. As an inherent side effect, the
HC calculation identifies a subset of discriminating words. In practice, the
identified words have low variance across documents belonging to a corpus of
homogeneous authorship. We conclude that in comparing the similarity of a new
document and a corpus of a single author, HC is mostly affected by words
characteristic of the author and is relatively unaffected by topic structure.
| 2,022 | Computation and Language |
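A simplified sketch of how HC can be applied to two word-frequency tables: each word receives a p-value from a binomial test of its count in one table against the pooled rate, and HC is the maximal standardized deviation of the sorted p-values from uniformity. Details such as the range of indices searched are simplified relative to the paper.

```python
# Sketch of Higher Criticism over two word-frequency tables. Each word gets a
# p-value from a binomial test of its count in table 1 against the pooled rate;
# HC is the maximal standardized deviation of the sorted p-values from uniform.
import numpy as np
from scipy.stats import binomtest

def word_pvalues(counts1: dict, counts2: dict) -> np.ndarray:
    n1, n2 = sum(counts1.values()), sum(counts2.values())
    pvals = []
    for w in set(counts1) | set(counts2):
        c1, c2 = counts1.get(w, 0), counts2.get(w, 0)
        pooled_rate = (c1 + c2) / (n1 + n2)
        pvals.append(binomtest(c1, n1, pooled_rate).pvalue)
    return np.array(pvals)

def higher_criticism(pvals: np.ndarray) -> float:
    p = np.sort(pvals)
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    return hc[: n // 2].max()                 # search the lower half, as is common

doc1 = {"whale": 9, "ship": 4, "the": 120}
doc2 = {"whale": 1, "ship": 5, "the": 115}
print(higher_criticism(word_pvalues(doc1, doc2)))
```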
Scrambled Translation Problem: A Problem of Denoising UNMT | In this paper, we identify an interesting kind of error in the output of
Unsupervised Neural Machine Translation (UNMT) systems like
\textit{Undreamt}. We refer to this error type as the \textit{Scrambled
Translation problem}. We observe that UNMT models which use \textit{word
shuffle} noise (as in the case of Undreamt) can generate correct words, but fail to
stitch them together to form phrases. As a result, words of the translated
sentence look \textit{scrambled}, resulting in decreased BLEU. We hypothesise
that the reason behind \textit{scrambled translation problem} is 'shuffling
noise' which is introduced in every input sentence as a denoising strategy. To
test our hypothesis, we experiment by retraining UNMT models with a simple
\textit{retraining} strategy. We stop the training of the Denoising UNMT model
after a pre-decided number of iterations and resume training for the
remaining iterations, whose number is also pre-decided, using the original
sentences as input without adding any noise. Our proposed solution achieves
significant performance improvements over UNMT models that are trained
conventionally. We demonstrate these performance gains on four language pairs,
\textit{viz.}, English-French, English-German, English-Spanish, and Hindi-Punjabi. Our qualitative
and quantitative analysis shows that the retraining strategy helps achieve
better alignment as observed by attention heatmap and better phrasal
translation, leading to statistically significant improvement in BLEU scores.
| 2,021 | Computation and Language |
A Richly Annotated Corpus for Different Tasks in Automated Fact-Checking | Automated fact-checking based on machine learning is a promising approach to
identify false information distributed on the web. In order to achieve
satisfactory performance, machine learning methods require a large corpus with
reliable annotations for the different tasks in the fact-checking process.
Having analyzed existing fact-checking corpora, we found that none of them
meets these criteria in full. They are either too small in size, do not provide
detailed annotations, or are limited to a single domain. Motivated by this gap,
we present a new substantially sized mixed-domain corpus with annotations of
good quality for the core fact-checking tasks: document retrieval, evidence
extraction, stance detection, and claim validation. To aid future corpus
construction, we describe our methodology for corpus creation and annotation,
and demonstrate that it results in substantial inter-annotator agreement. As
baselines for future research, we perform experiments on our corpus with a
number of model architectures that reach high performance in similar problem
settings. Finally, to support the development of future models, we provide a
detailed error analysis for each of the tasks. Our results show that the
realistic, multi-domain setting defined by our data poses new challenges for
the existing models, providing opportunities for considerable improvement by
future systems.
| 2,019 | Computation and Language |
Detect Toxic Content to Improve Online Conversations | Social media is filled with toxic content. The aim of this paper is to build
a model that can detect insincere questions. We use the 'Quora Insincere
Questions Classification' dataset for our analysis. The dataset is composed of
sincere and insincere questions, with sincere questions forming the majority. The
dataset is processed and analyzed using Python and its libraries such as
sklearn, numpy, pandas, and keras. The dataset is converted to vector form
using word embeddings such as GloVe, Wiki-news and TF-IDF. The imbalance in the
dataset is handled by resampling techniques. We train and compare various
machine learning and deep learning models to come up with the best results.
Models discussed include SVM, Naive Bayes, GRU and LSTM.
| 2,019 | Computation and Language |
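A compact sketch of the classical TF-IDF route mentioned above, with logistic regression standing in for the SVM and Naive Bayes variants and class weighting as one simple way to handle the sincere/insincere imbalance; the toy examples and labels are placeholders.

```python
# Compact sketch of the TF-IDF route mentioned above, with a linear classifier
# (logistic regression as a stand-in for the SVM / Naive Bayes variants) and
# class weighting as one simple way to handle the class imbalance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = ["How do I learn Python quickly?",
             "Why are people from group X so stupid?",
             "What is the capital of France?"]
labels = [0, 1, 0]                            # 1 = insincere (toy labels)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
clf.fit(questions, labels)
print(clf.predict(["Why is group X ruining everything?"]))
```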
Human-centric Metric for Accelerating Pathology Reports Annotation | Pathology reports contain useful information such as the main involved organ,
diagnosis, etc. This information can be identified from the free-text reports
and used for large-scale statistical analysis, or serve as annotations for other
modalities such as pathology slide images. However, manual classification of
a huge number of reports on multiple tasks is labor-intensive. In this paper,
we have developed an automatic text classifier based on BERT and we propose a
human-centric metric to evaluate the model. According to the model confidence,
we identify low-confidence cases that require further expert annotation and
high-confidence cases that are automatically classified. We report the
percentage of low-confidence cases and the performance of automatically
classified cases. On the high-confidence cases, the model achieves
classification accuracy comparable to pathologists. This offers the potential to
reduce 80% to 98% of the manual annotation workload.
| 2,019 | Computation and Language |
A Holistic Natural Language Generation Framework for the Semantic Web | With the ever-growing generation of data for the Semantic Web comes an
increasing demand for this data to be made available to non-semantic Web
experts. One way of achieving this goal is to translate the languages of the
Semantic Web into natural language. We present LD2NL, a framework for
verbalizing the three key languages of the Semantic Web, i.e., RDF, OWL, and
SPARQL. Our framework is based on a bottom-up approach to verbalization. We
evaluated LD2NL in an open survey with 86 persons. Our results suggest that our
framework can generate verbalizations that are close to natural language and
can be easily understood by non-experts. It thereby enables non-domain
experts to interpret Semantic Web data with more than 91% of the accuracy of
domain experts.
| 2,019 | Computation and Language |
A Novel Approach to Enhance the Performance of Semantic Search in
Bengali using Neural Net and other Classification Techniques | Search has for a long time been an important tool for users to retrieve
information. Syntactic search matches documents or objects containing specific
keywords, using information such as user history, location, and preferences to
improve the results. However, it is often possible that the query and the best
answer have few or no terms in common, and syntactic search cannot perform
properly in such cases. Semantic search, on the other hand, resolves these
issues but suffers from a lack of annotation and the absence of a WordNet in the
case of low-resource languages. In this work, we demonstrate an end-to-end
procedure to improve the performance of semantic search using semi-supervised
and unsupervised learning algorithms. An available Bengali repository, covering
seven types of semantic properties, was chosen to develop the system.
Performance has been tested using Support Vector Machine, Naive Bayes, Decision
Tree and Artificial Neural Network (ANN) classifiers. Our system has achieved
the ability to predict the correct semantics using a knowledge base built up
over the course of learning. A repository containing around a million sentences,
a product of the TDIL project of the Govt. of India, was first used to test our
system; testing was then carried out for other languages. Being a cognitive
system, it may be very useful for improving user satisfaction in e-Governance or
m-Governance in multilingual environments and also for other applications.
| 2,020 | Computation and Language |
Learning from Explanations with Neural Execution Tree | While deep neural networks have achieved impressive performance on a range of
NLP tasks, these data-hungry models heavily rely on labeled data, which
restricts their applications in scenarios where data annotation is expensive.
Natural language (NL) explanations have been demonstrated to be very useful
additional supervision: they can provide sufficient domain knowledge for
generating more labeled data over new instances, while the annotation time only
doubles. However, directly applying them for augmenting model learning
encounters two challenges: (1) NL explanations are unstructured and inherently
compositional, which calls for a modularized model to represent their semantics;
(2) NL explanations often have large numbers of linguistic variants, resulting
in low recall and limited generalization ability. In this paper, we propose a
novel Neural Execution Tree (NExT) framework to augment training data for text
classification using NL explanations. After transforming NL explanations into
executable logical forms by semantic parsing, NExT generalizes different types
of actions specified by the logical forms for labeling data instances, which
substantially increases the coverage of each NL explanation. Experiments on two
NLP tasks (relation extraction and sentiment analysis) demonstrate its
superiority over baseline methods. Its extension to multi-hop question
answering achieves performance gain with light annotation effort.
| 2,020 | Computation and Language |
A Deep Learning approach for Hindi Named Entity Recognition | Named Entity Recognition is one of the most important text processing
requirements in many NLP tasks. In this paper we use a deep architecture to
accomplish the task of recognizing named entities in a given Hindi text
sentence. Bidirectional Long Short Term Memory (BiLSTM) based techniques have
been used for the NER task in the literature. In this paper, we first tune a
BiLSTM model to work for Hindi NER in a low-resource scenario and propose two
enhancements, namely (a) de-noising auto-encoder (DAE) LSTM and (b) conditioning
LSTM, which show improvements on the NER task compared to the BiLSTM approach.
We use pre-trained word embeddings to represent the words in the corpus, and the
NER tags of the words are as defined in the annotated corpora used. Experiments
have been performed to analyze the performance of different word embeddings and
batch sizes, which is essential for training deep models.
| 2,019 | Computation and Language |
Predictive Engagement: An Efficient Metric For Automatic Evaluation of
Open-Domain Dialogue Systems | User engagement is a critical metric for evaluating the quality of
open-domain dialogue systems. Prior work has focused on conversation-level
engagement by using heuristically constructed features such as the number of
turns and the total time of the conversation. In this paper, we investigate the
possibility and efficacy of estimating utterance-level engagement and define a
novel metric, {\em predictive engagement}, for automatic evaluation of
open-domain dialogue systems. Our experiments demonstrate that (1) human
annotators have high agreement on assessing utterance-level engagement scores;
(2) conversation-level engagement scores can be predicted from properly
aggregated utterance-level engagement scores. Furthermore, we show that the
utterance-level engagement scores can be learned from data. These scores can
improve automatic evaluation metrics for open-domain dialogue systems, as shown
by correlation with human judgements. This suggests that predictive engagement
can be used as real-time feedback for training better dialogue models.
| 2,020 | Computation and Language |
A Failure of Aspect Sentiment Classifiers and an Adaptive Re-weighting
Solution | Aspect-based sentiment classification (ASC) is an important task in
fine-grained sentiment analysis. Deep supervised ASC approaches typically model
this task as a pair-wise classification task that takes an aspect and a
sentence containing the aspect and outputs the polarity of the aspect in that
sentence. However, we discovered that many existing approaches fail to learn an
effective ASC classifier and instead behave more like a sentence-level sentiment
classifier, because they have difficulty handling sentences with different polarities for
different aspects. This paper first demonstrates this problem using several
state-of-the-art ASC models. It then proposes a novel and general adaptive
re-weighting (ARW) scheme to adjust the training to dramatically improve ASC
for such complex sentences. Experimental results show that the proposed
framework is effective \footnote{The dataset and code are available at
\url{https://github.com/howardhsu/ASC_failure}.}.
| 2,019 | Computation and Language |
Emerging Cross-lingual Structure in Pretrained Language Models | We study the problem of multilingual masked language modeling, i.e. the
training of a single model on concatenated text from multiple languages, and
present a detailed study of several factors that influence why these models are
so effective for cross-lingual transfer. We show, contrary to what was
previously hypothesized, that transfer is possible even when there is no shared
vocabulary across the monolingual corpora and also when the text comes from
very different domains. The only requirement is that there are some shared
parameters in the top layers of the multi-lingual encoder. To better understand
this result, we also show that representations from independently trained
models in different languages can be aligned post-hoc quite effectively,
strongly suggesting that, much like for non-contextual word embeddings, there
are universal latent symmetries in the learned embedding spaces. For
multilingual masked language modeling, these symmetries seem to be
automatically discovered and aligned during the joint training process.
| 2,020 | Computation and Language |
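The post-hoc alignment mentioned above can be illustrated with orthogonal Procrustes over a seed dictionary of translation pairs, as sketched below; this is one standard procedure and may differ from the paper's exact alignment method. The dimensions and random vectors are placeholders.

```python
# Sketch of post-hoc alignment of two independently trained embedding spaces via
# orthogonal Procrustes over a seed dictionary of translation pairs.
import numpy as np

def procrustes_align(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Return the orthogonal W minimizing ||XW - Y||_F, for row-aligned X, Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

d, n_pairs = 300, 5000
src = np.random.randn(n_pairs, d)             # source-language vectors (dictionary rows)
tgt = np.random.randn(n_pairs, d)             # target-language vectors (same rows)
W = procrustes_align(src, tgt)
mapped = src @ W                              # source vectors expressed in target space
```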
Assessing Social and Intersectional Biases in Contextualized Word
Representations | Social bias in machine learning has drawn significant attention, with work
ranging from demonstrations of bias in a multitude of applications, curating
definitions of fairness for different contexts, to developing algorithms to
mitigate bias. In natural language processing, gender bias has been shown to
exist in context-free word embeddings. Recently, contextual word
representations have outperformed word embeddings in several downstream NLP
tasks. These word representations are conditioned on their context within a
sentence, and can also be used to encode the entire sentence. In this paper, we
analyze the extent to which state-of-the-art models for contextual word
representations, such as BERT and GPT-2, encode biases with respect to gender,
race, and intersectional identities. Towards this, we propose assessing bias at
the contextual word level. This novel approach captures the contextual effects
of bias missing in context-free word embeddings, yet avoids confounding effects
that underestimate bias at the sentence encoding level. We demonstrate evidence
of bias at the corpus level, find varying evidence of bias in embedding
association tests, show in particular that racial bias is strongly encoded in
contextual word models, and observe that bias effects for intersectional
minorities are exacerbated beyond their constituent minority identities.
Further, evaluating bias effects at the contextual word level captures biases
that are not captured at the sentence level, confirming the need for our novel
approach.
| 2,019 | Computation and Language |
On Compositionality in Neural Machine Translation | We investigate two specific manifestations of compositionality in Neural
Machine Translation (NMT): (1) Productivity - the ability of the model to
extend its predictions beyond the observed length in training data and (2)
Systematicity - the ability of the model to systematically recombine known
parts and rules. We evaluate a standard Sequence to Sequence model on tests
designed to assess these two properties in NMT. We quantitatively demonstrate
that inadequate temporal processing, in the form of poor encoder
representations, is a bottleneck for both Productivity and Systematicity. We
propose a simple pre-training mechanism which improves model performance on
the two properties and leads to a significant improvement in BLEU scores.
| 2,019 | Computation and Language |
BAS: An Answer Selection Method Using BERT Language Model | In recent years, Question Answering systems have become more popular and
widely used by users. Despite the increasing popularity of these systems, their
performance is not yet sufficient for textual data and requires further
research. These systems consist of several components, one of which is the Answer
Selection component. This component detects the most relevant answer from a
list of candidate answers. The methods presented in previous research have
attempted to provide an independent model to undertake the answer-selection
task. An independent model cannot comprehend the syntactic and semantic
features of questions and answers with a small training dataset. To fill this
gap, language models can be employed in implementing the answer selection part.
This action enables the model to have a better understanding of the language in
order to understand questions and answers better than previous works. In this
research, we will present the "BAS" (BERT Answer Selection) that uses the BERT
language model to comprehend language. The empirical results of applying the
model on the TrecQA Raw, TrecQA Clean, and WikiQA datasets demonstrate that
using a robust language model such as BERT can enhance the performance. Using a
more robust classifier also enhances the effect of the language model on the
answer selection component. The results demonstrate that language comprehension
is an essential requirement in natural language processing tasks such as
answer-selection.
| 2,021 | Computation and Language |
Improving Bidirectional Decoding with Dynamic Target Semantics in Neural
Machine Translation | Generally, Neural Machine Translation models generate target words in a
left-to-right (L2R) manner and fail to exploit any future (right) semantic
information, which usually produces an unbalanced translation. Recent works
attempt to utilize the right-to-left (R2L) decoder in bidirectional decoding to
alleviate this problem. In this paper, we propose a novel \textbf{D}ynamic
\textbf{I}nteraction \textbf{M}odule (\textbf{DIM}) to dynamically exploit
target semantics from R2L translation for enhancing the L2R translation
quality. Different from other bidirectional decoding approaches, DIM firstly
extracts helpful target information through addressing and reading operations,
then updates target semantics for tracking the interactive history.
Additionally, we further introduce an \textbf{agreement regularization} term
into the training objective to narrow the gap between L2R and R2L translations.
Experimental results on NIST Chinese$\Rightarrow$English and WMT'16
English$\Rightarrow$Romanian translation tasks show that our system achieves
significant improvements over baseline systems, and also reaches results
comparable to the state-of-the-art Transformer model with far fewer parameters.
| 2,019 | Computation and Language |
LIDA: Lightweight Interactive Dialogue Annotator | Dialogue systems have the potential to change how people interact with
machines but are highly dependent on the quality of the data used to train
them. It is therefore important to develop good dialogue annotation tools which
can improve the speed and quality of dialogue data annotation. With this in
mind, we introduce LIDA, an annotation tool designed specifically for
conversation data. As far as we know, LIDA is the first dialogue annotation
system that handles the entire dialogue annotation pipeline from raw text, as
may be the output of transcription services, to structured conversation data.
Furthermore, it supports the integration of arbitrary machine learning models as
annotation recommenders and also has a dedicated interface to resolve
inter-annotator disagreements such as after crowdsourcing annotations for a
dataset. LIDA is fully open source, documented and publicly available [
https://github.com/Wluper/lida ].
| 2,019 | Computation and Language |
Integrating Dictionary Feature into A Deep Learning Model for Disease
Named Entity Recognition | In recent years, Deep Learning (DL) models are becoming important due to
their demonstrated success at overcoming complex learning problems. DL models
have been applied effectively for different Natural Language Processing (NLP)
tasks such as part-of-speech (PoS) tagging and Machine Translation (MT).
Disease Named Entity Recognition (Disease-NER) is a crucial task which aims at
extracting disease Named Entities (NEs) from text. In this paper, a DL model
for Disease-NER using dictionary information is proposed and evaluated on the
National Center for Biotechnology Information (NCBI) disease corpus and the BC5CDR
dataset. Word embeddings trained over general-domain texts as well as
biomedical texts have been used to represent input to the proposed model. This
study also compares two different Segment Representation (SR) schemes, namely
IOB2 and IOBES for Disease-NER. The results illustrate that using dictionary
information, pre-trained word embeddings, character embeddings and CRF with
global score improves the performance of Disease-NER system.
| 2,019 | Computation and Language |
Knowing What, How and Why: A Near Complete Solution for Aspect-based
Sentiment Analysis | Target-based sentiment analysis or aspect-based sentiment analysis (ABSA)
refers to addressing various sentiment analysis tasks at a fine-grained level,
which includes but is not limited to aspect extraction, aspect sentiment
classification, and opinion extraction. There exist many solvers of the above
individual subtasks or a combination of two subtasks, and they can work
together to tell a complete story, i.e. the discussed aspect, the sentiment on
it, and the cause of the sentiment. However, no previous ABSA research tried to
provide a complete solution in one shot. In this paper, we introduce a new
subtask under ABSA, named aspect sentiment triplet extraction (ASTE).
Particularly, a solver of this task needs to extract triplets (What, How, Why)
from the inputs, which show WHAT the targeted aspects are, HOW their sentiment
polarities are and WHY they have such polarities (i.e. opinion reasons). For
instance, one triplet from "Waiters are very friendly and the pasta is simply
average" could be ('Waiters', positive, 'friendly'). We propose a two-stage
framework to address this task. The first stage predicts what, how and why in a
unified model, and then the second stage pairs up the predicted what (how) and
why from the first stage to output triplets. In the experiments, our framework
has set a benchmark performance in this novel triplet extraction task.
Meanwhile, it outperforms a few strong baselines adapted from state-of-the-art
related methods.
| 2,019 | Computation and Language |
Discrete Argument Representation Learning for Interactive Argument Pair
Identification | In this paper, we focus on extracting interactive argument pairs from two
posts with opposite stances to a certain topic. Considering opinions are
exchanged from different perspectives of the discussing topic, we study the
discrete representations for arguments to capture varying aspects in
argumentation languages (e.g., the debate focus and the participant behavior).
Moreover, we utilize hierarchical structure to model post-wise information
incorporating contextual knowledge. Experimental results on the large-scale
dataset collected from CMV show that our proposed framework can significantly
outperform the competitive baselines. Further analyses reveal why our model
yields superior performance and prove the usefulness of our learned
representations.
| 2,019 | Computation and Language |
Adversarial Language Games for Advanced Natural Language Intelligence | We study the problem of adversarial language games, in which multiple agents
with conflicting goals compete with each other via natural language
interactions. While adversarial language games are ubiquitous in human
activities, little attention has been devoted to this field in natural language
processing. In this work, we propose a challenging adversarial language game
called Adversarial Taboo as an example, in which an attacker and a defender
compete around a target word. The attacker is tasked with inducing the defender
to utter the target word, which is invisible to the defender, while the defender is
tasked with detecting the target word before being induced by the attacker. In
Adversarial Taboo, a successful attacker must hide its intention and subtly
induce the defender, while a competitive defender must be cautious with its
utterances and infer the intention of the attacker. Such language abilities can
facilitate many important downstream NLP tasks. To instantiate the game, we
create a game environment and a competition platform. Comprehensive experiments
and empirical studies on several baseline attack and defense strategies show
promising and interesting results. Based on the analysis on the game and
experiments, we discuss multiple promising directions for future research.
| 2,020 | Computation and Language |
Incremental Sense Weight Training for the Interpretation of
Contextualized Word Embeddings | We present a novel online algorithm that learns the essence of each dimension
in word embeddings by minimizing the within-group distance of contextualized
embedding groups. Three state-of-the-art neural-based language models are used,
Flair, ELMo, and BERT, to generate contextualized word embeddings such that
different embeddings are generated for the same word type, which are grouped by
their senses manually annotated in the SemCor dataset. We hypothesize that not
all dimensions are equally important for downstream tasks so that our algorithm
can detect unessential dimensions and discard them without hurting the
performance. To verify this hypothesis, we first mask dimensions determined
unessential by our algorithm, apply the masked word embeddings to a word sense
disambiguation task (WSD), and compare its performance against the one achieved
by the original embeddings. Several KNN approaches are experimented with to
establish strong baselines for WSD. Our results show that the masked word
embeddings do not hurt the performance and can improve it by 3%. Our work can
be used to conduct future research on the interpretability of contextualized
embeddings.
| 2,020 | Computation and Language |
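A sketch of the masking-then-evaluate step described above: zero out the dimensions judged unessential and run a KNN sense classifier on the masked contextual embeddings; scikit-learn's KNN and the random data here are stand-ins for the paper's setup.

```python
# Sketch of the masking-then-evaluate step: zero out dimensions judged
# unessential and run a KNN sense classifier on the masked contextual
# embeddings (scikit-learn KNN as a simple stand-in baseline).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))               # contextual embeddings of one word type
y = rng.integers(0, 3, size=200)              # annotated sense labels (placeholder)
unessential = rng.choice(768, size=200, replace=False)   # dims flagged by the algorithm

X_masked = X.copy()
X_masked[:, unessential] = 0.0

X_tr, X_te, y_tr, y_te = train_test_split(X_masked, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("WSD accuracy with masked dims:", knn.score(X_te, y_te))
```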
Sparse Lifting of Dense Vectors: Unifying Word and Sentence
Representations | As the first step in automated natural language processing, representing
words and sentences is of central importance and has attracted significant
research attention. Different approaches, from the early one-hot and
bag-of-words representation to more recent distributional dense and sparse
representations, were proposed. Despite the successful results that have been
achieved, such vectors tend to consist of uninterpretable components and face
nontrivial challenges in both memory and computational requirements in practical
applications. In this paper, we designed a novel representation model that
projects dense word vectors into a higher dimensional space and favors a highly
sparse and binary representation of word vectors with potentially interpretable
components, while trying to maintain pairwise inner products between original
vectors as much as possible. Computationally, our model is relaxed as a
symmetric non-negative matrix factorization problem which admits a fast yet
effective solution. In a series of empirical evaluations, the proposed model
exhibited consistent improvement and high potential in practical applications.
| 2,019 | Computation and Language |
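The sparse-lifting entry above relaxes the projection to a symmetric non-negative matrix factorization. The sketch below, under the assumption that negative inner products can simply be clipped to zero, approximates a similarity matrix as H H^T with a projected-gradient loop and then binarizes the lifted vectors; `lift`, `keep`, and the learning rate are illustrative choices, not the paper's solver.

```python
import numpy as np

def symmetric_nmf(S, k, iters=200, lr=1e-3, seed=0):
    """Approximate a non-negative similarity matrix S (n x n) as H @ H.T
    with H >= 0 via projected gradient descent. A rough sketch only."""
    rng = np.random.default_rng(seed)
    H = rng.random((S.shape[0], k)) * 0.1
    for _ in range(iters):
        grad = 4 * (H @ H.T - S) @ H          # d/dH ||S - H H^T||_F^2
        H = np.maximum(H - lr * grad, 0.0)    # project onto the nonnegative orthant
    return H

def lift(dense_vectors, k=128, keep=16):
    """Lift dense word vectors into a higher-dimensional sparse binary space."""
    unit = dense_vectors / np.linalg.norm(dense_vectors, axis=1, keepdims=True)
    S = np.maximum(unit @ unit.T, 0.0)        # clip negative inner products
    H = symmetric_nmf(S, k)
    # Binarize: keep only the `keep` largest components per word.
    sparse = np.zeros_like(H)
    top = np.argsort(H, axis=1)[:, -keep:]
    np.put_along_axis(sparse, top, 1.0, axis=1)
    return sparse

vectors = np.random.default_rng(1).normal(size=(50, 32))   # stand-in embeddings
print(lift(vectors).sum(axis=1))   # each row has exactly `keep` active bits
```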
RNN-T For Latency Controlled ASR With Improved Beam Search | Neural transducer-based systems such as RNN Transducers (RNN-T) for automatic
speech recognition (ASR) blend the individual components of a traditional
hybrid ASR system (acoustic model, language model, punctuation model, inverse
text normalization) into one single model. This greatly simplifies training and
inference and hence makes RNN-T a desirable choice for ASR systems. In this
work, we investigate the use of RNN-T in applications that require a tunable
latency budget during inference time. We also improved the decoding speed of
the originally proposed RNN-T beam search algorithm. We evaluated our proposed
system on an English video ASR dataset and show that neural RNN-T models can
achieve comparable WER and better computational efficiency compared to a well
tuned hybrid ASR baseline.
| 2,020 | Computation and Language |
A Joint Model for Definition Extraction with Syntactic Connection and
Semantic Consistency | Definition Extraction (DE) is one of the well-known topics in Information
Extraction that aims to identify terms and their corresponding definitions in
unstructured texts. This task can be formalized either as a sentence
classification task (i.e., containing term-definition pairs or not) or a
sequential labeling task (i.e., identifying the boundaries of the terms and
definitions). The previous works for DE have only focused on one of the two
approaches, failing to model the inter-dependencies between the two tasks. In
this work, we propose a novel model for DE that simultaneously performs the two
tasks in a single framework to benefit from their inter-dependencies. Our model
features deep learning architectures to exploit the global structures of the
input sentences as well as the semantic consistencies between the terms and the
definitions, thereby improving the quality of the representation vectors for
DE. Besides the joint inference between sentence classification and sequential
labeling, the proposed model is fundamentally different from the prior work for
DE in that the prior work has only employed the local structures of the input
sentences (i.e., word-to-word relations), and not yet considered the semantic
consistencies between terms and definitions. In order to implement these novel
ideas, our model presents a multi-task learning framework that employs graph
convolutional neural networks and predicts the dependency paths between the
terms and the definitions. We also seek to enforce the consistency between the
representations of the terms and definitions both globally (i.e., increasing
semantic consistency between the representations of the entire sentences and
the terms/definitions) and locally (i.e., promoting the similarity between the
representations of the terms and the definitions).
| 2,020 | Computation and Language |
Improving Slot Filling by Utilizing Contextual Information | Slot Filling (SF) is one of the sub-tasks of Spoken Language Understanding
(SLU) which aims to extract semantic constituents from a given natural language
utterance. It is formulated as a sequence labeling task. Recently, it has been
shown that contextual information is vital for this task. However, existing
models employ contextual information in a restricted manner, e.g., using
self-attention. Such methods fail to distinguish the effects of the context on
the word representation and the word label. To address this issue, in this
paper, we propose a novel method to incorporate contextual information at
two different levels, i.e., the representation level and the task-specific (i.e.,
label) level. Our extensive experiments on three benchmark SF datasets show
the effectiveness of our model, which achieves new state-of-the-art results on
all three datasets.
| 2,020 | Computation and Language |
Coreference Resolution as Query-based Span Prediction | In this paper, we present an accurate and extensible approach for the
coreference resolution task. We formulate the problem as a span prediction
task, like in machine reading comprehension (MRC): A query is generated for
each candidate mention using its surrounding context, and a span prediction
module is employed to extract the text spans of the coreferences within the
document using the generated query. This formulation comes with the following
key advantages: (1) The span prediction strategy provides the flexibility of
retrieving mentions left out at the mention proposal stage; (2) In the MRC
framework, encoding the mention and its context explicitly in a query makes it
possible to have a deep and thorough examination of cues embedded in the
context of coreferent mentions; and (3) A plethora of existing MRC datasets can
be used for data augmentation to improve the model's generalization capability.
Experiments demonstrate significant performance boost over previous models,
with 87.5 (+2.5) F1 score on the GAP benchmark and 83.1 (+3.5) F1 score on the
CoNLL-2012 benchmark.
| 2,020 | Computation and Language |
Focus on What's Informative and Ignore What's not: Communication
Strategies in a Referential Game | Research in multi-agent cooperation has shown that artificial agents are able
to learn to play a simple referential game while developing a shared lexicon.
This lexicon is not easy to analyze, as it does not show many properties of a
natural language. In a simple referential game with two neural network-based
agents, we analyze the object-symbol mapping, trying to understand what kind of
strategy was used to develop the emergent language. We see that, when the
environment is uniformly distributed, the agents rely on a random subset of
features to describe the objects. When we modify the objects making one feature
non-uniformly distributed,the agents realize it is less informative and start
to ignore it, and, surprisingly, they make a better use of the remaining
features. This interesting result suggests that more natural, less uniformly
distributed environments might aid in spurring the emergence of better-behaved
languages.
| 2,019 | Computation and Language |
Deepening Hidden Representations from Pre-trained Language Models | Transformer-based pre-trained language models have proven to be effective for
learning contextualized language representation. However, current approaches
only take advantage of the output of the encoder's final layer when fine-tuning
on downstream tasks. We argue that taking only a single layer's output restricts
the power of the pre-trained representation. Thus we deepen the representation
learned by the model by fusing the hidden representation in terms of an
explicit HIdden Representation Extractor (HIRE), which automatically absorbs
the complementary representation with respect to the output from the final
layer. Utilizing RoBERTa as the backbone encoder, our proposed improvement over
the pre-trained models is shown to be effective on multiple natural language
understanding tasks and helps our model rival the state-of-the-art models
on the GLUE benchmark.
| 2,020 | Computation and Language |
Data Diversification: A Simple Strategy For Neural Machine Translation | We introduce Data Diversification: a simple but effective strategy to boost
neural machine translation (NMT) performance. It diversifies the training data
by using the predictions of multiple forward and backward models and then
merging them with the original dataset on which the final NMT model is trained.
Our method is applicable to all NMT models. It does not require extra
monolingual data like back-translation, nor does it add more computations and
parameters like ensembles of models. Our method achieves state-of-the-art BLEU
scores of 30.7 and 43.7 in the WMT'14 English-German and English-French
translation tasks, respectively. It also substantially improves on 8 other
translation tasks: 4 IWSLT tasks (English-German and English-French) and 4
low-resource translation tasks (English-Nepali and English-Sinhala). We
demonstrate that our method is more effective than knowledge distillation and
dual learning, it exhibits strong correlation with ensembles of models, and it
trades perplexity off for better BLEU score. We have released our source code
at https://github.com/nxphi47/data_diversification
| 2,020 | Computation and Language |
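The Data Diversification recipe above is essentially a data-pipeline operation: translate each side of the bitext with several forward and backward models and merge the synthetic pairs with the original data. A schematic sketch follows; the `forward_models`/`backward_models` callables are hypothetical stand-ins for trained NMT systems.

```python
from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (source sentence, target sentence)

def diversify(bitext: List[Pair],
              forward_models: List[Callable[[str], str]],
              backward_models: List[Callable[[str], str]]) -> List[Pair]:
    """Data Diversification in outline: pair original sources with the
    outputs of forward models, pair the outputs of backward models with
    original targets, then merge everything with the original bitext."""
    augmented: List[Pair] = list(bitext)
    for src, tgt in bitext:
        for fwd in forward_models:          # source -> synthetic target
            augmented.append((src, fwd(src)))
        for bwd in backward_models:         # synthetic source <- target
            augmented.append((bwd(tgt), tgt))
    return augmented

# Toy stand-ins for trained forward/backward NMT models (hypothetical).
fake_fwd = [lambda s: s.upper(), lambda s: s + " !"]
fake_bwd = [lambda t: t.lower()]
print(len(diversify([("hallo welt", "hello world")], fake_fwd, fake_bwd)))  # 1 + 2 + 1 = 4
```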
Language coverage and generalization in RNN-based continuous sentence
embeddings for interacting agents | Continuous sentence embeddings using recurrent neural networks (RNNs), where
variable-length sentences are encoded into fixed-dimensional vectors, are often
the main building blocks of architectures applied to language tasks such as
dialogue generation. While it is known that those embeddings are able to learn
some structures of language (e.g. grammar) in a purely data-driven manner,
there is very little work on the objective evaluation of their ability to cover
the whole language space and to generalize to sentences outside the language
bias of the training data. Using a manually designed context-free grammar (CFG)
to generate a large-scale dataset of sentences related to the content of
realistic 3D indoor scenes, we evaluate the language coverage and
generalization abilities of the most common continuous sentence embeddings
based on RNNs. We also propose a new embedding method based on arithmetic
coding, AriEL, that is not data-driven and that efficiently encodes in
continuous space any sentence from the CFG. We find that RNN-based embeddings
underfit the training data and cover only a small subset of the language
defined by the CFG. They also fail to learn the underlying CFG and generalize
to unbiased sentences from that same CFG. We find that AriEL provides an
insightful baseline.
| 2,019 | Computation and Language |
Infusing Knowledge into the Textual Entailment Task Using Graph
Convolutional Networks | Textual entailment is a fundamental task in natural language processing. Most
approaches for solving the problem use only the textual content present in
training data. A few approaches have shown that information from external
knowledge sources like knowledge graphs (KGs) can add value, in addition to the
textual content, by providing background knowledge that may be critical for a
task. However, the proposed models do not fully exploit the information in the
usually large and noisy KGs, and it is not clear how it can be effectively
encoded to be useful for entailment. We present an approach that complements
text-based entailment models with information from KGs by (1) using
Personalized PageRank to generate contextual subgraphs with reduced noise and
(2) encoding these subgraphs using graph convolutional networks to capture KG
structure. Our technique extends the capability of text models exploiting
structural and semantic information found in KGs. We evaluate our approach on
multiple textual entailment datasets and show that the use of external
knowledge helps improve prediction accuracy. This is particularly evident in
the challenging BreakingNLI dataset, where we see an absolute improvement of
5-20% over multiple text-based entailment models.
| 2,019 | Computation and Language |
Unsupervised Cross-lingual Representation Learning at Scale | This paper shows that pretraining multilingual language models at scale leads
to significant performance gains for a wide range of cross-lingual transfer
tasks. We train a Transformer-based masked language model on one hundred
languages, using more than two terabytes of filtered CommonCrawl data. Our
model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a
variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI,
+13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs
particularly well on low-resource languages, improving 15.7% in XNLI accuracy
for Swahili and 11.4% for Urdu over previous XLM models. We also present a
detailed empirical analysis of the key factors that are required to achieve
these gains, including the trade-offs between (1) positive transfer and
capacity dilution and (2) the performance of high and low resource languages at
scale. Finally, we show, for the first time, the possibility of multilingual
modeling without sacrificing per-language performance; XLM-R is very
competitive with strong monolingual models on the GLUE and XNLI benchmarks. We
will make our code, data and models publicly available.
| 2,020 | Computation and Language |
Seq2Emo for Multi-label Emotion Classification Based on Latent Variable
Chains Transformation | Emotion detection in text is an important task in NLP and is essential in
many applications. Most of the existing methods treat this task as a problem of
single-label multi-class text classification. To predict multiple emotions for
one instance, most of the existing works regard it as a general Multi-label
Classification (MLC) problem, where they usually either apply a manually
determined threshold on the last output layer of their neural network models or
train multiple binary classifiers and make predictions in the fashion of
one-vs-all. However, compared to labels in general MLC datasets, the number
of emotion categories is much smaller (fewer than 10). Additionally, emotions
tend to be more strongly correlated with each other. For example, humans usually
do not express "joy" and "anger" at the same time, but are very likely to
express "joy" and "love" together. Given this intuition, in this paper,
we propose a Latent Variable Chain (LVC) transformation and a tailored model,
Seq2Emo, that not only naturally predicts multiple emotion labels but also
takes into consideration their correlations. We perform the experiments on the
existing multi-label emotion datasets as well as on our newly collected
datasets. The results show that our model compares favorably with existing
state-of-the-art methods.
| 2,019 | Computation and Language |
Multi-Paragraph Reasoning with Knowledge-enhanced Graph Neural Network | Multi-paragraph reasoning is indispensable for open-domain question answering
(OpenQA), yet it receives little attention in current OpenQA systems. In this
work, we propose a knowledge-enhanced graph neural network (KGNN), which
performs reasoning over multiple paragraphs with entities. To explicitly
capture the entities' relatedness, KGNN utilizes relational facts in a knowledge
graph to build the entity graph. The experimental results show that KGNN
outperforms baseline methods in both the distractor and full-wiki settings of
the HotpotQA dataset. Our further analysis illustrates that KGNN is effective
and robust with more retrieved paragraphs.
| 2,019 | Computation and Language |
Guiding Non-Autoregressive Neural Machine Translation Decoding with
Reordering Information | Non-autoregressive neural machine translation (NAT) generates each target
word in parallel and has achieved promising inference acceleration. However,
existing NAT models still have a big gap in translation quality compared to
autoregressive neural machine translation models due to the enormous decoding
space. To address this problem, we propose a novel NAT framework named
ReorderNAT which explicitly models the reordering information in the decoding
procedure. We further introduce deterministic and non-deterministic decoding
strategies that utilize reordering information to narrow the decoding search
space in our proposed ReorderNAT. Experimental results on various widely-used
datasets show that our proposed model achieves better performance compared to
existing NAT models, and even achieves comparable translation quality as
autoregressive translation models with a significant speedup.
| 2,020 | Computation and Language |
Unsupervised Opinion Summarization as Copycat-Review Generation | Opinion summarization is the task of automatically creating summaries that
reflect subjective information expressed in multiple documents, such as product
reviews. While the majority of previous work has focused on the extractive
setting, i.e., selecting fragments from input reviews to produce a summary, we
let the model generate novel sentences and hence produce abstractive summaries.
Recent progress in summarization has seen the development of supervised models
which rely on large quantities of document-summary pairs. Since such training
data is expensive to acquire, we instead consider the unsupervised setting, in
other words, we do not use any summaries in training. We define a generative
model for a review collection which capitalizes on the intuition that when
generating a new review given a set of other reviews of a product, we should be
able to control the "amount of novelty" going into the new review or,
equivalently, vary the extent to which it deviates from the input. At test
time, when generating summaries, we force the novelty to be minimal, and
produce a text reflecting consensus opinions. We capture this intuition by
defining a hierarchical variational autoencoder model. Both individual reviews
and the products they correspond to are associated with stochastic latent
codes, and the review generator ("decoder") has direct access to the text of
input reviews through the pointer-generator mechanism. Experiments on Amazon
and Yelp datasets show that setting the review's latent code to its mean at
test time allows the model to produce fluent and coherent summaries reflecting
common opinions.
| 2,020 | Computation and Language |
Hierarchical Contextualized Representation for Named Entity Recognition | Named entity recognition (NER) models are typically based on the architecture
of Bi-directional LSTM (BiLSTM). The constraints of their sequential nature and
the modeling of a single input prevent the full utilization of global information
from a larger scope, not only in the entire sentence, but also in the entire
document (dataset). In this paper, we address these two deficiencies and
propose a model augmented with hierarchical contextualized representation:
sentence-level representation and document-level representation. At the
sentence level, we take the different contributions of words in a single sentence
into consideration to enhance the sentence representation learned from an
independent BiLSTM via a label embedding attention mechanism. At the document level,
the key-value memory network is adopted to record the document-aware
information for each unique word which is sensitive to similarity of context
information. Our two-level hierarchical contextualized representations are
fused with each input token embedding and corresponding hidden state of BiLSTM,
respectively. The experimental results on three benchmark NER datasets
(CoNLL-2003 and Ontonotes 5.0 English datasets, CoNLL-2002 Spanish dataset)
show that we establish new state-of-the-art results.
| 2,019 | Computation and Language |
Enriching Conversation Context in Retrieval-based Chatbots | Work on retrieval-based chatbots, like most sequence pair matching tasks, can
be divided into Cross-encoders that perform word matching over the pair, and
Bi-encoders that encode the pair separately. The former has better performance;
however, since candidate responses cannot be encoded offline, it is also much
slower. Lately, multi-layer transformer architectures pre-trained as language
models have been used to great effect on a variety of natural language
processing and information retrieval tasks. Recent work has shown that these
language models can be used in text-matching scenarios to create Bi-encoders
that perform almost as well as Cross-encoders while having a much faster
inference speed. In this paper, we expand upon this work by developing a
sequence matching architecture that utilizes the entire training set as a
makeshift knowledge base during inference. We perform detailed experiments
demonstrating that this architecture can be used to further improve Bi-encoder
performance while still maintaining a relatively high inference speed.
| 2,019 | Computation and Language |
Learning to Answer by Learning to Ask: Getting the Best of GPT-2 and
BERT Worlds | Automatic question generation aims at the generation of questions from a
context, with the corresponding answers being sub-spans of the given passage.
Whereas most methods rely on heuristic rules to generate questions, more
recently neural network approaches have also been proposed. In
this work, we propose a variant of the self-attention Transformer
architecture to generate meaningful and diverse questions. To this end,
we propose an easy-to-use model consisting of the conjunction of the
Transformer decoder GPT-2 with the Transformer encoder BERT for the
downstream task of question answering. The model is trained in an end-to-end
fashion, where the language model is trained to produce a question-answer-aware
input representation that facilitates generating an answer-focused question.
Our results for neural question generation from text on the SQuAD 1.1 dataset
suggest that our method can produce semantically correct and diverse
questions. Additionally, we assessed the performance of our proposed method for
the downstream task of question answering. The analysis shows that our proposed
generation & answering collaboration framework relatively improves both tasks
and is particularly powerful in the semi-supervised setup. The results further
suggest a robust and comparably lean pipeline facilitating question generation
in the small-data regime.
| 2,019 | Computation and Language |
Guiding Variational Response Generator to Exploit Persona | Leveraging persona information of users in Neural Response Generators (NRG)
to perform personalized conversations has been considered as an attractive and
important topic in the research of conversational agents over the past few
years. Despite of the promising progresses achieved by recent studies in this
field, persona information tends to be incorporated into neural networks in the
form of user embeddings, with the expectation that the persona can be involved
via the End-to-End learning. This paper proposes to adopt the
personality-related characteristics of human conversations into variational
response generators, by designing a specific conditional variational
autoencoder based deep model with two new regularization terms added to the
loss function, so as to guide the optimization towards the direction of
generating both persona-aware and relevant responses. Besides, to reasonably
evaluate the performances of various persona modeling approaches, this paper
further presents three direct persona-oriented metrics from different
perspectives. The experimental results show that our proposed methodology
can notably improve the performance of persona-aware response generation, and
the metrics are reasonable to evaluate the results.
| 2,020 | Computation and Language |
SentiLARE: Sentiment-Aware Language Representation Learning with
Linguistic Knowledge | Most of the existing pre-trained language representation models neglect to
consider the linguistic knowledge of texts, which can promote language
understanding in NLP tasks. To benefit the downstream tasks in sentiment
analysis, we propose a novel language representation model called SentiLARE,
which introduces word-level linguistic knowledge including part-of-speech tag
and sentiment polarity (inferred from SentiWordNet) into pre-trained models. We
first propose a context-aware sentiment attention mechanism to acquire the
sentiment polarity of each word with its part-of-speech tag by querying
SentiWordNet. Then, we devise a new pre-training task called label-aware masked
language model to construct knowledge-aware language representation.
Experiments show that SentiLARE obtains new state-of-the-art performance on a
variety of sentiment analysis tasks.
| 2,020 | Computation and Language |
Dimensional Emotion Detection from Categorical Emotion | We present a model to predict fine-grained emotions along the continuous
dimensions of valence, arousal, and dominance (VAD) with a corpus with
categorical emotion annotations. Our model is trained by minimizing the EMD
(Earth Mover's Distance) loss between the predicted VAD score distribution and
the categorical emotion distributions sorted along VAD, and it can
simultaneously classify the emotion categories and predict the VAD scores for a
given sentence. We use pre-trained RoBERTa-Large and fine-tune on three
different corpora with categorical labels and evaluate on EmoBank corpus with
VAD scores. We show that our approach reaches comparable performance to that of
the state-of-the-art classifiers in categorical emotion classification and
shows significant positive correlations with the ground truth VAD scores. Also,
further training with supervision of VAD labels leads to improved performance
especially when the dataset is small. We also present examples of predictions of
appropriate emotion words that are not part of the original annotations.
| 2,021 | Computation and Language |
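The entry above minimizes an EMD loss between a predicted distribution over emotion categories sorted along a VAD dimension and the target distribution. For one-dimensional histograms, EMD reduces to the L1 distance between cumulative distributions, which the PyTorch sketch below implements; the batch shapes and toy targets are illustrative.

```python
import torch

def emd_loss_1d(pred_probs: torch.Tensor, target_probs: torch.Tensor) -> torch.Tensor:
    """1-D Earth Mover's Distance between two distributions whose bins are
    emotion categories sorted along a VAD dimension. For 1-D histograms,
    EMD reduces to the L1 distance between cumulative distributions."""
    cdf_pred = torch.cumsum(pred_probs, dim=-1)
    cdf_target = torch.cumsum(target_probs, dim=-1)
    return torch.mean(torch.sum(torch.abs(cdf_pred - cdf_target), dim=-1))

# Toy usage: 2 sentences, 4 emotion categories sorted by (say) valence.
logits = torch.randn(2, 4, requires_grad=True)
pred = torch.softmax(logits, dim=-1)
target = torch.tensor([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 0.5, 0.5, 0.0]])
loss = emd_loss_1d(pred, target)
loss.backward()   # usable as a training objective
print(loss.item())
```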
Optimizing the Factual Correctness of a Summary: A Study of Summarizing
Radiology Reports | Neural abstractive summarization models are able to generate summaries which
have high overlap with human references. However, existing models are not
optimized for factual correctness, a critical metric in real-world
applications. In this work, we develop a general framework where we evaluate
the factual correctness of a generated summary by fact-checking it
automatically against its reference using an information extraction module. We
further propose a training strategy which optimizes a neural summarization
model with a factual correctness reward via reinforcement learning. We apply
the proposed method to the summarization of radiology reports, where factual
correctness is a key requirement. On two separate datasets collected from
hospitals, we show via both automatic and human evaluation that the proposed
approach substantially improves the factual correctness and overall quality of
outputs over a competitive neural summarization system, producing radiology
summaries that approach the quality of human-authored ones.
| 2,020 | Computation and Language |
Word Embedding Algorithms as Generalized Low Rank Models and their
Canonical Form | Word embedding algorithms produce very reliable feature representations of
words that are used by neural network models across a constantly growing
multitude of NLP tasks. As such, it is imperative for NLP practitioners to
understand how their word representations are produced, and why they are so
impactful.
The present work presents the Simple Embedder framework, generalizing the
state-of-the-art existing word embedding algorithms (including Word2vec (SGNS)
and GloVe) under the umbrella of generalized low rank models. We derive that
both of these algorithms attempt to produce embedding inner products that
approximate pointwise mutual information (PMI) statistics in the corpus. Once
cast as Simple Embedders, comparison of these models reveals that these
successful embedders all resemble a straightforward maximum likelihood estimate
(MLE) of the PMI parametrized by the inner product (between embeddings). This
MLE induces our proposed novel word embedding model, Hilbert-MLE, as the
canonical representative of the Simple Embedder framework.
We empirically compare these algorithms with evaluations on 17 different
datasets. Hilbert-MLE consistently observes second-best performance on every
extrinsic evaluation (news classification, sentiment analysis, POS-tagging, and
supersense tagging), while the first-best model varies depending on the task.
Moreover, Hilbert-MLE consistently observes the least variance in results with
respect to the random initialization of the weights in bidirectional LSTMs. Our
empirical results demonstrate that Hilbert-MLE is a very consistent word
embedding algorithm that can be reliably integrated into existing NLP systems
to obtain high-quality results.
| 2,019 | Computation and Language |
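The Simple Embedder framework above centers on embedding inner products that approximate corpus PMI statistics. The short sketch below computes such a PMI matrix from a word-by-context co-occurrence count matrix; the smoothing constant and toy counts are assumptions.

```python
import numpy as np

def pmi_matrix(cooc: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Pointwise mutual information from a word-by-context co-occurrence
    count matrix: PMI(w, c) = log( P(w, c) / (P(w) P(c)) )."""
    total = cooc.sum()
    p_wc = cooc / total
    p_w = p_wc.sum(axis=1, keepdims=True)
    p_c = p_wc.sum(axis=0, keepdims=True)
    return np.log((p_wc + eps) / (p_w * p_c + eps))

# Tiny toy co-occurrence counts for 3 words x 3 context words.
cooc = np.array([[10., 2., 0.],
                 [ 2., 8., 1.],
                 [ 0., 1., 6.]])
print(np.round(pmi_matrix(cooc), 2))
```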
Unsupervised Domain Adaptation of Contextual Embeddings for Low-Resource
Duplicate Question Detection | Answering questions is a primary goal of many conversational systems or
search products. While most current systems have focused on answering questions
against structured databases or curated knowledge graphs, on-line community
forums or frequently asked questions (FAQ) lists offer an alternative source of
information for question answering systems. Automatic duplicate question
detection (DQD) is a key technology needed for question answering systems to
utilize existing online forums like StackExchange. Existing annotations of
duplicate questions in such forums are community-driven, making them sparse or
even completely missing for many domains. Therefore, it is important to
transfer knowledge from related domains and tasks. Recently, contextual
embedding models such as BERT have been outperforming many baselines by
transferring self-supervised information to downstream tasks. In this paper, we
apply BERT to DQD and advance it by unsupervised adaptation to StackExchange
domains using self-supervised learning. We show the effectiveness of this
adaptation for low-resource settings, where little or no training data is
available from the target domain. Our analysis reveals that unsupervised BERT
domain adaptation on even small amounts of data boosts the performance of BERT.
| 2,019 | Computation and Language |
Towards Domain Adaptation from Limited Data for Question Answering Using
Deep Neural Networks | This paper explores domain adaptation for enabling question answering (QA)
systems to answer questions posed against documents in new specialized domains.
Current QA systems using deep neural network (DNN) technology have proven
effective for answering general purpose factoid-style questions. However,
current general purpose DNN models tend to be ineffective for use in new
specialized domains. This paper explores the effectiveness of transfer learning
techniques for this problem. In experiments on question answering in the
automobile manual domain we demonstrate that standard DNN transfer learning
techniques work surprisingly well in adapting DNN models to a new domain using
limited amounts of annotated training data in the new domain.
| 2,019 | Computation and Language |
Open Domain Web Keyphrase Extraction Beyond Language Modeling | This paper studies keyphrase extraction in real-world scenarios where
documents are from diverse domains and have variant content quality. We curate
and release OpenKP, a large scale open domain keyphrase extraction dataset with
nearly one hundred thousand web documents and expert keyphrase annotations. To
handle the variations of domain and content quality, we develop BLING-KPE, a
neural keyphrase extraction model that goes beyond language understanding using
visual presentations of documents and weak supervision from search queries.
Experimental results on OpenKP confirm the effectiveness of BLING-KPE and the
contributions of its neural architecture, visual features, and search log weak
supervision. Zero-shot evaluations on DUC-2001 demonstrate the improved
generalization ability of learning from the open domain data compared to a
specific domain.
| 2,019 | Computation and Language |
SIMMC: Situated Interactive Multi-Modal Conversational Data Collection
And Evaluation Platform | As digital virtual assistants become ubiquitous, it becomes increasingly
important to understand the situated behaviour of users as they interact with
these assistants. To this end, we introduce SIMMC, an extension to ParlAI for
multi-modal conversational data collection and system evaluation. SIMMC
simulates an immersive setup, where crowd workers are able to interact with
environments constructed in AI Habitat or Unity while engaging in a
conversation. The assistant in SIMMC can be a crowd worker or an Artificial
Intelligence (AI) agent. This enables both (i) a multi-player / Wizard-of-Oz
setting for data collection and (ii) a single-player mode for model / system
evaluation. We plan to open-source a situated conversational data-set collected
on this platform for the Conversational AI research community.
| 2,020 | Computation and Language |
Multi-Domain Neural Machine Translation with Word-Level Adaptive
Layer-wise Domain Mixing | Many multi-domain neural machine translation (NMT) models achieve knowledge
transfer by enforcing one encoder to learn shared embedding across domains.
However, this design lacks adaptation to individual domains. To overcome this
limitation, we propose a novel multi-domain NMT model using individual modules
for each domain, on which we apply word-level, adaptive and layer-wise domain
mixing. We first observe that words in a sentence are often related to multiple
domains. Hence, we assume each word has a domain proportion, which indicates
its domain preference. Then word representations are obtained by mixing their
embedding in individual domains based on their domain proportions. We show this
can be achieved by carefully designing multi-head dot-product attention modules
for different domains, and eventually taking weighted averages of their
parameters by word-level layer-wise domain proportions. Through this, we can
achieve effective domain knowledge sharing, and capture fine-grained
domain-specific knowledge as well. Our experiments show that our proposed model
outperforms existing ones in several NMT tasks.
| 2,020 | Computation and Language |
Grounded Conversation Generation as Guided Traverses in Commonsense
Knowledge Graphs | Human conversations naturally evolve around related concepts and scatter to
multi-hop concepts. This paper presents a new conversation generation model,
ConceptFlow, which leverages commonsense knowledge graphs to explicitly model
conversation flows. By grounding conversations to the concept space,
ConceptFlow represents the potential conversation flow as traverses in the
concept space along commonsense relations. The traverse is guided by graph
attentions in the concept graph, moving towards more meaningful directions in
the concept space, in order to generate more semantic and informative
responses. Experiments on Reddit conversations demonstrate ConceptFlow's
effectiveness over previous knowledge-aware conversation models and GPT-2 based
models while using 70% fewer parameters, confirming the advantage of explicitly
modeling conversation structures. All source code of this work is available
at https://github.com/thunlp/ConceptFlow.
| 2,020 | Computation and Language |
Using Interlinear Glosses as Pivot in Low-Resource Multilingual Machine
Translation | We demonstrate a new approach to Neural Machine Translation (NMT) for
low-resource languages using a ubiquitous linguistic resource, Interlinear
Glossed Text (IGT). IGT represents a non-English sentence as a sequence of
English lemmas and morpheme labels. As such, it can serve as a pivot or
interlingua for NMT. Our contribution is four-fold. Firstly, we pool IGT for
1,497 languages in ODIN (54,545 glosses) and 70,918 glosses in Arapaho and
train a gloss-to-target NMT system from IGT to English, with a BLEU score of
25.94. We introduce a multilingual NMT model that tags all glossed text with
gloss-source language tags and train a universal system with shared attention
across 1,497 languages. Secondly, we use the IGT gloss-to-target translation as
a key step in an English-Turkish MT system trained on only 865 lines from ODIN.
Thirdly, we present five metrics for evaluating extremely low-resource
translation when BLEU is no longer sufficient and evaluate the Turkish
low-resource system using BLEU and also using accuracy of matching nouns,
verbs, agreement, tense, and spurious repetition, showing large improvements.
| 2,020 | Computation and Language |
Making the Best Use of Review Summary for Sentiment Analysis | Sentiment analysis provides a useful overview of customer review contents.
Many review websites allow a user to enter a summary in addition to a full
review. Intuitively, summary information may give additional benefit for review
sentiment analysis. In this paper, we conduct a study to exploit methods for
better use of summary information. We start by finding out that the sentimental
signal distribution of a review and that of its corresponding summary are in
fact complementary to each other. We thus explore various architectures to
better guide the interactions between the two and propose a
hierarchically-refined review-centric attention model. Empirical results show
that our review-centric model can make better use of user-written summaries for
review sentiment analysis, and is also more effective compared to existing
methods when the user summary is replaced with summary generated by an
automatic summarization system.
| 2,020 | Computation and Language |
Understanding Knowledge Distillation in Non-autoregressive Machine
Translation | Non-autoregressive machine translation (NAT) systems predict a sequence of
output tokens in parallel, achieving substantial improvements in generation
speed compared to autoregressive models. Existing NAT models usually rely on
the technique of knowledge distillation, which creates the training data from a
pretrained autoregressive model for better performance. Knowledge distillation
is empirically useful, leading to large gains in accuracy for NAT models, but
the reason for this success has, as of yet, been unclear. In this paper, we
first design systematic experiments to investigate why knowledge distillation
is crucial to NAT training. We find that knowledge distillation can reduce the
complexity of data sets and help NAT to model the variations in the output
data. Furthermore, a strong correlation is observed between the capacity of an
NAT model and the optimal complexity of the distilled data for the best
translation quality. Based on these findings, we further propose several
approaches that can alter the complexity of data sets to improve the
performance of NAT models. We achieve the state-of-the-art performance for the
NAT-based models, and close the gap with the autoregressive baseline on WMT14
En-De benchmark.
| 2,021 | Computation and Language |
Porous Lattice-based Transformer Encoder for Chinese NER | Incorporating lattices into character-level Chinese named entity recognition
is an effective method to exploit explicit word information. Recent works
extend recurrent and convolutional neural networks to model lattice inputs.
However, due to the DAG structure or the variable-sized potential word set for
lattice inputs, these models prevent the convenient use of batched computation,
resulting in serious inefficiency. In this paper, we propose a porous
lattice-based transformer encoder for Chinese named entity recognition, which
is capable of better exploiting GPU parallelism and batching the computation
owing to the mask mechanism in the Transformer. We first investigate the
lattice-aware self-attention coupled with relative position representations to
explore effective word information in the lattice structure. Besides, to
strengthen the local dependencies among neighboring tokens, we propose a novel
porous structure during self-attentional computation processing, in which every
two non-neighboring tokens are connected through a shared pivot node.
Experimental results on four datasets show that our model performs up to 9.47
times faster than state-of-the-art models, while being roughly on par with them
in performance. The source code of this paper can be obtained from
https://github.com/xxx/xxx.
| 2,020 | Computation and Language |
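The porous structure described above lets neighboring tokens attend to each other directly while non-neighboring tokens are connected only through a shared pivot node. The sketch below builds the corresponding boolean attention mask; the window size and the convention of appending the pivot as the last position are assumptions.

```python
import numpy as np

def porous_attention_mask(seq_len: int, window: int = 1) -> np.ndarray:
    """Boolean attention mask for a 'porous' self-attention layer:
    token i may attend to token j only if |i - j| <= window, or if one of
    them is the shared pivot node (appended as the last position).
    Returns an (L+1) x (L+1) matrix where True means attention is allowed."""
    size = seq_len + 1                # + shared pivot node
    pivot = seq_len
    mask = np.zeros((size, size), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True         # local neighbourhood
        mask[i, pivot] = True         # every token can reach the pivot...
        mask[pivot, i] = True         # ...and the pivot can reach every token
    mask[pivot, pivot] = True
    return mask

print(porous_attention_mask(5, window=1).astype(int))
```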
SubCharacter Chinese-English Neural Machine Translation with Wubi
encoding | Neural machine translation (NMT) is one of the best methods for understanding
the differences in semantic rules between two languages. Especially for
Indo-European languages, subword-level models have achieved impressive results.
However, when the translation task involves Chinese, semantic granularity
remains at the word and character level, so a more fine-grained translation
model of Chinese is still needed. In this paper, we introduce a simple
and effective method for Chinese translation at the sub-character level. Our
approach uses the Wubi input method to encode Chinese characters as sequences of
Latin letters; byte-pair encoding (BPE) is then applied. Our method for Chinese-English translation
eliminates the need for a complicated word segmentation algorithm during
preprocessing. Furthermore, our method allows for sub-character-level neural
translation based on recurrent neural network (RNN) architecture, without
preprocessing. The empirical results show that for Chinese-English translation
tasks, our sub-character-level model has a comparable BLEU score to the subword
model, despite having a much smaller vocabulary. Additionally, the small
vocabulary is highly advantageous for NMT model compression.
| 2,019 | Computation and Language |
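The preprocessing step above maps each Chinese character to its Wubi keystroke sequence before learning BPE over the resulting Latin-letter text. A minimal sketch follows; the three-entry `WUBI` table is a hypothetical excerpt with illustrative codes only, and a real system would load the full table and then run a standard BPE learner such as subword-nmt.

```python
# A tiny, hypothetical excerpt of a Wubi code table; the codes shown are
# illustrative only, and the real table covers all common Chinese characters.
WUBI = {"中": "khk", "国": "lgyi", "人": "ww"}

def wubi_encode(sentence: str, sep: str = " ") -> str:
    """Replace each Chinese character with its Wubi keystroke sequence,
    leaving unknown characters untouched. Standard BPE would then be
    learned over the resulting Latin-letter text."""
    return sep.join(WUBI.get(ch, ch) for ch in sentence)

print(wubi_encode("中国人"))   # -> "khk lgyi ww"
```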
Query-bag Matching with Mutual Coverage for Information-seeking
Conversations in E-commerce | Information-seeking conversation system aims at satisfying the information
needs of users through conversations. Text matching between a user query and a
pre-collected question is an important part of the information-seeking
conversation in E-commerce. In practical scenarios, a set of questions
often corresponds to the same answer. Naturally, these questions can form a bag.
Learning the matching between a user query and a bag directly may improve the
conversation performance, denoted as query-bag matching. Inspired by this
observation, we propose a query-bag matching model which mainly utilizes the mutual
coverage between query and bag and measures the degree to which the content of the
query is mentioned by the bag, and vice versa. In addition, the learned word-level bag
representation helps find the main points of a bag at a fine granularity
and promotes the query-bag matching performance. Experiments on two
datasets show the effectiveness of our model.
| 2,019 | Computation and Language |
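The mutual-coverage signal above can be illustrated with simple word overlap: how much of the query is mentioned by the bag, and how much of each bag question is mentioned by the query. The sketch below computes both directions; whitespace tokenization and the averaging over bag questions are simplifying assumptions.

```python
def coverage(covered_tokens, covering_tokens):
    """Fraction of tokens in `covered_tokens` that also appear in
    `covering_tokens` -- a simple word-overlap notion of coverage."""
    covered = set(covered_tokens)
    if not covered:
        return 0.0
    return len(covered & set(covering_tokens)) / len(covered)

def mutual_coverage(query, bag):
    """Mutual coverage between a query and a bag of questions:
    how much of the query the bag mentions, and vice versa."""
    bag_tokens = [tok for question in bag for tok in question]
    q2b = coverage(query, bag_tokens)                       # query covered by the bag
    b2q = sum(coverage(q, query) for q in bag) / max(len(bag), 1)
    return q2b, b2q

query = "how do i return a damaged item".split()
bag = ["return policy for damaged items".split(),
       "can i get a refund for a broken item".split()]
print(mutual_coverage(query, bag))
```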
Incremental Text-to-Speech Synthesis with Prefix-to-Prefix Framework | Text-to-speech synthesis (TTS) has witnessed rapid progress in recent years,
where neural methods became capable of producing audios with high naturalness.
However, these efforts still suffer from two types of latencies: (a) the {\em
computational latency} (synthesizing time), which grows linearly with the
sentence length even with parallel approaches, and (b) the {\em input latency}
in scenarios where the input text is incrementally generated (such as in
simultaneous translation, dialog generation, and assistive technologies). To
reduce these latencies, we devise the first neural incremental TTS approach
based on the recently proposed prefix-to-prefix framework. We synthesize speech
in an online fashion, playing a segment of audio while generating the next,
resulting in an $O(1)$ rather than $O(n)$ latency.
| 2,020 | Computation and Language |
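The prefix-to-prefix idea above amounts to synthesizing audio for a word that lags the newest input word by a fixed lookahead, so per-word work is constant. The schematic loop below captures that control flow; `synthesize_segment`, `play`, and the one-word lookahead are hypothetical stand-ins for a real incremental TTS stack.

```python
from typing import Callable, Iterable, List

def incremental_tts(words: Iterable[str],
                    synthesize_segment: Callable[[List[str], int], bytes],
                    play: Callable[[bytes], None],
                    lookahead: int = 1) -> None:
    """Prefix-to-prefix style streaming loop (schematic): after receiving
    word k, synthesize the audio segment for word k - lookahead using only
    the prefix seen so far, and hand it to the audio sink. Per-word work is
    constant, so latency is O(1) rather than O(n)."""
    prefix: List[str] = []
    for k, word in enumerate(words):
        prefix.append(word)
        if k >= lookahead:
            play(synthesize_segment(prefix, k - lookahead))
    # Flush the remaining lookahead words once the input ends.
    for j in range(max(len(prefix) - lookahead, 0), len(prefix)):
        play(synthesize_segment(prefix, j))

# Toy stand-ins: "synthesis" just echoes which word it rendered.
incremental_tts("this is streamed speech".split(),
                synthesize_segment=lambda p, i: f"<audio:{p[i]}>".encode(),
                play=lambda seg: print(seg.decode()))
```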
S2ORC: The Semantic Scholar Open Research Corpus | We introduce S2ORC, a large corpus of 81.1M English-language academic papers
spanning many academic disciplines. The corpus consists of rich metadata, paper
abstracts, resolved bibliographic references, as well as structured full text
for 8.1M open access papers. Full text is annotated with automatically-detected
inline mentions of citations, figures, and tables, each linked to their
corresponding paper objects. In S2ORC, we aggregate papers from hundreds of
academic publishers and digital archives into a unified source, and create the
largest publicly-available collection of machine-readable academic text to
date. We hope this resource will facilitate research and development of tools
and tasks for text mining over academic text.
| 2,020 | Computation and Language |
Transition-Based Deep Input Linearization | Traditional methods for deep NLG adopt pipeline approaches comprising stages
such as constructing syntactic input, predicting function words, linearizing
the syntactic input and generating the surface forms. Though easier to
visualize, pipeline approaches suffer from error propagation. In addition,
information available across modules cannot be leveraged by all modules. We
construct a transition-based model to jointly perform linearization, function
word prediction and morphological generation, which considerably improves upon
the accuracy compared to a pipelined baseline system. On a standard deep input
linearization shared task, our system achieves the best results reported so
far.
| 2,019 | Computation and Language |
Enhancing Pre-trained Chinese Character Representation with Word-aligned
Attention | Most Chinese pre-trained models take character as the basic unit and learn
representation according to character's external contexts, ignoring the
semantics expressed in the word, which is the smallest meaningful unit in
Chinese. Hence, we propose a novel word-aligned attention to exploit explicit
word information, which is complementary to various character-based Chinese
pre-trained language models. Specifically, we devise a pooling mechanism to
align the character-level attention to the word level and propose to alleviate
the potential issue of segmentation error propagation by multi-source
information fusion. As a result, word and character information are explicitly
integrated at the fine-tuning procedure. Experimental results on five Chinese
NLP benchmark tasks demonstrate that our model could bring another significant
gain over several pre-trained models.
| 2,020 | Computation and Language |
Improving Grammatical Error Correction with Machine Translation Pairs | We propose a novel data synthesis method to generate diverse error-corrected
sentence pairs for improving grammatical error correction, which is based on a
pair of machine translation models of different qualities (i.e., poor and
good). The poor translation model resembles the ESL (English as a second
language) learner and tends to generate translations of low quality in terms of
fluency and grammatical correctness, while the good translation model generally
generates fluent and grammatically correct translations. We build the poor and
good translation models with a phrase-based statistical machine translation model
(with decreased language model weight) and a neural machine translation model,
respectively. By taking the pairs of their translations of the same sentences in
a bridge language as error-corrected sentence pairs, we can construct unlimited
pseudo parallel data. Our approach is capable of generating diverse
fluency-improving patterns without being limited by the pre-defined rule set
and the seed error-corrected data. Experimental results demonstrate the
effectiveness of our approach and show that it can be combined with other
synthetic data sources to yield further improvements.
| 2,020 | Computation and Language |
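The data synthesis above pairs a poor and a good translation of the same bridge-language sentence to obtain pseudo error-corrected data. The sketch below outlines that pipeline; `poor_translate` and `good_translate` are hypothetical stand-ins for the weakened SMT and strong NMT systems.

```python
from typing import Callable, Iterable, List, Tuple

def synthesize_gec_pairs(bridge_sentences: Iterable[str],
                         poor_translate: Callable[[str], str],
                         good_translate: Callable[[str], str]
                         ) -> List[Tuple[str, str]]:
    """Build pseudo error-corrected pairs: the poor model's output plays the
    role of learner English, the good model's output the corrected version."""
    pairs = []
    for sent in bridge_sentences:
        erroneous = poor_translate(sent)   # e.g. SMT with a weakened language model
        corrected = good_translate(sent)   # e.g. a strong NMT model
        if erroneous != corrected:         # keep only pairs that actually differ
            pairs.append((erroneous, corrected))
    return pairs

# Hypothetical stand-ins for the two translation systems.
poor = lambda s: "he go to school yesterday"
good = lambda s: "he went to school yesterday"
print(synthesize_gec_pairs(["er ging gestern zur Schule"], poor, good))
```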
Teacher-Student Training for Robust Tacotron-based TTS | While neural end-to-end text-to-speech (TTS) is superior to conventional
statistical methods in many ways, the exposure bias problem in the
autoregressive models remains an issue to be resolved. The exposure bias
problem arises from the mismatch between the training and inference process,
which results in unpredictable performance for out-of-domain test data at
run-time. To overcome this, we propose a teacher-student training scheme for
Tacotron-based TTS by introducing a distillation loss function in addition to
the feature loss function. We first train a Tacotron2-based TTS model by always
providing natural speech frames to the decoder; this model serves as the teacher.
We then train another Tacotron2-based model as a student model, of which the
decoder takes the predicted speech frames as input, similar to how the decoder
works during run-time inference. With the distillation loss, the student model
learns the output probabilities from the teacher model, which is called
knowledge distillation. Experiments show that our proposed training scheme
consistently improves the voice quality for out-of-domain test data both in
Chinese and English systems.
| 2,020 | Computation and Language |
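The teacher-student scheme above combines a feature loss against ground-truth frames with a distillation term that pulls the free-running student toward the teacher-forced teacher. The PyTorch sketch below uses L1 and MSE frame losses as a simplification of the paper's objective; the tensor shapes and the mixing weight `alpha` are assumptions.

```python
import torch
import torch.nn.functional as F

def teacher_student_loss(student_frames: torch.Tensor,
                         teacher_frames: torch.Tensor,
                         target_frames: torch.Tensor,
                         alpha: float = 0.5) -> torch.Tensor:
    """Combined objective for the student decoder (a sketch):
      feature loss      -- student frames vs. ground-truth mel frames
      distillation loss -- student frames vs. frames predicted by the
                           teacher-forced teacher model.
    All tensors are (batch, time, n_mels)."""
    feature_loss = F.l1_loss(student_frames, target_frames)
    distill_loss = F.mse_loss(student_frames, teacher_frames.detach())
    return feature_loss + alpha * distill_loss

# Toy shapes: batch of 2 utterances, 100 frames, 80 mel bins.
student = torch.randn(2, 100, 80, requires_grad=True)
teacher = torch.randn(2, 100, 80)
target = torch.randn(2, 100, 80)
teacher_student_loss(student, teacher, target).backward()
```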
Explicit Pairwise Word Interaction Modeling Improves Pretrained
Transformers for English Semantic Similarity Tasks | In English semantic similarity tasks, classic word embedding-based approaches
explicitly model pairwise "interactions" between the word representations of a
sentence pair. Transformer-based pretrained language models disregard this
notion, instead modeling pairwise word interactions globally and implicitly
through their self-attention mechanism. In this paper, we hypothesize that
introducing an explicit, constrained pairwise word interaction mechanism to
pretrained language models improves their effectiveness on semantic similarity
tasks. We validate our hypothesis using BERT on four tasks in semantic textual
similarity and answer sentence selection. We demonstrate consistent
improvements in quality by adding an explicit pairwise word interaction module
to BERT.
| 2,019 | Computation and Language |
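A constrained pairwise interaction module of the kind hypothesized above can be sketched as a cosine-similarity matrix between the two sentences' token vectors, pooled and fused with the [CLS] representation. The module below is such a sketch; the max/mean pooling and the way it feeds the classifier are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairwiseInteraction(nn.Module):
    """Explicit pairwise word-interaction module (a sketch): build a
    cosine-similarity matrix between the token vectors of sentence A and
    sentence B, pool it, and fuse it with the usual [CLS] representation."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size + 2, 2)  # +2 pooled interaction features

    def forward(self, cls_vec, tokens_a, tokens_b):
        # tokens_a: (B, La, H), tokens_b: (B, Lb, H)
        a = F.normalize(tokens_a, dim=-1)
        b = F.normalize(tokens_b, dim=-1)
        sim = torch.bmm(a, b.transpose(1, 2))              # (B, La, Lb) cosine matrix
        pooled = torch.stack([sim.amax(dim=(1, 2)),        # strongest interaction
                              sim.mean(dim=(1, 2))], -1)   # average interaction
        return self.classifier(torch.cat([cls_vec, pooled], dim=-1))

module = PairwiseInteraction(hidden_size=768)
logits = module(torch.randn(4, 768), torch.randn(4, 12, 768), torch.randn(4, 9, 768))
print(logits.shape)   # torch.Size([4, 2])
```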
Dependency and Span, Cross-Style Semantic Role Labeling on PropBank and
NomBank | The latest developments in neural semantic role labeling (SRL) have shown
great performance improvements with both the dependency and span
formalisms/styles. Although the two styles share many similarities in
linguistic meaning and computation, most previous studies focus on a single
style. In this paper, we define a new cross-style semantic role label
convention and propose a new cross-style joint optimization model designed
around the most basic linguistic meaning of a semantic role, providing a
solution to make the results of the two styles more comparable and allowing
both formalisms of SRL to benefit from their natural connections in both
linguistics and computation. Our model learns a general semantic argument
structure and is capable of outputting in either style. Additionally, we
propose a syntax-aided method to uniformly enhance the learning of both
dependency and span representations. Experiments show that the proposed methods
are effective on both span and dependency SRL benchmarks.
| 2,021 | Computation and Language |
Dice Loss for Data-imbalanced NLP Tasks | Many NLP tasks such as tagging and machine reading comprehension are faced
with the severe data imbalance issue: negative examples significantly outnumber
positive examples, and the huge number of background examples (or easy-negative
examples) overwhelms the training. The most commonly used cross entropy (CE)
criterion is actually an accuracy-oriented objective, and thus creates a
discrepancy between training and test: at training time, each training instance
contributes equally to the objective function, while at test time F1 score
concerns more about positive examples. In this paper, we propose to use dice
loss in replacement of the standard cross-entropy objective for data-imbalanced
NLP tasks. Dice loss is based on the Sorensen-Dice coefficient or Tversky
index, which attaches similar importance to false positives and false
negatives, and is more immune to the data-imbalance issue. To further alleviate
the dominating influence from easy-negative examples in training, we propose to
associate training examples with dynamically adjusted weights to deemphasize
easy-negative examples. Theoretical analysis shows that this strategy narrows
down the gap between the F1 score in evaluation and the dice loss in training.
With the proposed training objective, we observe significant performance boost
on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve
SOTA results on CTB5, CTB6 and UD1.4 for the part of speech tagging task; SOTA
results on CoNLL03, OntoNotes5.0, MSRA and OntoNotes4.0 for the named entity
recognition task; along with competitive results on the tasks of machine
reading comprehension and paraphrase identification.
| 2,020 | Computation and Language |
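A minimal version of the proposed objective for binary token classification is sketched below: a Sorensen-Dice-style loss whose (1 - p)^gamma factor down-weights easy examples. The exact weighting schedule, smoothing constant, and toy data are assumptions rather than the paper's precise formulation.

```python
import torch

def self_adjusting_dice_loss(probs: torch.Tensor,
                             targets: torch.Tensor,
                             gamma: float = 1.0,
                             eps: float = 1.0) -> torch.Tensor:
    """Dice-style loss for binary (token-level) classification. `probs` are
    positive-class probabilities, `targets` are 0/1 labels; the (1 - p)^gamma
    factor down-weights easy examples, in the spirit of the paper's
    dynamically adjusted weights (the exact schedule here is an assumption)."""
    weight = (1.0 - probs) ** gamma
    p = weight * probs
    intersection = 2.0 * p * targets + eps
    union = p + targets + eps
    return torch.mean(1.0 - intersection / union)

probs = torch.sigmoid(torch.randn(8, requires_grad=True))
targets = torch.tensor([1., 0., 0., 1., 0., 0., 0., 1.])
self_adjusting_dice_loss(probs, targets).backward()
```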
Improving Joint Training of Inference Networks and Structured Prediction
Energy Networks | Deep energy-based models are powerful, but pose challenges for learning and
inference (Belanger and McCallum, 2016). Tu and Gimpel (2018) developed an
efficient framework for energy-based models by training "inference networks" to
approximate structured inference instead of using gradient descent. However,
their alternating optimization approach suffers from instabilities during
training, requiring additional loss terms and careful hyperparameter tuning. In
this paper, we contribute several strategies to stabilize and improve this
joint training of energy functions and inference networks for structured
prediction. We design a compound objective to jointly train both cost-augmented
and test-time inference networks along with the energy function. We propose
joint parameterizations for the inference networks that encourage them to
capture complementary functionality during learning. We empirically validate
our strategies on two sequence labeling tasks, showing easier paths to strong
performance than prior work, as well as further improvements with global energy
terms.
| 2,020 | Computation and Language |
Contextualized Sparse Representations for Real-Time Open-Domain Question
Answering | Open-domain question answering can be formulated as a phrase retrieval
problem, in which we can expect huge scalability and speed benefits but often
suffer from low accuracy due to the limitation of existing phrase
representation models. In this paper, we aim to improve the quality of each
phrase embedding by augmenting it with a contextualized sparse representation
(Sparc). Unlike previous sparse vectors that are term-frequency-based (e.g.,
tf-idf) or directly learned (with only a few thousand dimensions), we leverage
rectified self-attention to indirectly learn sparse vectors in n-gram
vocabulary space. By augmenting the previous phrase retrieval model (Seo et
al., 2019) with Sparc, we show 4%+ improvement in CuratedTREC and SQuAD-Open.
Our CuratedTREC score is even better than the best known retrieve & read model
with at least 45x faster inference speed.
| 2,020 | Computation and Language |
The LIG system for the English-Czech Text Translation Task of IWSLT 2019 | In this paper, we present our submission for the English to Czech Text
Translation Task of IWSLT 2019. Our system aims to study how pre-trained
language models, used as input embeddings, can improve a specialized machine
translation system trained on little data. Therefore, we implemented a
Transformer-based encoder-decoder neural system which is able to use the output
of a pre-trained language model as input embeddings, and we compared its
performance under three configurations: 1) without any pre-trained language
model (constrained), 2) using a language model trained on the monolingual parts
of the allowed English-Czech data (constrained), and 3) using a language model
trained on a large quantity of external monolingual data (unconstrained). We
used BERT as the external pre-trained language model (configuration 3), and the BERT
architecture for training our own language model (configuration 2). Regarding
the training data, we trained our MT system on a small quantity of parallel
text: one set only consists of the provided MuST-C corpus, and the other set
consists of the MuST-C corpus and the News Commentary corpus from WMT. We
observed that using the external pre-trained BERT improves the scores of our
system by +0.8 to +1.5 of BLEU on our development set, and +0.97 to +1.94 of
BLEU on the test set. However, using our own language model trained only on the
allowed parallel data seems to improve the machine translation performances
only when the system is trained on the smallest dataset.
| 2,019 | Computation and Language |
Transformation of Dense and Sparse Text Representations | Sparsity is regarded as a desirable property of representations, especially
in terms of explanation. However, its usage has been limited due to the gap
with dense representations. Most NLP research progress in recent years is
based on dense representations. Thus the desirable property of sparsity cannot
be leveraged. Inspired by the Fourier transform, in this paper, we propose a
novel Semantic Transformation method to bridge the dense and sparse spaces,
which can help NLP research shift from the dense space to the sparse space
or jointly use both spaces. The key idea of the proposed approach is to use
a Forward Transformation to map dense representations to sparse
representations. Useful operations can then be performed over the sparse
representations in the sparse space, and the sparse representations can be
used directly for downstream tasks such as text classification and natural
language inference. A Backward Transformation can then map the processed
sparse representations back to dense representations. Experiments on text
classification and natural language inference tasks show that the proposed
Semantic Transformation is effective.
| 2,019 | Computation and Language |
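As a rough illustration of a forward and backward transformation between a dense and a sparse space, the snippet below trains a linear map plus ReLU (with an L1 penalty) as the forward step and a linear map back as the backward step, in an autoencoder style. The paper's exact objective and architecture may differ; every hyperparameter here is an assumption.

```python
# Illustrative sketch only: overcomplete sparse codes via linear + ReLU + L1,
# with a learned backward map, trained to reconstruct the dense vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

dense_dim, sparse_dim = 64, 512
forward_t = nn.Linear(dense_dim, sparse_dim)    # "Forward Transformation"
backward_t = nn.Linear(sparse_dim, dense_dim)   # "Backward Transformation"
opt = torch.optim.Adam(list(forward_t.parameters()) + list(backward_t.parameters()), lr=1e-3)

dense = torch.randn(256, dense_dim)             # e.g. sentence-encoder outputs

for step in range(200):
    sparse = F.relu(forward_t(dense))           # rectification keeps many dims at zero
    recon = backward_t(sparse)
    loss = F.mse_loss(recon, dense) + 1e-3 * sparse.abs().mean()   # reconstruction + sparsity
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    sparse = F.relu(forward_t(dense))
    print("fraction of zeros:", (sparse == 0).float().mean().item())
    # The sparse codes could feed a downstream classifier, or be mapped back:
    print("reconstruction error:", F.mse_loss(backward_t(sparse), dense).item())
```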
How Can BERT Help Lexical Semantics Tasks? | Contextualized embeddings such as BERT can serve as strong input
representations to NLP tasks, outperforming their static embedding
counterparts such as skip-gram, CBOW, and GloVe. However, such embeddings are
dynamic, calculated according to a sentence-level context, which limits their
use in lexical semantics tasks. We address this issue by using dynamic
embeddings as word representations when training static embeddings, thereby
leveraging their strong representation power to disambiguate contextual
information. Results show that this method leads to improvements over
traditional static embeddings on a range of lexical semantics tasks, obtaining
the best reported results on seven datasets.
| 2,020 | Computation and Language |
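One simple way to turn dynamic embeddings into static ones, in the spirit of the abstract above, is to pool the contextual vectors of each word type over a corpus. The sketch below does exactly that with a randomly initialized stand-in encoder so it runs without downloading BERT; the averaging step is an illustrative assumption, not the paper's training procedure.

```python
# Toy sketch: derive one static vector per word type from dynamic (contextual)
# vectors by averaging. A random encoder stands in for BERT.
import torch
import torch.nn as nn
from collections import defaultdict

vocab = {"the": 0, "bank": 1, "of": 2, "river": 3, "money": 4}
sentences = [["the", "bank", "of", "the", "river"],
             ["the", "bank", "of", "money"]]

d_model = 32
encoder = nn.Sequential(
    nn.Embedding(len(vocab), d_model),
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=1))

sums = defaultdict(lambda: torch.zeros(d_model))
counts = defaultdict(int)
with torch.no_grad():
    for sent in sentences:
        ids = torch.tensor([[vocab[w] for w in sent]])
        ctx = encoder(ids)[0]            # (len(sent), d_model), context-dependent vectors
        for w, vec in zip(sent, ctx):
            sums[w] += vec               # accumulate dynamic vectors per word type
            counts[w] += 1

static = {w: sums[w] / counts[w] for w in sums}   # one static vector per type
print(static["bank"].shape, len(static), "word types")
```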
BERTs of a feather do not generalize together: Large variability in
generalization across models with similar test set performance | If the same neural network architecture is trained multiple times on the same
dataset, will it make similar linguistic generalizations across runs? To study
this question, we fine-tuned 100 instances of BERT on the Multi-genre Natural
Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which
tests syntactic generalization in natural language inference. On the MNLI
development set, the behavior of all instances was remarkably consistent, with
accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models
varied widely in their generalization performance. For example, on the simple
case of subject-object swap (e.g., determining that "the doctor visited the
lawyer" does not entail "the lawyer visited the doctor"), accuracy ranged from
0.00% to 66.2%. Such variation is likely due to the presence of many local
minima that are equally attractive to a low-bias learner such as a neural
network; decreasing the variability may therefore require models with stronger
inductive biases.
| 2,020 | Computation and Language |
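The evaluation protocol described above can be mimicked on synthetic data: train the same architecture from several random seeds, then compare the spread of in-distribution accuracy with the spread on a differently constructed generalization set. The snippet below is purely illustrative (10 seeds, a toy classifier, synthetic rules standing in for MNLI and HANS) and says nothing about the magnitude of variation reported in the paper.

```python
# Illustrative protocol only: measure run-to-run variability across seeds.
import torch
import torch.nn as nn

def make_split(n, rule, seed):
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, 10, generator=g)
    return x, rule(x)

iid_rule = lambda x: (x[:, 0] + 0.1 * x[:, 1] > 0).long()   # stands in for MNLI-like data
gen_rule = lambda x: (x[:, 0] > 0).long()                   # stands in for a HANS-like set

train_x, train_y = make_split(2000, iid_rule, seed=0)
dev_x, dev_y = make_split(500, iid_rule, seed=1)
gen_x, gen_y = make_split(500, gen_rule, seed=2)

dev_accs, gen_accs = [], []
for seed in range(10):                                      # 100 instances in the paper
    torch.manual_seed(seed)
    model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        loss = nn.functional.cross_entropy(model(train_x), train_y)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        dev_accs.append((model(dev_x).argmax(1) == dev_y).float().mean().item())
        gen_accs.append((model(gen_x).argmax(1) == gen_y).float().mean().item())

print("in-distribution accuracy range:", min(dev_accs), max(dev_accs))
print("generalization accuracy range:", min(gen_accs), max(gen_accs))
```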
Probing Contextualized Sentence Representations with Visual Awareness | We present a universal framework to model contextualized sentence
representations with visual awareness, motivated by the need to overcome the
shortcomings of manually annotated multimodal parallel data. For each
sentence, we first retrieve a diverse set of images from a shared cross-modal
embedding space, which is pre-trained on a large collection of text-image pairs.
The texts and images are then encoded by a Transformer encoder and a
convolutional neural network, respectively. The two sequences of
representations are further fused by a simple and effective attention layer.
The architecture can easily be applied to text-only natural language processing
tasks without manually annotating multimodal parallel corpora. We apply the
proposed method to three tasks, namely neural machine translation, natural
language inference, and sequence labeling, and experimental results verify its
effectiveness.
| 2,019 | Computation and Language |
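A minimal sketch of the fusion step alone is given below: text token states attend over a handful of retrieved image features through a single attention layer, and the result is added back to the text states. The encoders are random stand-ins, the cross-modal retrieval is not shown, and all shapes are assumptions.

```python
# Hedged sketch of attention-based text-image fusion; not the paper's code.
import torch
import torch.nn as nn

d_model, n_tokens, n_images = 64, 8, 5

text_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
image_cnn = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, d_model))
fusion = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

tokens = torch.randn(1, n_tokens, d_model)      # embedded sentence
images = torch.randn(n_images, 3, 32, 32)       # images retrieved for that sentence

text_states = text_encoder(tokens)              # (1, n_tokens, d_model)
image_feats = image_cnn(images).unsqueeze(0)    # (1, n_images, d_model)

# Text queries attend over image keys/values; the output is added back to the
# textual states to give a visually aware sentence representation.
visual_ctx, _ = fusion(text_states, image_feats, image_feats)
fused = text_states + visual_ctx
print(fused.shape)                              # torch.Size([1, 8, 64])
```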
Blockwise Self-Attention for Long Document Understanding | We present BlockBERT, a lightweight and efficient BERT model for better
modeling long-distance dependencies. Our model extends BERT by introducing
sparse block structures into the attention matrix to reduce both memory
consumption and training/inference time, which also enables attention heads to
capture either short- or long-range contextual information. We conduct
experiments on language model pre-training and several benchmark question
answering datasets with various paragraph lengths. BlockBERT uses 18.7-36.1%
less memory and 12.0-25.1% less time to train. At test time, BlockBERT saves
27.8% of inference time while achieving comparable, and sometimes better,
prediction accuracy than an advanced BERT-based model, RoBERTa.
| 2,020 | Computation and Language |
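The core idea of a blockwise attention mask can be sketched in a few lines: split the sequence into blocks and let each head attend only within blocks chosen by a head-specific block permutation. This is a simplified single-layer illustration, not the BlockBERT implementation; block sizes and permutations here are arbitrary choices.

```python
# Simplified sketch of blockwise-masked attention across two heads.
import torch
import torch.nn.functional as F

seq_len, n_blocks, d_head = 12, 3, 16
block = seq_len // n_blocks

def block_mask(perm):
    """True where attention is allowed: query block i may attend key block perm[i]."""
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    for i, j in enumerate(perm):
        mask[i * block:(i + 1) * block, j * block:(j + 1) * block] = True
    return mask

# e.g. one head keeps the identity (local) pattern, another a shifted pattern.
masks = [block_mask([0, 1, 2]), block_mask([1, 2, 0])]

q = torch.randn(len(masks), seq_len, d_head)    # (heads, seq, d_head)
k = torch.randn(len(masks), seq_len, d_head)
v = torch.randn(len(masks), seq_len, d_head)

outputs = []
for h, mask in enumerate(masks):
    scores = q[h] @ k[h].T / d_head ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))   # disallow off-block attention
    outputs.append(F.softmax(scores, dim=-1) @ v[h])

print(torch.stack(outputs).shape)               # torch.Size([2, 12, 16])
```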