Titles | Abstracts | Years | Categories |
---|---|---|---|
Revealing the Dark Secrets of BERT | BERT-based architectures currently give state-of-the-art performance on many
NLP tasks, but little is known about the exact mechanisms that contribute to
their success. In the current work, we focus on the interpretation of
self-attention, which is one of the fundamental underlying components of BERT.
Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we
propose a methodology and carry out a qualitative and quantitative analysis
of the information encoded by individual BERT heads. Our findings suggest
that there is a limited set of attention patterns that are repeated across
different heads, indicating overall model overparametrization. While
different heads consistently use the same attention patterns, they have varying
impact on performance across different tasks. We show that manually disabling
attention in certain heads leads to a performance improvement over the regular
fine-tuned BERT models.
| 2019 | Computation and Language |
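The head-ablation experiment above can be illustrated with a small sketch: in a standard multi-head self-attention layer, a per-head 0/1 mask zeroes out selected heads before the value aggregation. This is a minimal PyTorch illustration of the general technique, with illustrative shapes and initialization, not the authors' code.

```python
import torch
import torch.nn.functional as F

def multi_head_attention(x, w_q, w_k, w_v, num_heads, head_mask=None):
    """Self-attention with optional per-head ablation.

    x: (batch, seq_len, d_model); w_q/w_k/w_v: (d_model, d_model)
    head_mask: (num_heads,) tensor of 0/1 values; 0 disables a head entirely.
    """
    batch, seq_len, d_model = x.shape
    d_head = d_model // num_heads

    def split_heads(t):
        return t.view(batch, seq_len, num_heads, d_head).transpose(1, 2)

    q, k, v = split_heads(x @ w_q), split_heads(x @ w_k), split_heads(x @ w_v)
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5   # (batch, heads, seq, seq)
    attn = F.softmax(scores, dim=-1)
    if head_mask is not None:                          # zero out ablated heads
        attn = attn * head_mask.view(1, num_heads, 1, 1)
    out = attn @ v                                     # (batch, heads, seq, d_head)
    return out.transpose(1, 2).reshape(batch, seq_len, d_model)

# Disable head 3 of a 12-head layer and compare task metrics before/after.
d_model, heads = 768, 12
x = torch.randn(2, 16, d_model)
w = [torch.randn(d_model, d_model) * 0.02 for _ in range(3)]
mask = torch.ones(heads); mask[3] = 0.0
y = multi_head_attention(x, *w, num_heads=heads, head_mask=mask)
```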
Training Optimus Prime, M.D.: Generating Medical Certification Items by
Fine-Tuning OpenAI's gpt2 Transformer Model | This article describes new results from an application of transformer-based
language models to automated item generation (AIG), an area of ongoing interest
in the domain of certification testing as well as in educational measurement
and psychological testing. OpenAI's pre-trained 345M-parameter gpt2 language
model was retrained using the public domain text mining set of PubMed articles
and subsequently used to generate item stems (case vignettes) as well as
distractor proposals for multiple-choice items. This case study shows promise
and produces draft text that can be used by human item writers as input for
authoring. Future experiments with more recent transformer models (such as
Grover, TransformerXL) using existing item pools are expected to improve
results further and to facilitate the development of assessment materials.
| 2019 | Computation and Language |
Jointly Modeling Hierarchical and Horizontal Features for Relational
Triple Extraction | Recent works on relational triple extraction have shown the superiority of
jointly extracting entities and relations over the pipelined extraction manner.
However, most existing joint models fail to balance the modeling of entity
features and the joint decoding strategy, and thus the interactions between the
entity level and triple level are not fully investigated. In this work, we
first introduce the hierarchical dependency and horizontal commonality between
the two levels, and then propose an entity-enhanced dual tagging framework that
enables the triple extraction (TE) task to utilize such interactions with
self-learned entity features through an auxiliary entity extraction (EE) task,
without breaking the joint decoding of relational triples. Specifically, we
align the EE and TE tasks in a position-wise manner by formulating them as two
sequence labeling problems with identical encoder-decoder structure. Moreover,
the two tasks are organized in a carefully designed parameter sharing setting
so that the learned entity features could be naturally shared via multi-task
learning. Empirical experiments on the NYT benchmark demonstrate the
effectiveness of the proposed framework compared to the state-of-the-art
methods.
| 2022 | Computation and Language |
Hierarchically-Refined Label Attention Network for Sequence Labeling | CRF has been used as a powerful model for statistical sequence labeling. For
neural sequence labeling, however, BiLSTM-CRF does not always lead to better
results compared with BiLSTM-softmax local classification. This can be because
the simple Markov label transition model of CRF does not give much information
gain over strong neural encoding. For better representing label sequences, we
investigate a hierarchically-refined label attention network, which explicitly
leverages label embeddings and captures potential long-term label dependency by
giving each word incrementally refined label distributions with hierarchical
attention. Results on POS tagging, NER and CCG supertagging show that the
proposed model not only improves the overall tagging accuracy with a similar
number of parameters, but also significantly speeds up training and testing
compared to BiLSTM-CRF.
| 2019 | Computation and Language |
Gender Representation in French Broadcast Corpora and Its Impact on ASR
Performance | This paper analyzes the gender representation in four major corpora of French
broadcast. Since these corpora are widely used within the speech processing
community, they are primary material for training automatic speech
recognition (ASR) systems. As gender bias has been highlighted in numerous
natural language processing (NLP) applications, we study the impact of the
gender imbalance in TV and radio broadcast on the performance of an ASR system.
This analysis shows that women are under-represented in our data in terms of
speakers and speech turns. We introduce the notion of speaker role to refine
our analysis and find that women are even less represented within the Anchor
category, which corresponds to prominent speakers. The disparity of available
data between the two genders causes performance to decrease for women. However,
this global trend can be counterbalanced for speakers who regularly speak in
the media, when a sufficient amount of data is available.
| 2019 | Computation and Language |
Deep Learning Based Chatbot Models | A conversational agent (chatbot) is a piece of software that is able to
communicate with humans using natural language. Modeling conversation is an
important task in natural language processing and artificial intelligence.
While chatbots can be used for various tasks, in general they have to
understand users' utterances and provide responses that are relevant to the
problem at hand.
In my work, I conduct an in-depth survey of recent literature, examining over
70 publications related to chatbots published in the last 3 years. Then, I
proceed to make the argument that the very nature of the general conversation
domain demands approaches that are different from current state-of-the-art
architectures. Based on several examples from the literature I show why current
chatbot models fail to take into account enough priors when generating
responses and how this affects the quality of the conversation. In the case of
chatbots, these priors can be outside sources of information that the
conversation is conditioned on like the persona or mood of the conversers. In
addition to presenting the reasons behind this problem, I propose several ideas
on how it could be remedied.
The next section focuses on adapting the very recent Transformer model, which
is currently state-of-the-art in neural machine translation, to the chatbot
domain. I first present experiments with the vanilla model, using
conversations extracted from the Cornell Movie-Dialog Corpus. Secondly, I
augment the model with some of my ideas regarding the issues of encoder-decoder
architectures. More specifically, I feed additional features into the model
like mood or persona together with the raw conversation data. Finally, I
conduct a detailed analysis of how the vanilla model performs on conversational
data by comparing it to previous chatbot models and how the additional features
affect the quality of the generated responses.
| 2019 | Computation and Language |
Neural Poetry: Learning to Generate Poems using Syllables | Motivated by recent progress on machine learning-based models that
learn artistic styles, in this paper we focus on the problem of poem
generation. This is a challenging task in which the machine has to capture the
linguistic features that strongly characterize a certain poet, as well as the
semantics of the poet's production, which are influenced by his personal
experiences and literary background. Since poetry is constructed using
syllables, which regulate the form and structure of poems, we propose a
syllable-based neural language model, and we describe a poem generation
mechanism that is designed around the poet style, automatically selecting the
most representative generations. The poetic work of a target author is usually
not enough to successfully train modern deep neural networks, so we propose a
multi-stage procedure that exploits non-poetic works of the same author, and
also other publicly available huge corpora to learn syntax and grammar of the
target language. We focus on the Italian poet Dante Alighieri, widely famous
for his Divine Comedy. A quantitative and qualitative experimental analysis of
the generated tercets is reported, involving expert judges with a strong
background in humanistic studies. The generated tercets are frequently
considered to be real by a generic population of judges, with a relative
difference of 56.25\% with respect to the ones actually authored by Dante, and
expert judges perceived Dante's style and rhymes in the generated text.
| 2019 | Computation and Language |
Neural Text Summarization: A Critical Evaluation | Text summarization aims at compressing long documents into a shorter form
that conveys the most important parts of the original document. Despite
increased interest in the community and notable research effort, progress on
benchmark datasets has stagnated. We critically evaluate key ingredients of the
current research setup: datasets, evaluation metrics, and models, and highlight
three primary shortcomings: 1) automatically collected datasets leave the task
underconstrained and may contain noise detrimental to training and evaluation,
2) current evaluation protocol is weakly correlated with human judgment and
does not account for important characteristics such as factual correctness, 3)
models overfit to layout biases of current datasets and offer limited diversity
in their outputs.
| 2019 | Computation and Language |
Well-Read Students Learn Better: On the Importance of Pre-training
Compact Models | Recent developments in natural language representations have been accompanied
by large and expensive models that leverage vast amounts of general-domain text
through self-supervised pre-training. Due to the cost of applying such models
to down-stream tasks, several model compression techniques on pre-trained
language representations have been proposed (Sun et al., 2019; Sanh, 2019).
However, surprisingly, the simple baseline of just pre-training and fine-tuning
compact models has been overlooked. In this paper, we first show that
pre-training remains important in the context of smaller architectures, and
fine-tuning pre-trained compact models can be competitive with more elaborate
methods proposed in concurrent work. Starting with pre-trained compact models,
we then explore transferring task knowledge from large fine-tuned models
through standard knowledge distillation. The resulting simple, yet effective
and general algorithm, Pre-trained Distillation, brings further improvements.
Through extensive experiments, we more generally explore the interaction
between pre-training and distillation under two variables that have been
under-studied: model size and properties of unlabeled task data. One surprising
observation is that they have a compound effect even when sequentially applied
on the same data. To accelerate future research, we will make our 24
pre-trained miniature BERT models publicly available.
| 2019 | Computation and Language |
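The task-knowledge transfer step above uses standard knowledge distillation. Below is a minimal sketch of the usual soft-label distillation objective, assuming teacher and student logits are already computed; the temperature and mixing weight are illustrative defaults, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Standard KD: KL between softened teacher/student distributions plus hard-label CE."""
    t = temperature
    soft = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)                      # rescale to keep gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy example: 8 examples, 3 classes.
student_logits = torch.randn(8, 3, requires_grad=True)
teacher_logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```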
Deploying Technology to Save Endangered Languages | Computer scientists working on natural language processing, native speakers
of endangered languages, and field linguists came together to discuss ways to
harness Automatic Speech Recognition, especially neural networks, to automate
annotation, speech tagging, and text parsing for endangered languages.
| 2019 | Computation and Language |
A Little Annotation does a Lot of Good: A Study in Bootstrapping
Low-resource Named Entity Recognizers | Most state-of-the-art models for named entity recognition (NER) rely on the
availability of large amounts of labeled data, making them challenging to
extend to new, lower-resourced languages. However, there are now several
proposed approaches involving either cross-lingual transfer learning, which
learns from other highly resourced languages, or active learning, which
efficiently selects effective training data based on model predictions. This
paper poses the question: given this recent progress, and limited human
annotation, what is the most effective method for efficiently creating
high-quality entity recognizers in under-resourced languages? Based on
extensive experimentation using both simulated and real human annotation, we
find a dual-strategy approach to work best, starting with a cross-lingually transferred
model, then performing targeted annotation of only uncertain entity spans in
the target language, minimizing annotator effort. Results demonstrate that
cross-lingual transfer is a powerful tool when very little data can be
annotated, but an entity-targeted annotation strategy can achieve competitive
accuracy quickly, with just one-tenth of training data.
| 2019 | Computation and Language |
Neural data-to-text generation: A comparison between pipeline and
end-to-end architectures | Traditionally, most data-to-text applications have been designed using a
modular pipeline architecture, in which non-linguistic input data is converted
into natural language through several intermediate transformations. In
contrast, recent neural models for data-to-text generation have been proposed
as end-to-end approaches, where the non-linguistic input is rendered in natural
language with far fewer explicit intermediate representations in between. This
study introduces a systematic comparison between neural pipeline and end-to-end
data-to-text approaches for the generation of text from RDF triples. Both
architectures were implemented making use of state-of-the-art deep learning
methods such as encoder-decoder Gated Recurrent Units (GRU) and the Transformer.
Automatic and human evaluations together with a qualitative analysis suggest
that having explicit intermediate steps in the generation process results in
better texts than the ones generated by end-to-end approaches. Moreover, the
pipeline models generalize better to unseen inputs. Data and code are publicly
available.
| 2019 | Computation and Language |
DAST Model: Deciding About Semantic Complexity of a Text | Measuring text complexity is an essential task in several fields and
applications (such as NLP, semantic web, smart education, etc.). The semantic
layer of text is more tacit than its syntactic structure and, as a result,
calculation of semantic complexity is more difficult than syntactic complexity.
While there are famous and powerful academic and commercial syntactic
complexity measures, the problem of measuring semantic complexity is still a
challenging one. In this paper, we introduce the DAST model, which stands for
Deciding About Semantic Complexity of a Text. DAST proposes an intuitionistic
approach to semantics that lets us have a well-defined model for the semantics
of a text and its complexity: semantics is considered as a lattice of intuitions
and, as a result, semantic complexity is defined as the result of a calculation
on this lattice. A set theoretic formal definition of semantic complexity, as a
6-tuple formal system, is provided. By using this formal system, a method for
measuring semantic complexity is presented. The evaluation of the proposed
approach is done by a set of three human-judgment experiments. The results show
that the DAST model is capable of deciding about the semantic complexity of a text.
Furthermore, the analysis of the results leads us to introduce a Markovian
model for the process of common-sense, multi-step, semantic-complexity
reasoning in people. The experimental results demonstrate that our method
outperforms the random baseline with better precision and competes with other
methods with a lower error percentage.
| 2019 | Computation and Language |
Multi-view Story Characterization from Movie Plot Synopses and Reviews | This paper considers the problem of characterizing stories by inferring
properties such as theme and style using written synopses and reviews of
movies. We experiment with a multi-label dataset of movie synopses and a tagset
representing various attributes of stories (e.g., genre, type of events). Our
proposed multi-view model encodes the synopses and reviews using hierarchical
attention and shows improvement over methods that only use synopses. Finally,
we demonstrate how we can take advantage of such a model to extract a
complementary set of story-attributes from reviews without direct supervision.
We have made our dataset and source code publicly available at
https://ritual.uh.edu/multiview-tag-2020.
| 2020 | Computation and Language |
BERT for Coreference Resolution: Baselines and Analysis | We apply BERT to coreference resolution, achieving strong improvements on the
OntoNotes (+3.9 F1) and GAP (+11.5 F1) benchmarks. A qualitative analysis of
model predictions indicates that, compared to ELMo and BERT-base, BERT-large is
particularly better at distinguishing between related but distinct entities
(e.g., President and CEO). However, there is still room for improvement in
modeling document-level context, conversations, and mention paraphrasing. Our
code and models are publicly available.
| 2019 | Computation and Language |
Automatic Text Summarization of Legal Cases: A Hybrid Approach | Manual Summarization of large bodies of text involves a lot of human effort
and time, especially in the legal domain. Lawyers spend a lot of time preparing
legal briefs of their clients' case files. Automatic Text summarization is a
constantly evolving field of Natural Language Processing (NLP), which is a
subdiscipline of artificial intelligence. In this paper, a hybrid
method for automatic text summarization of legal cases using the k-means clustering
technique and a tf-idf (term frequency-inverse document frequency) word vectorizer
is proposed. The summary generated by the proposed method is compared using
ROUGE evaluation metrics with the case summary prepared by the lawyer for
appeal in court. Further, suggestions for improving the proposed method are
also presented.
| 2019 | Computation and Language |
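A minimal sketch of the hybrid idea described above: represent sentences as tf-idf vectors, cluster them with k-means, and keep the sentence closest to each centroid as the extractive summary. The toy sentences and parameters are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def summarize(sentences, n_clusters=3):
    """Pick one representative sentence per k-means cluster of tf-idf vectors."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(sentences)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    summary_idx = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # distance of each member sentence to its cluster centroid
        dists = np.linalg.norm(X[members].toarray() - km.cluster_centers_[c], axis=1)
        summary_idx.append(members[dists.argmin()])
    return [sentences[i] for i in sorted(summary_idx)]

sentences = [
    "The appellant filed the case in 2015.",
    "The lower court dismissed the petition.",
    "The supreme court reversed the judgment.",
    "Costs were awarded to the appellant.",
    "The respondent failed to appear at the hearing.",
    "A retrial was ordered for the following year.",
]
print(summarize(sentences, n_clusters=3))
```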
Domain-Invariant Feature Distillation for Cross-Domain Sentiment
Classification | Cross-domain sentiment classification has drawn much attention in recent
years. Most existing approaches focus on learning domain-invariant
representations in both the source and target domains, while few of them pay
attention to the domain-specific information. Despite the non-transferability
of the domain-specific information, simultaneously learning domain-dependent
representations can facilitate the learning of domain-invariant
representations. In this paper, we focus on aspect-level cross-domain sentiment
classification, and propose to distill the domain-invariant sentiment features
with the help of an orthogonal domain-dependent task, i.e. aspect detection,
which is built on the aspects varying widely in different domains. We conduct
extensive experiments on three public datasets and the experimental results
demonstrate the effectiveness of our method.
| 2019 | Computation and Language |
Position-Aware Self-Attention based Neural Sequence Labeling | Sequence labeling is a fundamental task in natural language processing and
has been widely studied. Recently, RNN-based sequence labeling models have
increasingly gained attention. Despite the superior performance achieved by
learning the long short-term (i.e., successive) dependencies, the way of
sequentially processing inputs might limit the ability to capture the
non-continuous relations over tokens within a sentence. To tackle the problem,
we focus on how to effectively model successive and discrete dependencies of
each token for enhancing the sequence labeling performance. Specifically, we
propose an innovative attention-based model (called position-aware
self-attention, i.e., PSA) as well as a well-designed self-attentional context
fusion layer within a neural network architecture, to explore the positional
information of an input sequence for capturing the latent relations among
tokens. Extensive experiments on three classical tasks in sequence labeling
domain, i.e., part-of-speech (POS) tagging, named entity recognition (NER) and
phrase chunking, demonstrate that our proposed model outperforms the
state of the art without any external knowledge, in terms of various metrics.
| 2021 | Computation and Language |
Propagate-Selector: Detecting Supporting Sentences for Question
Answering via Graph Neural Networks | In this study, we propose a novel graph neural network called
propagate-selector (PS), which propagates information over sentences to
understand information that cannot be inferred when considering sentences in
isolation. First, we design a graph structure in which each node represents an
individual sentence, and some pairs of nodes are selectively connected based on
the text structure. Then, we develop an iterative attentive aggregation and a
skip-combine method in which a node interacts with its neighborhood nodes to
accumulate the necessary information. To evaluate the performance of the
proposed approaches, we conduct experiments with the standard HotpotQA dataset.
The empirical results demonstrate the superiority of our proposed approach,
which obtains the best performances, compared to the widely used
answer-selection models that do not consider the intersentential relationship.
| 2020 | Computation and Language |
Query-Based Named Entity Recognition | In this paper, we propose a new strategy for the task of named entity
recognition (NER). We cast the task as a query-based machine reading
comprehension task: e.g., the task of extracting entities labeled PER is
formalized as answering the question "which person is mentioned in the
text?". Such a strategy comes with the advantage that it solves the long-standing
issue of handling overlapping or nested entities (the same token
participating in more than one entity category) with sequence-labeling
techniques for NER. Additionally, since the query encodes informative prior
knowledge, this strategy facilitates the process of entity extraction, leading
to better performance. We evaluate the proposed model on five widely used
English and Chinese NER datasets, including MSRA, Resume, OntoNotes, ACE04
and ACE05. The proposed model sets new SOTA results on all of these datasets.
| 2019 | Computation and Language |
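A minimal sketch of casting NER as query-based machine reading comprehension, as described above: one natural-language query per entity type, answered by an extractive QA model over the passage. The query wordings, the off-the-shelf checkpoint, and the confidence threshold are illustrative assumptions, not the paper's exact setup.

```python
from transformers import pipeline  # any extractive QA model will do for the sketch

# One natural-language query per entity type (wordings are illustrative).
QUERIES = {
    "PER": "Which person is mentioned in the text?",
    "ORG": "Which organization is mentioned in the text?",
    "LOC": "Which location is mentioned in the text?",
}

def query_based_ner(context, qa_model, score_threshold=0.3):
    """Run one query per entity type; keep spans above a confidence threshold."""
    entities = []
    for label, question in QUERIES.items():
        pred = qa_model(question=question, context=context)
        if pred["score"] >= score_threshold:
            entities.append((label, pred["answer"], pred["start"], pred["end"]))
    return entities

# Checkpoint name is an illustrative public QA model, not the paper's model.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
print(query_based_ner("Tim Cook visited Apple's new campus in Cupertino.", qa))
```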
A framework for anomaly detection using language modeling, and its
applications to finance | In the finance sector, studies focused on anomaly detection are often
associated with time-series and transactional data analytics. In this paper, we
lay out the opportunities for applying anomaly and deviation detection methods
to text corpora and the challenges associated with them. We argue that language
models that use distributional semantics can play a significant role in
advancing these studies in novel directions, with new applications in risk
identification, predictive modeling, and trend analysis.
| 2019 | Computation and Language |
Release Strategies and the Social Impacts of Language Models | Large language models have a range of beneficial uses: they can assist in
prose, poetry, and programming; analyze dataset biases; and more. However,
their flexibility and generative capabilities also raise misuse concerns. This
report discusses OpenAI's work related to the release of its GPT-2 language
model. It discusses staged release, which allows time between model releases to
conduct risk and benefit analyses as model sizes increased. It also discusses
ongoing partnership-based research and provides recommendations for better
coordination and responsible publication in AI.
| 2019 | Computation and Language |
Adversarial Domain Adaptation for Machine Reading Comprehension | In this paper, we focus on unsupervised domain adaptation for Machine Reading
Comprehension (MRC), where the source domain has a large amount of labeled
data, while only unlabeled passages are available in the target domain. To this
end, we propose an Adversarial Domain Adaptation framework (AdaMRC), where
($i$) pseudo questions are first generated for unlabeled passages in the target
domain, and then ($ii$) a domain classifier is incorporated into an MRC model
to predict which domain a given passage-question pair comes from. The
classifier and the passage-question encoder are jointly trained using
adversarial learning to enforce domain-invariant representation learning.
Comprehensive evaluations demonstrate that our approach ($i$) is generalizable
to different MRC models and datasets, ($ii$) can be combined with pre-trained
large-scale language models (such as ELMo and BERT), and ($iii$) can be
extended to semi-supervised learning.
| 2019 | Computation and Language |
Open Event Extraction from Online Text using a Generative Adversarial
Network | To extract the structured representations of open-domain events, Bayesian
graphical models have made some progress. However, these approaches typically
assume that all words in a document are generated from a single event. While
this may be true for short text such as tweets, such an assumption does not
generally hold for long text such as news articles. Moreover, Bayesian
graphical models often rely on Gibbs sampling for parameter inference which may
take a long time to converge. To address these limitations, we propose an event
extraction model based on Generative Adversarial Nets, called
Adversarial-neural Event Model (AEM). AEM models an event with a Dirichlet
prior and uses a generator network to capture the patterns underlying latent
events. A discriminator is used to distinguish documents reconstructed from the
latent events and the original documents. A byproduct of the discriminator is
that the features generated by the learned discriminator network allow the
visualization of the extracted events. Our model has been evaluated on two
Twitter datasets and a news article dataset. Experimental results show that our
model outperforms the baseline approaches on all the datasets, with more
significant improvements on the news article dataset, where an increase
of 15\% in F-measure is observed.
| 2019 | Computation and Language |
Don't Just Scratch the Surface: Enhancing Word Representations for
Korean with Hanja | We propose a simple yet effective approach for improving Korean word
representations using additional linguistic annotation (i.e. Hanja). We employ
cross-lingual transfer learning in training word representations by leveraging
the fact that Hanja is closely related to Chinese. We evaluate the intrinsic
quality of representations learned through our approach using the word analogy
and similarity tests. In addition, we demonstrate their effectiveness on
several downstream tasks, including a novel Korean news headline generation
task.
| 2019 | Computation and Language |
Multi-task Learning for Low-resource Second Language Acquisition
Modeling | Second language acquisition (SLA) modeling aims to predict whether second
language learners can correctly answer questions according to what they
have learned. It is a fundamental building block of the personalized learning
system and has attracted more and more attention recently. However, as far as
we know, almost all existing methods cannot work well in low-resource scenarios
due to a lack of training data. Fortunately, there are some latent common
patterns among different language-learning tasks, which gives us an opportunity
to solve the low-resource SLA modeling problem. Inspired by this idea, in this
paper, we propose a novel SLA modeling method, which learns the latent common
patterns among different language-learning datasets by multi-task learning and
applies them to improve the prediction performance in low-resource
scenarios. Extensive experiments show that the proposed method performs much
better than the state-of-the-art baselines in the low-resource scenario.
Meanwhile, it also obtains a slight improvement in the non-low-resource
scenario.
| 2020 | Computation and Language |
Multilingual Neural Machine Translation with Language Clustering | Multilingual neural machine translation (NMT), which translates multiple
languages using a single model, is of great practical importance due to its
advantages in simplifying the training process, reducing online maintenance
costs, and enhancing low-resource and zero-shot translation. Given there are
thousands of languages in the world and some of them are very different, it is
extremely burdensome to handle them all in a single model or use a separate
model for each language pair. Therefore, given a fixed resource budget, e.g.,
the number of models, how to determine which languages should be supported by
one model is critical to multilingual NMT, which, unfortunately, has been
ignored by previous work. In this work, we develop a framework that clusters
languages into different groups and trains one multilingual model for each
cluster. We study two methods for language clustering: (1) using prior
knowledge, where we cluster languages according to language family, and (2)
using language embedding, in which we represent each language by an embedding
vector and cluster them in the embedding space. In particular, we obtain the
embedding vectors of all the languages by training a universal neural machine
translation model. Our experiments on 23 languages show that the first
clustering method is simple and easy to understand but leads to suboptimal
translation accuracy, while the second method captures the
relationship among languages well and improves the translation accuracy for
almost all the languages over baseline methods.
| 2019 | Computation and Language |
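A minimal sketch of the second clustering method above, assuming language embedding vectors have already been extracted from a universal NMT model; the toy vectors, language codes, and cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assume each language has an embedding learned by a universal NMT model
# (random toy vectors here, keyed by ISO language codes).
rng = np.random.default_rng(0)
lang_emb = {code: rng.normal(size=32) for code in
            ["de", "nl", "sv", "da", "fr", "es", "pt", "it", "ru", "pl", "cs", "uk"]}

codes = list(lang_emb)
X = np.stack([lang_emb[c] for c in codes])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# One multilingual NMT model would then be trained per cluster.
clusters = {}
for code, lab in zip(codes, labels):
    clusters.setdefault(int(lab), []).append(code)
print(clusters)
```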
Efficient Bidirectional Neural Machine Translation | The encoder-decoder based neural machine translation usually generates a
target sequence token by token from left to right. Due to error propagation,
the tokens in the right side of the generated sequence are usually of poorer
quality than those in the left side. In this paper, we propose an efficient
method to generate a sequence in both left-to-right and right-to-left manners
using a single encoder and decoder, combining the advantages of both generation
directions. Experiments on three translation tasks show that our method
achieves significant improvements over the conventional unidirectional approach.
Compared with ensemble methods that train and combine two models with different
generation directions, our method saves 50% model parameters and about 40%
training time, and also improves inference speed.
| 2019 | Computation and Language |
Patient Knowledge Distillation for BERT Model Compression | Pre-trained language models such as BERT have proven to be highly effective
for natural language processing (NLP) tasks. However, the high demand for
computing resources in training such models hinders their application in
practice. In order to alleviate this resource hunger in large-scale model
training, we propose a Patient Knowledge Distillation approach to compress an
original large model (teacher) into an equally-effective lightweight shallow
network (student). Different from previous knowledge distillation methods,
which only use the output from the last layer of the teacher network for
distillation, our student model patiently learns from multiple intermediate
layers of the teacher model for incremental knowledge extraction, following two
strategies: ($i$) PKD-Last: learning from the last $k$ layers; and ($ii$)
PKD-Skip: learning from every $k$ layers. These two patient distillation
schemes enable the exploitation of rich information in the teacher's hidden
layers, and encourage the student model to patiently learn from and imitate the
teacher through a multi-layer distillation process. Empirically, this
translates into improved results on multiple NLP tasks with significant gain in
training efficiency, without sacrificing model accuracy.
| 2019 | Computation and Language |
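A minimal sketch of the two layer-selection strategies described above (PKD-Last and PKD-Skip) together with an intermediate-layer loss over normalized [CLS] states. The shapes and the exact loss form are a common formulation used here for illustration, not necessarily the authors' code.

```python
import torch
import torch.nn.functional as F

def select_teacher_layers(num_teacher_layers, num_student_layers, strategy="skip"):
    """PKD-Last: take the last k teacher layers; PKD-Skip: take every k-th layer."""
    k = num_student_layers
    if strategy == "last":
        return list(range(num_teacher_layers - k, num_teacher_layers))
    step = num_teacher_layers // k
    return list(range(step - 1, num_teacher_layers, step))[:k]

def patient_loss(student_hidden, teacher_hidden, layer_ids):
    """MSE between L2-normalized [CLS] states of matched student/teacher layers."""
    loss = 0.0
    for s_layer, t_layer in enumerate(layer_ids):
        s = F.normalize(student_hidden[s_layer], dim=-1)
        t = F.normalize(teacher_hidden[t_layer], dim=-1)
        loss = loss + F.mse_loss(s, t)
    return loss / len(layer_ids)

# Toy example: 12-layer teacher, 6-layer student, hidden size 768, batch of 4 [CLS] vectors.
teacher = [torch.randn(4, 768) for _ in range(12)]
student = [torch.randn(4, 768, requires_grad=True) for _ in range(6)]
ids = select_teacher_layers(12, 6, strategy="skip")   # [1, 3, 5, 7, 9, 11]
patient_loss(student, teacher, ids).backward()
```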
Transforming Delete, Retrieve, Generate Approach for Controlled Text
Style Transfer | Text style transfer is the task of transferring the style of text having
certain stylistic attributes, while preserving non-stylistic or content
information. In this work we introduce the Generative Style Transformer (GST) -
a new approach to rewriting sentences to a target style in the absence of
parallel style corpora. GST leverages the power of both large unsupervised
pre-trained language models and the Transformer. GST is part of a
larger `Delete Retrieve Generate' framework, in which we also propose a novel
method of deleting style attributes from the source sentence by exploiting the
inner workings of the Transformer. Our models outperform state-of-the-art systems
across 5 datasets on sentiment, gender and political slant transfer. We also
propose the use of the GLEU metric as an automatic metric of evaluation of
style transfer, which we found to compare better with human ratings than the
predominantly used BLEU score.
| 2019 | Computation and Language |
On Measuring and Mitigating Biased Inferences of Word Embeddings | Word embeddings carry stereotypical connotations from the text they are
trained on, which can lead to invalid inferences in downstream models that rely
on them. We use this observation to design a mechanism for measuring
stereotypes using the task of natural language inference. We demonstrate a
reduction in invalid inferences via bias mitigation strategies on static word
embeddings (GloVe). Further, we show that for gender bias, these techniques
extend to contextualized embeddings when applied selectively only to the static
components of contextualized embeddings (ELMo, BERT).
| 2019 | Computation and Language |
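For the static-embedding case above, one widely used family of mitigations estimates a bias direction from definitional word pairs and projects it out of target vectors. The sketch below illustrates that general family on toy vectors; it is not claimed to be the paper's exact procedure.

```python
import numpy as np

def gender_direction(emb, pairs):
    """Average difference vector over definitional pairs, e.g. ('he', 'she')."""
    diffs = [emb[a] - emb[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def debias(vec, direction):
    """Remove the component of vec along the (unit-norm) bias direction."""
    return vec - np.dot(vec, direction) * direction

# Toy 4-d embeddings; real use would load GloVe vectors instead.
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=4) for w in ["he", "she", "man", "woman", "doctor"]}
d = gender_direction(emb, [("he", "she"), ("man", "woman")])
emb["doctor"] = debias(emb["doctor"], d)
print(np.dot(emb["doctor"], d))  # ~0 after projection
```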
Domain Adaptive Text Style Transfer | Text style transfer without parallel data has achieved some practical
success. However, in the scenario where less data is available, these methods
may yield poor performance. In this paper, we examine domain adaptation for
text style transfer to leverage massively available data from other domains.
These data may demonstrate domain shift, which impedes the benefits of
utilizing such data for training. To address this challenge, we propose simple
yet effective domain adaptive text style transfer models, enabling
domain-adaptive information exchange. The proposed models presumably learn from
the source domain to: (i) distinguish stylized information and generic content
information; (ii) maximally preserve content information; and (iii) adaptively
transfer the styles in a domain-aware manner. We evaluate the proposed models
on two style transfer tasks (sentiment and formality) over multiple target
domains where only limited non-parallel data is available. Extensive
experiments demonstrate the effectiveness of the proposed model compared to the
baselines.
| 2019 | Computation and Language |
Partially-supervised Mention Detection | Learning to detect entity mentions without using syntactic information can be
useful for integration and joint optimization with other tasks. However, it is
common to have partially annotated data for this problem. Here, we investigate
two approaches to deal with partial annotation of mentions: weighted loss and
soft-target classification. We also propose two neural mention detection
approaches: sequence tagging and exhaustive search. We evaluate our
methods with coreference resolution as a downstream task, using multitask
learning. The results show that the recall and F1 score improve for all
methods.
| 2019 | Computation and Language |
Thinking Globally, Acting Locally: Distantly Supervised Global-to-Local
Knowledge Selection for Background Based Conversation | Background Based Conversations (BBCs) have been introduced to help
conversational systems avoid generating overly generic responses. In a BBC, the
conversation is grounded in a knowledge source. A key challenge in BBCs is
Knowledge Selection (KS): given a conversational context, try to find the
appropriate background knowledge (a text fragment containing related facts or
comments, etc.) based on which to generate the next response. Previous work
addresses KS by employing attention and/or pointer mechanisms. These mechanisms
use a local perspective, i.e., they select a token at a time based solely on
the current decoding state. We argue for the adoption of a global perspective,
i.e., pre-selecting some text fragments from the background knowledge that
could help determine the topic of the next response. We enhance KS in BBCs by
introducing a Global-to-Local Knowledge Selection (GLKS) mechanism. Given a
conversational context and background knowledge, we first learn a topic
transition vector to encode the most likely text fragments to be used in the
next response, which is then used to guide the local KS at each decoding
timestamp. In order to effectively learn the topic transition vector, we
propose a distantly supervised learning schema. Experimental results show that
the GLKS model significantly outperforms state-of-the-art methods in terms of
both automatic and human evaluation. More importantly, GLKS achieves this
without requiring any extra annotations, which demonstrates its high degree of
scalability.
| 2019 | Computation and Language |
Transductive Data-Selection Algorithms for Fine-Tuning Neural Machine
Translation | Machine Translation models are trained to translate a variety of documents
from one language into another. However, models specifically trained for the
particular characteristics of the documents tend to perform better. Fine-tuning
is a technique for adapting an NMT model to some domain. In this work, we want
to use this technique to adapt the model to a given test set. In particular, we
are using transductive data selection algorithms which take advantage of the
information in the test set to retrieve sentences from a larger parallel set.
In cases where the model is available at translation time (when the test set
is provided), it can be adapted with a small subset of data, thereby achieving
better performance than a generic model or a domain-adapted model.
| 2019 | Computation and Language |
Rethinking Attribute Representation and Injection for Sentiment
Classification | Text attributes, such as user and product information in product reviews,
have been used to improve the performance of sentiment classification models.
The de facto standard method is to incorporate them as additional biases in the
attention mechanism, and more performance gains are achieved by extending the
model architecture. In this paper, we show that the above method is the least
effective way to represent and inject attributes. To demonstrate this
hypothesis, unlike previous models with complicated architectures, we limit our
base model to a simple BiLSTM with attention classifier, and instead focus on
how and where the attributes should be incorporated in the model. We propose to
represent attributes as chunk-wise importance weight matrices and consider four
locations in the model (i.e., embedding, encoding, attention, classifier) to
inject attributes. Experiments show that our proposed method achieves
significant improvements over the standard approach and that the attention
mechanism is the worst location to inject attributes, contradicting prior work.
We also outperform the state-of-the-art despite our use of a simple base model.
Finally, we show that these representations transfer well to other tasks. Model
implementation and datasets are released here:
https://github.com/rktamplayo/CHIM.
| 2019 | Computation and Language |
Measuring Patent Claim Generation by Span Relevancy | Our goal of patent claim generation is to realize "augmented inventing" for
inventors by leveraging the latest deep learning techniques. We envision the
possibility of building an "auto-complete" function for inventors to conceive
better inventions in the era of artificial intelligence. In order to generate
patent claims with good quality, a fundamental question is how to measure it.
We tackle the problem from a perspective of claim span relevancy. Patent claim
language was rarely explored in the NLP field. It is unique in its own way and
contains rich explicit and implicit human annotations. In this work, we propose
a span-based approach and a generic framework to measure patent claim
generation quantitatively. In order to study the effectiveness of patent claim
generation, we define a metric to measure whether two consecutive spans in a
generated patent claim are relevant. We treat such relevancy measurement as a
span-pair classification problem, following the concept of natural language
inference. Technically, the span-pair classifier is implemented by fine-tuning
a pre-trained language model. The patent claim generation is implemented by
fine-tuning the other pre-trained model. Specifically, we fine-tune a
pre-trained Google BERT model to measure the patent claim spans generated by a
fine-tuned OpenAI GPT-2 model. In this way, we re-use two of the
state-of-the-art pre-trained models in the NLP field. Our result shows the
effectiveness of the span-pair classifier after fine-tuning the pre-trained
model. It further validates the quantitative metric of span relevancy in patent
claim generation. Particularly, we found that the span relevancy ratio measured
by BERT becomes lower when the diversity in GPT-2 text generation becomes
higher.
| 2019 | Computation and Language |
Revisiting Simple Domain Adaptation Methods in Unsupervised Neural
Machine Translation | Domain adaptation has been well-studied in supervised neural machine
translation (SNMT). However, it has not been well-studied for unsupervised
neural machine translation (UNMT), although UNMT has recently achieved
remarkable results in several domain-specific language pairs. Besides the
inconsistent domains between training data and test data for SNMT, there
sometimes exists an inconsistent domain between the two monolingual training corpora
for UNMT. In this work, we empirically show different scenarios for
unsupervised neural machine translation. Based on these scenarios, we revisit
the effect of the existing domain adaptation methods including batch weighting
and fine-tuning methods in UNMT. Finally, we propose modified methods to
improve the performance of domain-specific UNMT systems.
| 2020 | Computation and Language |
Semi-supervised Learning for Word Sense Disambiguation | This work is a study of the impact of multiple aspects in a classic
unsupervised word sense disambiguation algorithm. We identify relevant factors
in a decision rule algorithm, including the initial labeling of examples, the
formalization of the rule confidence, and the criteria for accepting a decision
rule. Some of these factors are only implicitly considered in the original
literature. We then propose a lightly supervised version of the algorithm, and
employ a pseudo-word-based strategy to evaluate the impact of these factors.
The obtained performances are comparable with those of highly optimized
formulations of the word sense disambiguation method.
| 2019 | Computation and Language |
Low-Resource Name Tagging Learned with Weakly Labeled Data | Name tagging in low-resource languages or domains suffers from inadequate
training data. Existing work heavily relies on additional information, while
leaving unexplored the noisy annotations that exist extensively on the web.
In this paper, we propose a novel neural model for name tagging solely based on
weakly labeled (WL) data, so that it can be applied in any low-resource
settings. To take the best advantage of all WL sentences, we split them into
high-quality and noisy portions for two modules, respectively: (1) a
classification module focusing on the large portion of noisy data can
efficiently and robustly pretrain the tag classifier by capturing textual
context semantics; and (2) a costly sequence labeling module focusing on
high-quality data utilizes Partial-CRFs with non-entity sampling to achieve
global optimum. Two modules are combined via shared parameters. Extensive
experiments involving five low-resource languages and a fine-grained food domain
demonstrate our superior performance (6% and 7.8% F1 gains on average) as well
as efficiency.
| 2019 | Computation and Language |
uniblock: Scoring and Filtering Corpus with Unicode Block Information | The preprocessing pipelines in Natural Language Processing usually involve a
step of removing sentences consisting of illegal characters. The definition of
illegal characters and the specific removal strategy depend on the task,
language, domain, etc., which often leads to tiresome and repetitive scripting of
rules. In this paper, we introduce a simple statistical method, uniblock, to
overcome this problem. For each sentence, uniblock generates a fixed-size
feature vector using Unicode block information of the characters. A Gaussian
mixture model is then estimated on some clean corpus using variational
inference. The learned model can then be used to score sentences and filter
the corpus. We present experimental results on Sentiment Analysis, Language
Modeling and Machine Translation, and show the simplicity and effectiveness of
our method.
| 2019 | Computation and Language |
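A minimal sketch of the uniblock idea: map characters to coarse Unicode ranges, build a normalized count vector per sentence, fit a variational Gaussian mixture on clean text, and score new sentences by log-likelihood. The block boundaries, toy corpora, and mixture size are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Coarse code-point ranges standing in for full Unicode block tables (illustrative).
BLOCKS = [(0x0000, 0x007F),   # Basic Latin
          (0x0080, 0x024F),   # Latin-1 Supplement / Latin Extended
          (0x0400, 0x04FF),   # Cyrillic
          (0x4E00, 0x9FFF)]   # CJK Unified Ideographs

def block_features(sentence):
    """Normalized histogram of characters over the ranges above plus an 'other' bucket."""
    counts = np.zeros(len(BLOCKS) + 1)
    for ch in sentence:
        cp = ord(ch)
        for i, (lo, hi) in enumerate(BLOCKS):
            if lo <= cp <= hi:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    return counts / max(len(sentence), 1)

clean = ["this is an ordinary english sentence .",
         "café menus and naïve résumés still count as clean text .",
         "la qualité des données est importante .",
         "straße and münchen appear in clean german text ."]
noisy = ["mojibake ░░░ ### 清華 ・・・", "случайный mixed script 清华 junk"]

# Variational (Bayesian) Gaussian mixture fitted on the clean toy corpus.
gmm = BayesianGaussianMixture(n_components=2, random_state=0).fit(
    np.stack([block_features(s) for s in clean * 5]))
for s in clean + noisy:
    print(round(float(gmm.score_samples(block_features(s)[None])[0]), 1), s)
```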
Ensemble approach for natural language question answering problem | Machine comprehension, i.e., answering a question based on a given context
paragraph, is a typical task of Natural Language Understanding. It requires
modeling the complex dependencies existing between the question and the context
paragraph. There are many neural network models attempting to solve the problem
of question answering. The best models have been selected, studied and compared
with each other. All the selected models are based on the neural attention
mechanism concept. Additionally, studies on the SQuAD dataset were performed.
Subsets of queries were extracted, and each model was then analyzed with respect
to how it deals with a specific group of queries. Based on these three models, an
ensemble model was created and tested on the SQuAD dataset. It outperforms the
best Mnemonic Reader model.
| 2019 | Computation and Language |
Detecting Toxicity in News Articles: Application to Bulgarian | Online media aim to reach an ever bigger audience and to attract an ever
longer attention span. This competition creates an environment that rewards
sensational, fake, and toxic news. To help limit their spread and impact, we
propose and develop a news toxicity detector that can recognize various types
of toxic content. While previous research primarily focused on English, here we
target Bulgarian. We created a new dataset by crawling a website that for five
years has been collecting Bulgarian news articles that were manually
categorized into eight toxicity groups. Then we trained a multi-class
classifier with nine categories: eight toxic and one non-toxic. We experimented
with different representations based on ElMo, BERT, and XLM, as well as with a
variety of domain-specific features. Due to the small size of our dataset, we
created a separate model for each feature type, and we ultimately combined
these models into a meta-classifier. The evaluation results show an accuracy of
59.0% and a macro-F1 score of 39.7%, which represent sizable improvements over
the majority-class baseline (Acc=30.3%, macro-F1=5.2%).
| 2019 | Computation and Language |
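The per-feature-type models combined into a meta-classifier above amount to stacking. Below is a minimal scikit-learn sketch with toy features; the base estimators and random features stand in for the ELMo/BERT/XLM and domain-specific feature models used in the paper.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Toy data: 200 articles, 50-d features, 9 classes (8 toxic + 1 non-toxic).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 50)), rng.integers(0, 9, size=200)

# In the paper, each base model would see one feature type; here both see the same toy features.
meta = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", LinearSVC())],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=3,
)
meta.fit(X, y)
print(meta.predict(X[:5]))
```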
The Limitations of Stylometry for Detecting Machine-Generated Fake News | Recent developments in neural language models (LMs) have raised concerns
about their potential misuse for automatically spreading misinformation. In
light of these concerns, several studies have proposed to detect
machine-generated fake news by capturing their stylistic differences from
human-written text. These approaches, broadly termed stylometry, have found
success in source attribution and misinformation detection in human-written
texts. However, in this work, we show that stylometry is limited against
machine-generated misinformation. While humans speak differently when trying to
deceive, LMs generate stylistically consistent text, regardless of underlying
motive. Thus, though stylometry can successfully prevent impersonation by
identifying text provenance, it fails to distinguish legitimate LM applications
from those that introduce false information. We create two benchmarks
demonstrating the stylistic similarity between malicious and legitimate uses of
LMs, employed in auto-completion and editing-assistance settings. Our findings
highlight the need for non-stylometry approaches in detecting machine-generated
misinformation, and open up the discussion on the desired evaluation
benchmarks.
| 2020 | Computation and Language |
Multi-Granularity Representations of Dialog | Neural models of dialog rely on generalized latent representations of
language. This paper introduces a novel training procedure which explicitly
learns multiple representations of language at several levels of granularity.
The multi-granularity training algorithm modifies the mechanism by which
negative candidate responses are sampled in order to control the granularity of
learned latent representations. Strong performance gains are observed on the
next utterance retrieval task using both the MultiWOZ dataset and the Ubuntu
dialog corpus. Analysis clearly demonstrates that multiple granularities
of representation are being learned, and that multi-granularity training
facilitates better transfer to downstream tasks.
| 2019 | Computation and Language |
Does BERT agree? Evaluating knowledge of structure dependence through
agreement relations | Learning representations that accurately model semantics is an important goal
of natural language processing research. Many semantic phenomena depend on
syntactic structure. Recent work examines the extent to which state-of-the-art
models for pre-training representations, such as BERT, capture such
structure-dependent phenomena, but is largely restricted to one phenomenon in
English: number agreement between subjects and verbs. We evaluate BERT's
sensitivity to four types of structure-dependent agreement relations in a new
semi-automatically curated dataset across 26 languages. We show that both the
single-language and multilingual BERT models capture syntax-sensitive agreement
patterns well in general, but we also highlight the specific linguistic
contexts in which their performance degrades.
| 2019 | Computation and Language |
Multi-Channel Graph Neural Network for Entity Alignment | Entity alignment typically suffers from the issues of structural
heterogeneity and limited seed alignments. In this paper, we propose a novel
Multi-channel Graph Neural Network model (MuGNN) to learn alignment-oriented
knowledge graph (KG) embeddings by robustly encoding two KGs via multiple
channels. Each channel encodes KGs via different relation weighting schemes
with respect to self-attention towards KG completion and cross-KG attention for
pruning exclusive entities respectively, which are further combined via pooling
techniques. Moreover, we also infer and transfer rule knowledge for completing
two KGs consistently. MuGNN is expected to reconcile the structural differences
of two KGs, and thus make better use of seed alignments. Extensive experiments
on five publicly available datasets demonstrate our superior performance (a 5%
average gain in Hits@1).
| 2019 | Computation and Language |
Gender Prediction from Tweets: Improving Neural Representations with
Hand-Crafted Features | Author profiling is the characterization of an author through some key
attributes such as gender, age, and language. In this paper, an RNN model with
Attention (RNNwA) is proposed to predict the gender of a Twitter user using
their tweets. Both word level and tweet level attentions are utilized to learn
'where to look'. This model
(https://github.com/Darg-Iztech/gender-prediction-from-tweets) is improved by
concatenating LSA-reduced n-gram features with the learned neural
representation of a user. Both models are tested on three languages: English,
Spanish, and Arabic. The improved version of the proposed model (RNNwA + n-gram)
achieves state-of-the-art performance on English and has competitive results on
Spanish and Arabic.
| 2019 | Computation and Language |
Reference Network for Neural Machine Translation | Neural Machine Translation (NMT) has achieved notable success in recent
years. Such a framework usually generates translations in isolation. In
contrast, human translators often refer to reference data, either rephrasing
the intricate sentence fragments with common terms in the source language, or just
accessing the golden translation directly. In this paper, we propose a
Reference Network to incorporate the referring process into the translation decoding of
NMT. To construct a \emph{reference book}, an intuitive way is to store the
detailed translation history with extra memory, which is computationally
expensive. Instead, we employ Local Coordinates Coding (LCC) to obtain global
context vectors containing monolingual and bilingual contextual information for
NMT decoding. Experimental results on Chinese-English and English-German tasks
demonstrate that our proposed model is effective in improving the translation
quality with lightweight computation cost.
| 2019 | Computation and Language |
Toward Dialogue Modeling: A Semantic Annotation Scheme for Questions and
Answers | The present study proposes an annotation scheme for classifying the content
and discourse contribution of question-answer pairs. We propose detailed
guidelines for using the scheme and apply them to dialogues in English,
Spanish, and Dutch. Finally, we report on initial machine learning experiments
for automatic annotation.
| 2019 | Computation and Language |
Don't paraphrase, detect! Rapid and Effective Data Collection for
Semantic Parsing | A major hurdle on the road to conversational interfaces is the difficulty in
collecting data that maps language utterances to logical forms. One prominent
approach for data collection has been to automatically generate pseudo-language
paired with logical forms, and paraphrase the pseudo-language to natural
language through crowdsourcing (Wang et al., 2015). However, this data
collection procedure often leads to low performance on real data, due to a
mismatch between the true distribution of examples and the distribution induced
by the data collection procedure. In this paper, we thoroughly analyze two
sources of mismatch in this process: the mismatch in logical form distribution
and the mismatch in language distribution between the true and induced
distributions. We quantify the effects of these mismatches, and propose a new
data collection approach that mitigates them. Assuming access to unlabeled
utterances from the true distribution, we combine crowdsourcing with a
paraphrase model to detect correct logical forms for the unlabeled utterances.
On two datasets, our method leads to 70.6 accuracy on average on the true
distribution, compared to 51.3 in paraphrasing-based data collection.
| 2019 | Computation and Language |
An Emotional Analysis of False Information in Social Media and News
Articles | Fake news is risky since it has been created to manipulate the readers'
opinions and beliefs. In this work, we compared the language of false news to
that of real news from an emotional perspective, considering a set of
false information types (propaganda, hoax, clickbait, and satire) from social
media and online news articles sources. Our experiments showed that false
information has different emotional patterns in each of its types, and emotions
play a key role in deceiving the reader. Based on that, we proposed an
emotionally-infused LSTM neural network model to detect false news.
| 2019 | Computation and Language |
Text Modeling with Syntax-Aware Variational Autoencoders | Syntactic information contains structures and rules about how text sentences
are arranged. Incorporating syntax into text modeling methods can potentially
benefit both representation learning and generation. Variational autoencoders
(VAEs) are deep generative models that provide a probabilistic way to describe
observations in the latent space. When applied to text data, the latent
representations are often unstructured. We propose syntax-aware variational
autoencoders (SAVAEs) that dedicate a subspace in the latent dimensions dubbed
syntactic latent to represent syntactic structures of sentences. SAVAEs are
trained to infer syntactic latent from either text inputs or parsed syntax
results as well as reconstruct original text with inferred latent variables.
Experiments show that SAVAEs are able to achieve lower reconstruction loss on
four different data sets. Furthermore, they are capable of generating examples
with modified target syntax.
| 2,019 | Computation and Language |
On the Effectiveness of Low-Rank Matrix Factorization for LSTM Model
Compression | Despite their ubiquity in NLP tasks, Long Short-Term Memory (LSTM) networks
suffer from computational inefficiencies caused by inherent unparallelizable
recurrences, which are further aggravated as LSTMs require more parameters for
larger memory capacity. In this paper, we propose to apply low-rank matrix
factorization (MF) algorithms to different recurrences in LSTMs, and explore
the effectiveness on different NLP tasks and model components. We discover that
additive recurrence is more important than multiplicative recurrence, and
explain this by identifying meaningful correlations between matrix norms and
compression performance. We compare our approach across two settings: 1)
compressing core LSTM recurrences in language models, 2) compressing biLSTM
layers of ELMo evaluated in three downstream NLP tasks.
| 2,019 | Computation and Language |
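The abstract above compresses LSTM weight matrices via low-rank matrix factorization. Below is a minimal, hedged sketch of the general idea using truncated SVD in NumPy; the matrix size, rank, and function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def low_rank_factorize(W: np.ndarray, rank: int):
    """Approximate W (d_out x d_in) as U_r @ V_r with rank `rank`."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]          # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

# Toy example: factorize a hypothetical 1024x1024 recurrent weight matrix at rank 64.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))
U_r, V_r = low_rank_factorize(W, rank=64)

original_params = W.size
compressed_params = U_r.size + V_r.size
rel_error = np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W)
print(f"params: {original_params} -> {compressed_params}, relative error: {rel_error:.3f}")
```

The compressed matrix is then used as the product of the two smaller factors inside the recurrence, trading a small approximation error for a large parameter reduction.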
MIDAS: A Dialog Act Annotation Scheme for Open Domain Human Machine
Spoken Conversations | Dialog act prediction is an essential language comprehension task for both
dialog system building and discourse analysis. Previous dialog act schemes,
such as SWBD-DAMSL, are designed for human-human conversations, in which
conversation partners have perfect language understanding ability. In this
paper, we design a dialog act annotation scheme, MIDAS (Machine Interaction
Dialog Act Scheme), targeted on open-domain human-machine conversations. MIDAS
is designed to assist machines which have limited ability to understand their
human partners. MIDAS has a hierarchical structure and supports multi-label
annotations. We collected and annotated a large open-domain human-machine
spoken conversation dataset (consisting of 24K utterances). To show the
applicability of the scheme, we leverage transfer learning methods to train a
multi-label dialog act prediction model and reach an F1 score of 0.79.
| 2,019 | Computation and Language |
FinBERT: Financial Sentiment Analysis with Pre-trained Language Models | Financial sentiment analysis is a challenging task due to the specialized
language and lack of labeled data in that domain. General-purpose models are
not effective enough because of the specialized language used in a financial
context. We hypothesize that pre-trained language models can help with this
problem because they require fewer labeled examples and they can be further
trained on domain-specific corpora. We introduce FinBERT, a language model
based on BERT, to tackle NLP tasks in the financial domain. Our results show
improvement in every measured metric on current state-of-the-art results for
two financial sentiment analysis datasets. We find that even with a smaller
training set and fine-tuning only a part of the model, FinBERT outperforms
state-of-the-art machine learning methods.
| 2,019 | Computation and Language |
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new
state-of-the-art performance on sentence-pair regression tasks like semantic
textual similarity (STS). However, it requires that both sentences are fed into
the network, which causes a massive computational overhead: Finding the most
similar pair in a collection of 10,000 sentences requires about 50 million
inference computations (~65 hours) with BERT. The construction of BERT makes it
unsuitable for semantic similarity search as well as for unsupervised tasks
like clustering.
In this publication, we present Sentence-BERT (SBERT), a modification of the
pretrained BERT network that uses siamese and triplet network structures to
derive semantically meaningful sentence embeddings that can be compared using
cosine-similarity. This reduces the effort for finding the most similar pair
from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while
maintaining the accuracy from BERT.
We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning
tasks, where it outperforms other state-of-the-art sentence embedding methods.
| 2,019 | Computation and Language |
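To make concrete why precomputed sentence embeddings avoid the quadratic cost of pairwise BERT inference described in the abstract above, here is a minimal sketch of cosine-similarity search over fixed-size embedding vectors. The embeddings below are random stand-ins, not SBERT outputs, and the sizes are arbitrary.

```python
import numpy as np

def most_similar_pair(embeddings: np.ndarray):
    """Return the indices of the most similar pair under cosine similarity."""
    # Normalize once, then a single matrix product gives all pairwise similarities.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = unit @ unit.T
    np.fill_diagonal(sims, -np.inf)        # exclude self-similarity
    i, j = np.unravel_index(np.argmax(sims), sims.shape)
    return i, j, sims[i, j]

# Stand-in for sentence embeddings of 2,000 sentences (dimension 384).
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((2_000, 384))
i, j, score = most_similar_pair(embeddings)
print(i, j, round(float(score), 3))
```

With this setup, each sentence is encoded once; the expensive cross-encoder forward pass per sentence pair is replaced by cheap vector comparisons.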
On NMT Search Errors and Model Errors: Cat Got Your Tongue? | We report on search errors and model errors in neural machine translation
(NMT). We present an exact inference procedure for neural sequence models based
on a combination of beam search and depth-first search. We use our exact search
to find the global best model scores under a Transformer base model for the
entire WMT15 English-German test set. Surprisingly, beam search fails to find
these global best model scores in most cases, even with a very large beam size
of 100. For more than 50% of the sentences, the model in fact assigns its
global best score to the empty translation, revealing a massive failure of
neural models in properly accounting for adequacy. We show by constraining
search with a minimum translation length that at the root of the problem of
empty translations lies an inherent bias towards shorter translations. We
conclude that vanilla NMT in its current form requires just the right amount of
beam search errors, which, from a modelling perspective, is a highly
unsatisfactory conclusion indeed, as the model often prefers an empty
translation.
| 2,019 | Computation and Language |
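The abstract above describes exact inference by combining beam search with depth-first search. As a rough illustration of the pruning idea (a prefix whose score already falls below the best finished hypothesis cannot improve, since log-probabilities are non-positive), here is a toy sketch over a hand-crafted conditional log-probability table; the tiny "model" and vocabulary are invented for illustration only.

```python
import math

# Hypothetical conditional log-probabilities p(next | last) over a toy vocabulary.
LOGP = {
    "<s>": {"a": math.log(0.6), "b": math.log(0.3), "</s>": math.log(0.1)},
    "a":   {"a": math.log(0.1), "b": math.log(0.5), "</s>": math.log(0.4)},
    "b":   {"a": math.log(0.4), "b": math.log(0.1), "</s>": math.log(0.5)},
}
MAX_LEN = 5

def exact_search():
    """Depth-first search for the highest-scoring sequence ending in </s>."""
    best = {"score": -math.inf, "seq": None}

    def dfs(prefix, score):
        if score <= best["score"]:
            return                        # prune: extensions cannot raise the score
        last = prefix[-1]
        if last == "</s>":
            best["score"], best["seq"] = score, prefix
            return
        if len(prefix) > MAX_LEN:
            return
        for tok, lp in sorted(LOGP[last].items(), key=lambda kv: -kv[1]):
            dfs(prefix + [tok], score + lp)

    dfs(["<s>"], 0.0)
    return best["seq"], best["score"]

print(exact_search())
```

Even this toy table shows the length effect discussed above: short hypotheses (including the near-empty one) compete surprisingly well because every additional token can only lower the total log-probability.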
Multi-Layer Softmaxing during Training Neural Machine Translation for
Flexible Decoding with Fewer Layers | This paper proposes a novel procedure for training an encoder-decoder based
deep neural network which compresses NxM models into a single model enabling us
to dynamically choose the number of encoder and decoder layers for decoding.
Usually, the output of the last layer of the N-layer encoder is fed to the
M-layer decoder, and the output of the last decoder layer is used to compute
softmax loss. Instead, our method computes a single loss consisting of NxM
losses: the softmax loss for the output of each of the M decoder layers derived
using the output of each of the N encoder layers. A single model trained by our
method can be used for decoding with an arbitrarily smaller number of encoder and
decoder layers. In practical scenarios, this (a) enables faster decoding with
insignificant losses in translation quality and (b) alleviates the need to
train NxM models, thereby saving space. We take a case study of neural machine
translation and show the advantage and give a cost-benefit analysis of our
approach.
| 2,019 | Computation and Language |
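The abstract above computes a single loss made of NxM softmax losses, one for each combination of encoder depth and decoder depth. The following PyTorch sketch shows only that loss structure, using plain linear layers as stand-ins for real Transformer encoder/decoder layers; all shapes and the toy "decoder input" are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ENC, M_DEC, D, VOCAB = 3, 3, 32, 100

enc_layers = nn.ModuleList([nn.Linear(D, D) for _ in range(N_ENC)])
dec_layers = nn.ModuleList([nn.Linear(D, D) for _ in range(M_DEC)])
out_proj = nn.Linear(D, VOCAB)            # shared softmax projection

def multilayer_loss(src, tgt_ids):
    """Sum of N x M softmax losses: every encoder depth feeds every decoder depth."""
    total = 0.0
    enc_out = src
    for enc in enc_layers:
        enc_out = torch.tanh(enc(enc_out))
        dec_out = enc_out                 # toy decoder input derived from this encoder depth
        for dec in dec_layers:
            dec_out = torch.tanh(dec(dec_out))
            logits = out_proj(dec_out)    # (batch, seq, vocab)
            total = total + F.cross_entropy(
                logits.reshape(-1, VOCAB), tgt_ids.reshape(-1))
    return total

src = torch.randn(2, 7, D)                # (batch, seq, hidden)
tgt_ids = torch.randint(0, VOCAB, (2, 7))
loss = multilayer_loss(src, tgt_ids)
loss.backward()
print(float(loss))
```

Because every intermediate depth is trained to produce usable outputs, decoding can later stop at any shallower encoder/decoder depth without retraining.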
A Morpho-Syntactically Informed LSTM-CRF Model for Named Entity
Recognition | We propose a morphologically informed model for named entity recognition,
which is based on LSTM-CRF architecture and combines word embeddings, Bi-LSTM
character embeddings, part-of-speech (POS) tags, and morphological information.
While previous work has focused on learning from raw word input, using word and
character embeddings only, we show that for morphologically rich languages,
such as Bulgarian, access to POS information contributes more to the
performance gains than the detailed morphological information. Thus, we show
that named entity recognition needs only coarse-grained POS tags, but at the
same time it can benefit from simultaneously using some POS information of
different granularity. Our evaluation results over a standard dataset show
sizable improvements over the state-of-the-art for Bulgarian NER.
| 2,019 | Computation and Language |
The Wiki Music dataset: A tool for computational analysis of popular
music | Is it possible to use algorithms to find trends in the history of popular music?
And is it possible to predict the characteristics of future music genres? In
order to answer these questions, we produced a hand-crafted dataset with the
intent to put together features about style, psychology, sociology and
typology, annotated by music genre and indexed by time and decade. We collected
a list of popular genres by decade from Wikipedia and scored music genres based
on Wikipedia descriptions. Using statistical and machine learning techniques,
we find trends in the musical preferences and use time series forecasting to
evaluate the prediction of future music genres.
| 2,019 | Computation and Language |
Is the Red Square Big? MALeViC: Modeling Adjectives Leveraging Visual
Contexts | This work aims at modeling how the meaning of gradable adjectives of size
(`big', `small') can be learned from visually-grounded contexts. Inspired by
cognitive and linguistic evidence showing that the use of these expressions
relies on setting a threshold that is dependent on a specific context, we
investigate the ability of multi-modal models in assessing whether an object is
`big' or `small' in a given visual scene. In contrast with the standard
computational approach that simplistically treats gradable adjectives as
`fixed' attributes, we pose the problem as relational: to be successful, a
model has to consider the full visual context. By means of four main tasks, we
show that state-of-the-art models (but not a relatively strong baseline) can
learn the function subtending the meaning of size adjectives, though their
performance is found to decrease while moving from simple to more complex
tasks. Crucially, models fail in developing abstract representations of
gradable adjectives that can be used compositionally.
| 2,019 | Computation and Language |
Bridging the Gap for Tokenizer-Free Language Models | Purely character-based language models (LMs) have been lagging in quality on
large scale datasets, and current state-of-the-art LMs rely on word
tokenization. It has been assumed that injecting the prior knowledge of a
tokenizer into the model is essential to achieving competitive results. In this
paper, we show that contrary to this conventional wisdom, tokenizer-free LMs
with sufficient capacity can achieve competitive performance on a large scale
dataset. We train a vanilla transformer network with 40 self-attention layers
on the One Billion Word (lm1b) benchmark and achieve a new state of the art for
tokenizer-free LMs, pushing these models to be on par with their word-based
counterparts.
| 2,019 | Computation and Language |
Movie Plot Analysis via Turning Point Identification | According to screenwriting theory, turning points (e.g., change of plans,
major setback, climax) are crucial narrative moments within a screenplay: they
define the plot structure, determine its progression and segment the screenplay
into thematic units (e.g., setup, complications, aftermath). We propose the
task of turning point identification in movies as a means of analyzing their
narrative structure. We argue that turning points and the segmentation they
provide can facilitate processing long, complex narratives, such as
screenplays, for summarization and question answering. We introduce a dataset
consisting of screenplays and plot synopses annotated with turning points and
present an end-to-end neural network model that identifies turning points in
plot synopses and projects them onto scenes in screenplays. Our model
outperforms strong baselines based on state-of-the-art sentence representations
and the expected position of turning points.
| 2,019 | Computation and Language |
Facet-Aware Evaluation for Extractive Summarization | Commonly adopted metrics for extractive summarization focus on lexical
overlap at the token level. In this paper, we present a facet-aware evaluation
setup for better assessment of the information coverage in extracted summaries.
Specifically, we treat each sentence in the reference summary as a
facet, identify the sentences in the document that express the
semantics of each facet as support sentences of the facet, and
automatically evaluate extractive summarization methods by comparing the
indices of extracted sentences and support sentences of all the facets in the
reference summary. To facilitate this new evaluation setup, we construct an
extractive version of the CNN/Daily Mail dataset and perform a thorough
quantitative investigation, through which we demonstrate that facet-aware
evaluation manifests better correlation with human judgment than ROUGE, enables
fine-grained evaluation as well as comparative analysis, and reveals valuable
insights of state-of-the-art summarization methods. Data can be found at
https://github.com/morningmoni/FAR.
| 2,020 | Computation and Language |
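The facet-aware setup above scores an extractive summary by comparing the indices of extracted sentences against the support sentences of each reference facet. A minimal sketch of one plausible facet-coverage score is given below; the function name, data layout, and "at least one support sentence extracted" convention are my own assumptions, not the paper's exact metric.

```python
def facet_recall(extracted: set, facet_support: dict) -> float:
    """Fraction of reference facets covered by the extracted sentence indices.

    `facet_support` maps each facet (e.g., a reference-summary sentence id) to the
    set of document sentence indices expressing its semantics. A facet counts as
    covered if at least one of its support sentences was extracted.
    """
    if not facet_support:
        return 0.0
    covered = sum(1 for support in facet_support.values() if extracted & support)
    return covered / len(facet_support)

# Hypothetical example: 3 facets in the reference summary, 4 extracted sentences.
facet_support = {0: {2, 5}, 1: {7}, 2: {11, 12}}
extracted = {2, 3, 7, 9}
print(facet_recall(extracted, facet_support))   # 2 of 3 facets covered -> ~0.667
```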
Investigating Meta-Learning Algorithms for Low-Resource Natural Language
Understanding Tasks | Learning general representations of text is a fundamental problem for many
natural language understanding (NLU) tasks. Previously, researchers have
proposed to use language model pre-training and multi-task learning to learn
robust representations. However, these methods can achieve sub-optimal
performance in low-resource scenarios. Inspired by the recent success of
optimization-based meta-learning algorithms, in this paper, we explore the
model-agnostic meta-learning algorithm (MAML) and its variants for low-resource
NLU tasks. We validate our methods on the GLUE benchmark and show that our
proposed models can outperform several strong baselines. We further empirically
demonstrate that the learned representations can be adapted to new tasks
efficiently and effectively.
| 2,019 | Computation and Language |
Unsupervised Domain Adaptation for Neural Machine Translation with
Domain-Aware Feature Embeddings | The recent success of neural machine translation models relies on the
availability of high quality, in-domain data. Domain adaptation is required
when domain-specific data is scarce or nonexistent. Previous unsupervised
domain adaptation strategies include training the model with in-domain copied
monolingual or back-translated data. However, these methods use generic
representations for text regardless of domain shift, which makes it infeasible
for translation models to control outputs conditional on a specific domain. In
this work, we propose an approach that adapts models with domain-aware feature
embeddings, which are learned via an auxiliary language modeling task. Our
approach allows the model to assign domain-specific representations to words
and output sentences in the desired domain. Our empirical results demonstrate
the effectiveness of the proposed strategy, achieving consistent improvements
in multiple experimental settings. In addition, we show that combining our
method with back translation can further improve the performance of the model.
| 2,019 | Computation and Language |
Interactive Machine Comprehension with Information Seeking Agents | Existing machine reading comprehension (MRC) models do not scale effectively
to real-world applications like web-level information retrieval and question
answering (QA). We argue that this stems from the nature of MRC datasets: most
of these are static environments wherein the supporting documents and all
necessary information are fully observed. In this paper, we propose a simple
method that reframes existing MRC datasets as interactive, partially observable
environments. Specifically, we "occlude" the majority of a document's text and
add context-sensitive commands that reveal "glimpses" of the hidden text to a
model. We repurpose SQuAD and NewsQA as an initial case study, and then show
how the interactive corpora can be used to train a model that seeks relevant
information through sequential decision making. We believe that this setting
can contribute in scaling models to web-level QA scenarios.
| 2,020 | Computation and Language |
A survey of cross-lingual features for zero-shot cross-lingual semantic
parsing | The availability of corpora to train semantic parsers in English has led to
significant advances in the field. Unfortunately, for languages other than
English, annotation is scarce and so are developed parsers. We then ask: could
a parser trained in English be applied to a language that it hasn't been trained
on? To answer this question we explore zero-shot cross-lingual semantic parsing
where we train an available coarse-to-fine semantic parser (Liu et al., 2018)
using cross-lingual word embeddings and universal dependencies in English and
test it on Italian, German and Dutch. Results on the Parallel Meaning Bank - a
multilingual semantic graphbank, show that Universal Dependency features
significantly boost performance when used in conjunction with other lexical
features but modelling the UD structure directly when encoding the input does
not.
| 2,019 | Computation and Language |
Classical Chinese Sentence Segmentation for Tomb Biographies of Tang
Dynasty | Tomb biographies of the Tang dynasty provide invaluable information about
Chinese history. The original biographies are classical Chinese texts which
contain neither word boundaries nor sentence boundaries. Relying on three
published books of tomb biographies of the Tang dynasty, we investigated the
effectiveness of employing machine-learning methods for algorithmically
identifying the pauses and terminals of sentences in the biographies.
We consider the segmentation task as a classification problem. Chinese
characters that are and are not followed by a punctuation mark are classified
into two categories. We applied a machine-learning-based mechanism, the
conditional random fields (CRF), to classify the characters (and words) in the
texts, and we studied the contributions of selected types of lexical
information to the resulting quality of the segmentation recommendations.
This proposal, presented at the DH 2018 conference, discussed some of the basic
experiments and their evaluations. By considering the contextual information
and employing the heuristics provided by experts of Chinese literature, we
achieved F1 measures that were better than 80%. More complex experiments that
employ deep neural networks helped us further improve the results in recent
work.
| 2,019 | Computation and Language |
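The segmentation task above is framed as classifying each character by whether a punctuation mark follows it. The sketch below illustrates that formulation only; it swaps the paper's CRF for a plain logistic-regression classifier over a simplified feature template, and the tiny "corpus" and features are invented for demonstration.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def char_features(text: str, i: int) -> dict:
    """Contextual features for the i-th character (a simplified feature template)."""
    return {
        "char": text[i],
        "prev": text[i - 1] if i > 0 else "<BOS>",
        "next": text[i + 1] if i + 1 < len(text) else "<EOS>",
        "bigram": text[max(0, i - 1):i + 1],
    }

def make_examples(punctuated: str):
    """Label each non-punctuation character: 1 if a pause/terminal follows, else 0."""
    puncts = set("，。、；")
    plain, labels = [], []
    for idx, ch in enumerate(punctuated):
        if ch in puncts:
            continue
        plain.append(ch)
        follows = idx + 1 < len(punctuated) and punctuated[idx + 1] in puncts
        labels.append(1 if follows else 0)
    text = "".join(plain)
    return [char_features(text, i) for i in range(len(text))], labels

# Tiny toy corpus of hypothetical punctuated snippets -- purely illustrative.
corpus = ["君諱某，字某，某郡人也。", "祖某，官至某州刺史。"]
X, y = [], []
for line in corpus:
    feats, labels = make_examples(line)
    X.extend(feats); y.extend(labels)

vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)
print(clf.predict(vec.transform([char_features("君諱某字某", 2)])))
```

A CRF would additionally model dependencies between neighboring labels, which is exactly the refinement the abstract's approach relies on.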
Onto Word Segmentation of the Complete Tang Poems | We aim at segmenting words in the Complete Tang Poems (CTP). Although it is
possible to do some research about CTP without doing full-scale word
segmentation, we must move forward to word-level analysis of CTP for conducting
advanced research topics. In November 2018 when we submitted the manuscript for
DH 2019 (ADHO), we collected only 2433 poems that were segmented by trained
experts, and used the segmented poems to evaluate the segmenter that considered
domain knowledge of Chinese poetry. We trained pointwise mutual information
(PMI) between Chinese characters based on the CTP poems (excluding the 2433
poems, which were used exclusively for testing) and the domain knowledge.
The segmenter relied on the PMI information to recover 85.7% of the words in
the test poems. However, it segmented a poem completely correctly only 17.8% of
the time. By the time we presented our work at DH 2019, we had annotated more
than 20000 poems. With a much larger amount of data, we were able to apply
biLSTM models for this word segmentation task, and we segmented a poem
completely correctly more than 20% of the time. In contrast, human annotators
completely agreed on their annotations about 40% of the time.
| 2,019 | Computation and Language |
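To illustrate the PMI-based segmentation idea described above, here is a minimal sketch that estimates character-pair PMI from a toy set of lines and inserts a word boundary wherever the PMI drops below a threshold. The corpus, threshold, and boundary rule are illustrative assumptions, not the paper's trained segmenter.

```python
import math
from collections import Counter

def train_pmi(lines):
    """Estimate PMI between adjacent characters from raw (unsegmented) lines."""
    unigrams, bigrams = Counter(), Counter()
    for line in lines:
        unigrams.update(line)
        bigrams.update(line[i:i + 2] for i in range(len(line) - 1))
    n_uni, n_bi = sum(unigrams.values()), max(sum(bigrams.values()), 1)
    def pmi(a, b):
        p_ab = bigrams[a + b] / n_bi
        p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
        return math.log(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")
    return pmi

def segment(line, pmi, threshold=0.0):
    """Insert a boundary between characters whose PMI is below the threshold."""
    words, current = [], line[0]
    for a, b in zip(line, line[1:]):
        if pmi(a, b) >= threshold:
            current += b
        else:
            words.append(current)
            current = b
    words.append(current)
    return words

# Toy corpus of unsegmented poem lines -- purely illustrative.
corpus = ["床前明月光", "疑是地上霜", "舉頭望明月", "低頭思故鄉"]
pmi = train_pmi(corpus)
print(segment("舉頭望明月", pmi, threshold=0.0))
```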
Exploiting Multiple Embeddings for Chinese Named Entity Recognition | Identifying the named entities mentioned in text would enrich many semantic
applications at the downstream level. However, due to the predominant usage of
colloquial language in microblogs, the named entity recognition (NER) in
Chinese microblogs experiences significant performance deterioration compared
with NER in formal Chinese corpora. In this paper, we propose a
simple yet effective neural framework to derive the character-level embeddings
for NER in Chinese text, named ME-CNER. A character embedding is derived with
rich semantic information harnessed at multiple granularities, ranging from
radical, character to word levels. The experimental results demonstrate that
the proposed approach achieves a large performance improvement on Weibo dataset
and comparable performance on MSRA news dataset with lower computational cost
against the existing state-of-the-art alternatives.
| 2,019 | Computation and Language |
Emotion Detection with Neural Personal Discrimination | There has been a recent line of work on automatically predicting the emotions
of posts in social media. Existing approaches consider the posts individually
and predict their emotions independently. Different from previous research,
we explore the dependence among relevant posts via the authors' backgrounds,
since the authors with similar backgrounds, e.g., gender, location, tend to
express similar emotions. However, such personal attributes are not easy to
obtain in most social media websites, and it is hard to capture
attributes-aware words to connect similar people. Accordingly, we propose a
Neural Personal Discrimination (NPD) approach to address the above challenges by
determining personal attributes from posts, and connecting relevant posts with
similar attributes to jointly learn their emotions. In particular, we employ
adversarial discriminators to determine the personal attributes, with attention
mechanisms to aggregate attributes-aware words. In this way, social
correlation among different posts can be better captured. Experimental
results show the usefulness of personal attributes, and the effectiveness of
our proposed NPD approach in capturing such personal attributes with
significant gains over the state-of-the-art models.
| 2,019 | Computation and Language |
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain
Task-Oriented Dialog | Dialog policy decides what and how a task-oriented dialog system will
respond, and plays a vital role in delivering effective conversations. Many
studies apply Reinforcement Learning to learn a dialog policy with the reward
function which requires elaborate design and pre-specified user goals. With the
growing needs to handle complex goals across multiple domains, such manually
designed reward functions become too costly to cope with the complexity of
real-world tasks. To this end, we propose Guided Dialog Policy Learning, a
novel algorithm based on Adversarial Inverse Reinforcement Learning for joint
reward estimation and policy optimization in multi-domain task-oriented dialog.
The proposed approach estimates the reward signal and infers the user goal in
the dialog sessions. The reward estimator evaluates the state-action pairs so
that it can guide the dialog policy at each dialog turn. Extensive experiments
on a multi-domain dialog dataset show that the dialog policy guided by the
learned reward function achieves remarkably higher task success than
state-of-the-art baselines.
| 2,019 | Computation and Language |
Discourse-Aware Semantic Self-Attention for Narrative Reading
Comprehension | In this work, we propose to use linguistic annotations as a basis for a
Discourse-Aware Semantic Self-Attention encoder that we employ for
reading comprehension on long narrative texts. We extract relations between
discourse units, events and their arguments as well as coreferring mentions,
using available annotation tools. Our empirical evaluation shows that the
investigated structures improve the overall performance, especially
intra-sentential and cross-sentential discourse relations, sentence-internal
semantic role relations, and long-distance coreference relations. We show that
dedicating self-attention heads to intra-sentential relations and relations
connecting neighboring sentences is beneficial for finding answers to questions
in longer contexts. Our findings encourage the use of discourse-semantic
annotations to enhance the generalization capacity of self-attention models for
reading comprehension.
| 2,019 | Computation and Language |
DeepCopy: Grounded Response Generation with Hierarchical Pointer
Networks | Recent advances in neural sequence-to-sequence models have led to promising
results for several language generation-based tasks, including dialogue
response generation, summarization, and machine translation. However, these
models are known to have several problems, especially in the context of
chit-chat based dialogue systems: they tend to generate short and dull
responses that are often too generic. Furthermore, these models do not ground
conversational responses on knowledge and facts, resulting in turns that are
not accurate, informative and engaging for the users. In this paper, we propose
and experiment with a series of response generation models that aim to serve in
the general scenario where in addition to the dialogue context, relevant
unstructured external knowledge in the form of text is also assumed to be
available for models to harness. Our proposed approach extends
pointer-generator networks (See et al., 2017) by allowing the decoder to
hierarchically attend and copy from external knowledge in addition to the
dialogue context. We empirically show the effectiveness of the proposed model
compared to several baselines including (Ghazvininejad et al., 2018; Zhang et
al., 2018) through both automatic evaluation metrics and human evaluation on
CONVAI2 dataset.
| 2,019 | Computation and Language |
Language Tasks and Language Games: On Methodology in Current Natural
Language Processing Research | "This paper introduces a new task and a new dataset", "we improve the state
of the art in X by Y" -- it is rare to find a current natural language
processing paper (or AI paper more generally) that does not contain such
statements. What is mostly left implicit, however, is the assumption that this
necessarily constitutes progress, and what it constitutes progress towards.
Here, we make more precise the normally impressionistically used notions of
language task and language game and ask how a research programme built on these
might make progress towards the goal of modelling general language competence.
| 2,019 | Computation and Language |
Unlearn Dataset Bias in Natural Language Inference by Fitting the
Residual | Statistical natural language inference (NLI) models are susceptible to
learning dataset bias: superficial cues that happen to associate with the label
on a particular dataset, but are not useful in general, e.g., negation words
indicate contradiction. As exposed by several recent challenge datasets, these
models perform poorly when such association is absent, e.g., predicting that "I
love dogs" contradicts "I don't love cats". Our goal is to design learning
algorithms that guard against known dataset bias. We formalize the concept of
dataset bias under the framework of distribution shift and present a simple
debiasing algorithm based on residual fitting, which we call DRiFt. We first
learn a biased model that only uses features that are known to relate to
dataset bias. Then, we train a debiased model that fits to the residual of the
biased model, focusing on examples that cannot be predicted well by biased
features only. We use DRiFt to train three high-performing NLI models on two
benchmark datasets, SNLI and MNLI. Our debiased models achieve significant
gains over baseline models on two challenge test sets, while maintaining
reasonable performance on the original test sets.
| 2,019 | Computation and Language |
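The residual-fitting idea above (a biased model trained on bias features, then a debiased model that explains only what the biased model cannot) can be sketched in a few lines of NumPy as a product-of-experts-style logistic regression. Everything below is synthetic and simplified; it is not the authors' DRiFt code, and the data, feature split, and learning rates are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: feature 0 is a spurious "bias" feature, features 1-2 carry the signal.
n = 2000
X = rng.standard_normal((n, 3))
y = (X[:, 1] + X[:, 2] + 0.3 * rng.standard_normal(n) > 0).astype(float)
X[:, 0] = y + 0.5 * rng.standard_normal(n)      # bias feature correlates with the label

# Step 1: a "biased" model that only sees the bias feature.
w_b = np.zeros(1)
for _ in range(500):
    p = sigmoid(X[:, :1] @ w_b)
    w_b -= 0.1 * X[:, :1].T @ (p - y) / n

# Step 2: the debiased model fits the residual: its logits are added to the frozen
# biased logits, so it only needs to explain what the bias feature cannot.
bias_logits = X[:, :1] @ w_b                    # frozen during step 2
w_d = np.zeros(2)
for _ in range(500):
    p = sigmoid(bias_logits + X[:, 1:] @ w_d)
    w_d -= 0.1 * X[:, 1:].T @ (p - y) / n

# At test time only the debiased model is used, as in residual-fitting debiasing.
p_test = sigmoid(X[:, 1:] @ w_d)
print("debiased accuracy:", ((p_test > 0.5) == y).mean().round(3))
```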
Data Augmentation with Atomic Templates for Spoken Language
Understanding | Spoken Language Understanding (SLU) converts user utterances into structured
semantic representations. Data sparsity is one of the main obstacles of SLU due
to the high cost of human annotation, especially when domain changes or a new
domain comes. In this work, we propose a data augmentation method with atomic
templates for SLU, which involves minimum human efforts. The atomic templates
produce exemplars for fine-grained constituents of semantic representations. We
propose an encoder-decoder model to generate the whole utterance from atomic
exemplars. Moreover, the generator could be transferred from source domains to
help a new domain which has little data. Experimental results show that our
method achieves significant improvements on the DSTC 2&3 dataset, which is a domain
adaptation setting of SLU.
| 2,019 | Computation and Language |
An Empirical Comparison on Imitation Learning and Reinforcement Learning
for Paraphrase Generation | Generating paraphrases from given sentences involves decoding words step by
step from a large vocabulary. To learn a decoder, supervised learning which
maximizes the likelihood of tokens always suffers from exposure bias.
Although both reinforcement learning (RL) and imitation learning (IL) have been
widely used to alleviate the bias, the lack of direct comparison leads to only
a partial picture of their benefits. In this work, we present an empirical study
on how RL and IL can help boost the performance of generating paraphrases, with
the pointer-generator as a base model. Experiments on the benchmark datasets
show that (1) imitation learning is consistently better than reinforcement
learning; and (2) the pointer-generator models with imitation learning
outperform the state-of-the-art methods by a large margin.
| 2,022 | Computation and Language |
Analyzing Customer Feedback for Product Fit Prediction | One of the biggest hurdles for customers when purchasing fashion online, is
the difficulty of finding products with the right fit. In order to provide a
better online shopping experience, platforms need to find ways to recommend the
right product sizes and the best fitting products to their customers. These
recommendation systems, however, require customer feedback in order to estimate
the most suitable sizing options. Such feedback is rare and often only
available as natural text. In this paper, we examine the extraction of product
fit feedback from customer reviews using natural language processing
techniques. In particular, we compare traditional methods with more recent
transfer learning techniques for text classification, and analyze their
results. Our evaluation shows that the transfer learning approach ULMFit is
not only comparatively fast to train but also achieves the highest accuracy on
this task. The integration of the extracted information with actual size
recommendation systems is left for future work.
| 2,019 | Computation and Language |
Interactive Language Learning by Question Answering | Humans observe and interact with the world to acquire knowledge. However,
most existing machine reading comprehension (MRC) tasks miss the interactive,
information-seeking component of comprehension. Such tasks present models with
static documents that contain all necessary information, usually concentrated
in a single short substring. Thus, models can achieve strong performance
through simple word- and phrase-based pattern matching. We address this problem
by formulating a novel text-based question answering task: Question Answering
with Interactive Text (QAit). In QAit, an agent must interact with a partially
observable text-based environment to gather information required to answer
questions. QAit poses questions about the existence, location, and attributes
of objects found in the environment. The data is built using a text-based game
generator that defines the underlying dynamics of interaction with the
environment. We propose and evaluate a set of baseline models for the QAit task
that includes deep reinforcement learning agents. Experiments show that the
task presents a major challenge for machine reading systems, while humans solve
it with relative ease.
| 2,019 | Computation and Language |
SpatialNLI: A Spatial Domain Natural Language Interface to Databases
Using Spatial Comprehension | A natural language interface (NLI) to databases is an interface that
translates a natural language question to a structured query that is executable
by database management systems (DBMS). However, an NLI that is trained in the
general domain is hard to apply in the spatial domain due to the idiosyncrasy
and expressiveness of the spatial questions. Inspired by the machine
comprehension model, we propose a spatial comprehension model that is able to
recognize the meaning of spatial entities based on the semantics of the
context. The spatial semantics learned from the spatial comprehension model is
then injected into the natural language question to ease the burden of capturing
the spatial-specific semantics. With our spatial comprehension model and
information injection, our NLI for the spatial domain, named SpatialNLI, is
able to capture the semantic structure of the question and translate it to the
corresponding syntax of an executable query accurately. We also experimentally
ascertain that SpatialNLI outperforms state-of-the-art methods.
| 2,019 | Computation and Language |
Learning a Multi-Domain Curriculum for Neural Machine Translation | Most data selection research in machine translation focuses on improving a
single domain. We perform data selection for multiple domains at once. This is
achieved by carefully introducing instance-level domain-relevance features and
automatically constructing a training curriculum to gradually concentrate on
multi-domain relevant and noise-reduced data batches. Both the choice of
features and the use of curriculum are crucial for balancing and improving all
domains, including out-of-domain. In large-scale experiments, the multi-domain
curriculum simultaneously reaches or outperforms the individual performance and
brings solid gains over no-curriculum training.
| 2,020 | Computation and Language |
Leveraging Structural and Semantic Correspondence for Attribute-Oriented
Aspect Sentiment Discovery | Opinionated text often involves attributes such as authorship and location
that influence the sentiments expressed for different aspects. We posit that
structural and semantic correspondence is both prevalent in opinionated text,
especially when associated with attributes, and crucial in accurately revealing
its latent aspect and sentiment structure. However, it is not recognized by
existing approaches.
We propose Trait, an unsupervised probabilistic model that discovers aspects
and sentiments from text and associates them with different attributes. To this
end, Trait infers and leverages structural and semantic correspondence using a
Markov Random Field. We show empirically that by incorporating attributes
explicitly, Trait significantly outperforms state-of-the-art baselines, both by
generating attribute profiles that accord with our intuitions, as shown via
visualization, and by yielding topics of greater semantic cohesion.
| 2,019 | Computation and Language |
Two-Pass End-to-End Speech Recognition | The requirements for many applications of state-of-the-art speech recognition
systems include not only low word error rate (WER) but also low latency.
Specifically, for many use-cases, the system must be able to decode utterances
in a streaming fashion and faster than real-time. Recently, a streaming
recurrent neural network transducer (RNN-T) end-to-end (E2E) model has been shown to
be a good candidate for on-device speech recognition, with improved WER and
latency metrics compared to conventional on-device models [1]. However, this
model still lags behind a large state-of-the-art conventional model in quality
[2]. On the other hand, a non-streaming E2E Listen, Attend and Spell (LAS)
model has shown comparable quality to large conventional models [3]. This work
aims to bring the quality of an E2E streaming model closer to that of a
conventional system by incorporating a LAS network as a second-pass component,
while still abiding by latency constraints. Our proposed two-pass model
achieves a 17%-22% relative reduction in WER compared to RNN-T alone and
increases latency by a small fraction over RNN-T.
| 2,019 | Computation and Language |
Scientific Statement Classification over arXiv.org | We introduce a new classification task for scientific statements and release
a large-scale dataset for supervised learning. Our resource is derived from a
machine-readable representation of the arXiv.org collection of preprint
articles. We explore fifty author-annotated categories and empirically motivate
a task design of grouping 10.5 million annotated paragraphs into thirteen
classes. We demonstrate that the task setup aligns with known success rates
from the state of the art, peaking at a 0.91 F1-score via a BiLSTM
encoder-decoder model. Additionally, we introduce a lexeme serialization for
mathematical formulas, and observe that context-aware models could improve when
also trained on the symbolic modality. Finally, we discuss the limitations of
both data and task design, and outline potential directions towards
increasingly complex models of scientific discourse, beyond isolated
statements.
| 2,019 | Computation and Language |
Neural Snowball for Few-Shot Relation Learning | Knowledge graphs typically undergo open-ended growth of new relations. This
cannot be well handled by relation extraction that focuses on pre-defined
relations with sufficient training data. To address new relations with few-shot
instances, we propose a novel bootstrapping approach, Neural Snowball, to learn
new relations by transferring semantic knowledge about existing relations. More
specifically, we use Relational Siamese Networks (RSN) to learn the metric of
relational similarities between instances based on existing relations and their
labeled data. Afterwards, given a new relation and its few-shot instances, we
use RSN to accumulate reliable instances from unlabeled corpora; these
instances are used to train a relation classifier, which can further identify
new facts of the new relation. The process is conducted iteratively like a
snowball. Experiments show that our model can gather high-quality instances for
better few-shot relation learning and achieves significant improvement compared
to baselines. Codes and datasets are released on
https://github.com/thunlp/Neural-Snowball.
| 2,019 | Computation and Language |
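The snowball process above alternates between pulling in reliable unlabeled instances via a learned similarity metric and retraining a relation classifier. A rough sketch of that loop follows, with deliberate simplifications: plain cosine similarity stands in for the Relational Siamese Network, logistic regression stands in for the neural relation classifier, and all embeddings, thresholds, and round counts are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def snowball(seed_pos, seed_neg, unlabeled, rounds=3, sim_threshold=0.8):
    """Iteratively grow the positive set from unlabeled instance embeddings."""
    pos, neg, pool, clf = list(seed_pos), list(seed_neg), list(unlabeled), None
    for _ in range(rounds):
        # Phase 1: accumulate unlabeled instances similar to known positives.
        newly = [x for x in pool if max(cosine(x, p) for p in pos) >= sim_threshold]
        pos.extend(newly)
        pool = [x for x in pool if not any(x is n for n in newly)]
        # Phase 2: retrain the relation classifier on the enlarged set.
        X = np.vstack(pos + neg)
        y = np.array([1] * len(pos) + [0] * len(neg))
        clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf, pos

# Toy embeddings: positives cluster around +1, negatives around -1 (dimension 8).
rng = np.random.default_rng(0)
seed_pos = [np.ones(8) + 0.1 * rng.standard_normal(8) for _ in range(3)]
seed_neg = [-np.ones(8) + 0.1 * rng.standard_normal(8) for _ in range(3)]
unlabeled = [np.sign(rng.standard_normal()) * np.ones(8) + 0.2 * rng.standard_normal(8)
             for _ in range(20)]
clf, grown_pos = snowball(seed_pos, seed_neg, unlabeled)
print(len(grown_pos), "positives after snowballing")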
A Joint Model for Aspect-Category Sentiment Analysis with Shared
Sentiment Prediction Layer | Aspect-category sentiment analysis (ACSA) aims to predict the aspect
categories mentioned in texts and their corresponding sentiment polarities.
Some joint models have been proposed to address this task. Given a text, these
joint models detect all the aspect categories mentioned in the text and predict
the sentiment polarities toward them at once. Although these joint models
obtain promising performance, they train separate parameters for each aspect
category and therefore suffer from data deficiency of some aspect categories.
To solve this problem, we propose a novel joint model which contains a shared
sentiment prediction layer. The shared sentiment prediction layer transfers
sentiment knowledge between aspect categories and alleviates the problem caused
by data deficiency. Experiments conducted on SemEval-2016 Datasets demonstrate
the effectiveness of our model.
| 2,021 | Computation and Language |
Regularized Context Gates on Transformer for Machine Translation | Context gates are effective to control the contributions from the source and
target contexts in the recurrent neural network (RNN) based neural machine
translation (NMT). However, it is challenging to extend them into the advanced
Transformer architecture, which is more complicated than RNN. This paper first
provides a method to identify source and target contexts and then introduces a
gate mechanism to control the source and target contributions in Transformer.
In addition, to further reduce the bias problem in the gate mechanism, this
paper proposes a regularization method to guide the learning of the gates with
supervision automatically generated using pointwise mutual information.
Extensive experiments on 4 translation datasets demonstrate that the proposed
model obtains an averaged gain of 1.0 BLEU score over a strong Transformer
baseline.
| 2,020 | Computation and Language |
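The gating mechanism described above interpolates between source and target context vectors and is additionally regularized toward an automatically generated supervision signal. Below is a hedged PyTorch sketch of such a gate and regularizer; the module shapes, the constant supervision value, and the placeholder translation loss are all assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """output = g * source + (1 - g) * target, with g = sigmoid(W [source; target])."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, source_ctx, target_ctx):
        g = torch.sigmoid(self.proj(torch.cat([source_ctx, target_ctx], dim=-1)))
        return g * source_ctx + (1.0 - g) * target_ctx, g

d_model = 16
gate = ContextGate(d_model)
src_ctx = torch.randn(2, 5, d_model)      # (batch, target_len, d_model)
tgt_ctx = torch.randn(2, 5, d_model)
fused, g = gate(src_ctx, tgt_ctx)

# Regularize the gate toward a supervision signal (a made-up constant here; the paper
# derives it automatically, e.g. from pointwise mutual information).
g_supervision = torch.full_like(g, 0.7)
translation_loss = fused.pow(2).mean()    # placeholder for the real NMT loss
loss = translation_loss + 0.1 * (g - g_supervision).pow(2).mean()
loss.backward()
print(float(loss))
```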
Why Attention? Analyze BiLSTM Deficiency and Its Remedies in the Case of
NER | BiLSTM has been prevalently used as a core module for NER in a
sequence-labeling setup. State-of-the-art approaches use BiLSTM with additional
resources such as gazetteers, language-modeling, or multi-task supervision to
further improve NER. This paper instead takes a step back and focuses on
analyzing problems of BiLSTM itself and how exactly self-attention can bring
improvements. We formally show the limitation of (CRF-)BiLSTM in modeling
cross-context patterns for each word -- the XOR limitation. Then, we show that
two types of simple cross-structures -- self-attention and Cross-BiLSTM -- can
effectively remedy the problem. We test the practical impacts of the deficiency
on real-world NER datasets, OntoNotes 5.0 and WNUT 2017, with clear and
consistent improvements over the baseline, up to 8.7% on some of the
multi-token entity mentions. We give in-depth analyses of the improvements
across several aspects of NER, especially the identification of multi-token
mentions. This study should lay a sound foundation for future improvements on
sequence-labeling NER. (Source codes:
https://github.com/jacobvsdanniel/cross-ner)
| 2,020 | Computation and Language |
Shallow Syntax in Deep Water | Shallow syntax provides an approximation of phrase-syntactic structure of
sentences; it can be produced with high accuracy, and is computationally cheap
to obtain. We investigate the role of shallow syntax-aware representations for
NLP tasks using two techniques. First, we enhance the ELMo architecture to
allow pretraining on predicted shallow syntactic parses, instead of just raw
text, so that contextual embeddings make use of shallow syntactic context. Our
second method involves shallow syntactic features obtained automatically on
downstream task data. Neither approach leads to a significant gain on any of
the four downstream tasks we considered relative to ELMo-only baselines.
Further analysis using black-box probes confirms that our shallow-syntax-aware
contextual embeddings do not transfer to linguistic tasks any more easily than
ELMo's embeddings. We take these findings as evidence that ELMo-style
pretraining discovers representations which make additional awareness of
shallow syntax redundant.
| 2,019 | Computation and Language |
Multilingual and Multi-Aspect Hate Speech Analysis | Current research on hate speech analysis is typically oriented towards
monolingual and single classification tasks. In this paper, we present a new
multilingual multi-aspect hate speech analysis dataset and use it to test the
current state-of-the-art multilingual multitask learning approaches. We
evaluate our dataset in various classification settings, then we discuss how to
leverage our annotations in order to improve hate speech detection and
classification in general.
| 2,019 | Computation and Language |
Zero-shot Text-to-SQL Learning with Auxiliary Task | Recent years have seen great success in the use of neural seq2seq models on
the text-to-SQL task. However, little work has paid attention to how these
models generalize to realistic unseen data, which naturally raises a question:
does this impressive performance signify a perfect generalization model, or are
there still some limitations?
In this paper, we first diagnose the bottleneck of text-to-SQL task by
providing a new testbed, in which we observe that existing models present poor
generalization ability on rarely-seen data. The above analysis encourages us to
design a simple but effective auxiliary task, which serves as a supportive
model as well as a regularization term to the generation task to increase the
model's generalization. Experimentally, we evaluate our models on a large
text-to-SQL dataset WikiSQL. Compared to a strong baseline coarse-to-fine
model, our models improve over the baseline by more than 3% absolute in
accuracy on the whole dataset. More interestingly, on a zero-shot subset test
of WikiSQL, our models achieve 5% absolute accuracy gain over the baseline,
clearly demonstrating their superior generalizability.
| 2,019 | Computation and Language |
Leveraging Frequent Query Substructures to Generate Formal Queries for
Complex Question Answering | Formal query generation aims to generate correct executable queries for
question answering over knowledge bases (KBs), given entity and relation
linking results. Current approaches build universal paraphrasing or ranking
models for the whole questions, which are likely to fail in generating queries
for complex, long-tail questions. In this paper, we propose SubQG, a new query
generation approach based on frequent query substructures, which helps rank the
existing (but nonsignificant) query structures or build new query structures.
Our experiments on two benchmark datasets show that our approach significantly
outperforms the existing ones, especially for complex questions. Also, it
achieves promising performance with limited training data and noisy
entity/relation linking results.
| 2,019 | Computation and Language |
Document Hashing with Mixture-Prior Generative Models | Hashing is promising for large-scale information retrieval tasks thanks to
the efficiency of distance evaluation between binary codes. Generative hashing
is often used to generate hashing codes in an unsupervised way. However,
existing generative hashing methods only considered the use of simple priors,
like Gaussian and Bernoulli priors, which limits their ability to further
improve performance. In this paper, two mixture-prior generative models
are proposed, under the objective to produce high-quality hashing codes for
documents. Specifically, a Gaussian mixture prior is first imposed onto the
variational auto-encoder (VAE), followed by a separate step to cast the
continuous latent representation of the VAE into binary codes. To avoid the
performance loss caused by the separate casting, a model using a Bernoulli
mixture prior is further developed, in which end-to-end training is enabled
by resorting to the straight-through (ST) discrete gradient estimator.
Experimental results on several benchmark datasets demonstrate that the
proposed methods, especially the one using Bernoulli mixture priors,
consistently outperform existing ones by a substantial margin.
| 2,019 | Computation and Language |
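The end-to-end training mentioned above relies on a straight-through (ST) discrete gradient estimator to push gradients through the binarization of the latent code. The sketch below shows only that generic ST trick in PyTorch, not the paper's full mixture-prior VAE; the code length, batch size, and placeholder loss are arbitrary.

```python
import torch

def straight_through_binarize(probs: torch.Tensor) -> torch.Tensor:
    """Hard-threshold in the forward pass, identity gradient in the backward pass."""
    hard = (probs > 0.5).float()
    # Detach trick: forward value equals `hard`, gradient flows as if it were `probs`.
    return probs + (hard - probs).detach()

# Toy latent "code probabilities" for a batch of 4 documents with 16-bit codes.
logits = torch.randn(4, 16, requires_grad=True)
probs = torch.sigmoid(logits)
codes = straight_through_binarize(probs)          # binary in the forward pass

# Any downstream loss (a placeholder here) still produces gradients for `logits`.
loss = codes.sum()
loss.backward()
print(codes[0], logits.grad is not None)
```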
Probing Representations Learned by Multimodal Recurrent and Transformer
Models | Recent literature shows that large-scale language modeling provides excellent
reusable sentence representations with both recurrent and self-attentive
architectures. However, there has been less clarity on the commonalities and
differences in the representational properties induced by the two
architectures. It also has been shown that visual information serves as one of
the means for grounding sentence representations. In this paper, we present a
meta-study assessing the representational quality of models where the training
signal is obtained from different modalities, in particular, language modeling,
image features prediction, and both textual and multimodal machine translation.
We evaluate textual and visual features of sentence representations obtained
using predominant approaches on image retrieval and semantic textual
similarity. Our experiments reveal that on moderate-sized datasets, a sentence
counterpart in a target language or visual modality provides much stronger
training signal for sentence representation than language modeling.
Importantly, we observe that while the Transformer models achieve superior
machine translation quality, representations from the recurrent neural network
based models perform significantly better over tasks focused on semantic
relevance.
| 2,019 | Computation and Language |
Ellipsis Resolution as Question Answering: An Evaluation | Most, if not all, forms of ellipsis (e.g., so does Mary) are similar to
reading comprehension questions (what does Mary do), in that in order to
resolve them, we need to identify an appropriate text span in the preceding
discourse. Following this observation, we present an alternative approach for
English ellipsis resolution relying on architectures developed for question
answering (QA). We present both single-task models, and joint models trained on
auxiliary QA and coreference resolution datasets, clearly outperforming the
current state of the art for Sluice Ellipsis (from 70.00 to 86.01 F1) and Verb
Phrase Ellipsis (from 72.89 to 78.66 F1).
| 2,021 | Computation and Language |
A Summarization System for Scientific Documents | We present a novel system providing summaries for Computer Science
publications. Through a qualitative user study, we identified the most valuable
scenarios for discovery, exploration and understanding of scientific documents.
Based on these findings, we built a system that retrieves and summarizes
scientific documents for a given information need, either in form of a
free-text query or by choosing categorized values such as scientific tasks,
datasets and more. Our system ingested 270,000 papers, and its summarization
module aims to generate concise yet detailed summaries. We validated our
approach with human experts.
| 2,019 | Computation and Language |
Global Reasoning over Database Structures for Text-to-SQL Parsing | State-of-the-art semantic parsers rely on auto-regressive decoding, emitting
one symbol at a time. When tested against complex databases that are unobserved
at training time (zero-shot), the parser often struggles to select the correct
set of database constants in the new database, due to the local nature of
decoding. In this work, we propose a semantic parser that globally reasons
about the structure of the output query to make a more contextually-informed
selection of database constants. We use message-passing through a graph neural
network to softly select a subset of database constants for the output query,
conditioned on the question. Moreover, we train a model to rank queries based
on the global alignment of database constants to question words. We apply our
techniques to the current state-of-the-art model for Spider, a zero-shot
semantic parsing dataset with complex databases, increasing accuracy from 39.4%
to 47.4%.
| 2,019 | Computation and Language |