Titles | Abstracts | Years | Categories |
---|---|---|---|
Dialog State Tracking: A Neural Reading Comprehension Approach | Dialog state tracking is used to estimate the current belief state of a
dialog given all the preceding conversation. Machine reading comprehension, on
the other hand, focuses on building systems that read passages of text and
answer questions that require some understanding of passages. We formulate
dialog state tracking as a reading comprehension task to answer the question
$what\ is\ the\ state\ of\ the\ current\ dialog?$ after reading conversational
context. In contrast to traditional state tracking methods where the dialog
state is often predicted as a distribution over a closed set of all the
possible slot values within an ontology, our method uses a simple
attention-based neural network to point to the slot values within the
conversation. Experiments on MultiWOZ-2.0 cross-domain dialog dataset show that
our simple system can obtain similar accuracies compared to the previous more
complex methods. By exploiting recent advances in contextual word embeddings,
adding a model that explicitly tracks whether a slot value should be carried
over to the next turn, and combining our method with a traditional joint state
tracking method that relies on closed set vocabulary, we can obtain a
joint-goal accuracy of $47.33\%$ on the standard test split, exceeding current
state-of-the-art by $11.75\%$.
| 2019 | Computation and Language |
Self-Balanced Dropout | Dropout is known as an effective way to reduce overfitting by preventing
co-adaptations of units. In this paper, we theoretically prove that the
co-adaptation problem still exists after using dropout due to the correlations
among the inputs. Based on the proof, we further propose Self-Balanced Dropout,
a novel dropout method which uses a trainable variable to balance the influence
of the input correlation on parameter update. We evaluate Self-Balanced Dropout
on a range of tasks with both simple and complex models. The experimental
results show that the mechanism can effectively solve the co-adaptation problem
to some extent and significantly improve the performance on all tasks.
| 2019 | Computation and Language |
Word Embedding for Response-To-Text Assessment of Evidence | Manually grading the Response to Text Assessment (RTA) is labor intensive.
Therefore, an automatic method is being developed for scoring analytical
writing when the RTA is administered in large numbers of classrooms. Our
long-term goal is to also use this scoring method to provide formative feedback
to students and teachers about students' writing quality. As a first step
towards this goal, interpretable features for automatically scoring the
evidence rubric of the RTA have been developed. In this paper, we present a
simple but promising method for improving evidence scoring by employing the
word embedding model. We evaluate our method on corpora of responses written by
upper elementary students.
| 2017 | Computation and Language |
eRevise: Using Natural Language Processing to Provide Formative Feedback
on Text Evidence Usage in Student Writing | Writing a good essay typically involves students revising an initial paper
draft after receiving feedback. We present eRevise, a web-based writing and
revising environment that uses natural language processing features generated
for rubric-based essay scoring to trigger formative feedback messages regarding
students' use of evidence in response-to-text writing. By helping students
understand the criteria for using text evidence during writing, eRevise
empowers students to better revise their paper drafts. In a pilot deployment of
eRevise in 7 classrooms spanning grades 5 and 6, the quality of text evidence
usage in writing improved after students received formative feedback and then
engaged in paper revision.
| 2019 | Computation and Language |
Co-Attention Based Neural Network for Source-Dependent Essay Scoring | This paper presents an investigation of using a co-attention based neural
network for source-dependent essay scoring. We use a co-attention mechanism to
help the model learn the importance of each part of the essay more accurately.
Also, this paper shows that the co-attention based neural network model
provides reliable score prediction of source-dependent responses. We evaluate
our model on two source-dependent response corpora. Results show that our model
outperforms the baseline on both corpora. We also show through examples that the
model's attention is similar to expert opinions.
| 2018 | Computation and Language |
Predicting Prosodic Prominence from Text with Pre-trained Contextualized
Word Representations | In this paper we introduce a new natural language processing dataset and
benchmark for predicting prosodic prominence from written text. To our
knowledge this will be the largest publicly available dataset with prosodic
labels. We describe the dataset construction and the resulting benchmark
dataset in detail and train a number of different models ranging from
feature-based classifiers to neural network systems for the prediction of
discretized prosodic prominence. We show that pre-trained contextualized word
representations from BERT outperform the other models even with less than 10%
of the training data. Finally we discuss the dataset in light of the results
and point to future research and plans for further improving both the dataset
and methods of predicting prosodic prominence from text. The dataset and the
code for the models are publicly available.
| 2019 | Computation and Language |
A Weakly-Supervised Attention-based Visualization Tool for Assessing
Political Affiliation | In this work, we seek to finetune a weakly-supervised expert-guided Deep
Neural Network (DNN) for the purpose of determining political affiliations. In
this context, stance detection is used for determining political affiliation or
ideology which is framed in the form of relative proximities between entities
in a low-dimensional space. An attention-based mechanism is used to provide
model interpretability. A Deep Neural Network for Natural Language
Understanding (NLU) using static and contextual embeddings is trained and
evaluated. Various techniques to visualize the projections generated from the
network are evaluated for visualization efficiency. An overview of the pipeline
from data ingestion, processing and generation of visualization is given here.
A web-based framework created to facilitate this interaction and exploration is
presented here. Preliminary results of this study are summarized and future
work is outlined.
| 2019 | Computation and Language |
Two-stage Training for Chinese Dialect Recognition | In this paper, we present a two-stage language identification (LID) system
based on a shallow ResNet14 followed by a simple 2-layer recurrent neural
network (RNN) architecture, which was used for Xunfei (iFlyTek) Chinese Dialect
Recognition Challenge and won the first place among 110 teams. The system
first trains an acoustic model (AM) with connectionist temporal
classification (CTC) to recognize the given phonetic sequence annotation and
then trains another RNN to classify the dialect category by utilizing the
intermediate features from the AM as inputs. Compared with a three-stage system
we further explore, our results show that the two-stage system can achieve high
accuracy for Chinese dialect recognition under both short utterance and long
utterance conditions with less training time.
| 2019 | Computation and Language |
Text Summarization in the Biomedical Domain | This chapter gives an overview of recent advances in the field of biomedical
text summarization. Different types of challenges are introduced, and methods
are discussed concerning the type of challenge that they address. Biomedical
literature summarization is explored as a leading trend in the field, and some
future lines of work are pointed out. Underlying methods of recent
summarization systems are briefly explained and the most significant evaluation
results are mentioned. The primary purpose of this chapter is to review the
most significant research efforts made in the current decade toward new methods
of biomedical text summarization. As the main parts of this chapter, current
trends are discussed and new challenges are introduced.
| 2019 | Computation and Language |
Clustering of Deep Contextualized Representations for Summarization of
Biomedical Texts | In recent years, summarizers that incorporate domain knowledge into the
process of text summarization have outperformed generic methods, especially for
summarization of biomedical texts. However, construction and maintenance of
domain knowledge bases are resource-intense tasks requiring significant manual
annotation. In this paper, we demonstrate that contextualized representations
extracted from the pre-trained deep language model BERT can be effectively
used to measure the similarity between sentences and to quantify the
informative content. The results show that our BERT-based summarizer can
improve the performance of biomedical summarization. Although the summarizer
does not use any sources of domain knowledge, it can capture the context of
sentences more accurately than the comparison methods. The source code and data
are available at https://github.com/BioTextSumm/BERT-based-Summ.
| 2019 | Computation and Language |
DpgMedia2019: A Dutch News Dataset for Partisanship Detection | We present a new Dutch news dataset with labeled partisanship. The dataset
contains more than 100K articles that are labeled on the publisher level and
776 articles that were crowdsourced using an internal survey platform and
labeled on the article level. In this paper, we document our original
motivation, the collection and annotation process, limitations, and
applications.
| 2019 | Computation and Language |
Semantic Role Labeling with Associated Memory Network | Semantic role labeling (SRL) is a task to recognize all the
predicate-argument pairs of a sentence, which has hit a performance improvement
bottleneck even after a series of recent works. This
paper proposes a novel syntax-agnostic SRL model enhanced by the proposed
associated memory network (AMN), which makes use of inter-sentence attention of
label-known associated sentences as a kind of memory to further enhance
dependency-based SRL. In detail, we use sentences and their labels from the
training dataset as an associated memory cue to help label the target sentence.
Furthermore, we compare several strategies for selecting associated sentences and
label merging methods in AMN to find and utilize the labels of associated
sentences while attending to them. By leveraging the attentive memory from known
training data, our full model reaches state-of-the-art on the CoNLL-2009 benchmark
datasets in the syntax-agnostic setting, showing an effective new line of research
for SRL enhancement beyond exploiting external resources such as well-pretrained
language models.
| 2019 | Computation and Language |
Flexibly-Structured Model for Task-Oriented Dialogues | This paper proposes a novel end-to-end architecture for task-oriented
dialogue systems. It is based on a simple and practical yet very effective
sequence-to-sequence approach, where language understanding and state tracking
tasks are modeled jointly with a structured copy-augmented sequential decoder
and a multi-label decoder for each slot. The policy engine and language
generation tasks are modeled jointly following that. The copy-augmented
sequential decoder deals with new or unknown values in the conversation, while
the multi-label decoder combined with the sequential decoder ensures the
explicit assignment of values to slots. On the generation part, slot binary
classifiers are used to improve performance. This architecture is scalable to
real-world scenarios and is shown through an empirical evaluation to achieve
state-of-the-art performance on both the Cambridge Restaurant dataset and the
Stanford in-car assistant dataset\footnote{The code is available at
\url{https://github.com/uber-research/FSDM}}
| 2019 | Computation and Language |
Fast and Accurate Capitalization and Punctuation for Automatic Speech
Recognition Using Transformer and Chunk Merging | In recent years, studies on automatic speech recognition (ASR) have shown
outstanding results that reach human parity on short speech segments. However,
there are still difficulties in standardizing the output of ASR such as
capitalization and punctuation restoration for long-speech transcription. The
problems hinder readers from understanding the ASR output semantically and also
cause difficulties for natural language processing models such as NER, POS and
semantic parsing. In this paper, we propose a method to restore the punctuation
and capitalization for long-speech ASR transcription. The method is based on
Transformer models and chunk merging that allow us to (1) build a single
model that performs punctuation and capitalization in one go, and (2) perform
decoding in parallel while improving the prediction accuracy. Experiments on
British National Corpus showed that the proposed approach outperforms existing
methods in both accuracy and decoding speed.
| 2019 | Computation and Language |
Ab Antiquo: Neural Proto-language Reconstruction | Historical linguists have identified regularities in the process of historic
sound change. The comparative method utilizes those regularities to reconstruct
proto-words based on observed forms in daughter languages. Can this process be
efficiently automated? We address the task of proto-word reconstruction, in
which the model is exposed to cognates in contemporary daughter languages, and
has to predict the proto-word in the ancestor language. We provide a novel
dataset for this task, encompassing over 8,000 comparative entries, and show
that neural sequence models outperform conventional methods applied to this
task so far. Error analysis reveals variability in the ability of the neural models
to capture different phonological changes, correlating with the complexity of
the changes. Analysis of learned embeddings reveals that the models learn
phonologically meaningful generalizations, corresponding to well-attested
phonological shifts documented by historical linguistics.
| 2021 | Computation and Language |
Embedding-based system for the Text part of CALL v3 shared task | This paper presents a scoring system that has shown the top result on the
text subset of CALL v3 shared task. The presented system is based on text
embeddings, namely NNLM~\cite{nnlm} and BERT~\cite{Bert}. The distinguishing
feature of the given approach is that it does not rely on the reference grammar
file for scoring. The model is compared against approaches that use the grammar
file and demonstrates that similar and even higher results can be achieved
without a predefined set of correct answers.
The paper describes the model itself and the data preparation process that
played a crucial role in the model training.
| 2019 | Computation and Language |
A Simple and Effective Approach for Fine Tuning Pre-trained Word
Embeddings for Improved Text Classification | This work presents a new and simple approach for fine-tuning pretrained word
embeddings for text classification tasks. In this approach, the class in which
a term appears acts as an additional contextual variable during the fine
tuning process, and contributes to the final word vector for that term. As a
result, words that are used distinctively within a particular class, will bear
vectors that are closer to each other in the embedding space and will be more
discriminative towards that class. To validate this novel approach, it was
applied to three Arabic and two English datasets that have been previously used
for text classification tasks such as sentiment analysis and emotion detection.
In the vast majority of cases, the results obtained using the proposed
approach improved considerably.
| 2019 | Computation and Language |
Neural Network based Deep Transfer Learning for Cross-domain Dependency
Parsing | In this paper, we describe the details of the neural dependency parser
submitted by our team to the NLPCC 2019 Shared Task, the semi-supervised domain
adaptation subtask on Cross-domain Dependency Parsing. Our system is based on
the stack-pointer networks (STACKPTR). Considering the importance of context,
we utilize a self-attention mechanism over the representation vectors to capture
the meaning of words. In addition, to adapt to three different domains, we
utilize neural network based deep transfer learning which transfers the
pre-trained partial network in the source domain to be a part of deep neural
network in the three target domains (product comments, product blogs and web
fiction) respectively. Results on the three target domains demonstrate that our
model performs competitively.
| 2019 | Computation and Language |
Do Neural Language Representations Learn Physical Commonsense? | Humans understand language based on the rich background knowledge about how
the physical world works, which in turn allows us to reason about the physical
world through language. In addition to the properties of objects (e.g., boats
require fuel) and their affordances, i.e., the actions that are applicable to
them (e.g., boats can be driven), we can also reason about if-then inferences
between the properties of objects and the kinds of actions that are
applicable to them (e.g., that if we can drive something then it likely
requires fuel).
In this paper, we investigate the extent to which state-of-the-art neural
language representations, trained on a vast amount of natural language text,
demonstrate physical commonsense reasoning. While recent advancements of neural
language models have demonstrated strong performance on various types of
natural language inference tasks, our study based on a dataset of over 200k
newly collected annotations suggests that neural language representations still
only learn associations that are explicitly written down.
| 2019 | Computation and Language |
Mitigating Noisy Inputs for Question Answering | Natural language processing systems are often downstream of unreliable
inputs: machine translation, optical character recognition, or speech
recognition. For instance, virtual assistants can only answer your questions
after understanding your speech. We investigate and mitigate the effects of
noise from Automatic Speech Recognition systems on two factoid Question
Answering (QA) tasks. Integrating confidences into the model and forced
decoding of unknown words are empirically shown to improve the accuracy of
downstream neural QA systems. We create and train models on a synthetic corpus
of over 500,000 noisy sentences and evaluate on two human corpora from Quizbowl
and Jeopardy! competitions.
| 2019 | Computation and Language |
A Test Suite and Manual Evaluation of Document-Level NMT at WMT19 | As the quality of machine translation rises and neural machine translation
(NMT) is moving from sentence to document level translations, it is becoming
increasingly difficult to evaluate the output of translation systems.
We provide a test suite for WMT19 aimed at assessing discourse phenomena of
MT systems participating in the News Translation Task. We have manually checked
the outputs and identified types of translation errors that are relevant to
document-level translation.
| 2019 | Computation and Language |
Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text
Generation | Table-to-text generation aims to translate structured data into
unstructured text. Most existing methods adopt the encoder-decoder framework to
learn the transformation, which requires large-scale training samples. However,
the lack of large parallel data is a major practical problem for many domains.
In this work, we consider the scenario of low resource table-to-text
generation, where only limited parallel data is available. We propose a novel
model to separate the generation into two stages: key fact prediction and
surface realization. It first predicts the key facts from the tables, and then
generates the text with the key facts. The training of key fact prediction
needs much fewer annotated data, while surface realization can be trained with
pseudo parallel corpus. We evaluate our model on a biography generation
dataset. Our model can achieve a BLEU score of $27.34$ with only $1,000$ parallel
examples, while the baseline model only obtains a BLEU score of $9.71$.
| 2019 | Computation and Language |
UdS Submission for the WMT 19 Automatic Post-Editing Task | In this paper, we describe our submission to the English-German APE shared
task at WMT 2019. We utilize and adapt an NMT architecture originally developed
for exploiting context information to APE, implement this in our own
transformer model and explore joint training of the APE task with a de-noising
encoder.
| 2019 | Computation and Language |
Challenging the Boundaries of Speech Recognition: The MALACH Corpus | There has been huge progress in speech recognition over the last several
years. Tasks once thought extremely difficult, such as SWITCHBOARD, now
approach levels of human performance. The MALACH corpus (LDC catalog
LDC2012S05), a 375-hour subset of a large archive of Holocaust testimonies
collected by the Survivors of the Shoah Visual History Foundation, presents
significant challenges to the speech community. The collection consists of
unconstrained, natural speech filled with disfluencies, heavy accents,
age-related coarticulations, un-cued speaker and language switching, and
emotional speech - all still open problems for speech recognition systems.
Transcription is challenging even for skilled human annotators. This paper
proposes that the community place focus on the MALACH corpus to develop speech
recognition systems that are more robust with respect to accents, disfluencies
and emotional speech. To reduce the barrier for entry, a lexicon and training
and testing setups have been created and baseline results using current deep
learning technologies are presented. The metadata has just been released by LDC
(LDC2019S11). It is hoped that this resource will enable the community to build
on top of these baselines so that the extremely important information in these
and related oral histories becomes accessible to a wider audience.
| 2019 | Computation and Language |
Artificially Evolved Chunks for Morphosyntactic Analysis | We introduce a language-agnostic evolutionary technique for automatically
extracting chunks from dependency treebanks. We evaluate these chunks on a
number of morphosyntactic tasks, namely POS tagging, morphological feature
tagging, and dependency parsing. We test the utility of these chunks in a host
of different ways. We first learn chunking as one task in a shared multi-task
framework together with POS and morphological feature tagging. The predictions
from this network are then used as input to augment sequence-labelling
dependency parsing. Finally, we investigate the impact chunks have on
dependency parsing in a multi-task framework. Our results from these analyses
show that these chunks improve performance at different levels of syntactic
abstraction on English UD treebanks and a small, diverse subset of non-English
UD treebanks.
| 2019 | Computation and Language |
Generating Information Extraction Patterns from Overlapping and Variable
Length Annotations using Sequence Alignment | Sequence alignments are used to capture patterns composed of elements
representing multiple conceptual levels through the alignment of sequences that
contain overlapping and variable length annotations. The alignments also
determine the proper context window of words and phrases that most directly
impact the meaning of a given target within a sentence, eliminating the need to
predefine a fixed context window of words surrounding the targets. We evaluated
the system using the CoNLL-2003 named entity recognition (NER) task.
| 2019 | Computation and Language |
A Generate-Validate Approach to Answering Questions about Qualitative
Relationships | Qualitative relationships describe how increasing or decreasing one property
(e.g. altitude) affects another (e.g. temperature). They are an important
aspect of natural language question answering and are crucial for building
chatbots or voice agents where one may enquire about qualitative relationships.
Recently a dataset about question answering involving qualitative relationships
has been proposed, and a few approaches to answer such questions have been
explored, in the heart of which lies a semantic parser that converts the
natural language input to a suitable logical form. A problem with existing
semantic parsers is that they try to directly convert the input sentences to a
logical form. Since the output language varies with each application, it forces
the semantic parser to learn almost everything from scratch. In this paper, we
show that instead of using a semantic parser to produce the logical form, if we
apply the generate-validate framework, i.e., generate a natural language
description of the logical form and validate whether the natural language
description follows from the input text, we get a better scope for transfer
learning and our method outperforms the state-of-the-art by a large margin of
7.93%.
| 2019 | Computation and Language |
Unsupervised Stemming based Language Model for Telugu Broadcast News
Transcription | In Indian languages, native speakers are able to understand new words formed
by either combining or modifying root words with tense and / or gender. Due to
data insufficiency, an Automatic Speech Recognition (ASR) system may not
accommodate all the words in the language model irrespective of the size of the
text corpus. It also becomes computationally challenging if the volume of the
data increases exponentially due to morphological changes to the root word. In
this paper, a new unsupervised method is proposed for an Indian language, Telugu,
based on the unsupervised method for Hindi, to generate the Out of Vocabulary
(OOV) words in the language model. By using techniques like smoothing and
interpolation of pre-processed data with supervised and unsupervised stemming,
different issues in the language model for the Indian language Telugu have been
addressed. We observe that the smoothing techniques Witten-Bell and Kneser-Ney
perform well when compared to other techniques on pre-processed data from
supervised learning. The ASR's accuracy is improved by 0.76% and 0.94% with
supervised and unsupervised stemming respectively.
| 2019 | Computation and Language |
Active Annotation: bootstrapping annotation lexicon and guidelines for
supervised NLU learning | Natural Language Understanding (NLU) models are typically trained in a
supervised learning framework. In the case of intent classification, the
predicted labels are predefined and based on the designed annotation schema
while the labelling process is based on a laborious task where annotators
manually inspect each utterance and assign the corresponding label. We propose
an Active Annotation (AA) approach where we combine an unsupervised learning
method in the embedding space, a human-in-the-loop verification process, and
linguistic insights to create lexicons that can be open categories and adapted
over time. In particular, annotators define the y-label space on-the-fly during
the annotation using an iterative process and without the need for prior
knowledge about the input data. We evaluate the proposed annotation paradigm in
a real use-case NLU scenario. Results show that our Active Annotation paradigm
yields accurate, higher-quality training data, with an annotation speed an
order of magnitude higher than the traditional human-only baseline annotation
methodology.
| 2019 | Computation and Language |
On Identifiability in Transformers | In this paper we delve deep into the Transformer architecture by investigating
two of its core components: self-attention and contextual embeddings. In
particular, we study the identifiability of attention weights and token
embeddings, and the aggregation of context into hidden tokens. We show that,
for sequences longer than the attention head dimension, attention weights are
not identifiable. We propose effective attention as a complementary tool for
improving explanatory interpretations based on attention. Furthermore, we show
that input tokens retain to a large degree their identity across the model. We
also find evidence suggesting that identity information is mainly encoded in
the angle of the embeddings and gradually decreases with depth. Finally, we
demonstrate strong mixing of input information in the generation of contextual
embeddings by means of a novel quantification method based on gradient
attribution. Overall, we show that self-attention distributions are not
directly interpretable and present tools to better understand and further
investigate Transformer models.
| 2020 | Computation and Language |
A Finnish News Corpus for Named Entity Recognition | We present a corpus of Finnish news articles with a manually prepared named
entity annotation. The corpus consists of 953 articles (193,742 word tokens)
with six named entity classes (organization, location, person, product, event,
and date). The articles are extracted from the archives of Digitoday, a Finnish
online technology news source. The corpus is available for research purposes.
We present baseline experiments on the corpus using a rule-based and two deep
learning systems on two test sets, one in-domain and one out-of-domain.
| 2019 | Computation and Language |
LSTM vs. GRU vs. Bidirectional RNN for script generation | Scripts are an important part of any TV series. They narrate movements,
actions and expressions of characters. In this paper, a case study is presented
on how different sequence to sequence deep learning models perform in the task
of generating new conversations between characters as well as new scenarios on
the basis of a script (previous conversations). A comprehensive comparison
between these models, namely, LSTM, GRU and Bidirectional RNN is presented. All
the models are designed to learn the sequence of recurring characters from the
input sequence. Each input sequence will contain, say "n" characters, and the
corresponding targets will contain the same number of characters, except that they
will be shifted one character to the right. In this manner, input and output
sequences are generated and used to train the models. A closer analysis of
the explored models' performance and efficiency is delineated with the help of graph
plots and texts generated from sample input strings. These graphs describe
both intraneural performance and interneural model performance for each model.
| 2019 | Computation and Language |
AmazonQA: A Review-Based Question Answering Task | Every day, thousands of customers post questions on Amazon product pages.
After some time, if they are fortunate, a knowledgeable customer might answer
their question. Observing that many questions can be answered based upon the
available product reviews, we propose the task of review-based QA. Given a
corpus of reviews and a question, the QA system synthesizes an answer. To this
end, we introduce a new dataset and propose a method that combines information
retrieval techniques for selecting relevant reviews (given a question) and
"reading comprehension" models for synthesizing an answer (given a question and
review). Our dataset consists of 923k questions, 3.6M answers and 14M reviews
across 156k products. Building on the well-known Amazon dataset, we collect
additional annotations, marking each question as either answerable or
unanswerable based on the available reviews. A deployed system could first
classify a question as answerable and then attempt to generate an answer.
Notably, unlike many popular QA datasets, here, the questions, passages, and
answers are all extracted from real human interactions. We evaluate numerous
models for answer generation and propose strong baselines, demonstrating the
challenging nature of this new task.
| 2019 | Computation and Language |
Understanding Spatial Language in Radiology: Representation Framework,
Annotation, and Spatial Relation Extraction from Chest X-ray Reports using
Deep Learning | We define a representation framework for extracting spatial information from
radiology reports (Rad-SpRL). We annotated a total of 2000 chest X-ray reports
with 4 spatial roles corresponding to the common radiology entities. Our focus
is on extracting detailed information of a radiologist's interpretation
containing a radiographic finding, its anatomical location, corresponding
probable diagnoses, as well as associated hedging terms. For this, we propose a
deep learning-based natural language processing (NLP) method involving both
word and character-level encodings. Specifically, we utilize a bidirectional
long short-term memory (Bi-LSTM) conditional random field (CRF) model for
extracting the spatial roles. The model achieved average F1 measures of 90.28
and 94.61 for extracting the Trajector and Landmark roles respectively whereas
the performance was moderate for Diagnosis and Hedge roles with average F1 of
71.47 and 73.27 respectively. The corpus will soon be made available upon
request.
| 2019 | Computation and Language |
Incorporating Relation Knowledge into Commonsense Reading Comprehension
with Multi-task Learning | This paper focuses on how to take advantage of external relational knowledge
to improve machine reading comprehension (MRC) with multi-task learning. Most
of the traditional methods in MRC assume that the knowledge used to get the
correct answer generally exists in the given documents. However, in real-world
tasks, part of the knowledge may not be mentioned and machines should be equipped
with the ability to leverage external knowledge. In this paper, we integrate
relational knowledge into an MRC model for commonsense reasoning. Specifically,
based on a pre-trained language model (LM), we design two auxiliary
relation-aware tasks to predict whether there exists any commonsense relation and
what the relation type is between two words, in order to better model the
interactions between document and candidate answer option. We conduct
experiments on two multi-choice benchmark datasets: the SemEval-2018 Task 11
and the Cloze Story Test. The experimental results demonstrate the
effectiveness of the proposed method, which achieves superior performance
compared with the comparable baselines on both datasets.
| 2019 | Computation and Language |
Offensive Language and Hate Speech Detection for Danish | The presence of offensive language on social media platforms and the
implications this poses is becoming a major concern in modern society. Given
the enormous amount of content created every day, automatic methods are
required to detect and deal with this type of content. Until now, most of the
research has focused on solving the problem for the English language, while the
problem is multilingual.
We construct a Danish dataset containing user-generated comments from
\textit{Reddit} and \textit{Facebook}. It contains user-generated comments from
various social media platforms, and to our knowledge, it is the first of its
kind. Our dataset is annotated to capture various types and targets of offensive
language. We develop four automatic classification systems, each designed to
work for both the English and the Danish language. In the detection of
offensive language in English, the best performing system achieves a macro
averaged F1-score of $0.74$, and the best performing system for Danish achieves
a macro averaged F1-score of $0.70$. In the detection of whether or not an
offensive post is targeted, the best performing system for English achieves a
macro averaged F1-score of $0.62$, while the best performing system for Danish
achieves a macro averaged F1-score of $0.73$. Finally, in the detection of the
target type in a targeted offensive post, the best performing system for
English achieves a macro averaged F1-score of $0.56$, and the best performing
system for Danish achieves a macro averaged F1-score of $0.63$.
Our work for both the English and the Danish language captures the types and
targets of offensive language, and presents automatic methods for detecting
different kinds of offensive language such as hate speech and cyberbullying.
| 2023 | Computation and Language |
EASSE: Easier Automatic Sentence Simplification Evaluation | We introduce EASSE, a Python package aiming to facilitate and standardise
automatic evaluation and comparison of Sentence Simplification (SS) systems.
EASSE provides a single access point to a broad range of evaluation resources:
standard automatic metrics for assessing SS outputs (e.g. SARI), word-level
accuracy scores for certain simplification transformations,
reference-independent quality estimation features (e.g. compression ratio), and
standard test data for SS evaluation (e.g. TurkCorpus). Finally, EASSE
generates easy-to-visualise reports on the various metrics and features above
and on how a particular SS output fares against reference simplifications.
Through experiments, we show that these functionalities allow for better
comparison and understanding of the performance of SS systems.
| 2019 | Computation and Language |
StructBERT: Incorporating Language Structures into Pre-training for Deep
Language Understanding | Recently, the pre-trained language model, BERT (and its robustly optimized
version RoBERTa), has attracted a lot of attention in natural language
understanding (NLU), and achieved state-of-the-art accuracy in various NLU
tasks, such as sentiment classification, natural language inference, semantic
textual similarity and question answering. Inspired by the linearization
exploration work of Elman [8], we extend BERT to a new model, StructBERT, by
incorporating language structures into pre-training. Specifically, we pre-train
StructBERT with two auxiliary tasks to make the most of the sequential order of
words and sentences, which leverage language structures at the word and
sentence levels, respectively. As a result, the new model is adapted to
different levels of language understanding required by downstream tasks. The
StructBERT with structural pre-training gives surprisingly good empirical
results on a variety of downstream tasks, including pushing the
state-of-the-art on the GLUE benchmark to 89.0 (outperforming all published
models), the F1 score on SQuAD v1.1 question answering to 93.0, and the accuracy
on SNLI to 91.7.
| 2019 | Computation and Language |
Getting To Know You: User Attribute Extraction from Dialogues | User attributes provide rich and useful information for user understanding,
yet structured and easy-to-use attributes are often sparsely populated. In this
paper, we leverage dialogues with conversational agents, which contain strong
suggestions of user information, to automatically extract user attributes.
Since no existing dataset is available for this purpose, we apply distant
supervision to train our proposed two-stage attribute extractor, which
surpasses several retrieval and generation baselines on human evaluation.
Meanwhile, we discuss potential applications (e.g., personalized recommendation
and dialogue systems) of such extracted user attributes, and point out current
limitations to cast light on future work.
| 2019 | Computation and Language |
Attention is not not Explanation | Attention mechanisms play a central role in NLP systems, especially within
recurrent neural network (RNN) models. Recently, there has been increasing
interest in whether or not the intermediate representations offered by these
modules may be used to explain the reasoning for a model's prediction, and
consequently reach insights regarding the model's decision-making process. A
recent paper claims that `Attention is not Explanation' (Jain and Wallace,
2019). We challenge many of the assumptions underlying this work, arguing that
such a claim depends on one's definition of explanation, and that testing it
needs to take into account all elements of the model, using a rigorous
experimental design. We propose four alternative tests to determine
when/whether attention can be used as explanation: a simple uniform-weights
baseline; a variance calibration based on multiple random seed runs; a
diagnostic framework using frozen weights from pretrained models; and an
end-to-end adversarial attention training protocol. Each allows for meaningful
interpretation of attention mechanisms in RNN models. We show that even when
reliable adversarial distributions can be found, they don't perform well on the
simple diagnostic, indicating that prior work does not disprove the usefulness
of attention mechanisms for explainability.
| 2019 | Computation and Language |
Playing log(N)-Questions over Sentences | We propose a two-agent game wherein a questioner must be able to conjure
discerning questions between sentences, incorporate responses from an answerer,
and keep track of a hypothesis state. The questioner must be able to understand
the information required to make its final guess, while also being able to
reason over the game's text environment based on the answerer's responses. We
experiment with an end-to-end model where both agents can learn simultaneously
to play the game, showing that simultaneously achieving high game accuracy and
producing meaningful questions can be a difficult trade-off.
| 2019 | Computation and Language |
Neural Machine Translation with Noisy Lexical Constraints | Lexically constrained decoding for machine translation has been shown to be
beneficial in previous studies. Unfortunately, constraints provided by users
may contain mistakes in real-world situations. It remains an open question how
to handle these noisy constraints in such practical scenarios. We
present a novel framework that treats constraints as external memories. In this
soft manner, a mistaken constraint can be corrected. Experiments demonstrate
that our approach can achieve substantial BLEU gains in handling noisy
constraints. These results motivate us to apply the proposed approach on a new
scenario where constraints are generated without the help of users. Experiments
show that our approach can indeed improve the translation quality with the
automatically generated constraints.
| 2020 | Computation and Language |
Improving Generalization in Coreference Resolution via Adversarial
Training | In order for coreference resolution systems to be useful in practice, they
must be able to generalize to new text. In this work, we demonstrate that the
performance of the state-of-the-art system decreases when the names of PER and
GPE named entities in the CoNLL dataset are changed to names that do not occur
in the training set. We use the technique of adversarial gradient-based
training to retrain the state-of-the-art system and demonstrate that the
retrained system achieves higher performance on the CoNLL dataset (both with
and without the change of named entities) and the GAP dataset.
| 2019 | Computation and Language |
IMS-Speech: A Speech to Text Tool | We present IMS-Speech, a web-based tool for German and English speech
transcription aiming to facilitate research in various disciplines which
require access to lexical information in spoken language materials. This tool
is based on a modern open source software stack, advanced speech recognition
methods and public data resources and is freely available for academic
researchers. The utilized models are built to be generic in order to provide
transcriptions of competitive accuracy on a diverse set of tasks and
conditions.
| 2019 | Computation and Language |
Fine-grained Information Status Classification Using Discourse
Context-Aware Self-Attention | Previous work on bridging anaphora recognition (Hou et al., 2013a) casts the
problem as a subtask of learning fine-grained information status (IS). However,
these systems heavily depend on many hand-crafted linguistic features. In this
paper, we propose a discourse context-aware self-attention neural network model
for fine-grained IS classification. On the ISNotes corpus (Markert et al.,
2012), our model with the contextually-encoded word representations (BERT)
(Devlin et al., 2018) achieves new state-of-the-art performances on
fine-grained IS classification, obtaining a 4.1% absolute overall accuracy
improvement compared to Hou et al. (2013a). More importantly, we also show an
improvement of 3.9% F1 for bridging anaphora recognition without using any
complex hand-crafted semantic features designed for capturing the bridging
phenomenon.
| 2019 | Computation and Language |
Learn How to Cook a New Recipe in a New House: Using Map
Familiarization, Curriculum Learning, and Bandit Feedback to Learn Families
of Text-Based Adventure Games | We consider the task of learning to play families of text-based computer
adventure games, i.e., fully textual environments with a common theme (e.g.
cooking) and goal (e.g. prepare a meal from a recipe) but with different
specifics; new instances of such games are relatively straightforward for
humans to master after a brief exposure to the genre but have been curiously
difficult for computer agents to learn. We find that the deep Q-learning
strategies that have been successfully leveraged for superhuman performance in
single-instance action video games can be applied to learn families of text
video games when adopting simple strategies that correlate with human-like
learning behavior. Specifically, we build agents that learn to tackle simple
scenarios before more complex ones using curriculum learning, that familiarize
themselves in an unfamiliar environment by navigating before acting, and that
explore uncertain environments more thoroughly using contextual multi-armed
bandit decision policies. We demonstrate improved task completion rates over
reasonable baselines when evaluating on never-before-seen games of that theme.
| 2020 | Computation and Language |
An Effective Domain Adaptive Post-Training Method for BERT in Response
Selection | We focus on multi-turn response selection in a retrieval-based dialog system.
In this paper, we utilize the powerful pre-trained language model
Bi-directional Encoder Representations from Transformer (BERT) for a multi-turn
dialog system and propose a highly effective post-training method on
domain-specific corpus. Although BERT is easily adapted to various NLP tasks
and outperforms previous baselines of each task, it still has limitations if a
task corpus is too focused on a certain domain. Post-training on
domain-specific corpus (e.g., Ubuntu Corpus) helps the model to train
contextualized representations and words that do not appear in general corpus
(e.g., English Wikipedia). Experimental results show that our approach achieves
new state-of-the-art performance on two response selection benchmarks (i.e., Ubuntu
Corpus V1 and Advising Corpus), with improvements of 5.9% and 6% on R@1.
| 2020 | Computation and Language |
Entertaining and Opinionated but Too Controlling: A Large-Scale User
Study of an Open Domain Alexa Prize System | Conversational systems typically focus on functional tasks such as scheduling
appointments or creating todo lists. Instead we design and evaluate SlugBot
(SB), one of 8 semifinalists in the 2018 Alexa Prize, whose goal is to support
casual open-domain social interaction. This novel application requires both
broad topic coverage and engaging interactive skills. We developed a new
technical approach to meet this demanding situation by crowd-sourcing novel
content and introducing playful conversational strategies based on storytelling
and games. We collected over 10,000 conversations during August 2018 as part of
the Alexa Prize competition. We also conducted an in-lab follow-up qualitative
evaluation. Overall, users found SB moderately engaging; conversations averaged
3.6 minutes and involved 26 user turns. However, users reacted very differently
to different conversation subtypes. Storytelling and games were evaluated
positively; these were seen as entertaining with predictable interactive
structure. They also led users to impute personality and intelligence to SB. In
contrast, search and general Chit-Chat induced coverage problems; here users
found it hard to infer what topics SB could understand, with these
conversations seen as being too system-driven. Theoretical and design
implications suggest a move away from conversational systems that simply
provide factual information. Future systems should be designed to have their
own opinions with personal stories to share, and SB provides an example of how
we might achieve this.
| 2019 | Computation and Language |
Meta Reasoning over Knowledge Graphs | The ability to reason over learned knowledge is an innate ability for humans
and humans can easily master new reasoning rules with only a few
demonstrations. While most existing studies on knowledge graph (KG) reasoning
assume enough training examples, we study the challenging and practical problem
of few-shot knowledge graph reasoning under the paradigm of meta-learning. We
propose a new meta learning framework that effectively utilizes the
task-specific meta information such as local graph neighbors and reasoning
paths in KGs. Specifically, we design a meta-encoder that encodes the meta
information into task-specific initialization parameters for different tasks.
This allows our reasoning module to have diverse starting points when learning
to reason over different relations, which is expected to better fit the target
task. On two few-shot knowledge base completion benchmarks, we show that the
augmented task-specific meta-encoder yields a much better initial point than MAML
and outperforms several few-shot learning baselines.
| 2019 | Computation and Language |
HyperKG: Hyperbolic Knowledge Graph Embeddings for Knowledge Base
Completion | Learning embeddings of entities and relations existing in knowledge bases
allows the discovery of hidden patterns in data. In this work, we examine the
geometrical space's contribution to the task of knowledge base completion. We
focus on the family of translational models, whose performance has been
lagging, and propose a model, dubbed HyperKG, which exploits the hyperbolic
space in order to better reflect the topological properties of knowledge bases.
We investigate the type of regularities that our model can capture and we show
that it is a prominent candidate for effectively representing a subset of
Datalog rules. We empirically show, using a variety of link prediction
datasets, that hyperbolic space allows us to significantly narrow the
performance gap between translational and bilinear models.
| 2019 | Computation and Language |
Aspect and Opinion Terms Extraction Using Double Embeddings and
Attention Mechanism for Indonesian Hotel Reviews | Aspect and opinion terms extraction from review texts is one of the key tasks
in aspect-based sentiment analysis. In order to extract aspect and opinion
terms for Indonesian hotel reviews, we adapt double embeddings feature and
attention mechanism that outperform the best system at SemEval 2015 and 2016.
We conduct experiments using 4000 reviews to find the best configuration and
show the influence of double embeddings and the attention mechanism on model
performance. Using 1000 reviews for evaluation, we achieved F1-measures of 0.914
and 0.90 for aspect and opinion term extraction at the token and entity (term)
levels respectively.
| 2019 | Computation and Language |
Architecture and evolution of semantic networks in mathematics texts | Knowledge is a network of interconnected concepts. Yet, precisely how the
topological structure of knowledge constrains its acquisition remains unknown,
hampering the development of learning enhancement strategies. Here we study the
topological structure of semantic networks reflecting mathematical concepts and
their relations in college-level linear algebra texts. We hypothesize that
these networks will exhibit structural order, reflecting the logical sequence
of topics that ensures accessibility. We find that the networks exhibit strong
core-periphery architecture, where a dense core of concepts presented early is
complemented with a sparse periphery presented evenly throughout the
exposition; the latter is composed of many small modules each reflecting more
narrow domains. Using tools from applied topology, we find that the
expositional evolution of the semantic networks produces and subsequently fills
knowledge gaps, and that the density of these gaps tracks negatively with
community ratings of each textbook. Broadly, our study lays the groundwork for
future efforts developing optimal design principles for textbook exposition and
teaching in a classroom setting.
| 2021 | Computation and Language |
Reasoning-Driven Question-Answering for Natural Language Understanding | Natural language understanding (NLU) of text is a fundamental challenge in
AI, and it has received significant attention throughout the history of NLP
research. This primary goal has been studied under different tasks, such as
Question Answering (QA) and Textual Entailment (TE). In this thesis, we
investigate the NLU problem through the QA task and focus on the aspects that
make it a challenge for the current state-of-the-art technology. This thesis is
organized into three main parts:
In the first part, we explore multiple formalisms to improve existing machine
comprehension systems. We propose a formulation for abductive reasoning in
natural language and show its effectiveness, especially in domains with limited
training data. Additionally, to help reasoning systems cope with irrelevant or
redundant information, we create a supervised approach to learn and detect the
essential terms in questions.
In the second part, we propose two new challenge datasets. In particular, we
create two datasets of natural language questions where (i) the first one
requires reasoning over multiple sentences; (ii) the second one requires
temporal common sense reasoning. We hope that the two proposed datasets will
motivate the field to address more complex problems.
In the final part, we present the first formal framework for multi-step
reasoning algorithms, in the presence of a few important properties of language
use, such as incompleteness, ambiguity, etc. We apply this framework to prove
fundamental limitations for reasoning algorithms. These theoretical results
provide extra intuition into the existing empirical evidence in the field.
| 2019 | Computation and Language |
Reinforcement Learning Based Graph-to-Sequence Model for Natural
Question Generation | Natural question generation (QG) aims to generate questions from a passage
and an answer. Previous works on QG either (i) ignore the rich structure
information hidden in text, (ii) solely rely on cross-entropy loss that leads
to issues like exposure bias and inconsistency between train/test measurement,
or (iii) fail to fully exploit the answer information. To address these
limitations, in this paper, we propose a reinforcement learning (RL) based
graph-to-sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq
generator with a novel Bidirectional Gated Graph Neural Network based encoder
to embed the passage, and a hybrid evaluator with a mixed objective combining
both cross-entropy and RL losses to ensure the generation of syntactically and
semantically valid text. We also introduce an effective Deep Alignment Network
for incorporating the answer information into the passage at both the word and
contextual levels. Our model is end-to-end trainable and achieves new
state-of-the-art scores, outperforming existing methods by a significant margin
on the standard SQuAD benchmark.
| 2020 | Computation and Language |
Establishing Strong Baselines for the New Decade: Sequence Tagging,
Syntactic and Semantic Parsing with BERT | This paper presents new state-of-the-art models for three tasks,
part-of-speech tagging, syntactic parsing, and semantic parsing, using the
cutting-edge contextualized embedding framework known as BERT. For each task,
we first replicate and simplify the current state-of-the-art approach to
enhance its model efficiency. We then evaluate our simplified approaches on
those three tasks using token embeddings generated by BERT. 12 datasets in both
English and Chinese are used for our experiments. The BERT models outperform
the previously best-performing models by 2.5% on average (7.5% for the most
significant case). Moreover, an in-depth analysis of the impact of BERT
embeddings is provided using self-attention, which helps in understanding this
rich representation. All models and source code are publicly available so
that researchers can improve upon and utilize them to establish strong
baselines for the next decade.
| 2,020 | Computation and Language |
FlexNER: A Flexible LSTM-CNN Stack Framework for Named Entity
Recognition | Named entity recognition (NER) is a foundational technology for information
extraction. This paper presents a flexible NER framework compatible with
different languages and domains. Inspired by the idea of distant supervision
(DS), this paper enhances the representation by increasing the entity-context
diversity without relying on external resources. We choose different layer
stacks and sub-network combinations to construct the bilateral networks. This
strategy can generally improve model performance on different datasets. We
conduct experiments on five languages, namely English, German, Spanish, Dutch
and Chinese, and on biomedical tasks such as identifying chemicals and
gene/protein terms in scientific works. Experimental results demonstrate the
good performance of this framework.
| 2,019 | Computation and Language |
Fusion of Detected Objects in Text for Visual Question Answering | To advance models of multimodal context, we introduce a simple yet powerful
neural architecture for data that combines vision and natural language. The
"Bounding Boxes in Text Transformer" (B2T2) also leverages referential
information binding words to portions of the image in a single unified
architecture. B2T2 is highly effective on the Visual Commonsense Reasoning
benchmark (https://visualcommonsense.com), achieving a new state-of-the-art
with a 25% relative reduction in error rate compared to published baselines and
obtaining the best performance to date on the public leaderboard (as of May 22,
2019). A detailed ablation analysis shows that the early integration of the
visual features into the text analysis is key to the effectiveness of the new
architecture. A reference implementation of our models is provided
(https://github.com/google-research/language/tree/master/language/question_answering/b2t2).
| 2,019 | Computation and Language |
Reactive Multi-Stage Feature Fusion for Multimodal Dialogue Modeling | Visual question answering and visual dialogue tasks have been increasingly
studied in the multimodal field towards more practical real-world scenarios. A
more challenging task, audio visual scene-aware dialogue (AVSD), is proposed to
further advance the technologies that connect audio, vision, and language,
which introduces temporal video information and dialogue interactions between a
questioner and an answerer. This paper proposes an intuitive mechanism that
fuses features and attention in multiple stages in order to well integrate
multimodal features, and the results demonstrate its capability in the
experiments. Also, we apply several state-of-the-art models in other tasks to
the AVSD task, and further analyze their generalization across different tasks.
| 2,019 | Computation and Language |
Towards Optimisation of Collaborative Question Answering over Knowledge
Graphs | Collaborative Question Answering (CQA) frameworks for knowledge graphs aim at
integrating existing question answering (QA) components for implementing
sequences of QA tasks (i.e. QA pipelines). The research community has paid
substantial attention to CQAs since they support reusability and scalability of
the available components in addition to the flexibility of pipelines. CQA
frameworks attempt to build such pipelines automatically by solving two
optimisation problems: 1) local collective performance of QA components per QA
task and 2) global performance of QA pipelines. In spite of offering several
advantages over monolithic QA systems, the effectiveness and efficiency of CQA
frameworks in answering questions are limited. In this paper, we tackle the
problem of local optimisation of CQA frameworks and propose a three-fold
approach, which applies feature selection techniques with supervised machine
learning approaches in order to identify the best performing components
efficiently. We have empirically evaluated our approach over existing
benchmarks and compared to existing automatic CQA frameworks. The observed
results provide evidence that our approach answers a higher number of questions
than the state of the art while reducing: i) the number of used features by 50%
and ii) the number of components used by 76%.
| 2,019 | Computation and Language |
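The local optimisation step described above amounts to selecting informative features and training a supervised model that predicts the best-performing QA component per question. A minimal sketch under that reading, using scikit-learn on synthetic data; the feature matrix, labels, and the choice of mutual-information selection with a random forest are illustrative assumptions, not the framework's actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import Pipeline

# Toy setup: 200 questions described by 40 features, each labelled with the
# index of the QA component that answered it best (3 hypothetical components).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = (X[:, 0] + X[:, 3] > 0).astype(int) + (X[:, 7] > 1).astype(int)

pipeline = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=10)),        # keep informative features
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
pipeline.fit(X[:150], y[:150])
print("held-out accuracy:", pipeline.score(X[150:], y[150:]))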
X-WikiRE: A Large, Multilingual Resource for Relation Extraction as
Machine Comprehension | Although the vast majority of knowledge bases (KBs) are heavily biased towards
English, Wikipedias do cover very different topics in different languages.
Exploiting this, we introduce a new multilingual dataset (X-WikiRE), framing
relation extraction as a multilingual machine reading problem. We show that by
leveraging this resource it is possible to robustly transfer models
cross-lingually and that multilingual support significantly improves
(zero-shot) relation extraction, enabling the population of low-resourced KBs
from their well-populated counterparts.
| 2,019 | Computation and Language |
FlowDelta: Modeling Flow Information Gain in Reasoning for
Conversational Machine Comprehension | Conversational machine comprehension requires deep understanding of the
dialogue flow, and the prior work proposed FlowQA to implicitly model the
context representations in reasoning for better understanding. This paper
proposes to explicitly model the information gain through dialogue reasoning in
order to allow the model to focus on more informative cues. The proposed model
achieves state-of-the-art performance in a conversational QA dataset QuAC and
sequential instruction understanding dataset SCONE, which shows the
effectiveness of the proposed mechanism and demonstrates its capability of
generalization to different QA models and tasks.
| 2,020 | Computation and Language |
Mastering emergent language: learning to guide in simulated navigation | To cooperate with humans effectively, virtual agents need to be able to
understand and execute language instructions. A typical setup to achieve this
is with a scripted teacher which guides a virtual agent using language
instructions. However, such a setup has clear limitations in scalability and,
more importantly, it is not interactive. Here, we introduce an autonomous agent
that uses discrete communication to interactively guide other agents to
navigate and act in a simulated environment. The developed communication
protocol is trainable, emergent and requires no additional supervision. The
emergent language speeds up learning of new agents, it generalizes across
incrementally more difficult tasks and, contrary to most other emergent
languages, it is highly interpretable. We demonstrate how the emitted messages
correlate with particular actions and observations, and how new agents become
less dependent on this guidance as training progresses. By exploiting the
correlations identified in our analysis, we manage to successfully address the
agents in their own language.
| 2,019 | Computation and Language |
MemeFaceGenerator: Adversarial Synthesis of Chinese Meme-face from
Natural Sentences | Chinese meme-face is a special kind of internet subculture widely spread in
Chinese Social Community Networks. It usually consists of a template image
modified by some amusing details and a text caption. In this paper, we present
MemeFaceGenerator, a Generative Adversarial Network with the attention module
and template information as supplementary signals, to automatically generate
meme-faces from text inputs. We also develop a web service as system
demonstration of meme-face synthesis. MemeFaceGenerator has been shown to be
capable of generating high-quality meme-faces from random text inputs.
| 2,019 | Computation and Language |
SG-Net: Syntax-Guided Machine Reading Comprehension | For machine reading comprehension, the capacity of effectively modeling the
linguistic knowledge from detail-riddled and lengthy passages and getting
rid of the noise is essential to improve performance. Traditional
attentive models attend to all words without explicit constraint, which results
in inaccurate concentration on some dispensable words. In this work, we propose
using syntax to guide the text modeling by incorporating explicit syntactic
constraints into attention mechanism for better linguistically motivated word
representations. In detail, for the self-attention network (SAN) of a
Transformer-based encoder, we introduce a syntactic dependency of interest (SDOI)
design into the SAN to form an SDOI-SAN with syntax-guided self-attention.
Syntax-guided network (SG-Net) is then composed of this extra SDOI-SAN and the
SAN from the original Transformer encoder through a dual contextual
architecture for better linguistics inspired representation. To verify its
effectiveness, the proposed SG-Net is applied to the typical pre-trained language
model BERT, which is directly based on a Transformer encoder. Extensive experiments
on popular benchmarks including SQuAD 2.0 and RACE show that the proposed
SG-Net design helps achieve substantial performance improvement over strong
baselines.
| 2,019 | Computation and Language |
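One way to picture the syntax-guided constraint above is as an attention mask derived from a dependency parse, where each token may attend only to itself and its ancestors. The sketch below is a simplified reading of the SDOI idea, not the paper's implementation; the heads encoding and the ancestor-only rule are illustrative assumptions.

import numpy as np

def sdoi_mask(heads):
    # heads[i] is the index of token i's dependency head; -1 marks the root.
    # Token i may attend to itself and to its ancestors in the dependency tree.
    n = len(heads)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, i] = True
        j = heads[i]
        while j >= 0:
            mask[i, j] = True
            j = heads[j]
    return mask

# "cats love fish": "love" (index 1) is the root, "cats" and "fish" depend on it.
print(sdoi_mask([1, -1, 1]))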
On The Evaluation of Machine Translation Systems Trained With
Back-Translation | Back-translation is a widely used data augmentation technique which leverages
target monolingual data. However, its effectiveness has been challenged since
automatic metrics such as BLEU only show significant improvements for test
examples where the source itself is a translation, or translationese. This is
believed to be due to translationese inputs better matching the back-translated
training data. In this work, we show that this conjecture is not empirically
supported and that back-translation improves translation quality of both
naturally occurring text as well as translationese according to professional
human translators. We provide empirical evidence to support the view that
back-translation is preferred by humans because it produces more fluent
outputs. BLEU cannot capture human preferences because references are
translationese when source sentences are natural text. We recommend
complementing BLEU with a language model score to measure fluency.
| 2,020 | Computation and Language |
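The recommendation above, complementing BLEU with a language-model score, can be sketched as follows. The snippet assumes the sacrebleu and transformers packages and uses GPT-2 purely as an example fluency model; the toy hypotheses and references are placeholders, and this is not the evaluation protocol used in the paper.

import sacrebleu
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

hypotheses = ["The cat sat on the mat .", "He go to school yesterday ."]
references = ["The cat sat on the mat .", "He went to school yesterday ."]

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_nll(sentences):
    # Mean per-sentence negative log-likelihood under the LM (lower = more fluent).
    losses = []
    with torch.no_grad():
        for s in sentences:
            enc = tok(s, return_tensors="pt")
            losses.append(lm(**enc, labels=enc.input_ids).loss.item())
    return sum(losses) / len(losses)

bleu = sacrebleu.corpus_bleu(hypotheses, [references]).score
print(f"BLEU = {bleu:.1f}   LM NLL = {avg_nll(hypotheses):.2f}")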
The lexical and grammatical sources of neg-raising inferences | We investigate neg(ation)-raising inferences, wherein negation on a predicate
can be interpreted as though in that predicate's subordinate clause. To do
this, we collect a large-scale dataset of neg-raising judgments for effectively
all English clause-embedding verbs and develop a model to jointly induce the
semantic types of verbs and their subordinate clauses and the relationship of
these types to neg-raising inferences. We find that some neg-raising inferences
are attributable to properties of particular predicates, while others are
attributable to subordinate clause structure.
| 2,019 | Computation and Language |
Towards Debiasing Fact Verification Models | Fact verification requires validating a claim in the context of evidence. We
show, however, that in the popular FEVER dataset this might not necessarily be
the case. Claim-only classifiers perform competitively with top evidence-aware
models. In this paper, we investigate the cause of this phenomenon, identifying
strong cues for predicting labels solely based on the claim, without
considering any evidence. We create an evaluation set that avoids those
idiosyncrasies. The performance of FEVER-trained models significantly drops
when evaluated on this test set. Therefore, we introduce a regularization
method which alleviates the effect of bias in the training data, obtaining
improvements on the newly created test set. This work is a step towards a more
sound evaluation of reasoning capabilities in fact verification models.
| 2,019 | Computation and Language |
Raw-to-End Named Entity Recognition in Social Media | Taking word sequences as the input, typical named entity recognition (NER)
models neglect errors from pre-processing (e.g., tokenization). However, these
errors can influence the model performance greatly, especially for noisy texts
like tweets. Here, we introduce Neural-Char-CRF, a raw-to-end framework that is
more robust to pre-processing errors. It takes raw character sequences as
inputs and makes end-to-end predictions. Word embedding and contextualized
representation models are further tailored to capture textual signals for each
character instead of each word. Our model neither requires the conversion from
character sequences to word sequences, nor assumes that a tokenizer can correctly
detect all word boundaries. Moreover, we observe that our model performance remains
unchanged after replacing tokenization with string matching, which demonstrates
its potential to be tokenization-free. Extensive experimental results on two
public datasets demonstrate the superiority of our proposed method over the
state of the art. The implementations and datasets are made available at:
https://github.com/LiyuanLucasLiu/Raw-to-End.
| 2,019 | Computation and Language |
Multi-Task Self-Supervised Learning for Disfluency Detection | Most existing approaches to disfluency detection heavily rely on
human-annotated data, which is expensive to obtain in practice. To tackle the
training data bottleneck, we investigate methods for combining multiple
self-supervised tasks, i.e., supervised tasks where data can be collected
without manual labeling. First, we construct large-scale pseudo training data
by randomly adding or deleting words from unlabeled news data, and propose two
self-supervised pre-training tasks: (i) a tagging task to detect the added noisy
words, and (ii) a sentence classification task to distinguish original sentences from
grammatically incorrect ones. We then combine these two tasks to jointly
train a network. The pre-trained network is then fine-tuned using
human-annotated disfluency detection training data. Experimental results on the
commonly used English Switchboard test set show that our approach can achieve
competitive performance compared to the previous systems (trained using the
full dataset) by using less than 1% (1000 sentences) of the training data. Our
method trained on the full dataset significantly outperforms previous methods,
reducing the error by 21% on English Switchboard.
| 2,020 | Computation and Language |
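A minimal sketch of the pseudo-data construction described above: clean sentences are corrupted by randomly inserting and deleting words, yielding token tags for the added noise and a sentence-level original/corrupted label. The filler vocabulary and probabilities below are illustrative assumptions, not the paper's exact procedure.

import random

random.seed(0)
FILLERS = ["uh", "um", "well", "you", "know"]    # hypothetical noise vocabulary

def corrupt(sentence, p_add=0.15, p_del=0.1):
    # Randomly insert noise words (tagged 1) and delete words from a clean
    # sentence; also return a sentence-level original/corrupted flag.
    tokens, tags, changed = [], [], False
    for w in sentence.split():
        if random.random() < p_add:
            tokens.append(random.choice(FILLERS)); tags.append(1); changed = True
        if random.random() < p_del:
            changed = True
            continue
        tokens.append(w); tags.append(0)
    return tokens, tags, int(changed)

print(corrupt("we will meet at the main station tomorrow"))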
Towards Knowledge-Based Recommender Dialog System | In this paper, we propose a novel end-to-end framework called KBRD, which
stands for Knowledge-Based Recommender Dialog System. It integrates the
recommender system and the dialog generation system. The dialog system can
enhance the performance of the recommendation system by introducing
knowledge-grounded information about users' preferences, and the recommender
system can improve that of the dialog generation system by providing
recommendation-aware vocabulary bias. Experimental results demonstrate that our
proposed model has significant advantages over the baselines in both the
evaluation of dialog generation and recommendation. A series of analyses show
that the two systems can bring mutual benefits to each other, and the
introduced knowledge contributes to both their performances.
| 2,019 | Computation and Language |
Towards End-to-End Learning for Efficient Dialogue Agent by Modeling
Looking-ahead Ability | Learning an efficient dialogue manager from data with little manual
intervention is important, especially for goal-oriented dialogues. However,
existing methods either require too much manual effort (e.g. reinforcement
learning methods) or cannot guarantee the dialogue efficiency (e.g.
sequence-to-sequence methods). In this paper, we address this problem by
proposing a novel end-to-end learning model to train a dialogue agent that can
look ahead for several future turns and generate an optimal response to make
the dialogue efficient. Our method is data-driven and does not require much
manual intervention during system design. We evaluate our method on
two datasets of different scenarios and the experimental results demonstrate
the efficiency of our model.
| 2,019 | Computation and Language |
XCMRC: Evaluating Cross-lingual Machine Reading Comprehension | We present XCMRC, the first public cross-lingual language understanding (XLU)
benchmark which aims to test machines on their cross-lingual reading
comprehension ability. To be specific, XCMRC is a Cross-lingual Cloze-style
Machine Reading Comprehension task which requires the reader to fill in a
missing word (we additionally provide ten noun candidates) in a sentence
written in the target language (English / Chinese) by reading a given passage
written in the source language (Chinese / English). Chinese and English form a
rich-resource language pair; in order to also study low-resource cross-lingual
machine reading comprehension (XMRC), besides defining the common XCMRC task,
which has no restrictions on the use of external language resources, we also define
a pseudo low-resource XCMRC task by limiting the language resources that can be
used. In addition, we provide two baselines for the common XCMRC task and two for
the pseudo XCMRC task, respectively. We also provide an upper-bound baseline for
both tasks. We found that for the common XCMRC task, the translation-based method and
the multilingual sentence encoder-based method obtain reasonable performance
but still leave much room for improvement. As for the pseudo low-resource XCMRC
task, due to strict restrictions on the use of language resources, our two
approaches fall far below the upper bound, so many challenges remain ahead.
| 2,019 | Computation and Language |
Feature-Less End-to-End Nested Term Extraction | In this paper, we propose a deep learning-based end-to-end method for
domain-specific automatic term extraction (ATE). It considers possible term
spans within a fixed length in a sentence and predicts whether each of them can
be a conceptual term. In comparison with current ATE methods, the model supports
nested term extraction and does not crucially need extra (extracted) features.
Results show that it can achieve high recall and comparable precision on the term
extraction task when taking segmented raw text as input.
| 2,019 | Computation and Language |
Multi-class Hierarchical Question Classification for Multiple Choice
Science Exams | Prior work has demonstrated that question classification (QC), recognizing
the problem domain of a question, can help answer it more accurately. However,
developing strong QC algorithms has been hindered by the limited size and
complexity of annotated data available. To address this, we present the largest
challenge dataset for QC, containing 7,787 science exam questions paired with
detailed classification labels from a fine-grained hierarchical taxonomy of 406
problem domains. We then show that a BERT-based model trained on this dataset
achieves a large (+0.12 MAP) gain compared with previous methods, while also
achieving state-of-the-art performance on benchmark open-domain and biomedical
QC datasets. Finally, we show that using this model's predictions of question
topic significantly improves the accuracy of a question answering system by
+1.7% P@1, with substantial future gains possible as QC performance improves.
| 2,019 | Computation and Language |
What's Wrong with Hebrew NLP? And How to Make it Right | For languages with simple morphology, such as English, automatic annotation
pipelines such as spaCy or Stanford's CoreNLP successfully serve projects in
academia and the industry. For many morphologically-rich languages (MRLs),
similar pipelines show sub-optimal performance that limits their applicability
for text analysis in research and the industry. The sub-optimal performance is
mainly due to errors in early morphological disambiguation decisions, which
cannot be recovered later in the pipeline, yielding incoherent annotations on
the whole. In this paper we describe the design and use of the Onlp suite, a
joint morpho-syntactic parsing framework for processing Modern Hebrew texts.
The joint inference over morphology and syntax substantially limits error
propagation, and leads to high accuracy. Onlp provides rich and expressive
output which already serves diverse academic and commercial needs. Its
accompanying online demo further serves educational activities, introducing
Hebrew NLP intricacies to researchers and non-researchers alike.
| 2,019 | Computation and Language |
A Multivariate Model for Representing Semantic Non-compositionality | Semantically non-compositional phrases constitute an intriguing research
topic in Natural Language Processing. Semantic non-compositionality -- the
situation in which the meaning of a phrase cannot be derived from the meaning of
its components -- is the main characteristic of such phrases; however, they bear
other characteristics such as high statistical association and
non-substitutability. In this work, we present a model for identifying
non-compositional phrases that takes into account all of these characteristics.
We show that the presented model remarkably outperforms existing models for
identifying non-compositional phrases, which mostly focus on only one of these
characteristics.
| 2,019 | Computation and Language |
A Multi-Type Multi-Span Network for Reading Comprehension that Requires
Discrete Reasoning | Rapid progress has been made in the field of reading comprehension and
question answering, where several systems have achieved human parity in some
simplified settings. However, the performance of these models degrades
significantly when they are applied to more realistic scenarios, such as when
answers involve various types, multiple text strings are correct answers, or
discrete reasoning abilities are required. In this paper, we introduce the
Multi-Type Multi-Span Network (MTMSN), a neural reading comprehension model
that combines a multi-type answer predictor designed to support various answer
types (e.g., span, count, negation, and arithmetic expression) with a
multi-span extraction method for dynamically producing one or multiple text
spans. In addition, an arithmetic expression reranking mechanism is proposed to
rank expression candidates for further confirming the prediction. Experiments
show that our model achieves 79.9 F1 on the DROP hidden test set, creating new
state-of-the-art results. Source
code\footnote{\url{https://github.com/huminghao16/MTMSN}} is released to
facilitate future work.
| 2,019 | Computation and Language |
Visualizing and Understanding the Effectiveness of BERT | Language model pre-training, such as BERT, has achieved remarkable results in
many NLP tasks. However, it is unclear why the pre-training-then-fine-tuning
paradigm can improve performance and generalization capability across different
tasks. In this paper, we propose to visualize loss landscapes and optimization
trajectories of fine-tuning BERT on specific datasets. First, we find that
pre-training reaches a good initial point across downstream tasks, which leads
to wider optima and easier optimization compared with training from scratch. We
also demonstrate that the fine-tuning procedure is robust to overfitting, even
though BERT is highly over-parameterized for downstream tasks. Second, the
visualization results indicate that fine-tuning BERT tends to generalize better
because of the flat and wide optima, and the consistency between the training
loss surface and the generalization error surface. Third, the lower layers of
BERT are more invariant during fine-tuning, which suggests that the layers that
are close to the input learn more transferable representations of language.
| 2,019 | Computation and Language |
SenseBERT: Driving Some Sense into BERT | The ability to learn from large unlabeled corpora has allowed neural language
models to advance the frontier in natural language understanding. However,
existing self-supervision techniques operate at the word form level, which
serves as a surrogate for the underlying semantic content. This paper proposes
a method to employ weak-supervision directly at the word sense level. Our
model, named SenseBERT, is pre-trained to predict not only the masked words but
also their WordNet supersenses. Accordingly, we attain a lexical-semantic level
language model, without the use of human annotation. SenseBERT achieves
significantly improved lexical understanding, as we demonstrate by
experimenting on SemEval Word Sense Disambiguation, and by attaining a state of
the art result on the Word in Context task.
| 2,020 | Computation and Language |
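The joint objective above, predicting both the masked word and its WordNet supersense, can be sketched with two classification heads over shared encoder states. The module below assumes PyTorch; the class name, the 45-supersense label space, and the equal weighting of the two losses are illustrative choices rather than the released SenseBERT code.

import torch
import torch.nn as nn

class WordAndSenseHead(nn.Module):
    # Two prediction heads over shared encoder states: one recovers the masked
    # word, the other its coarse semantic class (a WordNet supersense).
    def __init__(self, hidden, vocab_size, n_supersenses=45):
        super().__init__()
        self.word_head = nn.Linear(hidden, vocab_size)
        self.sense_head = nn.Linear(hidden, n_supersenses)

    def forward(self, states, word_labels, sense_labels):
        ce = nn.CrossEntropyLoss(ignore_index=-100)     # -100 marks unmasked positions
        word_loss = ce(self.word_head(states).transpose(1, 2), word_labels)
        sense_loss = ce(self.sense_head(states).transpose(1, 2), sense_labels)
        return word_loss + sense_loss                   # joint self-supervised objective

head = WordAndSenseHead(hidden=32, vocab_size=1000)
states = torch.randn(2, 6, 32)                          # (batch, seq_len, hidden)
loss = head(states, torch.randint(0, 1000, (2, 6)), torch.randint(0, 45, (2, 6)))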
Towards Making the Most of BERT in Neural Machine Translation | GPT-2 and BERT demonstrate the effectiveness of using pre-trained language
models (LMs) on various natural language processing tasks. However, LM
fine-tuning often suffers from catastrophic forgetting when applied to
resource-rich tasks. In this work, we introduce a concerted training framework
(CTNMT) that is key to integrating the pre-trained LMs into neural machine
translation (NMT). Our proposed CTNMT consists of three techniques: a)
asymptotic distillation to ensure that the NMT model can retain the previous
pre-trained knowledge; b) a dynamic switching gate to avoid catastrophic
forgetting of pre-trained knowledge; and c) a strategy to adjust the learning
paces according to a scheduled policy. Our experiments in machine translation
show CTNMT gains of up to 3 BLEU score on the WMT14 English-German language
pair which even surpasses the previous state-of-the-art pre-training aided NMT
by 1.4 BLEU score. For the large WMT14 English-French task with 40
million sentence pairs, our base model still significantly improves upon
the state-of-the-art Transformer big model by more than 1 BLEU score. The code
and model can be downloaded from https://github.com/bytedance/neurst/
tree/master/examples/ctnmt.
| 2,022 | Computation and Language |
Transformer-based Automatic Post-Editing with a Context-Aware Encoding
Approach for Multi-Source Inputs | Recent approaches to the Automatic Post-Editing (APE) research have shown
that better results are obtained by multi-source models, which jointly encode
both source (src) and machine translation output (mt) to produce post-edited
sentence (pe). Along this trend, we present a new multi-source APE model based
on the Transformer. To construct effective joint representations, our model
internally learns to incorporate src context into mt representation. With this
approach, we achieve a significant improvement over baseline systems, as well
as the state-of-the-art multi-source APE model. Moreover, to demonstrate the
capability of our model to incorporate src context, we show that the word
alignment of the unknown MT system is successfully captured in our encoding
results.
| 2,019 | Computation and Language |
Improving Multi-Word Entity Recognition for Biomedical Texts | Biomedical Named Entity Recognition (BioNER) is a crucial step for analyzing
Biomedical texts, which aims at extracting biomedical named entities from a
given text. Different supervised machine learning algorithms have been applied
for BioNER by various researchers. The main requirement of these approaches is
an annotated dataset used for learning the parameters of machine learning
algorithms. Segment Representation (SR) models comprise of different tag sets
used for representing the annotated data, such as IOB2, IOE2 and IOBES. In this
paper, we propose an extension of IOBES model to improve the performance of
BioNER. The proposed SR model, FROBES, improves the representation of
multi-word entities. We used Bidirectional Long Short-Term Memory (BiLSTM)
network; an instance of Recurrent Neural Networks (RNN), to design a baseline
system for BioNER and evaluated the new SR model on two datasets, i2b2/VA 2010
challenge dataset and JNLPBA 2004 shared task dataset. The proposed SR model
outperforms other models for multi-word entities with length greater than two.
Further, the outputs of different SR models have been combined using a majority
voting ensemble method, which outperforms the baseline models' performance.
| 2,018 | Computation and Language |
Simple and Effective Noisy Channel Modeling for Neural Machine
Translation | Previous work on neural noisy channel modeling relied on latent variable
models that incrementally process the source and target sentence. This makes
decoding decisions based on partial source prefixes even though the full source
is available. We pursue an alternative approach based on standard sequence to
sequence models which utilize the entire source. These models perform
remarkably well as channel models, even though they have neither been trained
on, nor designed to factor over incomplete target sentences. Experiments with
neural language models trained on billions of words show that noisy channel
models can outperform a direct model by up to 3.2 BLEU on WMT'17 German-English
translation. We evaluate on four language pairs and our channel models
consistently outperform strong alternatives such as right-to-left reranking models
and ensembles of direct models.
| 2,019 | Computation and Language |
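Reranking with a noisy channel model, as above, scores each candidate translation y of a source x by combining the channel model log p(x|y) with a language model log p(y), typically plus a length term. Below is a small sketch with made-up scores and weights; the function name, weights, and candidate list are illustrative, not the paper's decoding code.

def channel_score(log_p_src_given_tgt, log_p_tgt, tgt_len, lam=1.0, beta=1.0):
    # Noisy-channel score for a candidate translation y of source x:
    # log p(x|y) + lam * log p(y) + beta * |y|; the length bonus offsets the
    # language model's preference for short outputs.
    return log_p_src_given_tgt + lam * log_p_tgt + beta * tgt_len

candidates = [
    # (translation, log p(x|y) from the channel model, log p(y) from the LM, length)
    ("das ist gut", -4.2, -7.1, 3),
    ("das ist sehr gut", -3.9, -8.0, 4),
]
best = max(candidates, key=lambda c: channel_score(c[1], c[2], c[3]))
print(best[0])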
Abductive Commonsense Reasoning | Abductive reasoning is inference to the most plausible explanation. For
example, if Jenny finds her house in a mess when she returns from work, and
remembers that she left a window open, she can hypothesize that a thief broke
into her house and caused the mess, as the most plausible explanation. While
abduction has long been considered to be at the core of how people interpret
and read between the lines in natural language (Hobbs et al., 1988), there has
been relatively little research in support of abductive natural language
inference and generation. We present the first study that investigates the
viability of language-based abductive reasoning. We introduce a challenge
dataset, ART, that consists of over 20k commonsense narrative contexts and 200k
explanations. Based on this dataset, we conceptualize two new tasks -- (i)
Abductive NLI: a multiple-choice question answering task for choosing the more
likely explanation, and (ii) Abductive NLG: a conditional generation task for
explaining given observations in natural language. On Abductive NLI, the best
model achieves 68.9% accuracy, well below human performance of 91.4%. On
Abductive NLG, the current best language generators struggle even more, as they
lack reasoning capabilities that are trivial for humans. Our analysis leads to
new insights into the types of reasoning that deep pre-trained language models
fail to perform--despite their strong performance on the related but more
narrowly defined task of entailment NLI--pointing to interesting avenues for
future research.
| 2,020 | Computation and Language |
Debiasing Personal Identities in Toxicity Classification | As Machine Learning models continue to be relied upon for making automated
decisions, the issue of model bias becomes more and more prevalent. In this
paper, we approach training a text classification model and optimize on bias
minimization by measuring not only the model's performance on our dataset as a
whole, but also how it performs across different subgroups. This requires
measuring performance independently for different demographic subgroups and
measuring bias by comparing them to results from the rest of our data. We show
how unintended bias can be detected using these metrics and how removing bias
from a dataset completely can result in worse results.
| 2,019 | Computation and Language |
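Measuring performance per demographic subgroup, as described above, can be sketched by computing a metric such as AUC separately on comments that mention an identity term and on the rest of the data. The snippet below uses scikit-learn on synthetic scores; the subgroup indicator and the AUC-gap reading are illustrative assumptions, not the paper's exact bias metrics.

import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_gap(y_true, y_score, in_group):
    # AUC on comments mentioning the identity subgroup vs. AUC on the rest;
    # a large gap is one signal of unintended bias.
    return (roc_auc_score(y_true[in_group], y_score[in_group]),
            roc_auc_score(y_true[~in_group], y_score[~in_group]))

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.3, size=500), 0, 1)
in_group = np.arange(500) % 5 == 0          # toy indicator for one demographic subgroup
print(subgroup_gap(y_true, y_score, in_group))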
Building a Massive Corpus for Named Entity Recognition using Free Open
Data Sources | With the recent progress in machine learning, boosted by techniques such as
deep learning, many tasks can be successfully solved once a large enough
dataset is available for training. Nonetheless, human-annotated datasets are
often expensive to produce, especially when labels are fine-grained, as is the
case of Named Entity Recognition (NER), a task that operates with labels on a
word-level.
In this paper, we propose a method to automatically generate labeled datasets
for NER from public data sources by exploiting links and structured data from
DBpedia and Wikipedia. Due to the massive size of these data sources, the
resulting dataset -- SESAME, available at https://sesame-pt.github.io -- is
composed of millions of labeled sentences. We detail the method to generate the
dataset, report relevant statistics, and design a baseline using a neural
network, showing that our dataset helps build better NER predictors.
| 2,019 | Computation and Language |
BioFLAIR: Pretrained Pooled Contextualized Embeddings for Biomedical
Sequence Labeling Tasks | Biomedical Named Entity Recognition (NER) is a challenging problem in
biomedical information processing due to the widespread ambiguity of out of
context terms and extensive lexical variations. Performance on bioNER
benchmarks continues to improve due to advances like BERT, GPT, and XLNet.
FLAIR (1) is an alternative embedding model which is less computationally
intensive than the others mentioned. We test FLAIR and its pretrained PubMed
embeddings (which we term BioFLAIR) on a variety of bio NER tasks and compare
those with results from BERT-type networks. We also investigate the effects of
a small amount of additional pretraining on PubMed content, and of combining
FLAIR and ELMO models. We find that with the provided embeddings, FLAIR
performs on par with the BERT networks - even establishing a new state of the
art on one benchmark. Additional pretraining did not provide a clear benefit,
although this might change with even more pretraining being done. Stacking the
FLAIR embeddings with others typically does provide a boost in the benchmark
results.
| 2,019 | Computation and Language |
Entity-aware ELMo: Learning Contextual Entity Representation for Entity
Disambiguation | We present a new local entity disambiguation system. The key to our system is
a novel approach for learning entity representations. In our approach we learn
an entity aware extension of Embedding for Language Model (ELMo) which we call
Entity-ELMo (E-ELMo). Given a paragraph containing one or more named entity
mentions, each mention is first defined as a function of the entire paragraph
(including other mentions), and the model then predicts the referent entities. Utilizing
E-ELMo for local entity disambiguation, we outperform all of the
state-of-the-art local and global models on the popular benchmarks by improving
about 0.5\% on micro average accuracy for AIDA test-b with Yago candidate set.
The evaluation setup of the training data and candidate set are the same as our
baselines for fair comparison.
| 2,019 | Computation and Language |
On-Device Text Representations Robust To Misspellings via Projections | Recently, there has been a strong interest in developing natural language
applications that live on personal devices such as mobile phones, watches and
IoT with the objective to preserve user privacy and have low memory. Advances
in Locality-Sensitive Hashing (LSH)-based projection networks have demonstrated
state-of-the-art performance in various classification tasks without explicit
word (or word-piece) embedding lookup tables by computing on-the-fly text
representations. In this paper, we show that the projection based neural
classifiers are inherently robust to misspellings and perturbations of the
input text. We empirically demonstrate that the LSH projection based
classifiers are more robust to common misspellings compared to BiLSTMs (with
both word-piece & word-only tokenization) and fine-tuned BERT based methods.
When subject to misspelling attacks, LSH projection based classifiers had a
small average accuracy drop of 2.94% across multiple classification tasks,
while the fine-tuned BERT model accuracy had a significant drop of 11.44%.
| 2,021 | Computation and Language |
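To give intuition for why projection-based representations tolerate misspellings, the sketch below hashes the character n-grams of a string into a fixed-length binary vector, so a single-character typo flips only a few bits. This is a simplified stand-in for the LSH projection networks discussed above, not their actual algorithm; the hash function, bit width, and n-gram size are illustrative choices.

import hashlib
import numpy as np

def lsh_projection(text, n_bits=256, ngram=3):
    # Hash the character n-grams of a string into a fixed-length binary vector;
    # no embedding table is needed and a typo only flips a few bits.
    vec = np.zeros(n_bits, dtype=np.int8)
    padded = f" {text.lower()} "
    for i in range(len(padded) - ngram + 1):
        h = int(hashlib.md5(padded[i:i + ngram].encode()).hexdigest(), 16)
        vec[h % n_bits] = 1
    return vec

a, b = lsh_projection("recommend"), lsh_projection("recomend")
print(int(np.sum(a != b)), "bits differ")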
Quoref: A Reading Comprehension Dataset with Questions Requiring
Coreferential Reasoning | Machine comprehension of texts longer than a single sentence often requires
coreference resolution. However, most current reading comprehension benchmarks
do not contain complex coreferential phenomena and hence fail to evaluate the
ability of models to resolve coreference. We present a new crowdsourced dataset
containing more than 24K span-selection questions that require resolving
coreference among entities in over 4.7K English paragraphs from Wikipedia.
Obtaining questions focused on such phenomena is challenging, because it is
hard to avoid lexical cues that shortcut complex reasoning. We deal with this
issue by using a strong baseline model as an adversary in the crowdsourcing
loop, which helps crowdworkers avoid writing questions with exploitable surface
cues. We show that state-of-the-art reading comprehension models perform
significantly worse than humans on this benchmark---the best model performance
is 70.5 F1, while the estimated human performance is 93.4 F1.
| 2,019 | Computation and Language |
Named Entity Recognition for Nepali Language | Named Entity Recognition has been studied for different languages like
English, German, Spanish and many others, but no study has focused on the Nepali
language. In this paper we propose a neural Nepali NER model using the latest
state-of-the-art grapheme-level architecture, which requires neither
hand-crafted features nor data pre-processing. Our novel neural model
gains a relative improvement of 33% to 50% over a feature-based SVM model
and up to 10% improvement over state-of-the-art neural models developed
for languages besides Nepali.
| 2,019 | Computation and Language |
Pushing the Limits of Low-Resource Morphological Inflection | Recent years have seen exceptional strides in the task of automatic
morphological inflection generation. However, for a long tail of languages the
necessary resources are hard to come by, and state-of-the-art neural methods
that work well under higher resource settings perform poorly in the face of a
paucity of data. In response, we propose a battery of improvements that greatly
improve performance under such low-resource conditions. First, we present a
novel two-step attention architecture for the inflection decoder. In addition,
we investigate the effects of cross-lingual transfer from single and multiple
languages, as well as monolingual data hallucination. The macro-averaged
accuracy of our models outperforms the state-of-the-art by 15 percentage
points. Also, we identify the crucial factors for success with cross-lingual
transfer for morphological inflection: typological similarity and a common
representation across languages.
| 2,019 | Computation and Language |
Sketch-Driven Regular Expression Generation from Natural Language and
Examples | Recent systems for converting natural language descriptions into regular
expressions (regexes) have achieved some success, but typically deal with
short, formulaic text and can only produce simple regexes. Real-world regexes
are complex, hard to describe with brief sentences, and sometimes require
examples to fully convey the user's intent. We present a framework for regex
synthesis in this setting where both natural language (NL) and examples are
available. First, a semantic parser (either grammar-based or neural) maps the
natural language description into an intermediate sketch, which is an
incomplete regex containing holes to denote missing components. Then a program
synthesizer searches over the regex space defined by the sketch and finds a
regex that is consistent with the given string examples. Our semantic parser
can be trained purely from weak supervision based on correctness of the
synthesized regex, or it can leverage heuristically-derived sketches. We
evaluate on two prior datasets (Kushman and Barzilay, 2013; Locascio et al.,
2016) and a real-world dataset from Stack Overflow. Our system achieves
state-of-the-art performance on the prior datasets and solves 57% of the
real-world dataset, which existing neural systems completely fail on.
| 2,020 | Computation and Language |
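The synthesis step above, searching the regex space defined by a sketch for a regex consistent with the examples, can be illustrated with a brute-force toy version: enumerate fillers for each hole and keep the first completion that accepts all positive and rejects all negative strings. The hole marker, candidate set, and examples below are illustrative, and real synthesizers prune this search far more cleverly.

import re
from itertools import product

def complete_sketch(sketch, hole_options, positives, negatives):
    # Fill each hole in the sketch with a candidate sub-pattern and return the
    # first completion that accepts all positive and rejects all negative examples.
    n_holes = sketch.count("<HOLE>")
    for combo in product(hole_options, repeat=n_holes):
        regex = sketch
        for piece in combo:
            regex = regex.replace("<HOLE>", piece, 1)
        if all(re.fullmatch(regex, p) for p in positives) and \
           not any(re.fullmatch(regex, n) for n in negatives):
            return regex
    return None

# Toy sketch for "a number followed by some letters".
print(complete_sketch(r"[0-9]+<HOLE>", [r"[a-z]+", r"[A-Z]+"], ["12ab", "3z"], ["12AB"]))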
Reasoning Over Paragraph Effects in Situations | A key component of successfully reading a passage of text is the ability to
apply knowledge gained from the passage to a new situation. In order to
facilitate progress on this kind of reading, we present ROPES, a challenging
benchmark for reading comprehension targeting Reasoning Over Paragraph Effects
in Situations. We target expository language describing causes and effects
(e.g., "animal pollinators increase efficiency of fertilization in flowers"),
as they have clear implications for new situations. A system is presented a
background passage containing at least one of these relations, a novel
situation that uses this background, and questions that require reasoning about
effects of the relationships in the background passage in the context of the
situation. We collect background passages from science textbooks and Wikipedia
that contain such phenomena, and ask crowd workers to author situations,
questions, and answers, resulting in a 14,322 question dataset. We analyze the
challenges of this task and evaluate the performance of state-of-the-art
reading comprehension models. The best model performs only slightly better than
randomly guessing an answer of the correct type, at 61.6% F1, well below the
human performance of 89.0%.
| 2,019 | Computation and Language |
Few-Shot Dialogue Generation Without Annotated Data: A Transfer Learning
Approach | Learning with minimal data is one of the key challenges in the development of
practical, production-ready goal-oriented dialogue systems. In a real-world
enterprise setting where dialogue systems are developed rapidly and are
expected to work robustly for an ever-growing variety of domains, products, and
scenarios, efficient learning from a limited number of examples becomes
indispensable.
In this paper, we introduce a technique to achieve state-of-the-art dialogue
generation performance in a few-shot setup, without using any annotated data.
We do this by leveraging background knowledge from a larger, more highly
represented dialogue source --- namely, the MetaLWOz dataset. We evaluate our
model on the Stanford Multi-Domain Dialogue Dataset, consisting of human-human
goal-oriented dialogues in in-car navigation, appointment scheduling, and
weather information domains.
We show that our few-shot approach achieves state-of-the-art results on that
dataset by consistently outperforming the previous best model in terms of BLEU
and Entity F1 scores, while being more data-efficient by not requiring any data
annotation.
| 2,019 | Computation and Language |
Dually Interactive Matching Network for Personalized Response Selection
in Retrieval-Based Chatbots | This paper proposes a dually interactive matching network (DIM) for
presenting the personalities of dialogue agents in retrieval-based chatbots.
This model develops from the interactive matching network (IMN) which models
the matching degree between a context composed of multiple utterances and a
response candidate. Compared with previous persona fusion approaches which
enhance the representation of a context by calculating its similarity with a
given persona, the DIM model adopts a dual matching architecture, which
performs interactive matching between responses and contexts and between
responses and personas respectively for ranking response candidates.
Experimental results on PERSONA-CHAT dataset show that the DIM model
outperforms its baseline model, i.e., IMN with persona fusion, by a margin of
14.5% and outperforms the current state-of-the-art model by a margin of 27.7%
in terms of top-1 accuracy hits@1.
| 2,020 | Computation and Language |
BERT-Based Multi-Head Selection for Joint Entity-Relation Extraction | In this paper, we report our method for the Information Extraction task in
2019 Language and Intelligence Challenge. We incorporate BERT into the
multi-head selection framework for joint entity-relation extraction. This model
extends existing approaches from three perspectives. First, BERT is adopted as
a feature extraction layer at the bottom of the multi-head selection framework.
We further optimize BERT by introducing a semantic-enhanced task during BERT
pre-training. Second, we introduce a large-scale Baidu Baike corpus for entity
recognition pre-training, which is weakly supervised since there are
no actual named entity labels. Third, soft label embedding is proposed to
effectively transmit information between entity recognition and relation
extraction. Combining these three contributions, we enhance the information
extracting ability of the multi-head selection model and achieve F1-score 0.876
on testset-1 with a single model. By ensembling four variants of our model, we
finally achieve F1 score 0.892 (1st place) on testset-1 and F1 score 0.8924
(2nd place) on testset-2.
| 2,019 | Computation and Language |
Incorporating Word and Subword Units in Unsupervised Machine Translation
Using Language Model Rescoring | This paper describes CAiRE's submission to the unsupervised machine
translation track of the WMT'19 news shared task from German to Czech. We
leverage a phrase-based statistical machine translation (PBSMT) model and a
pre-trained language model to combine word-level neural machine translation
(NMT) and subword-level NMT models without using any parallel data. We propose
to solve the morphological richness problem of languages by training byte-pair
encoding (BPE) embeddings for German and Czech separately, and they are aligned
using MUSE (Conneau et al., 2018). To ensure the fluency and consistency of
translations, a rescoring mechanism is proposed that reuses the pre-trained
language model to select the translation candidates generated through beam
search. Moreover, a series of pre-processing and post-processing approaches are
applied to improve the quality of final translations.
| 2,019 | Computation and Language |
How Sequence-to-Sequence Models Perceive Language Styles? | Style is ubiquitous in our daily language use, but what is language style
to learning machines? In this paper, by exploiting the second-order statistics
of semantic vectors of different corpora, we present a novel perspective on
this question via style matrix, i.e. the covariance matrix of semantic vectors,
and explain for the first time how Sequence-to-Sequence models encode style
information innately in their semantic vectors. As an application, we devise a
learning-free text style transfer algorithm, which explicitly constructs a pair
of transfer operators from the style matrices for style transfer. Moreover, our
algorithm is also observed to be flexible enough to transfer out-of-domain
sentences. Extensive experimental evidence justifies the informativeness of
style matrix and the competitive performance of our proposed style transfer
algorithm with the state-of-the-art methods.
| 2,019 | Computation and Language |
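The style matrix above is simply the covariance of a corpus's semantic vectors. Below is a minimal sketch, assuming each sentence has already been embedded; the random vectors and the Frobenius-distance comparison are illustrative and do not reproduce the paper's transfer operators.

import numpy as np

def style_matrix(semantic_vectors):
    # Second-order statistics of a corpus: the covariance of its semantic vectors.
    X = np.asarray(semantic_vectors)            # shape (n_sentences, dim)
    X = X - X.mean(axis=0, keepdims=True)       # center
    return (X.T @ X) / (X.shape[0] - 1)

# Compare two corpora by the distance between their style matrices.
rng = np.random.default_rng(0)
formal = rng.normal(size=(100, 8))              # stand-in for embedded formal sentences
casual = rng.normal(scale=2.0, size=(100, 8))   # stand-in for embedded casual sentences
gap = np.linalg.norm(style_matrix(formal) - style_matrix(casual), ord="fro")
print(f"Frobenius distance between style matrices: {gap:.2f}")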
Densely Connected Graph Convolutional Networks for Graph-to-Sequence
Learning | We focus on graph-to-sequence learning, which can be framed as transducing
graph structures to sequences for text generation. To capture structural
information associated with graphs, we investigate the problem of encoding
graphs using graph convolutional networks (GCNs). Unlike various existing
approaches where shallow architectures were used for capturing local structural
information only, we introduce a dense connection strategy, proposing a novel
Densely Connected Graph Convolutional Networks (DCGCNs). Such a deep
architecture is able to integrate both local and non-local features to learn a
better structural representation of a graph. Our model outperforms the
state-of-the-art neural models significantly on AMR-to-text generation and
syntax-based neural machine translation.
| 2,019 | Computation and Language |
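The dense connection strategy above can be sketched as a block of graph-convolution layers in which each layer consumes the concatenation of the block input and all earlier layers' outputs. The PyTorch module below is a simplified illustration under that reading (a plain linear transform over a normalized adjacency matrix), not the paper's DCGCN implementation.

import torch
import torch.nn as nn

class DenseGCNBlock(nn.Module):
    # Each graph-convolution layer takes the concatenation of the block input and
    # all previous layers' outputs (dense connectivity) and emits `growth` features.
    def __init__(self, in_dim, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(in_dim + i * growth, growth) for i in range(n_layers)]
        )

    def forward(self, x, adj):
        # x: (n_nodes, in_dim); adj: row-normalized adjacency matrix (n_nodes, n_nodes)
        features = [x]
        for layer in self.layers:
            h = torch.relu(layer(adj @ torch.cat(features, dim=-1)))
            features.append(h)
        return torch.cat(features, dim=-1)

block = DenseGCNBlock(in_dim=16, growth=8, n_layers=3)
x = torch.randn(5, 16)                           # 5 graph nodes
adj = torch.full((5, 5), 0.2)                    # toy row-normalized adjacency
print(block(x, adj).shape)                       # (5, 16 + 3 * 8)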