Titles | Abstracts | Years | Categories |
---|---|---|---|
KorQuAD1.0: Korean QA Dataset for Machine Reading Comprehension | Machine Reading Comprehension (MRC) is a task that requires a machine to
understand natural language and answer questions by reading a document. It is
the core of automatic response technology such as chatbots and automated
customer support systems. We present the Korean Question Answering
Dataset (KorQuAD), a large-scale Korean dataset for the extractive machine reading
comprehension task. It consists of 70,000+ human-generated question-answer
pairs on Korean Wikipedia articles. We release KorQuAD1.0 and launch a
challenge at https://KorQuAD.github.io to encourage the development of
multilingual natural language processing research.
| 2019 | Computation and Language |
Bridging the domain gap in cross-lingual document classification | The scarcity of labeled training data often prohibits the
internationalization of NLP models to multiple languages. Recent developments
in cross-lingual understanding (XLU) have made progress in this area, trying to
bridge the language barrier using language universal representations. However,
even if the language problem was resolved, models trained in one language would
not transfer to another language perfectly due to the natural domain drift
across languages and cultures. We consider the setting of semi-supervised
cross-lingual understanding, where labeled data is available in a source
language (English), but only unlabeled data is available in the target
language. We combine state-of-the-art cross-lingual methods with recently
proposed methods for weakly supervised learning such as unsupervised
pre-training and unsupervised data augmentation to simultaneously close both
the language gap and the domain gap in XLU. We show that addressing the domain
gap is crucial. We improve over strong baselines and achieve a new
state-of-the-art for cross-lingual document classification.
| 2019 | Computation and Language |
Automatic detection of surgical site infections from a clinical data
warehouse | Reducing the incidence of surgical site infections (SSIs) is one of the
objectives of the French nosocomial infection control program. Manual
monitoring of SSIs is carried out each year by the hospital hygiene team and
surgeons at the University Hospital of Bordeaux. Our goal was to develop an
automatic detection algorithm based on hospital information system data. Three
years (2015, 2016 and 2017) of manual spine surgery monitoring have been used
as a gold standard to extract features and train machine learning algorithms.
The dataset contained 22 SSIs out of 2133 spine surgeries. Two different
approaches were compared. The first used several data sources and achieved the
best performance but is difficult to generalize to other institutions. The
second was based on free text only with semiautomatic extraction of
discriminant terms. The algorithms managed to identify all the SSIs with 20 and
26 false positives respectively on the dataset. Another evaluation is underway.
These results are encouraging for the development of semi-automated
surveillance methods.
| 2019 | Computation and Language |
Domain Transfer in Dialogue Systems without Turn-Level Supervision | Task-oriented dialogue systems rely heavily on specialized dialogue state
tracking (DST) modules for dynamically predicting user intent throughout the
conversation. State-of-the-art DST models are typically trained in a supervised
manner from manual annotations at the turn level. However, these annotations
are costly to obtain, which makes it difficult to create accurate dialogue
systems for new domains. To address these limitations, we propose a method,
based on reinforcement learning, for transferring DST models to new domains
without turn-level supervision. Across several domains, our experiments show
that this method quickly adapts off-the-shelf models to new domains and
performs on par with models trained with turn-level supervision. We also show
our method can improve models trained using turn-level supervision by
subsequent fine-tuning optimization toward dialog-level rewards.
| 2019 | Computation and Language |
Prediction Uncertainty Estimation for Hate Speech Classification | As a result of the popularity of social networks, the hate speech
phenomenon has increased significantly in recent years. Due to its harmful effect on minority
groups as well as on large communities, there is a pressing need for hate
speech detection and filtering. However, automatic approaches must not
jeopardize free speech, so they should accompany their decisions with
explanations and an assessment of uncertainty. Thus, there is a need for
predictive machine learning models that not only detect hate speech but also
help users understand when texts cross the line and become unacceptable. The
reliability of predictions is usually not addressed in text classification. We
fill this gap by proposing the adaptation of deep neural networks that can
efficiently estimate prediction uncertainty. To reliably detect hate speech, we
use Monte Carlo dropout regularization, which mimics Bayesian inference within
neural networks. We evaluate our approach using different text embedding
methods. We visualize the reliability of results with a novel technique that
aids in understanding the classification reliability and errors.
| 2019 | Computation and Language |
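A minimal PyTorch sketch of the Monte Carlo dropout approach referenced in the entry above: dropout stays active at test time and several stochastic forward passes are averaged, with their spread serving as an uncertainty estimate. The toy classifier and layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DropoutTextClassifier(nn.Module):
    """Toy classifier; the architecture is illustrative, not the paper's."""
    def __init__(self, embed_dim=300, hidden=128, n_classes=2, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=30):
    """Average softmax outputs over stochastic passes with dropout left on."""
    model.train()  # keep dropout active (Monte Carlo dropout)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(0)   # predictive distribution
    std = probs.std(0)     # per-class uncertainty estimate
    return mean, std

if __name__ == "__main__":
    model = DropoutTextClassifier()
    x = torch.randn(4, 300)  # stand-in for text embeddings
    mean, std = mc_dropout_predict(model, x)
    print(mean, std)
```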
Fast transcription of speech in low-resource languages | We present software that, in only a few hours, transcribes forty hours of
recorded speech in a surprise language, using only a few tens of megabytes of
noisy text in that language, and a zero-resource grapheme to phoneme (G2P)
table. A pretrained acoustic model maps acoustic features to phonemes; a
reversed G2P maps these to graphemes; then a language model maps these to a
most-likely grapheme sequence, i.e., a transcription. This software has worked
successfully with corpora in Arabic, Assam, Kinyarwanda, Russian, Sinhalese,
Swahili, Tagalog, and Tamil.
| 2019 | Computation and Language |
Communication-based Evaluation for Natural Language Generation | Natural language generation (NLG) systems are commonly evaluated using n-gram
overlap measures (e.g. BLEU, ROUGE). These measures do not directly capture
semantics or speaker intentions, and so they often turn out to be misaligned
with our true goals for NLG. In this work, we argue instead for
communication-based evaluations: assuming the purpose of an NLG system is to
convey information to a reader/listener, we can directly evaluate its
effectiveness at this task using the Rational Speech Acts model of pragmatic
language use. We illustrate with a color reference dataset that contains
descriptions in pre-defined quality categories, showing that our method better
aligns with these quality categories than do any of the prominent n-gram
overlap methods.
| 2019 | Computation and Language |
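A small NumPy sketch of the Rational Speech Acts recursion that the communication-based evaluation above builds on: a literal listener derived from a truth-value lexicon, a pragmatic speaker that soft-maximizes informativeness, and a pragmatic listener. The toy lexicon and rationality parameter are illustrative assumptions.

```python
import numpy as np

# Rows: utterances, columns: referents (e.g. color patches).
# lexicon[u, r] = 1 if utterance u is literally true of referent r.
lexicon = np.array([
    [1, 1, 0],   # "blue"  applies to referents 0 and 1
    [0, 1, 1],   # "teal"  applies to referents 1 and 2
    [0, 0, 1],   # "green" applies only to referent 2
], dtype=float)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

def rsa(lexicon, alpha=1.0):
    # Literal listener: P_L0(r | u) proportional to lexicon[u, r] (uniform prior)
    l0 = normalize(lexicon, axis=1)
    # Pragmatic speaker: P_S1(u | r) proportional to exp(alpha * log P_L0(r | u))
    with np.errstate(divide="ignore"):
        s1 = normalize(np.exp(alpha * np.log(l0)), axis=0)
    # Pragmatic listener: P_L1(r | u) proportional to P_S1(u | r)
    l1 = normalize(s1, axis=1)
    return s1, l1

s1, l1 = rsa(lexicon, alpha=3.0)
# A generated description can then be scored by how well the pragmatic
# listener recovers the intended referent, i.e. l1[utterance, target].
print(l1)
```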
Multilingual Neural Machine Translation for Zero-Resource Languages | In recent years, Neural Machine Translation (NMT) has been shown to be more
effective than phrase-based statistical methods, thus quickly becoming the
state of the art in machine translation (MT). However, NMT systems are limited
in translating low-resourced languages, due to the significant amount of
parallel data that is required to learn useful mappings between languages. In
this work, we show how the so-called multilingual NMT can help to tackle the
challenges associated with low-resourced language translation. The underlying
principle of multilingual NMT is to force the creation of hidden
representations of words in a shared semantic space across multiple languages,
thus enabling a positive parameter transfer across languages. Along this
direction, we present multilingual translation experiments with three languages
(English, Italian, Romanian) covering six translation directions, utilizing
both recurrent neural networks and transformer (or self-attentive) neural
networks. We then focus on the zero-shot translation problem, that is, how to
leverage multilingual data in order to learn translation directions that are
not covered by the available training material. To this aim, we introduce our
recently proposed iterative self-training method, which incrementally improves
a multilingual NMT on a zero-shot direction by just relying on monolingual
data. Our results on TED talks data show that multilingual NMT outperforms
conventional bilingual NMT, that the transformer NMT outperforms recurrent NMT,
and that zero-shot NMT outperforms conventional pivoting methods and even
matches the performance of a fully-trained bilingual system.
| 2019 | Computation and Language |
BottleSum: Unsupervised and Self-supervised Sentence Summarization using
the Information Bottleneck Principle | The principle of the Information Bottleneck (Tishby et al. 1999) is to
produce a summary of information X optimized to predict some other relevant
information Y. In this paper, we propose a novel approach to unsupervised
sentence summarization by mapping the Information Bottleneck principle to a
conditional language modelling objective: given a sentence, our approach seeks
a compressed sentence that can best predict the next sentence. Our iterative
algorithm under the Information Bottleneck objective searches gradually shorter
subsequences of the given sentence while maximizing the probability of the next
sentence conditioned on the summary. Using only pretrained language models with
no direct supervision, our approach can efficiently perform extractive sentence
summarization over a large corpus.
Building on our unsupervised extractive summarization (BottleSumEx), we then
present a new approach to self-supervised abstractive summarization
(BottleSumSelf), where a transformer-based language model is trained on the
output summaries of our unsupervised method. Empirical results demonstrate that
our extractive method outperforms other unsupervised models on multiple
automatic metrics. In addition, we find that our self-supervised abstractive
model outperforms unsupervised baselines (including our own) by human
evaluation along multiple attributes.
| 2019 | Computation and Language |
Short-Text Classification Using Unsupervised Keyword Expansion | Short-text classification, like all data science, struggles to achieve high
performance using limited data. As a solution, a short sentence may be expanded
with new and relevant feature words to form an artificially enlarged dataset
and to add new features to the test data. This paper applies a novel approach to
text expansion by generating new words directly for each input sentence, thus
requiring no additional datasets or previous training. In this unsupervised
approach, new keywords are formed within the hidden states of a pre-trained
language model and then used to create extended pseudo documents. The word
generation process was assessed by examining how well the predicted words
matched the topics of the input sentence. It was found that this method could
produce 3-10 relevant new words for each target topic, while generating just 1
word related to each non-target topic. Generated words were then added to short
news headlines to create extended pseudo headlines. Experimental results have
shown that models trained using the pseudo headlines can improve classification
accuracy when the number of training examples is limited.
| 2019 | Computation and Language |
Probing Natural Language Inference Models through Semantic Fragments | Do state-of-the-art models for language understanding already have, or can
they easily learn, abilities such as boolean coordination, quantification,
conditionals, comparatives, and monotonicity reasoning (i.e., reasoning about
word substitutions in sentential contexts)? While such phenomena are involved
in natural language inference (NLI) and go beyond basic linguistic
understanding, it is unclear the extent to which they are captured in existing
NLI benchmarks and effectively learned by models. To investigate this, we
propose the use of semantic fragments---systematically generated datasets that
each target a different semantic phenomenon---for probing, and efficiently
improving, such capabilities of linguistic models. This approach to creating
challenge datasets allows direct control over the semantic diversity and
complexity of the targeted linguistic phenomena, and results in a more precise
characterization of a model's linguistic behavior. Our experiments, using a
library of 8 such semantic fragments, reveal two remarkable findings: (a)
State-of-the-art models, including BERT, that are pre-trained on existing NLI
benchmark datasets perform poorly on these new fragments, even though the
phenomena probed here are central to the NLI task. (b) On the other hand, with
only a few minutes of additional fine-tuning---with a carefully selected
learning rate and a novel variation of "inoculation"---a BERT-based model can
master all of these logic and monotonicity fragments while retaining its
performance on established NLI benchmarks.
| 2019 | Computation and Language |
Bridging the Gap between Pre-Training and Fine-Tuning for End-to-End
Speech Translation | End-to-end speech translation, a hot topic in recent years, aims to translate
a segment of audio into a specific language with an end-to-end model.
Conventional approaches employ multi-task learning and pre-training methods for
this task, but they suffer from the huge gap between pre-training and
fine-tuning. To address these issues, we propose a Tandem Connectionist
Encoding Network (TCEN) which bridges the gap by reusing all subnets in
fine-tuning, keeping the roles of subnets consistent, and pre-training the
attention module. Furthermore, we propose two simple but effective methods to
guarantee the speech encoder outputs and the MT encoder inputs are consistent
in terms of semantic representation and sequence length. Experimental results
show that our model outperforms baselines 2.2 BLEU on a large benchmark
dataset.
| 2019 | Computation and Language |
Grounding learning of modifier dynamics: An application to color naming | Grounding is crucial for natural language understanding. An important subtask
is to understand modified color expressions, such as 'dirty blue'. We present a
model of color modifiers that, compared with previous additive models in RGB
space, learns more complex transformations. In addition, we present a model
that operates in the HSV color space. We show that certain adjectives are
better modeled in that space. To account for all modifiers, we train a hard
ensemble model that selects a color space depending on the modifier-color pair.
Experimental results show significant and consistent improvements compared to
the state-of-the-art baseline model.
| 2019 | Computation and Language |
Learning Explicit and Implicit Structures for Targeted Sentiment
Analysis | Targeted sentiment analysis is the task of jointly predicting target entities
and their associated sentiment information. Existing research efforts mostly
regard this joint task as a sequence labeling problem, building models that can
capture explicit structures in the output space. However, the importance of
capturing implicit global structural information that resides in the input
space is largely unexplored. In this work, we argue that both types of
information (implicit and explicit structural information) are crucial for
building a successful targeted sentiment analysis model. Our experimental
results show that properly capturing both types of information leads to better
performance than competitive existing approaches. We also conduct extensive
experiments to investigate our model's effectiveness and robustness.
| 2019 | Computation and Language |
Simple yet Effective Bridge Reasoning for Open-Domain Multi-Hop Question
Answering | A key challenge of multi-hop question answering (QA) in the open-domain
setting is to accurately retrieve the supporting passages from a large corpus.
Existing work on open-domain QA typically relies on off-the-shelf information
retrieval (IR) techniques to retrieve \textbf{answer passages}, i.e., the
passages containing the groundtruth answers. However, IR-based approaches are
insufficient for multi-hop questions, as the topic of the second or further
hops is not explicitly covered by the question. To resolve this issue, we
introduce a new sub-problem of open-domain multi-hop QA, which aims to
recognize the bridge (\emph{i.e.}, the anchor that links to the answer passage)
from the context of a set of start passages with a reading comprehension model.
This model, the \textbf{bridge reasoner}, is trained with a weakly supervised
signal and produces the candidate answer passages for the \textbf{passage
reader} to extract the answer. On the full-wiki HotpotQA benchmark, we
significantly improve the baseline method by 14 F1 points. Without using any
memory-inefficient contextual embeddings, our result is also competitive with
the state-of-the-art that applies BERT in multiple modules.
| 2019 | Computation and Language |
Multi-step Entity-centric Information Retrieval for Multi-Hop Question
Answering | Multi-hop question answering (QA) requires an information retrieval (IR)
system that can find \emph{multiple} supporting evidence needed to answer the
question, making the retrieval process very challenging. This paper introduces
an IR technique that uses information of entities present in the initially
retrieved evidence to learn to `\emph{hop}' to other relevant evidence. In a
setting with more than \textbf{5 million} Wikipedia paragraphs, our approach
leads to a significant boost in retrieval performance. The retrieved evidence
also increased the performance of an existing QA model (without any training)
on the HotpotQA benchmark by \textbf{10.59} F1.
| 2019 | Computation and Language |
K-BERT: Enabling Language Representation with Knowledge Graph | Pre-trained language representation models, such as BERT, capture a general
language representation from large-scale corpora, but lack domain-specific
knowledge. When reading a domain text, experts make inferences with relevant
knowledge. For machines to achieve this capability, we propose a
knowledge-enabled language representation model (K-BERT) with knowledge graphs
(KGs), in which triples are injected into the sentences as domain knowledge.
However, incorporating too much knowledge may divert the sentence from its
correct meaning, which is called the knowledge noise (KN) issue. To overcome KN,
K-BERT introduces a soft-position embedding and a visible matrix to limit the impact of
knowledge. K-BERT can easily inject domain knowledge into the model by
equipping it with a KG, without pre-training from scratch, because it is capable of
loading model parameters from the pre-trained BERT. Our investigation reveals
promising results in twelve NLP tasks. Especially in domain-specific tasks
(including finance, law, and medicine), K-BERT significantly outperforms BERT,
which demonstrates that K-BERT is an excellent choice for solving
knowledge-driven problems that require expert knowledge.
| 2019 | Computation and Language |
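A simplified Python sketch of the soft-position and visible-matrix idea described in the entry above: injected knowledge tokens continue their anchor's position and are visible only within their own branch, so they cannot distort unrelated sentence tokens. The toy triples format and example sentence are assumptions, not the paper's data structures.

```python
import numpy as np

def inject_and_mask(sentence, triples):
    """Build the flat token list, soft-position ids, and visible matrix in the
    spirit of K-BERT. `sentence` is a list of tokens; `triples` maps a sentence
    index to knowledge tokens injected after that token (a toy format)."""
    tokens, soft_pos, anchor, is_trunk = [], [], [], []
    for i, tok in enumerate(sentence):
        tokens.append(tok)
        soft_pos.append(i)
        anchor.append(i)
        is_trunk.append(True)
        for j, ktok in enumerate(triples.get(i, [])):
            tokens.append(ktok)
            soft_pos.append(i + 1 + j)  # injected tokens continue the anchor's position
            anchor.append(i)
            is_trunk.append(False)

    n = len(tokens)
    visible = np.zeros((n, n), dtype=int)
    for a in range(n):
        for b in range(n):
            if is_trunk[a] and is_trunk[b]:
                visible[a, b] = 1   # sentence tokens all see each other
            elif anchor[a] == anchor[b]:
                visible[a, b] = 1   # a knowledge branch sees only its anchor and itself
    return tokens, soft_pos, visible

toks, pos, vis = inject_and_mask(
    ["Tim", "Cook", "visits", "Beijing"],
    {1: ["CEO", "Apple"], 3: ["capital", "China"]},
)
print(list(zip(toks, pos)))
print(vis)
```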
SocialNLP EmotionX 2019 Challenge Overview: Predicting Emotions in
Spoken Dialogues and Chats | We present an overview of the EmotionX 2019 Challenge, held at the 7th
International Workshop on Natural Language Processing for Social Media
(SocialNLP), in conjunction with IJCAI 2019. The challenge entailed predicting
emotions in spoken and chat-based dialogues using augmented EmotionLines
datasets. EmotionLines contains two distinct datasets: the first includes
excerpts from the episode scripts of a US-based TV sitcom (Friends) and the second
contains online chats (EmotionPush). A total of thirty-six teams registered to
participate in the challenge. Eleven of the teams successfully submitted their
predictions for performance evaluation. The top-scoring team achieved a micro-F1
score of 81.5% for the spoken-based dialogues (Friends) and 79.5% for the
chat-based dialogues (EmotionPush).
| 2019 | Computation and Language |
Course Concept Expansion in MOOCs with External Knowledge and
Interactive Game | As Massive Open Online Courses (MOOCs) become increasingly popular, it is
promising to automatically provide extracurricular knowledge for MOOC users.
Suffering from semantic drift and a lack of knowledge guidance, existing methods
cannot effectively expand course concepts in complex MOOC environments. In
this paper, we first build a novel boundary during searching for new concepts
via external knowledge base and then utilize heterogeneous features to verify
the high-quality results. In addition, to involve human efforts in our model,
we design an interactive optimization mechanism based on a game. Our
experiments on the four datasets from Coursera and XuetangX show that the
proposed method achieves significant improvements (+0.19 MAP) over existing
methods. The source code and datasets have been published.
| 2019 | Computation and Language |
Span-based Joint Entity and Relation Extraction with Transformer
Pre-training | We introduce SpERT, an attention model for span-based joint entity and
relation extraction. Our key contribution is a light-weight reasoning on BERT
embeddings, which features entity recognition and filtering, as well as
relation classification with a localized, marker-free context representation.
The model is trained using strong within-sentence negative samples, which are
efficiently extracted in a single BERT pass. These aspects facilitate a search
over all spans in the sentence.
In ablation studies, we demonstrate the benefits of pre-training, strong
negative sampling and localized context. Our model outperforms prior work by up
to 2.6% F1 score on several datasets for joint entity and relation extraction.
| 2021 | Computation and Language |
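The span enumeration that underlies span-based models like the one above can be written in a few lines; a minimal sketch (the span length limit and token list are illustrative), with entity classification and filtering left out:

```python
def enumerate_spans(tokens, max_span_len=5):
    """All candidate spans up to a maximum length: the search space that a
    span-based joint extraction model classifies and filters."""
    return [
        (start, end, tokens[start:end])
        for start in range(len(tokens))
        for end in range(start + 1, min(start + max_span_len, len(tokens)) + 1)
    ]

tokens = ["John", "works", "for", "Acme", "Corp", "."]
spans = enumerate_spans(tokens, max_span_len=3)
print(len(spans))  # 15 candidate spans for 6 tokens
for start, end, text in spans[:4]:
    print(start, end, text)
```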
Character-Centric Storytelling | Sequential vision-to-language or visual storytelling has recently been one of
the areas of focus in computer vision and language modeling domains. Though
existing models generate narratives that read subjectively well, there could be
cases where these models fail to generate stories that account for and address
all prospective human and animal characters in the image sequences. Considering
this scenario, we propose a model that implicitly learns relationships between
provided characters and thereby generates stories with respective characters in
scope. We use the VIST dataset for this purpose and report numerous statistics
on the dataset. Eventually, we describe the model, explain the experiment and
discuss our current status and future work.
| 2020 | Computation and Language |
Pointer-based Fusion of Bilingual Lexicons into Neural Machine
Translation | Neural machine translation (NMT) systems require large amounts of high
quality in-domain parallel corpora for training. State-of-the-art NMT systems
still face challenges related to out-of-vocabulary words and dealing with
low-resource language pairs. In this paper, we propose and compare several
models for fusion of bilingual lexicons with an end-to-end trained
sequence-to-sequence model for machine translation. The result is a fusion
model with two information sources for the decoder: a neural conditional
language model and a bilingual lexicon. This fusion model learns how to combine
both sources of information in order to produce higher quality translation
output. Our experiments show that our proposed models work well in relatively
low-resource scenarios, and also effectively reduce the parameter size and
training cost for NMT without sacrificing performance.
| 2019 | Computation and Language |
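A minimal PyTorch sketch of one way to fuse the two decoder information sources mentioned in the entry above: a gate computed from the decoder state mixes the NMT softmax distribution with a lexicon-induced distribution. The dimensions and the way the lexicon distribution is obtained are assumptions, not necessarily the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LexiconFusion(nn.Module):
    """Gate-based mixture of an NMT output distribution and a lexicon distribution."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, 1)

    def forward(self, decoder_state, nmt_probs, lexicon_probs):
        # decoder_state: (batch, hidden); *_probs: (batch, vocab), rows sum to 1
        g = torch.sigmoid(self.gate(decoder_state))        # (batch, 1)
        return g * nmt_probs + (1.0 - g) * lexicon_probs   # mixed distribution

if __name__ == "__main__":
    batch, hidden, vocab = 2, 16, 10
    fusion = LexiconFusion(hidden)
    state = torch.randn(batch, hidden)
    nmt = torch.softmax(torch.randn(batch, vocab), dim=-1)
    lex = torch.softmax(torch.randn(batch, vocab), dim=-1)
    print(fusion(state, nmt, lex).sum(dim=-1))  # each row still sums to 1
```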
Learning to Deceive with Attention-Based Explanations | Attention mechanisms are ubiquitous components in neural architectures
applied to natural language processing. In addition to yielding gains in
predictive accuracy, attention weights are often claimed to confer
interpretability, purportedly useful both for providing insights to
practitioners and for explaining why a model makes its decisions to
stakeholders. We call the latter use of attention mechanisms into question by
demonstrating a simple method for training models to produce deceptive
attention masks. Our method diminishes the total weight assigned to designated
impermissible tokens, even when the models can be shown to nevertheless rely on
these features to drive predictions. Across multiple models and tasks, our
approach manipulates attention weights while paying surprisingly little cost in
accuracy. Through a human study, we show that our manipulated attention-based
explanations deceive people into thinking that predictions from a model biased
against gender minorities do not rely on gender. Consequently, our results
cast doubt on attention's reliability as a tool for auditing algorithms in the
context of fairness and accountability.
| 2020 | Computation and Language |
Say Anything: Automatic Semantic Infelicity Detection in L2 English
Indefinite Pronouns | Computational research on error detection in second language speakers has
mainly addressed clear grammatical anomalies typical to learners at the
beginner-to-intermediate level. We focus instead on acquisition of subtle
semantic nuances of English indefinite pronouns by non-native speakers at
varying levels of proficiency. We first lay out theoretical, linguistically
motivated hypotheses, and supporting empirical evidence on the nature of the
challenges posed by indefinite pronouns to English learners. We then suggest
and evaluate an automatic approach for detection of atypical usage patterns,
demonstrating that deep learning architectures are promising for this task
involving nuanced semantic anomalies.
| 2019 | Computation and Language |
Do NLP Models Know Numbers? Probing Numeracy in Embeddings | The ability to understand and work with numbers (numeracy) is critical for
many complex reasoning tasks. Currently, most NLP models treat numbers in text
in the same way as other tokens---they embed them as distributed vectors. Is
this enough to capture numeracy? We begin by investigating the numerical
reasoning capabilities of a state-of-the-art question answering model on the
DROP dataset. We find this model excels on questions that require numerical
reasoning, i.e., it already captures numeracy. To understand how this
capability emerges, we probe token embedding methods (e.g., BERT, GloVe) on
synthetic list maximum, number decoding, and addition tasks. A surprising
degree of numeracy is naturally present in standard embeddings. For example,
GloVe and word2vec accurately encode magnitude for numbers up to 1,000.
Furthermore, character-level embeddings are even more precise---ELMo captures
numeracy the best of all pre-trained methods---but BERT, which uses sub-word
units, is less exact.
| 2019 | Computation and Language |
Semantic Relatedness Based Re-ranker for Text Spotting | Applications such as textual entailment, plagiarism detection or document
clustering rely on the notion of semantic similarity, and are usually
approached with dimension reduction techniques like LDA or with embedding-based
neural approaches. We present a scenario where semantic similarity is not
enough, and we devise a neural approach to learn semantic relatedness. The
scenario is text spotting in the wild, where a text in an image (e.g. street
sign, advertisement or bus destination) must be identified and recognized. Our
goal is to improve the performance of vision systems by leveraging semantic
information. Our rationale is that the text to be spotted is often related to
the image context in which it appears (word pairs such as Delta-airplane, or
quarters-parking are not similar, but are clearly related). We show how
learning a word-to-word or word-to-sentence relatedness score can improve the
performance of text spotting systems up to 2.9 points, outperforming other
measures in a benchmark dataset.
| 2019 | Computation and Language |
Revealing the Importance of Semantic Retrieval for Machine Reading at
Scale | Machine Reading at Scale (MRS) is a challenging task in which a system is
given an input query and is asked to produce a precise output by "reading"
information from a large knowledge base. The task has gained popularity with
its natural combination of information retrieval (IR) and machine comprehension
(MC). Advancements in representation learning have led to separated progress in
both IR and MC; however, very few studies have examined the relationship and
combined design of retrieval and comprehension at different levels of
granularity, for development of MRS systems. In this work, we give general
guidelines on system design for MRS by proposing a simple yet effective
pipeline system with special consideration on hierarchical semantic retrieval
at both paragraph and sentence level, and their potential effects on the
downstream task. The system is evaluated on both fact verification and
open-domain multihop QA, achieving state-of-the-art results on the leaderboard
test sets of both FEVER and HOTPOTQA. To further demonstrate the importance of
semantic retrieval, we present ablation and analysis studies to quantify the
contribution of neural retrieval modules at both paragraph-level and
sentence-level, and illustrate that intermediate semantic retrieval modules are
vital for not only effectively filtering upstream information and thus saving
downstream computation, but also for shaping upstream data distribution and
providing better data for downstream modeling. Code/data made publicly
available at: https://github.com/easonnie/semanticRetrievalMRS
| 2019 | Computation and Language |
Megatron-LM: Training Multi-Billion Parameter Language Models Using
Model Parallelism | Recent work in language modeling demonstrates that training large transformer
models advances the state of the art in Natural Language Processing
applications. However, very large models can be quite difficult to train due to
memory constraints. In this work, we present our techniques for training very
large transformer models and implement a simple, efficient intra-layer model
parallel approach that enables training transformer models with billions of
parameters. Our approach does not require a new compiler or library changes, is
orthogonal and complementary to pipeline model parallelism, and can be fully
implemented with the insertion of a few communication operations in native
PyTorch. We illustrate this approach by converging transformer based models up
to 8.3 billion parameters using 512 GPUs. We sustain 15.1 PetaFLOPs across the
entire application with 76% scaling efficiency when compared to a strong single
GPU baseline that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To
demonstrate that large language models can further advance the state of the art
(SOTA), we train an 8.3 billion parameter transformer language model similar to
GPT-2 and a 3.9 billion parameter model similar to BERT. We show that careful
attention to the placement of layer normalization in BERT-like models is
critical to achieving increased performance as the model size grows. Using the
GPT-2 model we achieve SOTA results on the WikiText103 (10.8 compared to SOTA
perplexity of 15.8) and LAMBADA (66.5% compared to SOTA accuracy of 63.2%)
datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9%
compared to SOTA accuracy of 89.4%).
| 2020 | Computation and Language |
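A single-device Python sketch of the intra-layer (tensor) parallel pattern described in the entry above, applied to a transformer MLP: the first weight matrix is split by columns and the second by rows, so each shard computes independently and only one sum (standing in for an all-reduce) is needed at the end. Shapes are illustrative and no multi-GPU communication is actually performed.

```python
import torch

def column_parallel_mlp(x, W1, b1, W2, b2, n_parts=2):
    """Simulate Megatron-style sharding of an MLP block on one device."""
    W1_shards = torch.chunk(W1, n_parts, dim=1)  # column-parallel first GEMM
    b1_shards = torch.chunk(b1, n_parts, dim=0)
    W2_shards = torch.chunk(W2, n_parts, dim=0)  # row-parallel second GEMM
    partials = []
    for W1_i, b1_i, W2_i in zip(W1_shards, b1_shards, W2_shards):
        h_i = torch.nn.functional.gelu(x @ W1_i + b1_i)  # local to one "GPU"
        partials.append(h_i @ W2_i)                      # partial output
    return sum(partials) + b2                            # would be an all-reduce

if __name__ == "__main__":
    d_model, d_ff = 8, 32
    x = torch.randn(4, d_model)
    W1, b1 = torch.randn(d_model, d_ff), torch.randn(d_ff)
    W2, b2 = torch.randn(d_ff, d_model), torch.randn(d_model)
    serial = torch.nn.functional.gelu(x @ W1 + b1) @ W2 + b2
    parallel = column_parallel_mlp(x, W1, b1, W2, b2)
    print(torch.allclose(serial, parallel, atol=1e-5))   # True
```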
Extractive Summarization of Long Documents by Combining Global and Local
Context | In this paper, we propose a novel neural single document extractive
summarization model for long documents, incorporating both the global context
of the whole document and the local context within the current topic. We
evaluate the model on two datasets of scientific papers, Pubmed and arXiv,
where it outperforms previous work, both extractive and abstractive models, on
ROUGE-1, ROUGE-2 and METEOR scores. We also show that, consistently with our
goal, the benefits of our method become stronger as we apply it to longer
documents. Rather surprisingly, an ablation study indicates that the benefits
of our model seem to come exclusively from modeling the local context, even for
the longest documents.
| 2019 | Computation and Language |
DOVER: A Method for Combining Diarization Outputs | Speech recognition and other natural language tasks have long benefited from
voting-based algorithms as a method to aggregate outputs from several systems
to achieve a higher accuracy than any of the individual systems. Diarization,
the task of segmenting an audio stream into speaker-homogeneous and co-indexed
regions, has so far not seen the benefit of this strategy because the structure
of the task does not lend itself to a simple voting approach. This paper
presents DOVER (diarization output voting error reduction), an algorithm for
weighted voting among diarization hypotheses, in the spirit of the ROVER
algorithm for combining speech recognition hypotheses. We evaluate the
algorithm for diarization of meeting recordings with multiple microphones, and
find that it consistently reduces diarization error rate over the average of
results from individual channels, and often improves on the single best channel
chosen by an oracle.
| 2019 | Computation and Language |
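A heavily simplified Python sketch of the weighted-voting step behind the entry above: it assumes the hypotheses' speaker labels are already consistent and votes per frame, whereas DOVER itself first aligns the label spaces of the hypotheses and operates on segment boundaries. That alignment step is omitted here.

```python
from collections import defaultdict

def frame_level_vote(hypotheses, weights=None):
    """Weighted per-frame plurality vote over speaker labels (None = silence).
    Assumes labels are already consistent across hypotheses (a simplification)."""
    if weights is None:
        weights = [1.0] * len(hypotheses)
    n_frames = len(hypotheses[0])
    output = []
    for t in range(n_frames):
        tally = defaultdict(float)
        for hyp, w in zip(hypotheses, weights):
            tally[hyp[t]] += w
        output.append(max(tally, key=tally.get))  # weighted plurality per frame
    return output

hyps = [
    ["A", "A", "B", "B", None],
    ["A", "B", "B", "B", None],
    ["A", "A", "B", None, None],
]
print(frame_level_vote(hyps, weights=[1.0, 0.9, 0.8]))
```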
Simultaneous Speech Recognition and Speaker Diarization for Monaural
Dialogue Recordings with Target-Speaker Acoustic Models | This paper investigates the use of target-speaker automatic speech
recognition (TS-ASR) for simultaneous speech recognition and speaker
diarization of single-channel dialogue recordings. TS-ASR is a technique to
automatically extract and recognize only the speech of a target speaker given a
short sample utterance of that speaker. One obvious drawback of TS-ASR is that
it cannot be used when the speakers in the recordings are unknown because it
requires a sample of the target speakers in advance of decoding. To remove this
limitation, we propose an iterative method, in which (i) the estimation of
speaker embeddings and (ii) TS-ASR based on the estimated speaker embeddings
are alternately executed. We evaluated the proposed method by using very
challenging dialogue recordings in which the speaker overlap ratio was over
20%. We confirmed that the proposed method significantly reduced both the word
error rate (WER) and diarization error rate (DER). Our proposed method combined
with i-vector speaker embeddings ultimately achieved a WER that differed by
only 2.1% from that of TS-ASR given oracle speaker embeddings. Furthermore,
our method can solve speaker diarization simultaneously as a by-product and
achieved better DER than that of the conventional clustering-based speaker
diarization method based on i-vectors.
| 2019 | Computation and Language |
SUPP.AI: Finding Evidence for Supplement-Drug Interactions | Dietary supplements are used by a large portion of the population, but
information on their pharmacologic interactions is incomplete. To address this
challenge, we present SUPP.AI, an application for browsing evidence of
supplement-drug interactions (SDIs) extracted from the biomedical literature.
We train a model to automatically extract supplement information and identify
such interactions from the scientific literature. To address the lack of
labeled data for SDI identification, we use labels of the closely related task
of identifying drug-drug interactions (DDIs) for supervision. We fine-tune the
contextualized word representations of the RoBERTa language model using labeled
DDI data, and apply the fine-tuned model to identify supplement interactions.
We extract 195k evidence sentences from 22M articles (P=0.82, R=0.58, F1=0.68)
for 60k interactions. We create the SUPP.AI application for users to search
evidence sentences extracted by our model. SUPP.AI is an attempt to close the
information gap on dietary supplements by making up-to-date evidence on SDIs
more discoverable for researchers, clinicians, and consumers.
| 2020 | Computation and Language |
Recursive Graphical Neural Networks for Text Classification | The complicated syntactic structure of natural language is hard to model
explicitly with sequence-based models. A graph is a natural structure for describing the
complicated relations between tokens. Recent advances in Graph Neural
Networks (GNNs) provide a powerful tool for modeling graph-structured data, but
simple graph models such as Graph Convolutional Networks (GCNs) suffer from the
over-smoothing problem: when stacking multiple layers, all nodes
converge to the same value. In this paper, we propose a novel Recursive
Graphical Neural Network model (ReGNN) to represent text organized in the form
of a graph. In our proposed model, an LSTM is used to dynamically decide which part
of the aggregated neighbor information should be transmitted to upper layers
thus alleviating the over-smoothing problem. Furthermore, to encourage the
exchange between the local and global information, a global graph-level node is
designed. We conduct experiments on both single and multiple label text
classification tasks. Experimental results show that our ReGNN model surpasses
the strong baselines significantly on most of the datasets and greatly
alleviates the over-smoothing problem.
| 2019 | Computation and Language |
Modeling Conversation Structure and Temporal Dynamics for Jointly
Predicting Rumor Stance and Veracity | Automatically verifying rumorous information has become an important and
challenging task in natural language processing and social media analytics.
Previous studies reveal that people's stances towards rumorous messages can
provide indicative clues for identifying the veracity of rumors, and thus
determining the stances of public reactions is a crucial preceding step for
rumor veracity prediction. In this paper, we propose a hierarchical multi-task
learning framework for jointly predicting rumor stance and veracity on Twitter,
which consists of two components. The bottom component of our framework
classifies the stances of tweets in a conversation discussing a rumor via
modeling the structural property based on a novel graph convolutional network.
The top component predicts the rumor veracity by exploiting the temporal
dynamics of stance evolution. Experimental results on two benchmark datasets
show that our method outperforms previous methods in both rumor stance
classification and veracity prediction.
| 2019 | Computation and Language |
Improving Natural Language Inference with a Pretrained Parser | We introduce a novel approach to incorporate syntax into natural language
inference (NLI) models. Our method uses contextual token-level vector
representations from a pretrained dependency parser. Like other contextual
embedders, our method is broadly applicable to any neural model. We experiment
with four strong NLI models (decomposable attention model, ESIM, BERT, and
MT-DNN), and show consistent benefit to accuracy across three NLI benchmarks.
| 2019 | Computation and Language |
Pre-trained Language Model for Biomedical Question Answering | The recent success of question answering systems is largely attributed to
pre-trained language models. However, as language models are mostly pre-trained
on general domain corpora such as Wikipedia, they often have difficulty in
understanding biomedical questions. In this paper, we investigate the
performance of BioBERT, a pre-trained biomedical language model, in answering
biomedical questions including factoid, list, and yes/no type questions.
BioBERT uses almost the same structure across various question types and
achieved the best performance in the 7th BioASQ Challenge (Task 7b, Phase B).
BioBERT pre-trained on SQuAD or SQuAD 2.0 easily outperformed previous
state-of-the-art models. BioBERT obtains the best performance when it uses the
appropriate pre-/post-processing strategies for questions, passages, and
answers.
| 2019 | Computation and Language |
Text Length Adaptation in Sentiment Classification | Can a text classifier generalize well for datasets where the text length is
different? For example, when short reviews are sentiment-labeled, can these
transfer to predict the sentiment of long reviews (i.e., short to long
transfer), or vice versa? While unsupervised transfer learning has been
well-studied for cross domain/lingual transfer tasks, Cross Length Transfer
(CLT) has not yet been explored. One reason is the assumption that length
difference is trivially transferable in classification. We show that it is not,
because short/long texts differ in context richness and word intensity. We
devise new benchmark datasets from diverse domains and languages, and show that
existing models from similar tasks cannot deal with the unique challenge of
transferring across text lengths. We introduce a strong baseline model called
BaggedCNN that treats long texts as bags containing short texts. We propose a
state-of-the-art CLT model called Length Transfer Networks (LeTraNets) that
introduces a two-way encoding scheme for short and long texts using multiple
training mechanisms. We test our models and find that existing models perform
worse than the BaggedCNN baseline, while LeTraNets outperforms all models.
| 2019 | Computation and Language |
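A minimal PyTorch sketch of the "long text as a bag of short texts" idea behind the BaggedCNN baseline above: split the document into chunks, encode each chunk with a shared encoder, pool over the bag of chunks, then classify. The tiny mean-pooling encoder stands in for the CNN, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class BagOfChunksClassifier(nn.Module):
    """Chunk a long document, encode each chunk, pool over chunks, classify."""
    def __init__(self, vocab_size=10_000, embed_dim=64, n_classes=2, chunk_len=20):
        super().__init__()
        self.chunk_len = chunk_len
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, token_ids):                          # (batch, seq_len)
        b, n = token_ids.shape
        pad = (-n) % self.chunk_len
        token_ids = nn.functional.pad(token_ids, (0, pad))
        chunks = token_ids.view(b, -1, self.chunk_len)     # (batch, n_chunks, chunk_len)
        chunk_reprs = self.embed(chunks).mean(dim=2)       # encode each short chunk
        doc_repr = chunk_reprs.mean(dim=1)                 # pool over the bag
        return self.classifier(doc_repr)

if __name__ == "__main__":
    model = BagOfChunksClassifier()
    long_docs = torch.randint(0, 10_000, (3, 137))         # 3 long documents
    print(model(long_docs).shape)                          # torch.Size([3, 2])
```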
A Lexical, Syntactic, and Semantic Perspective for Understanding Style
in Text | With a growing interest in modeling inherent subjectivity in natural
language, we present a linguistically-motivated process to understand and
analyze the writing style of individuals from three perspectives: lexical,
syntactic, and semantic. We discuss the stylistically expressive elements
within each of these levels and use existing methods to quantify the linguistic
intuitions related to some of these elements. We show that such a multi-level
analysis is useful for developing a well-knit understanding of style - which is
independent of the natural language task at hand, and also demonstrate its
value in solving three downstream tasks: authors' style analysis, authorship
attribution, and emotion prediction. We conduct experiments on a variety of
datasets, comprising texts from social networking sites, user reviews, legal
documents, literary books, and newswire. The results on the aforementioned
tasks and datasets illustrate that such a multi-level understanding of style,
which has been largely ignored in recent works, models style-related
subjectivity in text and can be leveraged to improve performance on multiple
downstream tasks both qualitatively and quantitatively.
| 2019 | Computation and Language |
Subword ELMo | Embedding from Language Models (ELMo) has been shown to be effective for improving
many natural language processing (NLP) tasks, and ELMo uses character
information to compose word representations to train language models. However,
the character is an insufficient and unnatural linguistic unit for word
representation. Thus we introduce Embedding from Subword-aware Language Models
(ESuLMo), which learns word representations from subwords using unsupervised
segmentation over words. We show that ESuLMo can enhance four benchmark NLP
tasks more effectively than ELMo, including syntactic dependency parsing,
semantic role labeling, implicit discourse relation recognition and textual
entailment, which brings a meaningful improvement over ELMo.
| 2019 | Computation and Language |
Using BERT for Word Sense Disambiguation | Word Sense Disambiguation (WSD), which aims to identify the correct sense of
a given polyseme, is a long-standing problem in NLP. In this paper, we propose
to use BERT to extract better polyseme representations for WSD and explore
several ways of combining BERT and the classifier. We also utilize sense
definitions to train a unified classifier for all words, which enables the
model to disambiguate unseen polysemes. Experiments show that our model
achieves state-of-the-art results on the standard English all-words WSD
evaluation.
| 2019 | Computation and Language |
Enriching BERT with Knowledge Graph Embeddings for Document
Classification | In this paper, we focus on the classification of books using short
descriptive texts (cover blurbs) and additional metadata. Building upon BERT, a
deep neural language model, we demonstrate how to combine text representations
with metadata and knowledge graph embeddings, which encode author information.
Compared to the standard BERT approach we achieve considerably better results
for the classification task. For a more coarse-grained classification using
eight labels we achieve an F1-score of 87.20, while a detailed classification
using 343 labels yields an F1-score of 64.70. We make the source code and
trained models of our experiments publicly available.
| 2019 | Computation and Language |
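A minimal PyTorch sketch of the fusion step described in the entry above: concatenating a text representation with author knowledge-graph embeddings and metadata features before the classifier. The dimensions and feature choices are assumptions, and the text encoder itself is abstracted away.

```python
import torch
import torch.nn as nn

class TextMetadataClassifier(nn.Module):
    """Concatenate text, author-KG, and metadata representations, then classify."""
    def __init__(self, text_dim=768, kg_dim=200, meta_dim=10, n_labels=8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + kg_dim + meta_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, n_labels),
        )

    def forward(self, text_repr, author_kg_emb, metadata):
        fused = torch.cat([text_repr, author_kg_emb, metadata], dim=-1)
        return self.head(fused)

if __name__ == "__main__":
    model = TextMetadataClassifier()
    text = torch.randn(4, 768)   # e.g. BERT [CLS] vectors for blurbs
    kg = torch.randn(4, 200)     # author embeddings from a knowledge graph
    meta = torch.randn(4, 10)    # numeric metadata features
    print(model(text, kg, meta).shape)   # torch.Size([4, 8])
```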
Simple, Scalable Adaptation for Neural Machine Translation | Fine-tuning pre-trained Neural Machine Translation (NMT) models is the
dominant approach for adapting to new languages and domains. However,
fine-tuning requires adapting and maintaining a separate model for each target
task. We propose a simple yet efficient approach for adaptation in NMT. Our
proposed approach consists of injecting tiny task specific adapter layers into
a pre-trained model. These lightweight adapters, with just a small fraction of
the original model size, adapt the model to multiple individual tasks
simultaneously. We evaluate our approach on two tasks: (i) Domain Adaptation
and (ii) Massively Multilingual NMT. Experiments on domain adaptation
demonstrate that our proposed approach is on par with full fine-tuning on
various domains, dataset sizes and model capacities. On a massively
multilingual dataset of 103 languages, our adaptation approach bridges the gap
between individual bilingual models and one massively multilingual model for
most language pairs, paving the way towards universal machine translation.
| 2019 | Computation and Language |
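A minimal PyTorch sketch of the adapter idea in the entry above: a small residual bottleneck module inserted into a frozen pre-trained model, so only a few parameters are trained per task. Layer sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter: normalize, project down, nonlinearity, project up."""
    def __init__(self, d_model=512, bottleneck=64):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, hidden):
        return hidden + self.up(torch.relu(self.down(self.norm(hidden))))

if __name__ == "__main__":
    adapter = Adapter()
    hidden_states = torch.randn(2, 10, 512)   # (batch, seq, d_model)
    print(adapter(hidden_states).shape)       # shape unchanged
    n_params = sum(p.numel() for p in adapter.parameters())
    print(f"adapter params: {n_params}")      # a small fraction of the full model
```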
Word Recognition, Competition, and Activation in a Model of Visually
Grounded Speech | In this paper, we study how word-like units are represented and activated in
a recurrent neural model of visually grounded speech. The model used in our
experiments is trained to project an image and its spoken description in a
common representation space. We show that a recurrent model trained on spoken
sentences implicitly segments its input into word-like units and reliably maps
them to their correct visual referents. We introduce a methodology originating
from linguistics to analyse the representation learned by neural networks --
the gating paradigm -- and show that the correct representation of a word is
only activated if the network has access to the first phoneme of the target word,
suggesting that the network does not rely on a global acoustic pattern.
Furthermore, we find that not all speech frames (MFCC vectors in our case)
play an equal role in the final encoded representation of a given word, but
that some frames have a crucial effect on it. Finally, we suggest that word
representation could be activated through a process of lexical competition.
| 2019 | Computation and Language |
Hierarchical Meta-Embeddings for Code-Switching Named Entity Recognition | In countries that speak multiple main languages, mixing up different
languages within a conversation is commonly called code-switching. Previous
works addressing this challenge mainly focused on word-level aspects such as
word embeddings. However, in many cases, languages share common subwords,
especially for closely related languages, but also for languages that are
seemingly unrelated. Therefore, we propose Hierarchical Meta-Embeddings (HME)
that learn to combine multiple monolingual word-level and subword-level
embeddings to create language-agnostic lexical representations. On the task of
Named Entity Recognition for English-Spanish code-switching data, our model
achieves the state-of-the-art performance in the multilingual settings. We also
show that, in cross-lingual settings, our model not only leverages closely
related languages, but also learns from languages with different roots.
Finally, we show that combining different subunits is crucial for capturing
code-switching entities.
| 2019 | Computation and Language |
Code-Switched Language Models Using Neural Based Synthetic Data from
Parallel Sentences | Training code-switched language models is difficult due to lack of data and
complexity in the grammatical structure. Linguistic constraint theories have
been used for decades to generate artificial code-switching sentences to cope
with this issue. However, this require external word alignments or constituency
parsers that create erroneous results on distant languages. We propose a
sequence-to-sequence model using a copy mechanism to generate code-switching
data by leveraging parallel monolingual translations from a limited source of
code-switching data. The model learns how to combine words from parallel
sentences and identifies when to switch one language to the other. Moreover, it
captures code-switching constraints by attending and aligning the words in
inputs, without requiring any external knowledge. Based on experimental
results, the language model trained with the generated sentences achieves
state-of-the-art performance and improves end-to-end automatic speech
recognition.
| 2019 | Computation and Language |
Fine-Tuning Language Models from Human Preferences | Reward learning enables the application of reinforcement learning (RL) to
tasks where reward is defined by human judgment, building a model of reward by
asking humans questions. Most work on reward learning has used simulated
environments, but complex information about values is often expressed in
natural language, and we believe reward learning for language is a key to
making RL practical and safe for real-world tasks. In this paper, we build on
advances in generative pretraining of language models to apply reward learning
to four natural language tasks: continuing text with positive sentiment or
physically descriptive language, and summarization tasks on the TL;DR and
CNN/Daily Mail datasets. For stylistic continuation we achieve good results
with only 5,000 comparisons evaluated by humans. For summarization, models
trained with 60,000 comparisons copy whole sentences from the input but skip
irrelevant preamble; this leads to reasonable ROUGE scores and very good
performance according to our human labelers, but may be exploiting the fact
that labelers rely on simple heuristics.
| 2020 | Computation and Language |
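A minimal PyTorch sketch of reward learning from human comparisons as described in the entry above, reduced to a pairwise logistic (Bradley-Terry) loss over a scalar reward head. The paper's setup (comparisons over samples from a language model) is simplified here, and the fixed-size representations are a stand-in.

```python
import torch
import torch.nn as nn

class RewardHead(nn.Module):
    """Scalar reward on top of some text representation (abstracted to a vector)."""
    def __init__(self, repr_dim=768):
        super().__init__()
        self.score = nn.Linear(repr_dim, 1)

    def forward(self, x):
        return self.score(x).squeeze(-1)

def preference_loss(reward_model, preferred_repr, rejected_repr):
    r_pref = reward_model(preferred_repr)
    r_rej = reward_model(rejected_repr)
    # maximize P(preferred > rejected) = sigmoid(r_pref - r_rej)
    return -torch.nn.functional.logsigmoid(r_pref - r_rej).mean()

if __name__ == "__main__":
    rm = RewardHead()
    preferred = torch.randn(8, 768)   # representations of human-preferred continuations
    rejected = torch.randn(8, 768)
    loss = preference_loss(rm, preferred, rejected)
    loss.backward()
    print(float(loss))
```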
Do We Need Neural Models to Explain Human Judgments of Acceptability? | Native speakers can judge whether a sentence is an acceptable instance of
their language. Acceptability provides a means of evaluating whether
computational language models are processing language in a human-like manner.
We test the ability of computational language models, simple language features,
and word embeddings to predict native English speakers' judgments of
acceptability on English-language essays written by non-native speakers. We
find that much of the sentence acceptability variance can be captured by a
combination of features including misspellings, word order, and word similarity
(Pearson's r = 0.494). While predictive neural models fit acceptability
judgments well (r = 0.527), we find that a 4-gram model with statistical
smoothing is just as good (r = 0.528). Thanks to incorporating a count of
misspellings, our 4-gram model surpasses both the previous unsupervised
state of the art (Lau et al., 2015; r = 0.472) and the average non-expert
native speaker (r = 0.46). Our results demonstrate that acceptability is well
captured by n-gram statistics and simple language features.
| 2019 | Computation and Language |
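A self-contained Python sketch of scoring acceptability with a smoothed 4-gram model, the kind of simple baseline the entry above reports; add-k smoothing and length normalization are one simple choice, not necessarily the smoothing used in the paper, and the toy corpus is illustrative.

```python
import math
from collections import Counter

def train_ngram(tokenized_sentences, n=4):
    """Count n-grams and (n-1)-gram contexts with sentence-boundary padding."""
    ngrams, contexts, vocab = Counter(), Counter(), set()
    for sent in tokenized_sentences:
        toks = ["<s>"] * (n - 1) + sent + ["</s>"]
        vocab.update(toks)
        for i in range(len(toks) - n + 1):
            ngrams[tuple(toks[i:i + n])] += 1
            contexts[tuple(toks[i:i + n - 1])] += 1
    return ngrams, contexts, len(vocab)

def log_prob(sent, ngrams, contexts, vocab_size, n=4, k=0.1):
    """Add-k smoothed, length-normalized log-probability of a sentence."""
    toks = ["<s>"] * (n - 1) + sent + ["</s>"]
    lp = 0.0
    for i in range(n - 1, len(toks)):
        gram = tuple(toks[i - n + 1:i + 1])
        num = ngrams[gram] + k
        den = contexts[gram[:-1]] + k * vocab_size
        lp += math.log(num / den)
    return lp / (len(sent) + 1)   # normalize so scores are comparable across lengths

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
ngrams, contexts, V = train_ngram(corpus)
print(log_prob(["the", "cat", "sat", "on", "the", "rug"], ngrams, contexts, V))
print(log_prob(["rug", "the", "on", "sat", "cat", "the"], ngrams, contexts, V))
```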
Cross-Lingual Contextual Word Embeddings Mapping With Multi-Sense Words
In Mind | Recent work in cross-lingual contextual word embedding learning cannot handle
multi-sense words well. In this work, we explore the characteristics of
contextual word embeddings and show the link between contextual word embeddings
and word senses. We propose two improving solutions by considering contextual
multi-sense word embeddings as noise (removal) and by generating cluster level
average anchor embeddings for contextual multi-sense word embeddings
(replacement). Experiments show that our solutions can improve the supervised
contextual word embeddings alignment for multi-sense words in a microscopic
perspective without hurting the macroscopic performance on the bilingual
lexicon induction task. For unsupervised alignment, our methods significantly
improve the performance on the bilingual lexicon induction task by more than
10 points.
| 2019 | Computation and Language |
Sentiment-Aware Recommendation System for Healthcare using Social Media | Over the last decade, health communities (known as forums) have evolved into
platforms where more and more users share their medical experiences, thereby
seeking guidance and interacting with people of the community. The shared
content, though informal and unstructured in nature, contains valuable medical
and/or health-related information and can be leveraged to produce structured
suggestions to the general public. In this paper, we first propose a stacked
deep learning model for sentiment analysis from the medical forum data. The
stacked model comprises a Convolutional Neural Network (CNN) followed by a
Long Short Term Memory (LSTM) and then by another CNN. For a blog classified
with positive sentiment, we retrieve the top-n similar posts. Thereafter, we
develop a probabilistic model for suggesting the suitable treatments or
procedures for a particular disease or health condition. We believe that
integration of medical sentiment and suggestion would be beneficial to the
users in finding relevant content regarding medications and medical
conditions, without having to manually sift through a large amount of
unstructured content.
| 2019 | Computation and Language |
Alleviating Sequence Information Loss with Data Overlapping and Prime
Batch Sizes | In sequence modeling tasks the token order matters, but this information can
be partially lost due to the discretization of the sequence into data points.
In this paper, we study the imbalance between the way certain token pairs are
included in data points and others are not. We denote this a token order
imbalance (TOI) and we link the partial sequence information loss to a
diminished performance of the system as a whole, both in text and speech
processing tasks. We then provide a mechanism to leverage the full token order
information -Alleviated TOI- by iteratively overlapping the token composition
of data points. For recurrent networks, we use prime numbers for the batch size
to avoid redundancies when building batches from overlapped data points. The
proposed method achieved state-of-the-art performance in both text- and
speech-related tasks.
| 2019 | Computation and Language |
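A short Python sketch of the two ingredients described in the entry above: building overlapping data points so every token pair appears unbroken in some data point, and choosing a prime batch size for recurrent training. The nearest_prime_batch_size helper is an illustrative heuristic, not the paper's exact procedure.

```python
def overlapping_datapoints(tokens, seq_len, shift):
    """Data points overlapping by seq_len - shift tokens, so token pairs split by
    one discretization boundary are seen intact in another data point."""
    return [tokens[i:i + seq_len]
            for i in range(0, len(tokens) - seq_len + 1, shift)]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def nearest_prime_batch_size(target):
    """Pick a prime batch size near the target to avoid repeatedly pairing
    the same overlapped data points within a batch (illustrative heuristic)."""
    for delta in range(target):
        for cand in (target - delta, target + delta):
            if is_prime(cand):
                return cand

tokens = list(range(100))                  # stand-in for a token id stream
points = overlapping_datapoints(tokens, seq_len=20, shift=5)
print(len(points), points[0][:5], points[1][:5])
print(nearest_prime_batch_size(32))        # -> 31
```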
CASA-NLU: Context-Aware Self-Attentive Natural Language Understanding
for Task-Oriented Chatbots | Natural Language Understanding (NLU) is a core component of dialog systems.
It typically involves two tasks - intent classification (IC) and slot labeling
(SL), which are then followed by a dialogue management (DM) component. Such NLU
systems cater to utterances in isolation, thus pushing the problem of context
management to DM. However, contextual information is critical to the correct
prediction of intents and slots in a conversation. Prior work on contextual NLU
has been limited in terms of the types of contextual signals used and the
understanding of their impact on the model. In this work, we propose a
context-aware self-attentive NLU (CASA-NLU) model that uses multiple signals,
such as previous intents, slots, dialog acts and utterances over a variable
context window, in addition to the current user utterance. CASA-NLU outperforms
a recurrent contextual NLU baseline on two conversational datasets, yielding a
gain of up to 7% on the IC task for one of the datasets. Moreover, a
non-contextual variant of CASA-NLU achieves state-of-the-art performance for IC
task on standard public datasets - Snips and ATIS.
| 2019 | Computation and Language |
Espresso: A Fast End-to-end Neural Speech Recognition Toolkit | We present Espresso, an open-source, modular, extensible end-to-end neural
automatic speech recognition (ASR) toolkit based on the deep learning library
PyTorch and the popular neural machine translation toolkit fairseq. Espresso
supports distributed training across GPUs and computing nodes, and features
various decoding approaches commonly employed in ASR, including look-ahead
word-based language model fusion, for which a fast, parallelized decoder is
implemented. Espresso achieves state-of-the-art ASR performance on the WSJ,
LibriSpeech, and Switchboard data sets among other end-to-end systems without
data augmentation, and is 4--11x faster for decoding than similar systems (e.g.
ESPnet).
| 2,019 | Computation and Language |
Low-Resource Parsing with Crosslingual Contextualized Representations | Despite advances in dependency parsing, languages with small treebanks still
present challenges. We assess recent approaches to multilingual contextual word
representations (CWRs), and compare them for crosslingual transfer from a
language with a large treebank to a language with a small or nonexistent
treebank, by sharing parameters between languages in the parser itself. We
experiment with a diverse selection of languages in both simulated and truly
low-resource scenarios, and show that multilingual CWRs greatly facilitate
low-resource dependency parsing even without crosslingual supervision such as
dictionaries or parallel text. Furthermore, we examine the non-contextual part
of the learned language models (which we call a "decontextual probe") to
demonstrate that polyglot language models better encode crosslingual lexical
correspondence compared to aligned monolingual language models. This analysis
provides further evidence that polyglot training is an effective approach to
crosslingual transfer.
| 2,019 | Computation and Language |
Summary Level Training of Sentence Rewriting for Abstractive
Summarization | As an attempt to combine extractive and abstractive summarization, Sentence
Rewriting models adopt the strategy of extracting salient sentences from a
document first and then paraphrasing the selected ones to generate a summary.
However, existing models in this framework mostly rely on sentence-level
rewards or suboptimal labels, causing a mismatch between the training objective
and the evaluation metric. In this paper, we present a novel training signal
that directly maximizes summary-level ROUGE scores through reinforcement
learning. In addition, we incorporate BERT into our model, making good use of
its natural language understanding capability. In extensive experiments, we
show that a combination of our proposed model and training procedure obtains
new state-of-the-art performance on both the CNN/Daily Mail and New York Times
datasets. We also demonstrate that it generalizes better on the DUC-2002 test set.
| 2,019 | Computation and Language |
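
A sketch of policy-gradient training with a summary-level reward in the spirit
of the entry above; rouge_l is a hypothetical scorer, and the self-critical
(greedy) baseline is an assumption, not necessarily the authors' exact
objective.

    import torch

    def reinforce_loss(log_probs, sampled_summary, greedy_summary, reference, rouge_l):
        """REINFORCE-style loss driven by a summary-level ROUGE reward.

        log_probs: log-probabilities of the sampled extraction/rewriting actions.
        rouge_l:   hypothetical callable returning a summary-level ROUGE score.
        """
        reward = rouge_l(sampled_summary, reference)    # summary-level signal
        baseline = rouge_l(greedy_summary, reference)   # variance-reduction baseline
        return -((reward - baseline) * log_probs.sum())

    # toy usage with a trivial stand-in scorer
    toy_rouge = lambda hyp, ref: len(set(hyp.split()) & set(ref.split())) / len(ref.split())
    loss = reinforce_loss(torch.log(torch.tensor([0.6, 0.4])),
                          "the cat sat", "a cat sat",
                          "the cat sat on the mat", toy_rouge)
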
Characterizing Collective Attention via Descriptor Context: A Case Study
of Public Discussions of Crisis Events | Social media datasets make it possible to rapidly quantify collective
attention to emerging topics and breaking news, such as crisis events.
Collective attention is typically measured by aggregate counts, such as the
number of posts that mention a name or hashtag. But according to rationalist
models of natural language communication, the collective salience of each
entity will be expressed not only in how often it is mentioned, but in the form
that those mentions take. This is because natural language communication is
premised on (and customized to) the expectations that speakers and writers have
about how their messages will be interpreted by the intended audience. We test
this idea by conducting a large-scale analysis of public online discussions of
breaking news events on Facebook and Twitter, focusing on five recent crisis
events. We examine how people refer to locations, focusing specifically on
contextual descriptors, such as "San Juan" versus "San Juan, Puerto Rico."
Rationalist accounts of natural language communication predict that such
descriptors will be unnecessary (and therefore omitted) when the named entity
is expected to have high prior salience to the reader. We find that the use of
contextual descriptors is indeed associated with proxies for social and
informational expectations, including macro-level factors like the location's
global salience and micro-level factors like audience engagement. We also find
a consistent decrease in descriptor context use over the lifespan of each
crisis event. These findings provide evidence about how social media users
communicate with their audiences, and point towards more fine-grained models of
collective attention that may help researchers and crisis response
organizations to better understand public perception of unfolding crisis
events.
| 2,020 | Computation and Language |
Made for Each Other: Broad-coverage Semantic Structures Meet Preposition
Supersenses | Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013)
is a typologically-informed, broad-coverage semantic annotation scheme that
describes coarse-grained predicate-argument structure but currently lacks
semantic roles. We argue that lexicon-free annotation of the semantic roles
marked by prepositions, as formulated by Schneider et al. (2018b), is
complementary and suitable for integration within UCCA. We show empirically for
English that the schemes, though annotated independently, are compatible and
can be combined in a single semantic graph. A comparison of several approaches
to parsing the integrated representation lays the groundwork for future
research on this task.
| 2,019 | Computation and Language |
Modeling Event Background for If-Then Commonsense Reasoning Using
Context-aware Variational Autoencoder | Understanding events and event-centered commonsense reasoning is crucial for
natural language processing (NLP). Given an observed event, it is trivial for
humans to infer its intents and effects, yet this type of If-Then reasoning
remains challenging for NLP systems. To facilitate this, an If-Then
commonsense reasoning dataset, Atomic, has been proposed, together with an
RNN-based Seq2Seq model to conduct such reasoning. However, two fundamental
problems still need to be addressed: first, an event may have multiple
intents, while the generations of RNN-based Seq2Seq models are always
semantically close; second, external knowledge of the event background may be
necessary for understanding events and conducting If-Then reasoning. To
address these issues, we propose a novel context-aware variational autoencoder
that effectively learns event background information to guide If-Then reasoning.
Experimental results show that our approach improves the accuracy and diversity
of inferences compared with state-of-the-art baseline methods.
| 2,019 | Computation and Language |
How to Write Summaries with Patterns? Learning towards Abstractive
Summarization through Prototype Editing | Under special circumstances, summaries should conform to a particular style
with patterns, such as court judgments and abstracts in academic papers. To
this end, the prototype document-summary pairs can be utilized to generate
better summaries. There are two main challenges in this task: (1) the model
needs to incorporate learned patterns from the prototype, but (2) should avoid
copying contents other than the patternized words---such as irrelevant
facts---into the generated summaries. To tackle these challenges, we design a
model named Prototype Editing based Summary Generator (PESG). PESG first learns
summary patterns and prototype facts by analyzing the correlation between a
prototype document and its summary. Prototype facts are then utilized to help
extract facts from the input document. Next, an editing generator generates a
new summary based on the summary pattern or extracted facts. Finally, to address
the second challenge, a fact checker is used to estimate mutual information
between the input document and generated summary, providing an additional
signal for the generator. Extensive experiments conducted on a large-scale
real-world text summarization dataset show that PESG achieves the
state-of-the-art performance in terms of both automatic metrics and human
evaluations.
| 2,019 | Computation and Language |
How Additional Knowledge can Improve Natural Language Commonsense
Question Answering? | Recently several datasets have been proposed to encourage research in
Question Answering domains where commonsense knowledge is expected to play an
important role. Recent language models such as RoBERTa, BERT and GPT that have
been pre-trained on Wikipedia articles and books have shown reasonable
performance with little fine-tuning on several such Multiple Choice
Question-Answering (MCQ) datasets. Our goal in this work is to develop methods
to incorporate additional (commonsense) knowledge into language model-based
approaches for better question-answering in such domains. In this work, we
first categorize external knowledge sources, and show performance does improve
on using such sources. We then explore three different strategies for knowledge
incorporation and four different models for question-answering using external
commonsense knowledge. We analyze our predictions to explore the scope of
further improvements.
| 2,020 | Computation and Language |
Procedural Reasoning Networks for Understanding Multimodal Procedures | This paper addresses the problem of comprehending procedural commonsense
knowledge. This is a challenging task as it requires identifying key entities,
keeping track of their state changes, and understanding temporal and causal
relations. Contrary to most of the previous work, in this study, we do not rely
on strong inductive bias and explore the question of how multimodality can be
exploited to provide a complementary semantic signal. Towards this end, we
introduce a new entity-aware neural comprehension model augmented with external
relational memory units. Our model learns to dynamically update entity states
in relation to each other while reading the text instructions. Our experimental
analysis on the visual reasoning tasks in the recently proposed RecipeQA
dataset reveals that our approach improves the accuracy of the previously
reported models by a large margin. Moreover, we find that our model learns
effective dynamic representations of entities even though we do not use any
supervision at the level of entity states.
| 2,019 | Computation and Language |
ASU at TextGraphs 2019 Shared Task: Explanation ReGeneration using
Language Models and Iterative Re-Ranking | In this work we describe the system from Natural Language Processing group at
Arizona State University for the TextGraphs 2019 Shared Task. The task focuses
on Explanation Regeneration, an intermediate step towards general multi-hop
inference on large graphs. Our approach consists of modeling the explanation
regeneration task as a \textit{learning to rank} problem, for which we use
state-of-the-art language models and explore dataset preparation techniques. We
utilize an iterative re-ranking based approach to further improve the rankings.
Our system secured 2nd rank in the task with a mean average precision (MAP) of
41.3\% on the test set.
| 2,019 | Computation and Language |
An Edit-centric Approach for Wikipedia Article Quality Assessment | We propose an edit-centric approach to assess Wikipedia article quality as a
complementary alternative to current full document-based techniques. Our model
consists of a main classifier equipped with an auxiliary generative module
which, for a given edit, jointly provides an estimation of its quality and
generates a description in natural language. We performed an empirical study to
assess the feasibility of the proposed model and its cost-effectiveness in
terms of data and quality requirements.
| 2,019 | Computation and Language |
A Split-and-Recombine Approach for Follow-up Query Analysis | Context-dependent semantic parsing has proven to be an important yet
challenging task. To leverage the advances in context-independent semantic
parsing, we propose to perform follow-up query analysis, aiming to restate
context-dependent natural language queries with contextual information. To
accomplish the task, we propose STAR, a novel approach with a well-designed
two-phase process. It is parser-independent and able to handle multifarious
follow-up scenarios in different domains. Experiments on the FollowUp dataset
show that STAR outperforms the state-of-the-art baseline by a large margin of
nearly 8%. The superior parsing results verify the feasibility of
follow-up query analysis. We also explore the extensibility of STAR on the SQA
dataset, which is very promising.
| 2,019 | Computation and Language |
Extracting Conceptual Knowledge from Natural Language Text Using Maximum
Likelihood Principle | Domain-specific knowledge graphs constructed from natural language text are
ubiquitous in today's world. In many such scenarios the base text, from which
the knowledge graph is constructed, concerns itself with practical, on-hand,
actual or ground-reality information about the domain. Product documentation in
the software engineering domain is one example of such base texts. Other examples
include blogs and texts related to digital artifacts, reports on emerging
markets and business models, patient medical records, etc. Though the above
sources contain a wealth of knowledge about their respective domains, the
conceptual knowledge on which they are based is often missing or unclear.
Access to this conceptual knowledge can enormously increase the utility of
available data and assist in several tasks such as knowledge graph completion,
grounding, querying, etc.
Our contributions in this paper are twofold. First, we propose a novel
Markovian stochastic model for document generation from conceptual knowledge.
The uniqueness of our approach lies in the fact that the conceptual knowledge
in the writer's mind forms a component of the parameter set of our stochastic
model. Second, we solve the inverse problem of learning the best conceptual
knowledge from a given document by finding the model parameters that maximize
the likelihood of generating that specific document over all possible parameter
values. This likelihood maximization is done using an application of the
Baum-Welch algorithm, a known special case of the Expectation-Maximization (EM)
algorithm. We run our conceptualization algorithm on several well-known natural
language sources and obtain very encouraging results. The results of our
extensive experiments concur with the hypothesis that the information contained
in these sources has a well-defined and rigorous underlying conceptual
structure, which can be discovered using our method.
| 2,019 | Computation and Language |
Improving Generalization by Incorporating Coverage in Natural Language
Inference | The task of natural language inference (NLI) is to identify the relation
between the given premise and hypothesis. While recent NLI models achieve very
high performance on individual datasets, they fail to generalize across similar
datasets. This indicates that they are solving NLI datasets instead of the task
itself. In order to improve generalization, we propose to extend the input
representations with an abstract view of the relation between the hypothesis
and the premise, i.e., how well the individual words, or word n-grams, of the
hypothesis are covered by the premise. Our experiments show that the use of
this information considerably improves generalization across different NLI
datasets without requiring any external knowledge or additional data. Finally,
we show that using the coverage information is not only beneficial for
improving performance across different datasets of the same task; the
resulting generalization also improves performance across datasets that belong
to similar, but not identical, tasks.
| 2,019 | Computation and Language |
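
A minimal sketch of the kind of coverage feature described above, i.e., how
well the word n-grams of the hypothesis are covered by the premise; the paper's
exact feature definition may differ.

    def ngrams(tokens, n):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def coverage(premise, hypothesis, max_n=2):
        """Fraction of hypothesis n-grams that also appear in the premise."""
        p_toks, h_toks = premise.lower().split(), hypothesis.lower().split()
        scores = {}
        for n in range(1, max_n + 1):
            h_ngrams = ngrams(h_toks, n)
            p_ngrams = ngrams(p_toks, n)
            scores[n] = len(h_ngrams & p_ngrams) / len(h_ngrams) if h_ngrams else 0.0
        return scores   # e.g. {1: unigram coverage, 2: bigram coverage}

    print(coverage("a man is playing a guitar on stage", "a man plays a guitar"))
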
RUN through the Streets: A New Dataset and Baseline Models for Realistic
Urban Navigation | Following navigation instructions in natural language requires a composition
of language, action, and knowledge of the environment. Knowledge of the
environment may be provided via visual sensors or as a symbolic world
representation referred to as a map. Here we introduce the Realistic Urban
Navigation (RUN) task, aimed at interpreting navigation instructions based on a
real, dense, urban map. Using Amazon Mechanical Turk, we collected a dataset of
2515 instructions aligned with actual routes over three regions of Manhattan.
We propose a strong baseline for the task and empirically investigate which
aspects of the neural architecture are important for the RUN success. Our
results empirically show that entity abstraction, attention over words and
worlds, and a constantly updating world-state, significantly contribute to task
accuracy.
| 2,019 | Computation and Language |
Analysing Neural Language Models: Contextual Decomposition Reveals
Default Reasoning in Number and Gender Assignment | Extensive research has recently shown that recurrent neural language models
are able to process a wide range of grammatical phenomena. How these models are
able to perform these remarkable feats so well, however, is still an open
question. To gain more insight into what information LSTMs base their decisions
on, we propose a generalisation of Contextual Decomposition (GCD). In
particular, this setup enables us to accurately distil which part of a
prediction stems from semantic heuristics, which part truly emanates from
syntactic cues, and which part arises from the model biases themselves.
We investigate this technique on tasks pertaining to syntactic agreement and
co-reference resolution and discover that the model strongly relies on a
default reasoning effect to perform these tasks.
| 2,019 | Computation and Language |
CogniVal: A Framework for Cognitive Word Embedding Evaluation | An interesting method of evaluating word representations is by how much they
reflect the semantic representations in the human brain. However, most, if not
all, previous works only focus on small datasets and a single modality. In this
paper, we present the first multi-modal framework for evaluating English word
representations based on cognitive lexical semantics. Six types of word
embeddings are evaluated by fitting them to 15 datasets of eye-tracking, EEG
and fMRI signals recorded during language processing. To achieve a global score
over all evaluation hypotheses, we apply statistical significance testing
accounting for the multiple comparisons problem. This framework is easily
extensible and can incorporate other intrinsic and extrinsic evaluation
methods. We find strong correlations between the results on the cognitive
datasets, across recording modalities, and with performance on extrinsic NLP
tasks.
| 2,019 | Computation and Language |
A Random Gossip BMUF Process for Neural Language Modeling | Neural network language model (NNLM) is an essential component of industrial
ASR systems. One important challenge in training an NNLM is to balance scaling
the learning process with handling big data. Conventional approaches such as
block momentum provide a blockwise model update filtering (BMUF) process and
achieve almost linear speedups with no performance degradation for speech
recognition. However, BMUF needs to calculate the model average over all
computing nodes (e.g., GPUs), and when the number of computing nodes is large,
learning suffers from severe communication latency. As a consequence, BMUF is
not suitable under restricted network conditions. In this paper, we
present a decentralized BMUF process, in which the model is split into
different components, each of which is updated by communicating to some
randomly chosen neighbor nodes with the same component, followed by a BMUF-like
process. We apply this method to several LSTM language modeling tasks.
Experimental results show that our approach achieves consistently better
performance than conventional BMUF. In particular, we obtain a lower perplexity
than the single-GPU baseline on the wiki-text-103 benchmark using 4 GPUs. In
addition, no performance degradation is observed when scaling to 8 and 16 GPUs.
| 2,020 | Computation and Language |
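
A schematic numpy sketch of a BMUF-style block update in which the model
average is taken over a few randomly chosen neighbor nodes rather than all
nodes; the block-momentum recipe, the constants, and the single-component view
are assumptions for illustration, not the authors' exact algorithm.

    import numpy as np

    def gossip_bmuf_step(local_params, global_prev, momentum, rng,
                         num_neighbors=2, block_momentum=0.9, block_lr=1.0):
        """One schematic BMUF-like update using a random subset of nodes.

        local_params: per-node parameter vectors after a block of local training.
        global_prev:  parameters before this block of local updates.
        momentum:     running block-momentum buffer (same shape as parameters).
        """
        neighbors = rng.choice(len(local_params), size=num_neighbors, replace=False)
        avg = np.mean([local_params[i] for i in neighbors], axis=0)   # partial average
        block_grad = avg - global_prev                                # aggregated update
        momentum = block_momentum * momentum + block_lr * block_grad  # update filtering
        return global_prev + momentum, momentum

    rng = np.random.default_rng(0)
    nodes = [np.ones(4) * i for i in range(8)]      # toy per-node parameters
    new_params, mom = gossip_bmuf_step(nodes, np.zeros(4), np.zeros(4), rng)
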
Argumentative Relation Classification as Plausibility Ranking | We formulate argumentative relation classification (support vs. attack) as a
text-plausibility ranking task. To this aim, we propose a simple reconstruction
trick which enables us to build minimal pairs of plausible and implausible
texts by simulating natural contexts in which two argumentative units are
likely or unlikely to appear. We show that this method is competitive with
previous work while being considerably simpler. In a recently introduced
content-based version of the task, where contextual discourse clues are hidden,
the approach offers a performance increase of more than 10% macro F1. With
respect to the scarce attack-class, the method achieves a large increase in
precision while the incurred loss in recall is small or even nonexistent.
| 2,019 | Computation and Language |
A Corpus for Automatic Readability Assessment and Text Simplification of
German | In this paper, we present a corpus for use in automatic readability
assessment and automatic text simplification of German. The corpus is compiled
from web sources and consists of approximately 211,000 sentences. As a novel
contribution, it contains information on text structure, typography, and
images, which can be exploited as part of machine learning approaches to
readability assessment and text simplification. The focus of this publication
is on representing such information as an extension to an existing corpus
standard.
| 2,019 | Computation and Language |
Self-Training for End-to-End Speech Recognition | We revisit self-training in the context of end-to-end speech recognition. We
demonstrate that training with pseudo-labels can substantially improve the
accuracy of a baseline model. Key to our approach are a strong baseline
acoustic and language model used to generate the pseudo-labels, filtering
mechanisms tailored to common errors from sequence-to-sequence models, and a
novel ensemble approach to increase pseudo-label diversity. Experiments on the
LibriSpeech corpus show that with an ensemble of four models and label
filtering, self-training yields a 33.9% relative improvement in WER compared
with a baseline trained on 100 hours of labelled data in the noisy speech
setting. In the clean speech setting, self-training recovers 59.3% of the gap
between the baseline and an oracle model, which is at least 93.8% relatively
higher than what previous approaches can achieve.
| 2,020 | Computation and Language |
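
A schematic sketch of one self-training round with pseudo-label filtering as
described above; asr_model, decode_with_lm, train_fn, and the confidence
threshold are hypothetical placeholders rather than the authors' interface, and
the ensemble step is omitted.

    def self_train(asr_model, decode_with_lm, train_fn,
                   labeled_data, unlabeled_audio, confidence_threshold=0.8):
        """Pseudo-label unlabeled audio, filter the labels, then retrain.

        decode_with_lm(model, audio) is assumed to return (transcript, confidence).
        """
        pseudo_labeled = []
        for audio in unlabeled_audio:
            transcript, confidence = decode_with_lm(asr_model, audio)
            # filtering step: drop low-confidence or degenerate hypotheses
            if confidence >= confidence_threshold and len(transcript.split()) > 1:
                pseudo_labeled.append((audio, transcript))
        # retrain on the union of labeled and filtered pseudo-labeled data
        return train_fn(labeled_data + pseudo_labeled)
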
Goal-Embedded Dual Hierarchical Model for Task-Oriented Dialogue
Generation | Hierarchical neural networks are often used to model inherent structures
within dialogues. For goal-oriented dialogues, these models lack a mechanism for
adhering to the goals and neglect the distinct conversational patterns between
two interlocutors. In this work, we propose Goal-Embedded Dual Hierarchical
Attentional Encoder-Decoder (G-DuHA) able to center around goals and capture
interlocutor-level disparity while modeling goal-oriented dialogues.
Experiments on dialogue generation, response generation, and human evaluations
demonstrate that the proposed model successfully generates higher-quality, more
diverse and goal-centric dialogues. Moreover, we apply data augmentation via
goal-oriented dialogue generation to task-oriented dialog systems, achieving
better performance.
| 2,019 | Computation and Language |
Improved Variational Neural Machine Translation by Promoting Mutual
Information | Posterior collapse plagues VAEs for text, especially for conditional text
generation with strong autoregressive decoders. In this work, we address this
problem in variational neural machine translation by explicitly promoting
mutual information between the latent variables and the data. Our model extends
the conditional variational autoencoder (CVAE) with two new ingredients: first,
we propose a modified evidence lower bound (ELBO) objective which explicitly
promotes mutual information; second, we regularize the probabilities of the
decoder by mixing an auxiliary factorized distribution which is directly
predicted by the latent variables. We present empirical results on the
Transformer architecture and show that the proposed model effectively addresses
posterior collapse: latent variables are no longer ignored in the presence of a
powerful decoder. As a result, the proposed model yields improved translation
quality while demonstrating superior performance in terms of data efficiency
and robustness.
| 2,019 | Computation and Language |
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models | Neural NLP models are increasingly accurate but are imperfect and
opaque---they break in counterintuitive ways and leave end users puzzled at
their behavior. Model interpretation methods ameliorate this opacity by
providing explanations for specific model predictions. Unfortunately, existing
interpretation codebases make it difficult to apply these methods to new models
and tasks, which hinders adoption for practitioners and burdens
interpretability researchers. We introduce AllenNLP Interpret, a flexible
framework for interpreting NLP models. The toolkit provides interpretation
primitives (e.g., input gradients) for any AllenNLP model and task, a suite of
built-in interpretation methods, and a library of front-end visualization
components. We demonstrate the toolkit's flexibility and utility by
implementing live demos for five interpretation methods (e.g., saliency maps
and adversarial attacks) on a variety of models and tasks (e.g., masked
language modeling using BERT and reading comprehension using BiDAF). These
demos, alongside our code and tutorials, are available at
https://allennlp.org/interpret .
| 2,019 | Computation and Language |
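
The toolkit above ships its own interpretation primitives; the snippet below is
only a generic PyTorch sketch of the underlying input-gradient saliency idea,
not the AllenNLP Interpret API, and the toy classifier is an assumption.

    import torch

    def input_gradient_saliency(model, embeddings, target_class):
        """Per-token saliency: |d class score / d embedding|, summed over dimensions."""
        embeddings = embeddings.clone().detach().requires_grad_(True)
        score = model(embeddings.unsqueeze(0))[0, target_class]  # scalar class score
        score.backward()
        return embeddings.grad.abs().sum(dim=-1)                 # (seq_len,) saliency

    class ToyClassifier(torch.nn.Module):
        def __init__(self, emb_dim=8, n_classes=2):
            super().__init__()
            self.linear = torch.nn.Linear(emb_dim, n_classes)
        def forward(self, emb):                  # (batch, seq, emb_dim)
            return self.linear(emb.mean(dim=1))  # mean-pool then classify

    saliency = input_gradient_saliency(ToyClassifier(), torch.randn(5, 8), target_class=1)
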
What's Missing: A Knowledge Gap Guided Approach for Multi-hop Question
Answering | Multi-hop textual question answering requires combining information from
multiple sentences. We focus on a natural setting where, unlike typical reading
comprehension, only partial information is provided with each question. The
model must retrieve and use additional knowledge to correctly answer the
question. To tackle this challenge, we develop a novel approach that explicitly
identifies the knowledge gap between a key span in the provided knowledge and
the answer choices. The model, GapQA, learns to fill this gap by determining
the relationship between the span and an answer choice, based on retrieved
knowledge targeting this gap. We propose jointly training a model to
simultaneously fill this knowledge gap and compose it with the provided partial
knowledge. On the OpenBookQA dataset, given partial knowledge, explicitly
identifying what's missing substantially outperforms previous approaches.
| 2,019 | Computation and Language |
Cross-lingual Dependency Parsing with Unlabeled Auxiliary Languages | Cross-lingual transfer learning has become an important weapon to battle the
unavailability of annotated resources for low-resource languages. One of the
fundamental techniques to transfer across languages is learning
\emph{language-agnostic} representations, in the form of word embeddings or
contextual encodings. In this work, we propose to leverage unannotated
sentences from auxiliary languages to help learning language-agnostic
representations. Specifically, we explore adversarial training for learning
contextual encoders that produce invariant representations across languages to
facilitate cross-lingual transfer. We conduct experiments on cross-lingual
dependency parsing where we train a dependency parser on a source language and
transfer it to a wide range of target languages. Experiments on 28 target
languages demonstrate that adversarial training significantly improves the
overall transfer performances under several different settings. We conduct a
careful analysis to evaluate the language-agnostic representations resulting
from adversarial training.
| 2,019 | Computation and Language |
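
A minimal PyTorch sketch of adversarial training with a language discriminator
and gradient reversal, one common way to push an encoder towards
language-invariant representations; the paper's adversarial objective and
architecture may differ, and all sizes here are toy assumptions.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; reverses (and scales) the gradient."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    encoder = nn.GRU(input_size=50, hidden_size=64, batch_first=True)
    lang_discriminator = nn.Linear(64, 3)        # e.g., 3 auxiliary languages
    criterion = nn.CrossEntropyLoss()

    embeddings = torch.randn(4, 12, 50)          # toy batch of sentences
    lang_labels = torch.tensor([0, 1, 2, 1])
    _, h = encoder(embeddings)                   # final hidden states (1, batch, 64)
    reversed_h = GradReverse.apply(h[-1], 0.1)   # flip gradients into the encoder
    adv_loss = criterion(lang_discriminator(reversed_h), lang_labels)
    adv_loss.backward()                          # encoder learns to fool the discriminator
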
Towards Neural Language Evaluators | We review three limitations of BLEU and ROUGE -- the most popular metrics
used to assess reference summaries against hypothesis summaries, come up with
criteria for what a good metric should behave like and propose concrete ways to
use recent Transformers-based Language Models to assess reference summaries
against hypothesis summaries.
| 2,019 | Computation and Language |
Named Entity Recognition with Partially Annotated Training Data | Supervised machine learning assumes the availability of fully-labeled data,
but in many cases, such as low-resource languages, the only data available is
partially annotated. We study the problem of Named Entity Recognition (NER)
with partially annotated training data in which a fraction of the named
entities are labeled, and all other tokens, entities or otherwise, are labeled
as non-entity by default. In order to train on this noisy dataset, we need to
distinguish between the true and false negatives. To this end, we introduce a
constraint-driven iterative algorithm that learns to detect false negatives in
the noisy set and down-weight them, resulting in a weighted training set. With
this set, we train a weighted NER model. We evaluate our algorithm with
weighted variants of neural and non-neural NER models on data in 8 languages
from several language and script families, showing strong ability to learn from
partial data. Finally, to show real-world efficacy, we evaluate on a Bengali
NER corpus annotated by non-speakers, outperforming the prior state-of-the-art
by over 5 points F1.
| 2,019 | Computation and Language |
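
A schematic sketch of the weighted training idea described above: suspected
false negatives receive small per-token weights so they contribute less to the
loss; the constraint-driven detection step is abstracted into a hypothetical
weight vector.

    import torch
    import torch.nn.functional as F

    def weighted_token_loss(logits, tags, weights):
        """Token-level cross-entropy scaled by per-token confidence weights.

        logits: (seq_len, num_tags); tags: (seq_len,); weights: (seq_len,) in [0, 1],
        where tokens suspected to be false negatives get small weights.
        """
        per_token = F.cross_entropy(logits, tags, reduction="none")
        return (weights * per_token).sum() / weights.sum().clamp(min=1e-8)

    logits = torch.randn(6, 5, requires_grad=True)   # toy emissions for 6 tokens
    tags = torch.tensor([0, 0, 1, 0, 0, 2])          # 0 = O (default non-entity)
    weights = torch.tensor([1.0, 1.0, 1.0, 0.2, 1.0, 1.0])  # fourth token: likely false negative
    weighted_token_loss(logits, tags, weights).backward()
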
Working Hard or Hardly Working: Challenges of Integrating Typology into
Neural Dependency Parsers | This paper explores the task of leveraging typology in the context of
cross-lingual dependency parsing. While this linguistic information has shown
great promise in pre-neural parsing, results for neural architectures have been
mixed. The aim of our investigation is to better understand this
state-of-the-art. Our main findings are as follows: 1) The benefit of
typological information is derived from coarsely grouping languages into
syntactically-homogeneous clusters rather than from learning to leverage
variations along individual typological dimensions in a compositional manner;
2) Typology consistent with the actual corpus statistics yields better transfer
performance; 3) Typological similarity is only a rough proxy of cross-lingual
transferability with respect to parsing.
| 2,019 | Computation and Language |
BERT Meets Chinese Word Segmentation | Chinese word segmentation (CWS) is a fundamental task for Chinese language
understanding. Recently, neural network-based models have attained superior
performance in solving the in-domain CWS task. Last year, Bidirectional Encoder
Representations from Transformers (BERT), a new language representation model,
was proposed as a backbone model for many natural language tasks and redefined
the corresponding performance. The excellent performance of BERT motivates us
to apply it to the CWS task. By conducting intensive experiments on the
benchmark datasets from the second International Chinese Word Segmentation
Bake-off, we make several key observations. BERT can
slightly improve the performance even when the datasets contain the issue of
labeling inconsistency. When applying sufficiently learned features, Softmax, a
simpler classifier, can attain the same performance as that of a more
complicated classifier, e.g., Conditional Random Field (CRF). The performance
of BERT usually increases as the model size increases. The features extracted
by BERT can be also applied as good candidates for other neural network models.
| 2,019 | Computation and Language |
Jointly Learning Entity and Relation Representations for Entity
Alignment | Entity alignment is a viable means for integrating heterogeneous knowledge
among different knowledge graphs (KGs). Recent developments in the field often
take an embedding-based approach to model the structural information of KGs so
that entity alignment can be easily performed in the embedding space. However,
most existing works do not explicitly utilize useful relation representations
to assist in entity alignment, which, as we will show in the paper, is a simple
yet effective way for improving entity alignment. This paper presents a novel
joint learning framework for entity alignment. At the core of our approach is a
Graph Convolutional Network (GCN) based framework for learning both entity and
relation representations. Rather than relying on pre-aligned relation seeds to
learn relation representations, we first approximate them using entity
embeddings learned by the GCN. We then incorporate the relation approximation
into entities to iteratively learn better representations for both. Experiments
performed on three real-world cross-lingual datasets show that our approach
substantially outperforms state-of-the-art entity alignment methods.
| 2,019 | Computation and Language |
Sampling Bias in Deep Active Classification: An Empirical Study | The exploding cost and time needed for data labeling and model training are
bottlenecks for training DNN models on large datasets. Identifying smaller
representative data samples with strategies like active learning can help
mitigate such bottlenecks. Previous works on active learning in NLP identify
the problem of sampling bias in the samples acquired by uncertainty-based
querying and develop costly approaches to address it. Using a large empirical
study, we demonstrate that active set selection using the posterior entropy of
deep models like FastText.zip (FTZ) is robust to sampling biases and to various
algorithmic choices (query size and strategies), contrary to what the
traditional literature suggests. We also show that the FTZ-based query strategy
produces sample sets similar to those from more sophisticated approaches (e.g.,
ensemble
networks). Finally, we show the effectiveness of the selected samples by
creating tiny high-quality datasets, and utilizing them for fast and cheap
training of large models. Based on the above, we propose a simple baseline for
deep active text classification that outperforms the state-of-the-art. We
expect the presented work to be useful and informative for dataset compression
and for problems involving active, semi-supervised or online learning
scenarios. Code and models are available at:
https://github.com/drimpossible/Sampling-Bias-Active-Learning
| 2,019 | Computation and Language |
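
A small numpy sketch of posterior-entropy query selection of the kind studied
above; the FastText.zip specifics are omitted and the class posteriors come
from a stand-in.

    import numpy as np

    def entropy_query(probabilities, query_size):
        """Pick the unlabeled examples with the highest predictive entropy.

        probabilities: (num_examples, num_classes) posteriors from the current model.
        """
        entropy = -(probabilities * np.log(probabilities + 1e-12)).sum(axis=1)
        return np.argsort(-entropy)[:query_size]   # indices to send for labeling

    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(4), size=1000)   # toy posteriors for 1000 texts
    selected = entropy_query(probs, query_size=64)
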
A Critical Analysis of Biased Parsers in Unsupervised Parsing | A series of recent papers has used a parsing algorithm due to Shen et al.
(2018) to recover phrase-structure trees based on proxies for "syntactic
depth." These proxy depths are obtained from the representations learned by
recurrent language models augmented with mechanisms that encourage the
(unsupervised) discovery of hierarchical structure latent in natural language
sentences. Using the same parser, we show that proxies derived from a
conventional LSTM language model produce trees comparably well to the
specialized architectures used in previous work. However, we also provide a
detailed analysis of the parsing algorithm, showing (1) that it is
incomplete---that is, it can recover only a fraction of possible trees---and
(2) that it has a marked bias for right-branching structures which results in
inflated performance in right-branching languages like English. Our analysis
shows that evaluating with biased parsing algorithms can inflate the apparent
structural competence of language models.
| 2,019 | Computation and Language |
Generating Philosophical Statements using Interpolated Markov Models and
Dynamic Templates | Automatically imitating input text is a common task in natural language
generation, often used to create humorous results. Classic algorithms for
learning to imitate text, e.g. simple Markov chains, usually have a trade-off
between originality and syntactic correctness. We present two ways of
automatically parodying philosophical statements from examples overcoming this
issue, and show how these can work in interactive systems as well. The first
algorithm uses interpolated Markov models with extensions to improve the
quality of the generated texts. For the second algorithm, we propose
dynamically extracting templates and filling these with new content. To
illustrate these algorithms, we implemented TorfsBot, a Twitterbot imitating
the witty, semi-philosophical tweets of professor Rik Torfs, the previous KU
Leuven rector. We found that users preferred generative models that focused on
local coherent sentences, rather than those mimicking the global structure of a
philosophical statement. The proposed algorithms are thus valuable new tools
for automatic parody as well as template learning systems.
| 2,019 | Computation and Language |
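
A toy Python sketch of an interpolated Markov model that mixes bigram and
unigram continuation probabilities; the quality-improving extensions and the
dynamic templates described above are not shown, and the interpolation weight
is an illustrative assumption.

    import random
    from collections import Counter, defaultdict

    def train(tokens):
        unigrams = Counter(tokens)
        bigrams = defaultdict(Counter)
        for prev, cur in zip(tokens, tokens[1:]):
            bigrams[prev][cur] += 1
        return unigrams, bigrams

    def sample_next(prev, unigrams, bigrams, lam=0.7):
        """Sample from lam * P(w | prev) + (1 - lam) * P(w)."""
        vocab = list(unigrams)
        total_uni = sum(unigrams.values())
        total_bi = sum(bigrams[prev].values()) or 1
        weights = [lam * bigrams[prev][w] / total_bi + (1 - lam) * unigrams[w] / total_uni
                   for w in vocab]
        return random.choices(vocab, weights=weights, k=1)[0]

    unigrams, bigrams = train("the truth is rarely pure and never simple".split())
    out = ["the"]
    for _ in range(5):
        out.append(sample_next(out[-1], unigrams, bigrams))
    print(" ".join(out))
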
Language models and Automated Essay Scoring | In this paper, we present a new comparative study on automatic essay scoring
(AES). The current state-of-the-art natural language processing (NLP) neural
network architectures are used in this work to achieve above human-level
accuracy on the publicly available Kaggle AES dataset. We compare two powerful
language models, BERT and XLNet, and describe all the layers and network
architectures in these models. We elucidate the network architectures of BERT
and XLNet using clear notation and diagrams and explain the advantages of
transformer architectures over traditional recurrent neural network
architectures. Linear algebra notation is used to clarify the functions of
transformers and attention mechanisms. We compare the results with more
traditional methods, such as bag-of-words (BOW) and long short-term memory
(LSTM) networks.
| 2,019 | Computation and Language |
Multi-sense Definition Modeling using Word Sense Decompositions | Word embeddings capture syntactic and semantic information about words.
Definition modeling aims to make the semantic content in each embedding
explicit, by outputting a natural language definition based on the embedding.
However, existing definition models are limited in their ability to generate
accurate definitions for different senses of the same word. In this paper, we
introduce a new method that enables definition modeling for multiple senses. We
show how a Gumbel-Softmax approach outperforms baselines at matching
sense-specific embeddings to definitions during training. In experiments, our
multi-sense definition model improves recall over a state-of-the-art
single-sense definition model by a factor of three, without harming precision.
| 2,019 | Computation and Language |
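
A generic PyTorch sketch of selecting among sense embeddings with the
Gumbel-Softmax mechanism named above, using torch.nn.functional.gumbel_softmax;
the shapes and scoring are assumptions, not the authors' model.

    import torch
    import torch.nn.functional as F

    def select_sense(sense_embeddings, sense_logits, tau=1.0, hard=True):
        """Differentiably pick one sense embedding for a word via Gumbel-Softmax.

        sense_embeddings: (num_senses, dim); sense_logits: (num_senses,) scores.
        """
        weights = F.gumbel_softmax(sense_logits, tau=tau, hard=hard)  # ~one-hot sample
        return weights @ sense_embeddings                             # chosen sense vector

    senses = torch.randn(3, 16)              # toy: three sense vectors for one word
    logits = torch.tensor([0.2, 1.5, -0.3])
    chosen = select_sense(senses, logits)    # gradients flow back to the logits
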
Generative Dialog Policy for Task-oriented Dialog Systems | There is an increasing demand for task-oriented dialogue systems which can
assist users in various activities such as booking tickets and restaurant
reservations. In order to complete dialogues effectively, dialogue policy plays
a key role in task-oriented dialogue systems. As far as we know, the existing
task-oriented dialogue systems obtain the dialogue policy through
classification, which can assign either a dialogue act and its corresponding
parameters or multiple dialogue acts without their corresponding parameters for
a dialogue action. In fact, a good dialogue policy should construct multiple
dialogue acts and their corresponding parameters at the same time. However,
it's hard for existing classification-based methods to achieve this goal. Thus,
to address the issue above, we propose a novel generative dialogue policy
learning method. Specifically, the proposed method uses an attention mechanism
to find relevant segments of the given dialogue context and input utterance and
then constructs the dialogue policy in a seq2seq manner for task-oriented dialogue
systems. Extensive experiments on two benchmark datasets show that the proposed
model significantly outperforms the state-of-the-art baselines. In addition, we
have publicly released our codes.
| 2,019 | Computation and Language |
BSDAR: Beam Search Decoding with Attention Reward in Neural Keyphrase
Generation | This study mainly investigates two common decoding problems in neural
keyphrase generation: sequence length bias and beam diversity. To tackle the
problems, we introduce a beam search decoding strategy based on word-level and
n-gram-level reward functions to constrain and refine Seq2Seq inference at test
time. Results show that our simple proposal can overcome the algorithm's bias
towards shorter and nearly identical sequences, resulting in a significant
improvement
of the decoding performance on generating keyphrases that are present and
absent in source text.
| 2,023 | Computation and Language |
Deep Contextualized Pairwise Semantic Similarity for Arabic Language
Questions | Question semantic similarity is a challenging and active research problem
that is very useful in many NLP applications, such as detecting duplicate
questions in community question answering platforms such as Quora. Arabic is
considered an under-resourced language; it has many dialects and is rich in
morphology. Combined, these challenges make identifying semantically
similar questions in Arabic even more difficult. In this paper, we introduce a
novel approach to tackle this problem, and test it on two benchmarks; one for
Modern Standard Arabic (MSA), and another for the 24 major Arabic dialects. We
are able to show that our new system outperforms state-of-the-art approaches by
achieving 93% F1-score on the MSA benchmark and 82% on the dialectical one.
This is achieved by utilizing contextualized word representations (ELMo
embeddings) trained on a text corpus containing MSA and dialectic sentences.
This, in combination with a pairwise fine-grained similarity layer, helps our
question-to-question similarity model to generalize predictions on different
dialects while being trained only on question-to-question MSA data.
| 2,019 | Computation and Language |
A simple discriminative training method for machine translation with
large-scale features | Margin infused relaxed algorithms (MIRAs) dominate model tuning in
statistical machine translation when large-scale features are used, but they
are also notorious for their implementation complexity. We introduce a new
method, which regards an N-best list as a permutation and minimizes the
Plackett-Luce loss of ground-truth permutations. Experiments with large-scale
features demonstrate that the new method is more robust than MERT; though it
only matches MIRAs in performance, it has a comparative advantage: it is
easier to implement.
| 2,019 | Computation and Language |
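
A compact PyTorch sketch of the Plackett-Luce negative log-likelihood of a
ground-truth ordering of an N-best list, the loss named above; the linear
feature scorer and the toy ordering are assumptions for illustration.

    import torch

    def plackett_luce_nll(scores, ranking):
        """Negative log-likelihood of `ranking` (best hypothesis first) under `scores`."""
        ordered = scores[ranking]
        nll = 0.0
        for i in range(len(ordered)):
            nll = nll + torch.logsumexp(ordered[i:], dim=0) - ordered[i]
        return nll

    weights = torch.randn(4, requires_grad=True)   # large-scale feature weights
    features = torch.randn(5, 4)                   # 5 hypotheses x 4 features
    scores = features @ weights
    loss = plackett_luce_nll(scores, torch.tensor([2, 0, 4, 1, 3]))  # metric-sorted order
    loss.backward()
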
Controllable Length Control Neural Encoder-Decoder via Reinforcement
Learning | Controlling output length in neural language generation is valuable in many
scenarios, especially for tasks that have length constraints. A model with
stronger length control capacity can produce sentences of more specific
lengths; however, it usually sacrifices the semantic accuracy of the generated
sentences. Here, we introduce the concept of Controllable Length Control (CLC)
for the trade-off between length control capacity and semantic accuracy of the
language generation model. More specifically, CLC alters the length control
capacity of the model so as to generate sentences of corresponding quality.
This is meaningful in real applications when length control capacity and
output quality are requested with different priorities, or to overcome the
instability of length control during model training. In this paper, we propose
two reinforcement learning (RL) methods to adjust the trade-off between length
control capacity and semantic accuracy of length control models. Results show
that our RL methods improve scores across a wide range of target lengths and
achieve the goal of CLC. Additionally, two models, LenMC and LenLInit, modified
from previous length-control models, are proposed to obtain better performance
on the summarization task while still maintaining the ability to control length.
| 2,019 | Computation and Language |
Pivot-based Transfer Learning for Neural Machine Translation between
Non-English Languages | We present effective pre-training strategies for neural machine translation
(NMT) using parallel corpora involving a pivot language, i.e., source-pivot and
pivot-target, leading to a significant improvement in source-target
translation. We propose three methods to increase the relation among source,
pivot, and target languages in the pre-training: 1) step-wise training of a
single model for different language pairs, 2) additional adapter component to
smoothly connect pre-trained encoder and decoder, and 3) cross-lingual encoder
training via autoencoding of the pivot language. Our methods greatly outperform
multilingual models, by up to +2.6% BLEU, in the WMT 2019 French-German and
German-Czech
tasks. We show that our improvements are valid also in zero-shot/zero-resource
scenarios.
| 2,019 | Computation and Language |
Designing dialogue systems: A mean, grumpy, sarcastic chatbot in the
browser | In this work we explore a deep learning-based dialogue system that generates
sarcastic and humorous responses from a conversation design perspective. We
trained a seq2seq model on a carefully curated dataset of 3000
question-answering pairs, the core of our mean, grumpy, sarcastic chatbot. We
show that end-to-end systems learn patterns very quickly from small datasets
and thus, are able to transfer simple linguistic structures representing
abstract concepts to unseen settings. We also deploy our LSTM-based
encoder-decoder model in the browser, where users can directly interact with
the chatbot. Human raters evaluated linguistic quality, creativity and
human-like traits, revealing the system's strengths, limitations and potential
for future research.
| 2,019 | Computation and Language |
Creative GANs for generating poems, lyrics, and metaphors | Generative models for text have substantially contributed to tasks like
machine translation and language modeling, using maximum likelihood
optimization (MLE). However, for creative text generation, where multiple
outputs are possible and originality and uniqueness are encouraged, MLE falls
short. Methods optimized for MLE lead to outputs that can be generic,
repetitive and incoherent. In this work, we use a Generative Adversarial
Network framework to alleviate this problem. We evaluate our framework on
poetry, lyrics and metaphor datasets, each with widely different
characteristics, and report better performance of our objective function over
other generative models.
| 2,019 | Computation and Language |
Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with
Multi-lingual Language Representation Model | Because it is not feasible to collect training data for every language, there
is a growing interest in cross-lingual transfer learning. In this paper, we
systematically explore zero-shot cross-lingual transfer learning on reading
comprehension tasks with a language representation model pre-trained on a
multi-lingual corpus. The experimental results show that with the pre-trained
language representation, zero-shot learning is feasible, and translating the
source data into the target language is not necessary and even degrades
performance. We further explore what the model learns in the zero-shot setting.
| 2,019 | Computation and Language |
SANVis: Visual Analytics for Understanding Self-Attention Networks | Attention networks, a deep neural network architecture inspired by humans'
attention mechanism, have seen significant success in image captioning, machine
translation, and many other applications. Recently, they have been further
evolved into an advanced approach called multi-head self-attention networks,
which can encode a set of input vectors, e.g., word vectors in a sentence, into
another set of vectors. Such encoding aims at simultaneously capturing diverse
syntactic and semantic features within a set, each of which corresponds to a
particular attention head, forming altogether multi-head attention. Meanwhile,
the increased model complexity prevents users from easily understanding and
manipulating the inner workings of models. To tackle the challenges, we present
a visual analytics system called SANVis, which helps users understand the
behaviors and the characteristics of multi-head self-attention networks. Using
a state-of-the-art self-attention model called Transformer, we demonstrate
usage scenarios of SANVis in machine translation tasks. Our system is available
at http://short.sanvis.org
| 2,019 | Computation and Language |
A Deep Learning-Based Approach for Measuring the Domain Similarity of
Persian Texts | In this paper, we propose a novel approach for measuring the degree of
similarity between categories of two pieces of Persian text, which were
published as descriptions of two separate advertisements. We built an
appropriate dataset for this work using a dataset which consists of
advertisements posted on an e-commerce website. We generated a significant
number of paired texts from this dataset and assigned each pair a score from 0
to 3, which demonstrates the degree of similarity between the domains of the
pair. In this work, we represent words with word embedding vectors derived from
word2vec. Then deep neural network models are used to represent texts.
Eventually, we employ concatenation of absolute difference and bit-wise
multiplication and a fully-connected neural network to produce a probability
distribution vector for the score of the pairs. Through a supervised learning
approach, we trained our model on a GPU, and our best model achieved an F1
score of 0.9865.
| 2,019 | Computation and Language |
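
A minimal PyTorch sketch of the matching layer described above: the absolute
difference and the element-wise product of the two text representations
(presumably what the abstract calls bit-wise multiplication) are concatenated
and fed to a fully-connected classifier; all dimensions are illustrative.

    import torch
    import torch.nn as nn

    class PairScorer(nn.Module):
        """Score a text pair from [|u - v| ; u * v] features."""
        def __init__(self, dim=128, n_scores=4):    # similarity scores 0..3
            super().__init__()
            self.classifier = nn.Sequential(
                nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, n_scores))

        def forward(self, u, v):
            features = torch.cat([(u - v).abs(), u * v], dim=-1)
            return torch.softmax(self.classifier(features), dim=-1)  # score distribution

    u, v = torch.randn(8, 128), torch.randn(8, 128)  # toy text representations
    probs = PairScorer()(u, v)                       # (8, 4) probability vectors
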
NSURL-2019 Shared Task 8: Semantic Question Similarity in Arabic | Question semantic similarity (Q2Q) is a challenging task that is very useful
in many NLP applications, such as detecting duplicate questions and question
answering systems. In this paper, we present the results and findings of the
shared task (Semantic Question Similarity in Arabic). The task was organized as
part of the first workshop on NLP Solutions for Under Resourced Languages
(NSURL 2019). The goal of the task is to predict whether two questions are
semantically similar or not, even if they are phrased differently. A total of 9
teams participated in the task. The datasets created for this task are made
publicly available to support further research on Arabic Q2Q.
| 2,019 | Computation and Language |
A Gated Self-attention Memory Network for Answer Selection | Answer selection is an important research problem, with applications in many
areas. Previous deep learning based approaches for the task mainly adopt the
Compare-Aggregate architecture that performs word-level comparison followed by
aggregation. In this work, we take a departure from the popular
Compare-Aggregate architecture, and instead, propose a new gated self-attention
memory network for the task. Combined with a simple transfer learning technique
from a large-scale online corpus, our model outperforms previous methods by a
large margin, achieving new state-of-the-art results on two standard answer
selection datasets: TrecQA and WikiQA.
| 2,019 | Computation and Language |