Titles | Abstracts | Years | Categories |
---|---|---|---|
Modeling the Complexity and Descriptive Adequacy of Construction
Grammars | This paper uses the Minimum Description Length paradigm to model the
complexity of CxGs (operationalized as the encoding size of a grammar)
alongside their descriptive adequacy (operationalized as the encoding size of a
corpus given a grammar). These two quantities are combined to measure the
quality of potential CxGs against unannotated corpora, supporting
discovery-device CxGs for English, Spanish, French, German, and Italian. The
results show (i) that these grammars provide significant generalizations as
measured using compression and (ii) that more complex CxGs with access to
multiple levels of representation provide greater generalizations than
single-representation CxGs.
| 2019 | Computation and Language |
Multi-lingual Dialogue Act Recognition with Deep Learning Methods | This paper deals with multi-lingual dialogue act (DA) recognition. The
proposed approaches are based on deep neural networks and use word2vec
embeddings for word representation. Two multi-lingual models are proposed for
this task. The first approach uses one general model trained on the embeddings
from all available languages. The second method trains the model on a single
pivot language and a linear transformation method is used to project other
languages onto the pivot language. The popular convolutional neural network and
LSTM architectures with different set-ups are used as classifiers. To the best
of our knowledge, this is the first attempt at multi-lingual DA recognition
using neural networks. The multi-lingual models are validated experimentally on
two languages from the Verbmobil corpus.
| 2019 | Computation and Language |
Cross-topic distributional semantic representations via unsupervised
mappings | In traditional Distributional Semantic Models (DSMs) the multiple senses of a
polysemous word are conflated into a single vector space representation. In
this work, we propose a DSM that learns multiple distributional representations
of a word based on different topics. First, a separate DSM is trained for each
topic and then each of the topic-based DSMs is aligned to a common vector
space. Our unsupervised mapping approach is motivated by the hypothesis that
words preserving their relative distances in different topic semantic
sub-spaces constitute robust semantic anchors that define the mappings
between them. Aligned cross-topic representations achieve state-of-the-art
results for the task of contextual word similarity. Furthermore, evaluation on
NLP downstream tasks shows that multiple topic-based embeddings outperform
single-prototype models.
| 2019 | Computation and Language |
Corpora Generation for Grammatical Error Correction | Grammatical Error Correction (GEC) has been recently modeled using the
sequence-to-sequence framework. However, unlike sequence transduction problems
such as machine translation, GEC suffers from the lack of plentiful parallel
data. We describe two approaches for generating large parallel datasets for GEC
using publicly available Wikipedia data. The first method extracts
source-target pairs from Wikipedia edit histories with minimal filtration
heuristics, while the second method introduces noise into Wikipedia sentences
via round-trip translation through bridge languages. Both strategies yield
similarly sized parallel corpora containing around 4B tokens. We employ an
iterative decoding strategy that is tailored to the loosely supervised nature
of our constructed corpora. We demonstrate that neural GEC models trained using
either type of corpora give similar performance. Fine-tuning these models on
the Lang-8 corpus and ensembling allows us to surpass the state of the art on
both the CoNLL-2014 benchmark and the JFLEG task. We provide systematic
analysis that compares the two approaches to data generation and highlights the
effectiveness of ensembling.
| 2019 | Computation and Language |
Adapting RNN Sequence Prediction Model to Multi-label Set Prediction | We present an adaptation of RNN sequence models to the problem of multi-label
classification for text, where the target is a set of labels, not a sequence.
Previous such RNN models define probabilities for sequences but not for sets;
attempts to obtain a set probability are afterthoughts of the network design,
including pre-specifying the label order, or relating the sequence probability
to the set probability in ad hoc ways.
Our formulation is derived from a principled notion of set probability, as
the sum of probabilities of corresponding permutation sequences for the set. We
provide a new training objective that maximizes this set probability, and a new
prediction objective that finds the most probable set on a test document. These
new objectives are theoretically appealing because they give the RNN model
freedom to discover the best label order, which often is the natural one (but
different among documents).
We develop efficient procedures to tackle the computation difficulties
involved in training and prediction. Experiments on benchmark datasets
demonstrate that we outperform state-of-the-art methods for this task.
| 2019 | Computation and Language |
wav2vec: Unsupervised Pre-training for Speech Recognition | We explore unsupervised pre-training for speech recognition by learning
representations of raw audio. wav2vec is trained on large amounts of unlabeled
audio data and the resulting representations are then used to improve acoustic
model training. We pre-train a simple multi-layer convolutional neural network
optimized via a noise contrastive binary classification task. Our experiments
on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up
to 36% when only a few hours of transcribed data is available. Our approach
achieves 2.43% WER on the nov92 test set. This outperforms Deep Speech 2, the
best reported character-based system in the literature, while using two orders
of magnitude less labeled training data.
| 2019 | Computation and Language |
Crowdsourcing Lightweight Pyramids for Manual Summary Evaluation | Conducting a manual evaluation is considered an essential part of summary
evaluation methodology. Traditionally, the Pyramid protocol, which exhaustively
compares system summaries to references, has been perceived as very reliable,
providing objective scores. Yet, due to the high cost of the Pyramid method and
the required expertise, researchers resorted to cheaper and less thorough
manual evaluation methods, such as Responsiveness and pairwise comparison,
attainable via crowdsourcing. We revisit the Pyramid approach, proposing a
lightweight sampling-based version that is crowdsourcable. We analyze the
performance of our method in comparison to original expert-based Pyramid
evaluations, showing higher correlation relative to the common Responsiveness
method. We release our crowdsourced Summary-Content-Units, along with all
crowdsourcing scripts, for future evaluations.
| 2019 | Computation and Language |
Strong Baselines for Complex Word Identification across Multiple
Languages | Complex Word Identification (CWI) is the task of identifying which words or
phrases in a sentence are difficult to understand by a target audience. The
latest CWI Shared Task released data for two settings: monolingual (i.e. train
and test in the same language) and cross-lingual (i.e. test in a language not
seen during training). The best monolingual models relied on language-dependent
features, which do not generalise in the cross-lingual setting, while the best
cross-lingual model used neural networks with multi-task learning. In this
paper, we present monolingual and cross-lingual CWI models that perform as well
as (or better than) most models submitted to the latest CWI Shared Task. We
show that carefully selected features and simple learning models can achieve
state-of-the-art performance, and result in strong baselines for future
development in this area. Finally, we discuss how inconsistencies in the
annotation of the data can explain some of the results obtained.
| 2019 | Computation and Language |
Modeling Interpersonal Linguistic Coordination in Conversations using
Word Mover's Distance | Linguistic coordination is a well-established phenomenon in spoken
conversations and often associated with positive social behaviors and outcomes.
While there have been many attempts to measure lexical coordination or
entrainment in the literature, only a few have explored coordination in syntactic
or semantic space. In this work, we attempt to combine these different aspects
of coordination into a single measure by leveraging distances in a neural word
representation space. In particular, we adopt the recently proposed Word
Mover's Distance with word2vec embeddings and extend it to measure the
dissimilarity in language used in multiple consecutive speaker turns. To
validate our approach, we apply this measure for two case studies in the
clinical psychology domain. We find that our proposed measure is correlated
with the therapist's empathy towards their patient in Motivational Interviewing
and with affective behaviors in Couples Therapy. In both case studies, our
proposed metric exhibits higher correlation than previously proposed measures.
When applied to the couples with relationship improvement, we also notice a
significant decrease in the proposed measure over the course of therapy,
indicating higher linguistic coordination.
| 2019 | Computation and Language |
Direct speech-to-speech translation with a sequence-to-sequence model | We present an attention-based sequence-to-sequence neural network which can
directly translate speech from one language into speech in another language,
without relying on an intermediate text representation. The network is trained
end-to-end, learning to map speech spectrograms into target spectrograms in
another language, corresponding to the translated content (in a different
canonical voice). We further demonstrate the ability to synthesize translated
speech using the voice of the source speaker. We conduct experiments on two
Spanish-to-English speech translation datasets, and find that the proposed
model slightly underperforms a baseline cascade of a direct speech-to-text
translation model and a text-to-speech synthesis model, demonstrating the
feasibility of the approach on this very challenging task.
| 2019 | Computation and Language |
Evaluating the Representational Hub of Language and Vision Models | The multimodal models used in the emerging field at the intersection of
computational linguistics and computer vision implement the bottom-up
processing of the 'Hub and Spoke' architecture proposed in cognitive science to
represent how the brain processes and combines multi-sensory inputs. In
particular, the Hub is implemented as a neural network encoder. We investigate
the effect on this encoder of various vision-and-language tasks proposed in the
literature: visual question answering, visual reference resolution, and
visually grounded dialogue. To measure the quality of the representations
learned by the encoder, we use two kinds of analyses. First, we evaluate the
encoder pre-trained on the different vision-and-language tasks on an existing
diagnostic task designed to assess multimodal semantic understanding. Second,
we carry out a battery of analyses aimed at studying how the encoder merges and
exploits the two modalities.
| 2019 | Computation and Language |
Building a mixed-lingual neural TTS system with only monolingual data | When deploying a Chinese neural text-to-speech (TTS) synthesis system, one of
the challenges is to synthesize Chinese utterances with English phrases or
words embedded. This paper looks into the problem in the encoder-decoder
framework when only monolingual data from a target speaker is available.
Specifically, we view the problem from two aspects: speaker consistency within
an utterance and naturalness. We start the investigation with an Average Voice
Model which is built from multi-speaker monolingual data, i.e. Mandarin and
English data. On the basis of that, we look into speaker embedding for speaker
consistency within an utterance and phoneme embedding for naturalness and
intelligibility and study the choice of data for model training. We report the
findings and discuss the challenges of building a mixed-lingual TTS system with
only monolingual data.
| 2019 | Computation and Language |
Adapting Sequence to Sequence models for Text Normalization in Social
Media | Social media offer an abundant source of valuable raw data; however, informal
writing can quickly become a bottleneck for many natural language processing
(NLP) tasks. Off-the-shelf tools are usually trained on formal text and cannot
explicitly handle noise found in short online posts. Moreover, the variety of
frequently occurring linguistic variations presents several challenges, even
for humans who might not be able to comprehend the meaning of such posts,
especially when they contain slang and abbreviations. Text Normalization aims
to transform online user-generated text to a canonical form. Current text
normalization systems rely on string or phonetic similarity and classification
models that work in a local fashion. We argue that processing contextual
information is crucial for this task and introduce a social media text
normalization hybrid word-character attention-based encoder-decoder model that
can serve as a pre-processing step for NLP applications to adapt to noisy text
in social media. Our character-based component is trained on synthetic
adversarial examples that are designed to capture errors commonly found in
online user-generated text. Experiments show that our model surpasses neural
architectures designed for text normalization and achieves comparable
performance with state-of-the-art related work.
| 2019 | Computation and Language |
A Crowdsourced Frame Disambiguation Corpus with Ambiguity | We present a resource for the task of FrameNet semantic frame disambiguation
of over 5,000 word-sentence pairs from the Wikipedia corpus. The annotations
were collected using a novel crowdsourcing approach with multiple workers per
sentence to capture inter-annotator disagreement. In contrast to the typical
approach of attributing the best single frame to each word, we provide a list
of frames with disagreement-based scores that express the confidence with which
each frame applies to the word. This is based on the idea that inter-annotator
disagreement is at least partly caused by ambiguity that is inherent to the
text and frames. We have found many examples where the semantics of individual
frames overlap sufficiently to make them acceptable alternatives for
interpreting a sentence. We have argued that ignoring this ambiguity creates an
overly arbitrary target for training and evaluating natural language processing
systems - if humans cannot agree, why would we expect the correct answer from a
machine to be any different? To process this data we also utilized an expanded
lemma-set provided by the Framester system, which merges FN with WordNet to
enhance coverage. Our dataset includes annotations of 1,000 sentence-word pairs
whose lemmas are not part of FN. Finally we present metrics for evaluating
frame disambiguation systems that account for ambiguity.
| 2020 | Computation and Language |
Political Text Scaling Meets Computational Semantics | During the last fifteen years, automatic text scaling has become one of the
key tools of the Text as Data community in political science. Prominent text
scaling algorithms, however, rely on the assumption that latent positions can
be captured just by leveraging the information about word frequencies in
documents under study. We challenge this traditional view and present a new,
semantically aware text scaling algorithm, SemScale, which combines recent
developments in the area of computational linguistics with unsupervised
graph-based clustering. We conduct an extensive quantitative analysis over a
collection of speeches from the European Parliament in five different languages
and from two different legislative terms, and show that a scaling approach
relying on semantic document representations is often better at capturing known
underlying political dimensions than the established frequency-based (i.e.,
symbolic) scaling method. We further validate our findings through a series of
experiments focused on text preprocessing and feature selection, document
representation, scaling of party manifestos, and a supervised extension of our
algorithm. To catalyze further research on this new branch of text scaling
methods, we release a Python implementation of SemScale with all included data
sets and evaluation procedures.
| 2021 | Computation and Language |
IIT (BHU) Varanasi at MSR-SRST 2018: A Language Model Based Approach for
Natural Language Generation | This paper describes our submission system for the Shallow Track of Surface
Realization Shared Task 2018 (SRST'18). The task was to convert genuine UD
structures, from which word order information had been removed and the tokens
had been lemmatized, into their correct sentential form. We divide the problem
statement into two parts, word reinflection and correct word order prediction.
For the first sub-problem, we use a Long Short Term Memory based
Encoder-Decoder approach. For the second sub-problem, we present a Language
Model (LM) based approach. We apply two different sub-approaches in the LM
Based approach and the combined result of these two approaches is considered as
the final output of the system.
| 2018 | Computation and Language |
CITE: A Corpus of Image-Text Discourse Relations | This paper presents a novel crowd-sourced resource for multimodal discourse:
our resource characterizes inferences in image-text contexts in the domain of
cooking recipes in the form of coherence relations. Like previous corpora
annotating discourse structure between text arguments, such as the Penn
Discourse Treebank, our new corpus aids in establishing a better understanding
of natural communication and common-sense reasoning, while our findings have
implications for a wide range of applications, such as understanding and
generation of multimodal documents.
| 2019 | Computation and Language |
Legal Area Classification: A Comparative Study of Text Classifiers on
Singapore Supreme Court Judgments | This paper conducts a comparative study on the performance of various machine
learning ("ML") approaches for classifying judgments into legal areas. Using
a novel dataset of 6,227 Singapore Supreme Court judgments, we investigate how
state-of-the-art NLP methods compare against traditional statistical models
when applied to a legal corpus that comprised few but lengthy documents. All
approaches tested, including topic model, word embedding, and language
model-based classifiers, performed well with as little as a few hundred
judgments. However, more work needs to be done to optimize state-of-the-art
methods for the legal domain.
| 2019 | Computation and Language |
A Repository of Conversational Datasets | Progress in Machine Learning is often driven by the availability of large
datasets, and consistent evaluation metrics for comparing modeling approaches.
To this end, we present a repository of conversational datasets consisting of
hundreds of millions of examples, and a standardised evaluation procedure for
conversational response selection models using '1-of-100 accuracy'. The
repository contains scripts that allow researchers to reproduce the standard
datasets, or to adapt the pre-processing and data filtering steps to their
needs. We introduce and evaluate several competitive baselines for
conversational response selection, whose implementations are shared in the
repository, as well as a neural encoder model that is trained on the entire
training set.
| 2019 | Computation and Language |
Improving Distantly-supervised Entity Typing with Compact Latent Space
Clustering | Recently, distant supervision has gained great success on Fine-grained Entity
Typing (FET). Despite its efficiency in reducing manual labeling efforts, it
also brings the challenge of dealing with false entity type labels, as distant
supervision assigns labels in a context-agnostic manner. Existing works
alleviated this issue with partial-label loss, but usually suffer from
confirmation bias, which means the classifier fits a pseudo data distribution
given by itself. In this work, we propose to regularize distantly supervised
models with Compact Latent Space Clustering (CLSC) to bypass this problem while
still effectively utilizing noisy data. Our proposed method first dynamically
constructs a similarity graph of different entity mentions and then infers the
labels of noisy instances via label propagation. Based on the inferred labels,
mention embeddings are updated accordingly to encourage entity mentions with
close semantics to form a compact cluster in the embedding space, thus leading to
better classification performance. Extensive experiments on standard benchmarks
show that our CLSC model consistently outperforms state-of-the-art distantly
supervised entity typing systems by a significant margin.
| 2019 | Computation and Language |
End-to-end Text-to-speech for Low-resource Languages by Cross-Lingual
Transfer Learning | End-to-end text-to-speech (TTS) has shown great success on large quantities
of paired text plus speech data. However, laborious data collection remains
difficult for at least 95% of the world's languages, which hinders the
development of TTS in different languages. In this paper, we aim to build TTS
systems for such low-resource (target) languages where only very limited paired
data are available. We show such TTS can be effectively constructed by
transferring knowledge from a high-resource (source) language. Since the model
trained on the source language cannot be directly applied to the target language
due to the input space mismatch, we propose a method to learn a mapping between
source and
target linguistic symbols. Benefiting from this learned mapping, pronunciation
information can be preserved throughout the transferring procedure. Preliminary
experiments show that we only need around 15 minutes of paired data to obtain a
relatively good TTS system. Furthermore, analytic studies demonstrate that the
automatically discovered mapping correlates well with phonetic expertise.
| 2019 | Computation and Language |
Data Augmentation for BERT Fine-Tuning in Open-Domain Question Answering | Recently, a simple combination of passage retrieval using off-the-shelf IR
techniques and a BERT reader was found to be very effective for question
answering directly on Wikipedia, yielding a large improvement over the previous
state of the art on a standard benchmark dataset. In this paper, we present a
data augmentation technique using distant supervision that exploits positive as
well as negative examples. We apply a stage-wise approach to fine-tuning BERT
on multiple datasets, starting with data that is "furthest" from the test data
and ending with the "closest". Experimental results show large gains in
effectiveness over previous approaches on English QA datasets, and we establish
new baselines on two recent Chinese QA datasets.
| 2019 | Computation and Language |
From News to Medical: Cross-domain Discourse Segmentation | The first step in discourse analysis involves dividing a text into segments.
We annotate the first high-quality small-scale medical corpus in English with
discourse segments and analyze how well news-trained segmenters perform on this
domain. While we find an expected drop in performance, the nature of the
segmentation errors suggests some problems can be addressed earlier in the
pipeline, while others would require expanding the corpus to a trainable size
to learn the nuances of the medical domain.
| 2019 | Computation and Language |
Rare Words: A Major Problem for Contextualized Embeddings And How to Fix
it by Attentive Mimicking | Pretraining deep neural network architectures with a language modeling
objective has brought large improvements for many natural language processing
tasks. Using BERT, one such recently proposed architecture, as an example, we
demonstrate that despite being trained on huge amounts of data, deep language
models still struggle to understand rare words. To fix this problem, we adapt
Attentive Mimicking, a method that was designed to explicitly learn embeddings
for rare words, to deep language models. In order to make this possible, we
introduce one-token approximation, a procedure that enables us to use Attentive
Mimicking even when the underlying language model uses subword-based
tokenization, i.e., it does not assign embeddings to all words. To evaluate our
method, we create a novel dataset that tests the ability of language models to
capture semantic properties of words without any task-specific fine-tuning.
Using this dataset, we show that adding our adapted version of Attentive
Mimicking to BERT does indeed substantially improve its understanding of rare
words.
| 2019 | Computation and Language |
Distributed representation of multi-sense words: A loss-driven approach | Word2Vec's Skip Gram model is the current state-of-the-art approach for
estimating the distributed representation of words. However, it assumes a
single vector per word, which is not well-suited for representing words that
have multiple senses. This work presents LDMI, a new model for estimating
distributional representations of words. LDMI relies on the idea that, if a
word carries multiple senses, then having a different representation for each
of its senses should lead to a lower loss associated with predicting its
co-occurring words, as opposed to the case when a single vector representation
is used for all the senses. After identifying the multi-sense words, LDMI
clusters the occurrences of these words to assign a sense to each occurrence.
Experiments on the contextual word similarity task show that LDMI leads to
better performance than competing approaches.
| 2018 | Computation and Language |
Text segmentation on multilabel documents: A distant-supervised approach | Segmenting text into semantically coherent segments is an important task with
applications in information retrieval and text summarization. Developing
accurate topical segmentation requires the availability of training data with
ground truth information at the segment level. However, generating such labeled
datasets, especially for applications in which the meaning of the labels is
user-defined, is expensive and time-consuming. In this paper, we develop an
approach that instead of using segment-level ground truth information, it
instead uses the set of labels that are associated with a document and are
easier to obtain as the training data essentially corresponds to a multilabel
dataset. Our method, which can be thought of as an instance of distant
supervision, improves upon the previous approaches by exploiting the fact that
consecutive sentences in a document tend to talk about the same topic, and
hence, probably belong to the same class. Experiments on the text segmentation
task on a variety of datasets show that the segmentation produced by our method
beats the competing approaches on four out of five datasets and performs on par
on the fifth. On the multilabel text classification task, our method performs
on par with the competing approaches, while requiring significantly less time
to estimate.
| 2018 | Computation and Language |
No Adjective Ordering Mystery, and No Raven Paradox, Just an Ontological
Mishap | In the concluding remarks of Ontological Promiscuity, Hobbs (1985) made what
we believe to be a very insightful observation: given that semantics is an
attempt at specifying the relation between language and the world, if "one can
assume a theory of the world that is isomorphic to the way we talk about it ...
then semantics becomes nearly trivial". But how exactly can we rectify our
logical formalisms so that semantics, an endeavor that has occupied the most
penetrating minds for over two centuries, can become (nearly) trivial, and what
exactly does it mean to assume a theory of the world in our semantics? In this
paper we hope to provide answers for both questions. First, we believe that a
commonsense theory of the world can (and should) be embedded in our semantic
formalisms resulting in a logical semantics grounded in commonsense
metaphysics. Moreover, we believe the first step to accomplishing this vision
is rectifying what we think was a crucial oversight in logical semantics,
namely the failure to distinguish between two fundamentally different types of
concepts: (i) ontological concepts, that correspond to what Cocchiarella (2001)
calls first-intension concepts and are types in a strongly-typed ontology; and
(ii) logical concepts (or second intension concepts), that are predicates
corresponding to properties of (and relations between) objects of various
ontological types. In such a framework, which we will henceforth refer to as
ontologik, it will be shown how type unification and other type operations can
be used to account for the `missing text phenomenon' (MTP) (see Saba, 2019a)
that is at the heart of most challenges in the semantics of natural language,
by uncovering the significant amount of missing text that is never explicitly
stated in everyday discourse, but is often implicitly assumed as shared
background knowledge.
| 2019 | Computation and Language |
Pun Generation with Surprise | We tackle the problem of generating a pun sentence given a pair of homophones
(e.g., "died" and "dyed"). Supervised text generation is inappropriate due to
the lack of a large corpus of puns, and even if such a corpus existed, mimicry
is at odds with generating novel content. In this paper, we propose an
unsupervised approach to pun generation using a corpus of unhumorous text and
what we call the local-global surprisal principle: we posit that in a pun
sentence, there is a strong association between the pun word (e.g., "dyed") and
the distant context, as well as a strong association between the alternative
word (e.g., "died") and the immediate context. This contrast creates surprise
and thus humor. We instantiate this principle for pun generation in two ways:
(i) as a measure based on the ratio of probabilities under a language model,
and (ii) as a retrieve-and-edit approach based on words suggested by a skip-gram
model. Human evaluation shows that our retrieve-and-edit approach generates
puns successfully 31% of the time, tripling the success rate of a neural
generation baseline.
| 2019 | Computation and Language |
Semantic query-by-example speech search using visual grounding | A number of recent studies have started to investigate how speech systems can
be trained on untranscribed speech by leveraging accompanying images at
training time. Examples of tasks include keyword prediction and within- and
across-mode retrieval. Here we consider how such models can be used for
query-by-example (QbE) search, the task of retrieving utterances relevant to a
given spoken query. We are particularly interested in semantic QbE, where the
task is not only to retrieve utterances containing exact instances of the
query, but also utterances whose meaning is relevant to the query. We follow a
segmental QbE approach where variable-duration speech segments (queries, search
utterances) are mapped to fixed-dimensional embedding vectors. We show that a
QbE system using an embedding function trained on visually grounded speech data
outperforms a purely acoustic QbE system in terms of both exact and semantic
retrieval performance.
| 2019 | Computation and Language |
Improving Human Text Comprehension through Semi-Markov CRF-based Neural
Section Title Generation | Titles of short sections within long documents support readers by guiding
their focus towards relevant passages and by providing anchor-points that help
to understand the progression of the document. The positive effects of section
titles are even more pronounced when measured on readers with less developed
reading abilities, for example in communities with limited labeled text
resources.
We, therefore, aim to develop techniques to generate section titles in
low-resource environments. In particular, we present an extractive pipeline for
section title generation by first selecting the most salient sentence and then
applying deletion-based compression. Our compression approach is based on a
Semi-Markov Conditional Random Field that leverages unsupervised
word-representations such as ELMo or BERT, eliminating the need for a complex
encoder-decoder architecture. The results show that this approach leads to
competitive performance with sequence-to-sequence models in high-resource
settings, while strongly outperforming them in low-resource settings. In a human-subject study
across subjects with varying reading abilities, we find that our section titles
improve the speed of completing comprehension tasks while retaining similar
accuracy.
| 2019 | Computation and Language |
Attention-Passing Models for Robust and Data-Efficient End-to-End Speech
Translation | Speech translation has traditionally been approached through cascaded models
consisting of a speech recognizer trained on a corpus of transcribed speech,
and a machine translation system trained on parallel texts. Several recent
works have shown the feasibility of collapsing the cascade into a single,
direct model that can be trained in an end-to-end fashion on a corpus of
translated speech. However, experiments are inconclusive on whether the cascade
or the direct model is stronger, and have only been conducted under the
unrealistic assumption that both are trained on equal amounts of data, ignoring
other available speech recognition and machine translation corpora.
In this paper, we demonstrate that direct speech translation models require
more data to perform well than cascaded models, and while they allow including
auxiliary data through multi-task training, they are poor at exploiting such
data, putting them at a severe disadvantage. As a remedy, we propose the use of
end-to-end trainable models with two attention mechanisms, the first
establishing source speech to source text alignments, the second modeling
source to target text alignment. We show that such models naturally decompose
into multi-task-trainable recognition and translation tasks and propose an
attention-passing technique that alleviates error propagation issues in a
previous formulation of a model with two attention stages. Our proposed model
outperforms all examined baselines and is able to exploit auxiliary training
data much more effectively than direct attentional models.
| 2019 | Computation and Language |
Latent Code and Text-based Generative Adversarial Networks for Soft-text
Generation | Text generation with generative adversarial networks (GANs) can be divided
into the text-based and code-based categories according to the type of signals
used for discrimination. In this work, we introduce a novel text-based approach
called Soft-GAN to effectively exploit GAN setup for text generation. We
demonstrate how autoencoders (AEs) can be used for providing a continuous
representation of sentences, which we will refer to as soft-text. This soft
representation will be used in GAN discrimination to synthesize similar
soft-texts. We also propose hybrid latent code and text-based GAN (LATEXT-GAN)
approaches with one or more discriminators, in which a combination of the
latent code and the soft-text is used for GAN discriminations. We perform a
number of subjective and objective experiments on two well-known datasets (SNLI
and Image COCO) to validate our techniques. We discuss the results using
several evaluation metrics and show that the proposed techniques outperform the
traditional GAN-based text-generation methods.
| 2019 | Computation and Language |
Natural Language Semantics With Pictures: Some Language & Vision
Datasets and Potential Uses for Computational Semantics | Propelling, and propelled by, the "deep learning revolution", recent years
have seen the introduction of ever larger corpora of images annotated with
natural language expressions. We survey some of these corpora, taking a
perspective that reverses the usual directionality, as it were, by viewing the
images as semantic annotation of the natural language expressions. We discuss
datasets that can be derived from the corpora, and tasks of potential interest
for computational semanticists that can be defined on those. In this, we make
use of relations provided by the corpora (namely, the link between expression
and image, and that between two expressions linked to the same image) and
relations that we can add (similarity relations between expressions, or between
images). Specifically, we show that in this way we can create data that can be
used to learn and evaluate lexical and compositional grounded semantics, and we
show that the "linked to same image" relation tracks a semantic implication
relation that is recognisable to annotators even in the absence of the linking
image as evidence. Finally, as an example of possible benefits of this
approach, we show that an exemplar-model-based approach to implication beats a
(simple) distributional space-based one on some derived datasets, while lending
itself to explainability.
| 2019 | Computation and Language |
Multi-Head Multi-Layer Attention to Deep Language Representations for
Grammatical Error Detection | It is known that a deep neural network model pre-trained with large-scale
data greatly improves the accuracy of various tasks, especially when there are
resource constraints. However, the information needed to solve a given task can
vary, and simply using the output of the final layer is not necessarily
sufficient. Moreover, to our knowledge, exploiting large language
representation models to detect grammatical errors has not yet been studied. In
this work, we investigate the effect of utilizing information not only from the
final layer but also from intermediate layers of a pre-trained language
representation model to detect grammatical errors. We propose a multi-head
multi-layer attention model that determines the appropriate layers in
Bidirectional Encoder Representation from Transformers (BERT). The proposed
method achieved the best scores on three datasets for grammatical error
detection tasks, outperforming the current state-of-the-art method by 6.0
points on FCE, 8.2 points on CoNLL14, and 12.2 points on JFLEG in terms of
F_0.5. We also demonstrate that by using multi-head multi-layer attention, our
model can exploit a broader range of information for each token in a sentence
than a model that uses only the final layer's information.
| 2019 | Computation and Language |
Learning Twitter User Sentiments on Climate Change with Limited Labeled
Data | While it is well-documented that climate change accepters and deniers have
become increasingly polarized in the United States over time, there has been no
large-scale examination of whether these individuals are prone to changing
their opinions as a result of natural external occurrences. On the
sub-population of Twitter users, we examine whether climate change sentiment
changes in response to five separate natural disasters occurring in the U.S. in
2018. We begin by showing that relevant tweets can be classified with over 75%
accuracy as either accepting or denying climate change when using our
methodology to compensate for limited labeled data; results are robust across
several machine learning models and yield geographic-level results in line with
prior research. We then apply RNNs to conduct a cohort-level analysis showing
that the 2018 hurricanes yielded a statistically significant increase in
average tweet sentiment affirming climate change. However, this effect does not
hold for the 2018 blizzard and wildfires studied, implying that Twitter users'
opinions on climate change are fairly ingrained on this subset of natural
disasters.
| 2020 | Computation and Language |
Something's Brewing! Early Prediction of Controversy-causing Posts from
Discussion Features | Controversial posts are those that split the preferences of a community,
receiving both significant positive and significant negative feedback. Our
inclusion of the word "community" here is deliberate: what is controversial to
some audiences may not be so to others. Using data from several different
communities on reddit.com, we predict the ultimate controversiality of posts,
leveraging features drawn from both the textual content and the tree structure
of the early comments that initiate the discussion. We find that even when only
a handful of comments are available, e.g., the first 5 comments made within 15
minutes of the original post, discussion features often add predictive capacity
to strong content- and rate-only baselines. Additional experiments on domain
transfer suggest that conversation-structure features often generalize to other
communities better than conversation-content features do.
| 2019 | Computation and Language |
Be Concise and Precise: Synthesizing Open-Domain Entity Descriptions
from Facts | Despite being vast repositories of factual information, cross-domain
knowledge graphs, such as Wikidata and the Google Knowledge Graph, only
sparsely provide short synoptic descriptions for entities. Such descriptions
that briefly identify the most discernible features of an entity provide
readers with a near-instantaneous understanding of what kind of entity they are
being presented with. They can also aid in tasks such as named entity
disambiguation, ontological type determination, and answering entity queries.
Given the rapidly increasing numbers of entities in knowledge graphs, a fully
automated synthesis of succinct textual descriptions from underlying factual
information is essential. To this end, we propose a novel fact-to-sequence
encoder-decoder model with a suitable copy mechanism to generate concise and
precise textual descriptions of entities. In an in-depth evaluation, we
demonstrate that our method significantly outperforms state-of-the-art
alternatives.
| 2019 | Computation and Language |
Positional Encoding to Control Output Sequence Length | Neural encoder-decoder models have been successful in natural language
generation tasks. However, real applications of abstractive summarization must
consider the additional constraint that a generated summary should not exceed a
desired length. In this paper, we propose a simple but effective extension of
sinusoidal positional encoding (Vaswani et al., 2017) that enables a neural
encoder-decoder model to preserve the length constraint. Unlike previous
studies that learn embeddings representing each length, the proposed
method can generate text of any length even if the target length is not
present in training data. The experimental results show that the proposed
method can not only control the generation length but also improve the ROUGE
scores.
| 2019 | Computation and Language |
Doc2EDAG: An End-to-End Document-level Framework for Chinese Financial
Event Extraction | Most existing event extraction (EE) methods merely extract event arguments
within the sentence scope. However, such sentence-level EE methods struggle to
handle soaring amounts of documents from emerging applications, such as
finance, legislation, health, etc., where event arguments always scatter across
different sentences, and even multiple such event mentions frequently co-exist
in the same document. To address these challenges, we propose a novel
end-to-end model, Doc2EDAG, which can generate an entity-based directed acyclic
graph to fulfill document-level EE (DEE) effectively. Moreover, we
reformalize the DEE task with a no-trigger-words design to ease
document-level event labeling. To demonstrate the effectiveness of Doc2EDAG, we
build a large-scale real-world dataset consisting of Chinese financial
announcements with the challenges mentioned above. Extensive experiments with
comprehensive analyses illustrate the superiority of Doc2EDAG over
state-of-the-art methods. Data and code can be found at
https://github.com/dolphin-zs/Doc2EDAG.
| 2019 | Computation and Language |
Unsupervised acoustic unit discovery for speech synthesis using discrete
latent-variable neural networks | For our submission to the ZeroSpeech 2019 challenge, we apply discrete
latent-variable neural networks to unlabelled speech and use the discovered
units for speech synthesis. Unsupervised discrete subword modelling could be
useful for studies of phonetic category learning in infants or in low-resource
speech technology requiring symbolic input. We use an autoencoder (AE)
architecture with intermediate discretisation. We decouple acoustic unit
discovery from speaker modelling by conditioning the AE's decoder on the
training speaker identity. At test time, unit discovery is performed on speech
from an unseen speaker, followed by unit decoding conditioned on a known target
speaker to obtain reconstructed filterbanks. This output is fed to a neural
vocoder to synthesise speech in the target speaker's voice. For discretisation,
categorical variational autoencoders (CatVAEs), vector-quantised VAEs (VQ-VAEs)
and straight-through estimation are compared at different compression levels on
two languages. Our final model uses convolutional encoding, VQ-VAE
discretisation, deconvolutional decoding and an FFTNet vocoder. We show that
decoupled speaker conditioning intrinsically improves discrete acoustic
representations, yielding competitive synthesis quality compared to the
challenge baseline.
| 2019 | Computation and Language |
Causality Extraction based on Self-Attentive BiLSTM-CRF with Transferred
Embeddings | Causality extraction from natural language texts is a challenging open
problem in artificial intelligence. Existing methods utilize patterns,
constraints, and machine learning techniques to extract causality, heavily
depending on domain knowledge and requiring considerable human effort and time
for feature engineering. In this paper, we formulate causality extraction as a
sequence labeling problem based on a novel causality tagging scheme. On this
basis, we propose a neural causality extractor with the BiLSTM-CRF model as the
backbone, named SCITE (Self-attentive BiLSTM-CRF wIth Transferred Embeddings),
which can directly extract cause and effect without extracting candidate causal
pairs and identifying their relations separately. To address the problem of
data insufficiency, we transfer contextual string embeddings, also known as
Flair embeddings, which are trained on a large corpus, to our task. In addition,
to improve the performance of causality extraction, we introduce a multihead
self-attention mechanism into SCITE to learn the dependencies between causal
words. We evaluate our method on a public dataset, and experimental results
demonstrate that our method achieves significant and consistent improvement
compared to baselines.
| 2021 | Computation and Language |
Subjective Assessment of Text Complexity: A Dataset for German Language | This paper presents TextComplexityDE, a dataset consisting of 1000 sentences
in German taken from 23 Wikipedia articles in 3 different
article genres, to be used for developing text-complexity predictor models and
automatic text simplification in German. The dataset includes
subjective assessments of different text-complexity aspects provided by German
learners at levels A and B. In addition, it contains manual simplifications of
250 of those sentences provided by native speakers and subjective assessments of
the simplified sentences by participants from the target group. The subjective
ratings were collected using both laboratory studies and a crowdsourcing
approach.
| 2019 | Computation and Language |
Sameness Entices, but Novelty Enchants in Fanfiction Online | Cultural evolution is driven by how we choose what to consume and share with
others. A common belief is that the cultural artifacts that succeed are ones
that balance novelty and conventionality. This balance theory suggests that
people prefer works that are familiar, but not so familiar as to be boring;
novel, but not so novel as to violate the expectations of their genre. We test
this idea using a large dataset of fanfiction. We apply a multiple regression
model and a generalized additive model to examine how the recognition a work
receives varies with its novelty, estimated through a Latent Dirichlet
Allocation topic model, in the context of existing works. We find the opposite
pattern of what the balance theory predicts: overall success
declines almost monotonically with novelty and exhibits a U-shaped, instead of
an inverse U-shaped, curve. This puzzle is resolved by teasing out two
competing forces: sameness attracts the masses whereas novelty provides
enjoyment. Taken together, even though the balance theory holds in terms of
expressed enjoyment, overall success can show the opposite pattern due to
the dominant role of sameness in attracting the audience. Under these two forces,
cultural evolution may have to work against inertia, the
appetite for consuming the familiar, and may resemble a
punctuated equilibrium, marked by occasional leaps.
| 2023 | Computation and Language |
Unsupervised Discovery of Multimodal Links in Multi-image,
Multi-sentence Documents | Images and text co-occur constantly on the web, but explicit links between
images and sentences (or other intra-document textual units) are often not
present. We present algorithms that discover image-sentence relationships
without relying on explicit multimodal annotation in training. We experiment on
seven datasets of varying difficulty, ranging from documents consisting of
groups of images captioned post hoc by crowdworkers to naturally-occurring
user-generated multimodal documents. We find that a structured training
objective based on identifying whether collections of images and sentences
co-occur in documents can suffice to predict links between specific sentences
and specific images within the same document at test time.
| 2019 | Computation and Language |
UTFPR at SemEval-2019 Task 5: Hate Speech Identification with Recurrent
Neural Networks | In this paper we revisit the problem of automatically identifying hate speech
in posts from social media. We approach the task using a system based on
minimalistic compositional Recurrent Neural Networks (RNN). We tested our
approach on the SemEval-2019 Task 5: Multilingual Detection of Hate Speech
Against Immigrants and Women in Twitter (HatEval) shared task dataset. The
dataset made available by the HatEval organizers contained English and Spanish
posts retrieved from Twitter annotated with respect to the presence of hateful
content and its target. In this paper we present the results obtained by our
system in comparison to the other entries in the shared task. Our system
achieved competitive performance ranking 7th in sub-task A out of 62 systems in
the English track.
| 2019 | Computation and Language |
Mitigating the Impact of Speech Recognition Errors on Spoken Question
Answering by Adversarial Domain Adaptation | Spoken question answering (SQA) is challenging due to complex reasoning on
top of the spoken documents. Recent studies have also shown the
catastrophic impact of automatic speech recognition (ASR) errors on SQA.
Therefore, this work proposes to mitigate the ASR errors by aligning the
mismatch between ASR hypotheses and their corresponding reference
transcriptions. An adversarial model is applied to this domain adaptation task,
which forces the model to learn domain-invariant features that the QA model can
effectively utilize in order to improve the SQA results. The experiments
successfully demonstrate the effectiveness of our proposed model, and the
results are better than the previous best model by 2% EM score.
| 2019 | Computation and Language |
Semantic Characteristics of Schizophrenic Speech | Natural language processing tools are used to automatically detect
disturbances in transcribed speech of schizophrenia inpatients who speak
Hebrew. We measure topic mutation over time and show that controls maintain
more cohesive speech than inpatients. We also examine differences in how
inpatients and controls use adjectives and adverbs to describe content words
and show that the ones used by controls are more common than those of
inpatients. We provide experimental results and show their potential for
automatically detecting schizophrenia in patients by means only of their speech
patterns.
| 2019 | Computation and Language |
A Systematic Study of Leveraging Subword Information for Learning Word
Representations | The use of subword-level information (e.g., characters, character n-grams,
morphemes) has become ubiquitous in modern word representation learning. Its
importance is attested especially for morphologically rich languages which
generate a large number of rare words. Despite a steadily increasing interest
in such subword-informed word representations, their systematic comparative
analysis across typologically diverse languages and different tasks is still
missing. In this work, we deliver such a study focusing on the variation of two
crucial components required for subword-level integration into word
representation models: 1) segmentation of words into subword units, and 2)
subword composition functions to obtain final word representations. We propose
a general framework for learning subword-informed word representations that
allows for easy experimentation with different segmentation and composition
components, also including more advanced techniques based on position
embeddings and self-attention. Using the unified framework, we run experiments
over a large number of subword-informed word representation configurations (60
in total) on 3 tasks (general and rare word similarity, dependency parsing,
fine-grained entity typing) for 5 languages representing 3 language types. Our
main results clearly indicate that there is no "one-size-fits-all"
configuration, as performance is both language- and task-dependent. We also
show that configurations based on unsupervised segmentation (e.g., BPE,
Morfessor) are sometimes comparable to or even outperform the ones based on
supervised word segmentation.
| 2019 | Computation and Language |
Posterior-regularized REINFORCE for Instance Selection in Distant
Supervision | This paper provides a new way to improve the efficiency of the REINFORCE
training process. We apply it to the task of instance selection in distant
supervision. Modeling the instance selection in one bag as a sequential
decision process, a reinforcement learning agent is trained to determine
whether an instance is valuable or not and construct a new bag with less noisy
instances. However, unbiased methods such as REINFORCE usually take a long
time to train. This paper adopts posterior regularization (PR) to integrate
some domain-specific rules in instance selection using REINFORCE. As the
experiment results show, this method remarkably improves the performance of the
relation classifier trained on the cleaned distant supervision dataset as well as
the efficiency of the REINFORCE training.
| 2019 | Computation and Language |
Reinforcement Learning Based Emotional Editing Constraint Conversation
Generation | In recent years, the generation of conversation content based on deep neural
networks has attracted many researchers. However, traditional neural language
models tend to generate general replies, lacking logical and emotional factors.
This paper proposes a conversation content generation model that combines
reinforcement learning with emotional editing constraints to generate more
meaningful and customizable emotional replies. The model divides the replies
into three clauses based on pre-generated keywords and uses the emotional
editor to further optimize the final reply. The model combines multi-task
learning with multiple indicator rewards to comprehensively optimize the
quality of replies. Experiments show that our model can not only improve the
fluency of the replies, but also significantly enhance the logical relevance
and emotional relevance of the replies.
| 2019 | Computation and Language |
End-to-End Speech Translation with Knowledge Distillation | End-to-end speech translation (ST), which directly translates from source
language speech into target language text, has attracted intensive attention
in recent years. Compared to conventional pipeline systems, end-to-end ST
models have advantages of lower latency, smaller model size and less error
propagation. However, the combination of speech recognition and text
translation in one model is more difficult than each of these two tasks. In
this paper, we propose a knowledge distillation approach to improve the ST model
by transferring knowledge from a text translation model. Specifically, we first
train a text translation model, regarded as the teacher model, and then the ST
model is trained to learn output probabilities from the teacher model through
knowledge distillation. Experiments on the English-French Augmented LibriSpeech
and English-Chinese TED corpora show that end-to-end ST can be implemented for
both similar and dissimilar language pairs. In addition, with the guidance of
the teacher model, the end-to-end ST model gains significant improvements of over
3.5 BLEU points.
| 2,019 | Computation and Language |
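As a rough, hedged sketch of the distillation objective described in the abstract above (not the authors' exact formulation), the student ST model can be trained on an interpolation of cross-entropy against gold tokens and a KL term toward the frozen text-translation teacher. The tensor shapes, the weight `alpha`, and the padding id below are assumptions.

```python
import torch
import torch.nn.functional as F

def st_distillation_loss(st_logits, mt_logits, gold_ids, alpha=0.8, pad_id=0):
    """st_logits/mt_logits: (batch, tgt_len, vocab); gold_ids: (batch, tgt_len)."""
    vocab = st_logits.size(-1)
    student_logp = F.log_softmax(st_logits, dim=-1)
    teacher_p = F.softmax(mt_logits, dim=-1).detach()        # teacher is frozen

    # Per-token KL(teacher || student), averaged over non-padding positions.
    kl = F.kl_div(student_logp, teacher_p, reduction="none").sum(-1)
    mask = gold_ids.ne(pad_id).float()
    kl = (kl * mask).sum() / mask.sum()

    # Standard cross-entropy against the gold translation.
    ce = F.cross_entropy(st_logits.view(-1, vocab), gold_ids.view(-1),
                         ignore_index=pad_id)
    return alpha * kl + (1 - alpha) * ce
```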
Patent Analytics Based on Feature Vector Space Model: A Case of IoT | The number of approved patents worldwide increases rapidly each year, which
requires new patent analytics to efficiently mine the valuable information
attached to these patents. Vector space model (VSM) represents documents as
high-dimensional vectors, where each dimension corresponds to a unique term.
While originally proposed for information retrieval systems, VSM has also seen
wide applications in patent analytics, and used as a fundamental tool to map
patent documents to structured data. However, the VSM method suffers from several
limitations when applied to patent analysis tasks, such as loss of
sentence-level semantics and curse-of-dimensionality problems. In order to
address the above limitations, we propose patent analytics based on a feature
vector space model (FVSM), in which the FVSM is constructed by mapping patent
documents to feature vectors extracted by convolutional neural networks (CNN).
The applications of FVSM to three typical patent analysis tasks, i.e., patent
similarity comparison, patent clustering, and patent map generation are
discussed. A case study using patents related to Internet of Things (IoT)
technology is illustrated to demonstrate the performance and effectiveness of
FVSM. The proposed FVSM can be adopted by other patent analysis studies to
replace VSM, based on which various big data learning tasks can be performed.
| 2,019 | Computation and Language |
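Assuming the CNN feature extractor has already mapped each patent to a fixed-length vector, the similarity-comparison and clustering tasks mentioned above reduce to standard vector operations. The sketch below is illustrative only; the feature matrix is a random stand-in and the cluster count is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def patent_similarity(features):
    """Pairwise cosine similarity between patent feature vectors (rows)."""
    return cosine_similarity(features)

def patent_clusters(features, k=5):
    """Group patents into k technology clusters."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

# Stand-in FVSM features for 100 patents with 256-dimensional CNN vectors.
features = np.random.rand(100, 256)
similarities = patent_similarity(features)       # (100, 100) matrix
labels = patent_clusters(features, k=5)          # cluster id per patent
```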
Contextual Aware Joint Probability Model Towards Question Answering
System | In this paper, we address the question answering challenge with the SQuAD 2.0
dataset. We design a model architecture which leverages BERT's capability of
context-aware word embeddings and BiDAF's context interactive exploration
mechanism. By integrating these two state-of-the-art architectures, our system
tries to extract the contextual word representation at word and character
levels, for better comprehension of both question and context and their
correlations. We also propose our original joint posterior probability
predictor module and its associated loss functions. Our best model so far
obtains an F1 score of 75.842% and an EM score of 72.24% on the test PCE leaderboard.
| 2,019 | Computation and Language |
Complementary Fusion of Multi-Features and Multi-Modalities in Sentiment
Analysis | Sentiment analysis, mostly based on text, has been rapidly developing in the
last decade and has attracted widespread attention in both academia and
industry. However, the information in the real world usually comes from
multiple modalities, such as audio and text. Therefore, in this paper, based on
audio and text, we consider the task of multimodal sentiment analysis and
propose a novel fusion strategy including both multi-feature fusion and
multi-modality fusion to improve the accuracy of audio-text sentiment analysis.
We call it the DFF-ATMF (Deep Feature Fusion - Audio and Text Modality Fusion)
model, which consists of two parallel branches, the audio modality based branch
and the text modality based branch. Its core mechanisms are the fusion of
multiple feature vectors and multiple modality attention. Experiments on the
CMU-MOSI dataset and the recently released CMU-MOSEI dataset, both collected
from YouTube for sentiment analysis, show the very competitive results of our
DFF-ATMF model. Furthermore, by virtue of attention weight distribution
heatmaps, we also demonstrate the deep features learned by using DFF-ATMF are
complementary to each other and robust. Surprisingly, DFF-ATMF also achieves
new state-of-the-art results on the IEMOCAP dataset, indicating that the
proposed fusion strategy also has a good generalization ability for multimodal
emotion recognition.
| 2,019 | Computation and Language |
Effective Estimation of Deep Generative Language Models | Advances in variational inference enable parameterisation of probabilistic
models by deep neural networks. This combines the statistical transparency of
the probabilistic modelling framework with the representational power of deep
learning. Yet, due to a problem known as posterior collapse, it is difficult to
effectively estimate such models in the context of language modelling. We
concentrate on one such model, the variational auto-encoder, which we argue is
an important building block in hierarchical probabilistic models of language.
This paper contributes a sober view of the problem, a survey of techniques to
address it, novel techniques, and extensions to the model. To establish a
ranking of techniques, we perform a systematic comparison using Bayesian
optimisation and find that many techniques perform reasonably similarly, given
enough resources. Still, a favourite can be named based on convenience. We also
make several empirical observations and recommendations of best practices that
should help researchers interested in this exciting field.
| 2,020 | Computation and Language |
Amobee at SemEval-2019 Tasks 5 and 6: Multiple Choice CNN Over
Contextual Embedding | This article describes Amobee's participation in "HatEval: Multilingual
detection of hate speech against immigrants and women in Twitter" (task 5) and
"OffensEval: Identifying and Categorizing Offensive Language in Social Media"
(task 6). The goal of task 5 was to detect hate speech targeted to women and
immigrants. The goal of task 6 was to identify and categorize offensive
language in social media and identify the offense target. We present a novel type
of convolutional neural network called "Multiple Choice CNN" (MC-CNN) that we
used over our newly developed contextual embedding, Rozental et al. (2019). For
both tasks we used this architecture, achieving 4th place out of 69
participants with an F1 score of 0.53 in task 5; in task 6 we achieved 2nd place
(out of 75) in Sub-task B - automatic categorization of offense types (our
model reached places 18/2/7 out of 103/75/65 for sub-tasks A, B and C
respectively in task 6).
| 2,019 | Computation and Language |
Automatic Accuracy Prediction for AMR Parsing | Abstract Meaning Representation (AMR) represents sentences as directed,
acyclic and rooted graphs, aiming at capturing their meaning in a machine
readable format. AMR parsing converts natural language sentences into such
graphs. However, evaluating a parser on new data by means of comparison to
manually created AMR graphs is very costly. Also, we would like to be able to
detect parses of questionable quality, or to prefer the results of alternative
systems by selecting the ones for which we can assess good quality. We propose
AMR accuracy prediction as the task of predicting several metrics of
correctness for an automatically generated AMR parse - in absence of the
corresponding gold parse. We develop a neural end-to-end multi-output
regression model and perform three case studies: firstly, we evaluate the
model's capacity of predicting AMR parse accuracies and test whether it can
reliably assign high scores to gold parses. Secondly, we perform parse
selection based on predicted parse accuracies of candidate parses from
alternative systems, with the aim of improving overall results. Finally, we
predict system ranks for submissions from two AMR shared tasks on the basis of
their predicted parse accuracy averages. All experiments are carried out across
two different domains and show that our method is effective.
| 2,019 | Computation and Language |
Guiding CTC Posterior Spike Timings for Improved Posterior Fusion and
Knowledge Distillation | Conventional automatic speech recognition (ASR) systems trained from
frame-level alignments can easily leverage posterior fusion to improve ASR
accuracy and build a better single model with knowledge distillation.
End-to-end ASR systems trained using the Connectionist Temporal Classification
(CTC) loss do not require frame-level alignment and hence simplify model
training. However, sparse and arbitrary posterior spike timings from CTC models
pose a new set of challenges in posterior fusion from multiple models and
knowledge distillation between CTC models. We propose a method to train a CTC
model so that its spike timings are guided to align with those of a pre-trained
guiding CTC model. As a result, all models that share the same guiding model
have aligned spike timings. We show the advantage of our method in various
scenarios including posterior fusion of CTC models and knowledge distillation
between CTC models with different architectures. With the 300-hour Switchboard
training data, the single word CTC model distilled from multiple models
improved the word error rates to 13.7%/23.1% from 14.9%/24.1% on the Hub5 2000
Switchboard/CallHome test sets without using any data augmentation, language
model, or complex decoder.
| 2,019 | Computation and Language |
MoralStrength: Exploiting a Moral Lexicon and Embedding Similarity for
Moral Foundations Prediction | Moral rhetoric plays a fundamental role in how we perceive and interpret the
information we receive, greatly influencing our decision-making process.
Especially when it comes to controversial social and political issues, our
opinions and attitudes are hardly ever based on evidence alone. The Moral
Foundations Dictionary (MFD) was developed to operationalize moral values in
text. In this study, we present MoralStrength, a lexicon of approximately
1,000 lemmas, obtained as an extension of the Moral Foundations Dictionary,
based on WordNet synsets. Moreover, for each lemma it provides a
crowdsourced numeric assessment of Moral Valence, indicating the strength with
which a lemma is expressing the specific value. We evaluated the predictive
potentials of this moral lexicon, defining three utilization approaches of
increasing complexity, ranging from lemmas' statistical properties to a deep
learning approach of word embeddings based on semantic similarity. Logistic
regression models trained on the features extracted from MoralStrength
significantly outperformed the current state-of-the-art, reaching an F1-score
of 87.6% over the previous 62.4% (p-value<0.01), and an average F1-Score of
86.25% over six different datasets. Such findings pave the way for further
research, allowing for an in-depth understanding of moral narratives in text
for a wide range of social issues.
| 2,019 | Computation and Language |
DocBERT: BERT for Document Classification | We present, to our knowledge, the first application of BERT to document
classification. A few characteristics of the task might lead one to think that
BERT is not the most appropriate model: syntactic structures matter less for
content categories, documents can often be longer than typical BERT input, and
documents often have multiple labels. Nevertheless, we show that a
straightforward classification model using BERT is able to achieve the state of
the art across four popular datasets. To address the computational expense
associated with BERT inference, we distill knowledge from BERT-large to small
bidirectional LSTMs, reaching BERT-base parity on multiple datasets using 30x
fewer parameters. The primary contribution of our paper is improved baselines
that can provide the foundation for future work.
| 2,019 | Computation and Language |
Headline Generation: Learning from Decomposable Document Titles | We propose a novel method for generating titles for unstructured text
documents. We reframe the problem as a sequential question-answering task. A
deep neural network is trained on document-title pairs with decomposable
titles, meaning that the vocabulary of the title is a subset of the vocabulary
of the document. To train the model we use a corpus of millions of publicly
available document-title pairs: news articles and headlines. We present the
results of a randomized double-blind trial in which subjects were unaware of
which titles were human or machine-generated. When trained on approximately 1.5
million news articles, the model generates headlines that humans judge to be as
good or better than the original human-written headlines in the majority of
cases.
| 2,019 | Computation and Language |
One Homonym per Translation | The study of homonymy is vital to resolving fundamental problems in lexical
semantics. In this paper, we propose four hypotheses that characterize the
unique behavior of homonyms in the context of translations, discourses,
collocations, and sense clusters. We present a new annotated homonym resource
that allows us to test our hypotheses on existing WSD resources. The results of
the experiments provide strong empirical evidence for the hypotheses. This
study represents a step towards a computational method for distinguishing
between homonymy and polysemy, and constructing a definitive inventory of
coarse-grained senses.
| 2,020 | Computation and Language |
Neural Constituency Parsing of Speech Transcripts | This paper studies the performance of a neural self-attentive parser on
transcribed speech. Speech presents parsing challenges that do not appear in
written text, such as the lack of punctuation and the presence of speech
disfluencies (including filled pauses, repetitions, corrections, etc.).
Disfluencies are especially problematic for conventional syntactic parsers,
which typically fail to find any EDITED disfluency nodes at all. This motivated
the development of special disfluency detection systems, and special mechanisms
added to parsers specifically to handle disfluencies. However, we show here
that neural parsers can find EDITED disfluency nodes, and the best neural
parsers find them with an accuracy surpassing that of specialized disfluency
detection systems, thus making these specialized mechanisms unnecessary. This
paper also investigates a modified loss function that puts more weight on
EDITED nodes. It also describes tree-transformations that simplify the
disfluency detection task by providing alternative encodings of disfluencies
and syntactic information.
| 2,020 | Computation and Language |
ConvLab: Multi-Domain End-to-End Dialog System Platform | We present ConvLab, an open-source multi-domain end-to-end dialog system
platform, that enables researchers to quickly set up experiments with reusable
components and compare a large set of different approaches, ranging from
conventional pipeline systems to end-to-end neural models, in common
environments. ConvLab offers a set of fully annotated datasets and associated
pre-trained reference models. As a showcase, we extend the MultiWOZ dataset
with user dialog act annotations to train all component models and demonstrate
how ConvLab makes it easy to conduct complicated experiments in
multi-domain end-to-end dialog settings.
| 2,019 | Computation and Language |
Analytical Methods for Interpretable Ultradense Word Embeddings | Word embeddings are useful for a wide variety of tasks, but they lack
interpretability. By rotating word spaces, interpretable dimensions can be
identified while preserving the information contained in the embeddings without
any loss. In this work, we investigate three methods for making word spaces
interpretable by rotation: Densifier (Rothe et al., 2016), linear SVMs and
DensRay, a new method we propose. In contrast to Densifier, DensRay can be
computed in closed form, is hyperparameter-free and thus more robust than
Densifier. We evaluate the three methods on lexicon induction and set-based
word analogy. In addition we provide qualitative insights as to how
interpretable word spaces can be used for removing gender bias from embeddings.
| 2,019 | Computation and Language |
Societal Controversies in Wikipedia Articles | Collaborative content creation inevitably reaches situations where different
points of view lead to conflict. We focus on Wikipedia, the free encyclopedia
anyone may edit, where disputes about content in controversial articles often
reflect larger societal debates. While Wikipedia has a public edit history and
discussion section for every article, the substance of these sections is
difficult to fathom for Wikipedia users interested in the development of an
article and in locating which topics were most controversial. In this paper we
present Contropedia, a tool that augments Wikipedia articles and gives insight
into the development of controversial topics. Contropedia uses an efficient
language agnostic measure based on the edit history that focuses on wiki links
to easily identify which topics within a Wikipedia article have been most
controversial and when.
| 2,015 | Computation and Language |
Evaluating the Underlying Gender Bias in Contextualized Word Embeddings | Gender bias strongly affects natural language processing applications.
Word embeddings have clearly been shown both to retain and to amplify gender biases
that are present in current data sources. Recently, contextualized word
embeddings have enhanced previous word embedding techniques by computing word
vector representations dependent on the sentence they appear in.
In this paper, we study the impact of this conceptual change in the word
embedding computation in relation with gender bias. Our analysis includes
different measures previously applied in the literature to standard word
embeddings. Our findings suggest that contextualized word embeddings are less
biased than standard ones even when the latter are debiased.
| 2,019 | Computation and Language |
Clause-Wise and Recursive Decoding for Complex and Cross-Domain
Text-to-SQL Generation | Most deep learning approaches for text-to-SQL generation are limited to the
WikiSQL dataset, which only supports very simple queries over a single table.
We focus on the Spider dataset, a complex and cross-domain text-to-SQL task,
which includes complex queries over multiple tables. In this paper, we propose
a SQL clause-wise decoding neural architecture with a self-attention based
database schema encoder to address the Spider task. Each of the clause-specific
decoders consists of a set of sub-modules, which is defined by the syntax of
each clause. Additionally, our model works recursively to support nested
queries. When evaluated on the Spider dataset, our approach achieves 4.6\% and
9.8\% accuracy gains on the test and dev sets, respectively. In addition, we
show that our model is significantly more effective at predicting complex and
nested queries than previous work.
| 2,019 | Computation and Language |
Towards VQA Models That Can Read | Studies have shown that a dominant class of questions asked by visually
impaired users on images of their surroundings involves reading text in the
image. But today's VQA models can not read! Our paper takes a first step
towards addressing this problem. First, we introduce a new "TextVQA" dataset to
facilitate progress on this important problem. Existing datasets either have a
small proportion of questions about text (e.g., the VQA dataset) or are too
small (e.g., the VizWiz dataset). TextVQA contains 45,336 questions on 28,408
images that require reasoning about text to answer. Second, we introduce a
novel model architecture that reads text in the image, reasons about it in the
context of the image and the question, and predicts an answer which might be a
deduction based on the text and the image or composed of the strings found in
the image. Consequently, we call our approach Look, Read, Reason & Answer
(LoRRA). We show that LoRRA outperforms existing state-of-the-art VQA models on
our TextVQA dataset. We find that the gap between human performance and machine
performance is significantly larger on TextVQA than on VQA 2.0, suggesting that
TextVQA is well-suited to benchmark progress along directions complementary to
VQA 2.0.
| 2,019 | Computation and Language |
Language Modeling through Long Term Memory Network | Recurrent Neural Networks (RNN), Long Short-Term Memory Networks (LSTM), and
Memory Networks which contain memory are popularly used to learn patterns in
sequential data. Sequential data often consists of long sequences whose elements
are interrelated. RNNs can handle long sequences but suffer from the vanishing
and exploding gradient problems. While LSTM and other memory networks address
this problem, they are not capable of handling very long sequences (sequence
patterns of 50 or more data points). Language modelling that requires learning
from longer sequences is affected by the need to hold more information in
memory. This paper introduces the Long Term Memory network (LTM), which can
tackle the exploding and vanishing gradient problems and handles long sequences
without forgetting. LTM is designed to scale data in the memory and gives a
higher weight to the input in the sequence. LTM avoids overfitting by scaling
the cell state after achieving the optimal results. LTM is tested on the Penn
Treebank and Text8 datasets and achieves test perplexities of 83 and 82,
respectively. 650 LTM cells achieve a test perplexity of 67 for Penn Treebank,
and 600 cells achieve a test perplexity of 77 for Text8. LTM achieves
state-of-the-art results using only ten hidden LTM cells for both datasets.
| 2,019 | Computation and Language |
No Permanent Friends or Enemies: Tracking Relationships between Nations
from News | Understanding the dynamics of international politics is important yet
challenging for civilians. In this work, we explore unsupervised neural models
to infer relations between nations from news articles. We extend existing
models by incorporating shallow linguistics information and propose a new
automatic evaluation metric that aligns relationship dynamics with manually
annotated key events. As understanding international relations requires
carefully analyzing complex relationships, we conduct in-person human
evaluations with three groups of participants. Overall, humans prefer the
outputs of our model and give insightful feedback that suggests future
directions for human-centered models. Furthermore, our model reveals
interesting regional differences in news coverage. For instance, with respect
to US-China relations, Singaporean media focus more on "strengthening" and
"purchasing", while US media focus more on "criticizing" and "denouncing".
| 2,019 | Computation and Language |
Genie: A Generator of Natural Language Semantic Parsers for Virtual
Assistant Commands | To understand diverse natural language commands, virtual assistants today are
trained with numerous labor-intensive, manually annotated sentences. This paper
presents a methodology and the Genie toolkit that can handle new compound
commands with significantly less manual effort. We advocate formalizing the
capability of virtual assistants with a Virtual Assistant Programming Language
(VAPL) and using a neural semantic parser to translate natural language into
VAPL code. Genie needs only a small realistic set of input sentences for
validating the neural model. Developers write templates to synthesize data;
Genie uses crowdsourced paraphrases and data augmentation, along with the
synthesized data, to train a semantic parser. We also propose design principles
that make VAPL languages amenable to natural language translation. We apply
these principles to revise ThingTalk, the language used by the Almond virtual
assistant. We use Genie to build the first semantic parser that can support
compound virtual assistant commands with unquoted free-form parameters. Genie
achieves a 62% accuracy on realistic user inputs. We demonstrate Genie's
generality by showing a 19% and 31% improvement over the previous state of the
art on a music skill, aggregate functions, and access control.
| 2,019 | Computation and Language |
Query-focused Sentence Compression in Linear Time | Search applications often display shortened sentences which must contain
certain query terms and must fit within the space constraints of a user
interface. This work introduces a new transition-based sentence compression
technique developed for such settings. Our query-focused method constructs
length and lexically constrained compressions in linear time, by growing a
subgraph in the dependency parse of a sentence. This theoretically efficient
approach achieves an 11X empirical speedup over baseline ILP methods, while
better reconstructing gold constrained shortenings. Such speedups help
query-focused applications, because users are measurably hindered by interface
lags. Additionally, our technique does not require an ILP solver or a GPU.
| 2,019 | Computation and Language |
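The following is a deliberately simplified sketch of the idea of growing a constrained subgraph over a dependency parse, not the authors' transition system: starting from nodes that match the query terms, adjacent dependency nodes are added breadth-first until a length budget is reached, and the selected words are emitted in their original order. The toy parse and budget are assumptions.

```python
def compress(tokens, heads, query_terms, budget):
    """Greedy BFS growth of a dependency subgraph around the query terms."""
    adj = {i: set() for i in range(len(tokens))}
    for i, h in enumerate(heads):                # heads[i] = -1 marks the root
        if h >= 0:
            adj[i].add(h)
            adj[h].add(i)

    selected = {i for i, t in enumerate(tokens) if t.lower() in query_terms}
    frontier = sorted(selected)
    while frontier and len(selected) < budget:
        node = frontier.pop(0)
        for nb in sorted(adj[node]):
            if nb not in selected and len(selected) < budget:
                selected.add(nb)
                frontier.append(nb)
    return " ".join(tokens[i] for i in sorted(selected))

# Toy example with a hand-written dependency parse.
tokens = ["The", "court", "upheld", "the", "ruling", "on", "patents", "today"]
heads  = [1, 2, -1, 4, 2, 4, 5, 2]
print(compress(tokens, heads, {"ruling", "patents"}, budget=5))
# -> "upheld the ruling on patents"
```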
Identifying Offensive Posts and Targeted Offense from Twitter | In this paper we present our approach and the system description for Sub-task
A and Sub Task B of SemEval 2019 Task 6: Identifying and Categorizing Offensive
Language in Social Media. Sub-task A involves identifying if a given tweet is
offensive or not, and Sub Task B involves detecting if an offensive tweet is
targeted towards someone (a group or an individual). Our model for Sub-task A is
based on an ensemble of Convolutional Neural Network, Bidirectional LSTM with
attention, and Bidirectional LSTM + Bidirectional GRU, whereas for Sub-task B,
we rely on a set of heuristics derived from the training data and manual
observation. We provide detailed analysis of the results obtained using the
trained models. Our team ranked 5th out of 103 participants in Sub-task A,
achieving a macro F1 score of 0.807, and ranked 8th out of 75 participants in
Sub Task B achieving a macro F1 of 0.695.
| 2,019 | Computation and Language |
Suggestion Mining from Online Reviews using ULMFiT | In this paper we present our approach and the system description for Sub Task
A of SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums.
Given a sentence, the task is to predict whether the sentence contains a
suggestion or not. Our model is based on Universal Language Model Fine-tuning
for Text Classification. We apply various pre-processing techniques before
training the language and the classification model. We further provide detailed
analysis of the results obtained using the trained model. Our team ranked 10th
out of 34 participants, achieving an F1 score of 0.7011. We publicly share our
implementation at https://github.com/isarth/SemEval9_MIDAS
| 2,019 | Computation and Language |
Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT | Pretrained contextual representation models (Peters et al., 2018; Devlin et
al., 2018) have pushed forward the state-of-the-art on many NLP tasks. A new
release of BERT (Devlin, 2018) includes a model simultaneously pretrained on
104 languages with impressive performance for zero-shot cross-lingual transfer
on a natural language inference task. This paper explores the broader
cross-lingual potential of mBERT (multilingual) as a zero-shot language
transfer model on 5 NLP tasks covering a total of 39 languages from various
language families: NLI, document classification, NER, POS tagging, and
dependency parsing. We compare mBERT with the best-published methods for
zero-shot cross-lingual transfer and find mBERT competitive on each task.
Additionally, we investigate the most effective strategy for utilizing mBERT in
this manner, determine to what extent mBERT generalizes away from language
specific features, and measure factors that influence cross-lingual transfer.
| 2,019 | Computation and Language |
Learning Programmatic Idioms for Scalable Semantic Parsing | Programmers typically organize executable source code using high-level coding
patterns or idiomatic structures such as nested loops, exception handlers and
recursive blocks, rather than as individual code tokens. In contrast, state of
the art (SOTA) semantic parsers still map natural language instructions to
source code by building the code syntax tree one node at a time. In this paper,
we introduce an iterative method to extract code idioms from large source code
corpora by repeatedly collapsing most-frequent depth-2 subtrees of their syntax
trees, and train semantic parsers to apply these idioms during decoding.
Applying idiom-based decoding on a recent context-dependent semantic parsing
task improves the SOTA by 2.2\% BLEU score while reducing training time by more
than 50\%. This improved speed enables us to scale up the model by training on
an extended training set that is 5$\times$ larger, to further move up the SOTA
by an additional 2.3\% BLEU and 0.9\% exact match. Finally, idioms also
significantly improve accuracy of semantic parsing to SQL on the ATIS-SQL
dataset, when training data is limited.
| 2,019 | Computation and Language |
Code-Switching for Enhancing NMT with Pre-Specified Translation | Leveraging user-provided translation to constrain NMT has practical
significance. Existing methods can be classified into two main categories,
namely the use of placeholder tags for lexicon words and the use of hard
constraints during decoding. Both methods can hurt translation fidelity for
various reasons. We investigate a data augmentation method, making
code-switched training data by replacing source phrases with their target
translations. Our method does not change the NMT model or decoding algorithm,
allowing the model to learn lexicon translations by copying source-side target
words. Extensive experiments show that our method achieves consistent
improvements over existing approaches, improving translation of constrained
words without hurting unconstrained words.
| 2,019 | Computation and Language |
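A minimal sketch of the data augmentation described above, assuming a phrase lexicon of pre-specified translations: source phrases found in the lexicon are replaced in place with their target-language translations (longest match first), yielding code-switched training sentences. The lexicon and sentence are toy stand-ins.

```python
def code_switch(source_tokens, lexicon, max_phrase_len=3):
    """Replace lexicon phrases in the source with their target translations."""
    out, i = [], 0
    while i < len(source_tokens):
        replaced = False
        for n in range(min(max_phrase_len, len(source_tokens) - i), 0, -1):
            phrase = " ".join(source_tokens[i:i + n])
            if phrase in lexicon:
                out.extend(lexicon[phrase].split())
                i += n
                replaced = True
                break
        if not replaced:
            out.append(source_tokens[i])
            i += 1
    return out

lexicon = {"machine translation": "maschinelle Übersetzung"}
print(code_switch("we study machine translation systems".split(), lexicon))
# -> ['we', 'study', 'maschinelle', 'Übersetzung', 'systems']
```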
Recognizing the vocabulary of Brazilian popular newspapers with a
free-access computational dictionary | We report an experiment to check the identification of a set of words in
popular written Portuguese with two versions of a computational dictionary of
Brazilian Portuguese, DELAF PB 2004 and DELAF PB 2015. This dictionary is
freely available for use in linguistic analyses of Brazilian Portuguese and
other researches, which justifies critical study. The vocabulary comes from the
PorPopular corpus, made of popular newspapers Di{\'a}rio Ga{\'u}cho (DG) and
Massa! (MA). From DG, we retained a set of texts with 984.465 words (tokens),
published in 2008, with the spelling used before the Portuguese Language
Orthographic Agreement adopted in 2009. From MA, we examined papers of 2012,
2014 e 2015, with 215.776 words (tokens), all with the new spelling. The
checking involved: a) generating lists of words (types) occurring in DG and MA;
b) comparing them with the entry lists of both versions of DELAF PB; c)
assessing the coverage of this vocabulary; d) proposing ways of incorporating
the items not covered. The results of the work show that an average of 19% of
the types in DG were not found in DELAF PB 2004 or 2015. In MA, this average is
13%. Switching versions of the dictionary slightly affected the performance in
recognizing the words.
| 2,019 | Computation and Language |
Zero-Shot Cross-Lingual Opinion Target Extraction | Aspect-based sentiment analysis involves the recognition of so called opinion
target expressions (OTEs). To automatically extract OTEs, supervised learning
algorithms are usually employed which are trained on manually annotated
corpora. The creation of these corpora is labor-intensive and sufficiently
large datasets are therefore usually only available for a very narrow selection
of languages and domains. In this work, we address the lack of available
annotated data for specific languages by proposing a zero-shot cross-lingual
approach for the extraction of opinion target expressions. We leverage
multilingual word embeddings that share a common vector space across various
languages and incorporate these into a convolutional neural network
architecture for OTE extraction. Our experiments with 5 languages give
promising results: We can successfully train a model on annotated data of a
source language and perform accurate prediction on a target language without
ever using any annotated samples in that target language. Depending on the
source and target language pairs, we reach performances in a zero-shot regime
of up to 77% of a model trained on target language data. Furthermore, we can
increase this performance up to 87% of a baseline model trained on target
language data by performing cross-lingual learning from multiple source
languages.
| 2,019 | Computation and Language |
OpenTapioca: Lightweight Entity Linking for Wikidata | We propose a simple Named Entity Linking system that can be trained from
Wikidata only. This demonstrates the strengths and weaknesses of this data
source for this task and provides an easily reproducible baseline to compare
other systems against. Our model is lightweight to train, to run and to keep
synchronous with Wikidata in real time.
| 2,020 | Computation and Language |
ERNIE: Enhanced Representation through Knowledge Integration | We present a novel language representation model enhanced by knowledge called
ERNIE (Enhanced Representation through kNowledge IntEgration). Inspired by the
masking strategy of BERT, ERNIE is designed to learn language representation
enhanced by knowledge masking strategies, which includes entity-level masking
and phrase-level masking. Entity-level strategy masks entities which are
usually composed of multiple words. Phrase-level strategy masks the whole phrase,
which is composed of several words standing together as a conceptual
unit. Experimental results show that ERNIE outperforms other baseline methods,
achieving new state-of-the-art results on five Chinese natural language
processing tasks including natural language inference, semantic similarity,
named entity recognition, sentiment analysis and question answering. We also
demonstrate that ERNIE has more powerful knowledge inference capacity on a
cloze test.
| 2,019 | Computation and Language |
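As a hedged illustration of entity-level and phrase-level masking (a simplification of the strategy described above, not ERNIE's actual implementation), whole multi-token spans are replaced with the mask token instead of independent word pieces. Span detection is assumed to have been done upstream; the spans, probability, and sentence below are illustrative.

```python
import random

def span_mask(tokens, spans, mask_prob=0.15, mask_token="[MASK]"):
    """Mask whole (start, end) token spans -- entities or phrases -- at once."""
    tokens = list(tokens)
    for start, end in spans:                     # end index is exclusive
        if random.random() < mask_prob:
            for i in range(start, end):
                tokens[i] = mask_token
    return tokens

random.seed(1)
toks = "Harry Potter was written by J K Rowling".split()
entity_spans = [(0, 2), (5, 8)]                  # "Harry Potter", "J K Rowling"
# Each entity span is either fully masked or left intact, never split.
print(span_mask(toks, entity_spans, mask_prob=0.5))
```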
Unifying Question Answering, Text Classification, and Regression via
Span Extraction | Even as pre-trained language encoders such as BERT are shared across many
tasks, the output layers of question answering, text classification, and
regression models are significantly different. Span decoders are frequently
used for question answering, fixed-class, classification layers for text
classification, and similarity-scoring layers for regression tasks, We show
that this distinction is not necessary and that all three can be unified as
span extraction. A unified, span-extraction approach leads to superior or
comparable performance in supplementary supervised pre-trained, low-data, and
multi-task learning experiments on several question answering, text
classification, and regression benchmarks.
| 2,019 | Computation and Language |
Mask-Predict: Parallel Decoding of Conditional Masked Language Models | Most machine translation systems generate text autoregressively from left to
right. We, instead, use a masked language modeling objective to train a model
to predict any subset of the target words, conditioned on both the input text
and a partially masked target translation. This approach allows for efficient
iterative decoding, where we first predict all of the target words
non-autoregressively, and then repeatedly mask out and regenerate the subset of
words that the model is least confident about. By applying this strategy for a
constant number of iterations, our model improves state-of-the-art performance
levels for non-autoregressive and parallel decoding translation models by over
4 BLEU on average. It is also able to reach within about 1 BLEU point of a
typical left-to-right transformer model, while decoding significantly faster.
| 2,019 | Computation and Language |
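A hedged sketch of the iterative decoding loop described above: start from a fully masked target, predict all positions, keep the most confident ones, and re-mask and re-predict a shrinking set of low-confidence positions. The `model(src, tgt_tokens)` callable returning per-position token probabilities, the mask id, and the linear masking schedule are assumptions of this sketch.

```python
import numpy as np

def mask_predict(model, src, tgt_len, mask_id, iterations=10):
    tokens = np.full(tgt_len, mask_id)               # start fully masked
    confid = np.zeros(tgt_len)
    for t in range(iterations):
        probs = model(src, tokens)                   # (tgt_len, vocab)
        masked = tokens == mask_id
        # Only positions that were masked get new tokens and confidences.
        tokens = np.where(masked, probs.argmax(axis=-1), tokens)
        confid = np.where(masked, probs.max(axis=-1), confid)
        # Re-mask the n least confident positions, with n shrinking each step.
        n = int(tgt_len * (iterations - t - 1) / iterations)
        if n == 0:
            break
        tokens[np.argsort(confid)[:n]] = mask_id
    return tokens
```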
Looking Beyond Label Noise: Shifted Label Distribution Matters in
Distantly Supervised Relation Extraction | In recent years there is a surge of interest in applying distant supervision
(DS) to automatically generate training data for relation extraction (RE). In
this paper, we study what limits the performance of DS-trained
neural models, conduct thorough analyses, and identify a factor that can
greatly influence performance: shifted label distribution. Specifically, we
found this problem commonly exists in real-world DS datasets, and without
special handling, typical DS-RE models cannot automatically adapt to this shift,
thus achieving deteriorated performance. To further validate our intuition, we
develop a simple yet effective adaptation method for DS-trained models, bias
adjustment, which updates models learned over the source domain (i.e., DS
training set) with a label distribution estimated on the target domain (i.e.,
test set). Experiments demonstrate that bias adjustment achieves consistent
performance gains on DS-trained models, especially on neural models, with an up
to 23% relative F1 improvement, which verifies our assumptions. Our code and
data can be found at
\url{https://github.com/INK-USC/shifted-label-distribution}.
| 2,019 | Computation and Language |
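A minimal sketch of the bias-adjustment idea described above (our reading of it, not the released implementation): each relation's logit is shifted by the log-ratio of its estimated target-domain prior to its DS training-set prior before taking the argmax. The priors and logits below are made-up numbers for illustration.

```python
import numpy as np

def bias_adjust(logits, source_prior, target_prior):
    """logits: (num_examples, num_relations); priors: (num_relations,)."""
    shift = np.log(target_prior) - np.log(source_prior)
    return logits + shift                         # broadcast over examples

logits = np.array([[2.0, 1.5, 0.1]])
source_prior = np.array([0.70, 0.20, 0.10])       # label distribution of the DS training set
target_prior = np.array([0.30, 0.30, 0.40])       # estimated on the test set
print(logits.argmax(axis=-1))                                            # -> [0]
print(bias_adjust(logits, source_prior, target_prior).argmax(axis=-1))   # -> [1]
```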
Repurposing Entailment for Multi-Hop Question Answering Tasks | Question Answering (QA) naturally reduces to an entailment problem, namely,
verifying whether some text entails the answer to a question. However, for
multi-hop QA tasks, which require reasoning with multiple sentences, it remains
unclear how best to utilize entailment models pre-trained on large scale
datasets such as SNLI, which are based on sentence pairs. We introduce Multee,
a general architecture that can effectively use entailment models for multi-hop
QA tasks. Multee uses (i) a local module that helps locate important sentences,
thereby avoiding distracting information, and (ii) a global module that
aggregates information by effectively incorporating importance weights.
Importantly, we show that both modules can use entailment functions pre-trained
on large-scale NLI datasets. We evaluate performance on MultiRC and
OpenBookQA, two multihop QA datasets. When using an entailment function
pre-trained on NLI datasets, Multee outperforms QA models trained only on the
target QA datasets and the OpenAI transformer models. The code is available at
https://github.com/StonyBrookNLP/multee.
| 2,019 | Computation and Language |
Self-imitating Feedback Generation Using GAN for Computer-Assisted
Pronunciation Training | Self-imitating feedback is an effective and learner-friendly method for
non-native learners in Computer-Assisted Pronunciation Training. Acoustic
characteristics in native utterances are extracted and transplanted onto
learner's own speech input, and given back to the learner as a corrective
feedback. Previous works focused on speech conversion using prosodic
transplantation techniques based on the PSOLA algorithm. Motivated by the visual
differences found in spectrograms of native and non-native speeches, we
investigated applying GAN to generate self-imitating feedback by utilizing
generator's ability through adversarial training. Because this mapping is
highly under-constrained, we also adopt cycle consistency loss to encourage the
output to preserve the global structure, which is shared by native and
non-native utterances. Trained on 97,200 spectrogram images of short utterances
produced by native and non-native speakers of Korean, the generator is able to
successfully transform the non-native spectrogram input to a spectrogram with
properties of self-imitating feedback. Furthermore, the transformed spectrogram
shows segmental corrections that cannot be obtained by prosodic
transplantation. A perceptual test comparing the self-imitating and correcting
abilities of our method with the baseline PSOLA method shows that the
generative approach with cycle consistency loss is promising.
| 2,019 | Computation and Language |
Language Models with Transformers | The Transformer architecture is superior to RNN-based models in computational
efficiency. Recently, GPT and BERT demonstrate the efficacy of Transformer
models on various NLP tasks using pre-trained language models on large-scale
corpora. Surprisingly, these Transformer architectures are suboptimal for
language modeling itself. Neither self-attention nor the positional encoding in
the Transformer is able to efficiently incorporate the word-level sequential
context crucial to language modeling.
In this paper, we explore effective Transformer architectures for language
modeling, including adding additional LSTM layers to better capture the sequential
context while still keeping the computation efficient. We propose Coordinate
Architecture Search (CAS) to find an effective architecture through iterative
refinement of the model. Experimental results on the PTB, WikiText-2, and
WikiText-103 show that CAS achieves perplexities between 20.42 and 34.11 on all
problems, i.e. on average an improvement of 12.0 perplexity units compared to
state-of-the-art LSTMs. The source code is publicly available.
| 2,019 | Computation and Language |
Personalized sentence generation using generative adversarial networks
with author-specific word usage | Author-specific word usage is a vital feature that lets readers perceive the
writing style of the author. In this work, a personalized sentence generation
method based on generative adversarial networks (GANs) is proposed to cope with
this issue. The frequently used function words and content words are incorporated
not only as the input features but also as the sentence structure constraint
for the GAN training. For the sentence generation with the related topics
decided by the user, the Named Entity Recognition (NER) information of the
input words is also used in the network training. We compared the proposed
method with the GAN-based sentence generation methods, and the experimental
results showed that the generated sentences using our method are more similar
to the original sentences of the same author based on the objective evaluation
such as BLEU and SimHash score.
| 2,019 | Computation and Language |
Weakly-Supervised Concept-based Adversarial Learning for Cross-lingual
Word Embeddings | Distributed representations of words which map each word to a continuous
vector have proven useful in capturing important linguistic information not
only in a single language but also across different languages. Current
unsupervised adversarial approaches show that it is possible to build a mapping
matrix that aligns two sets of monolingual word embeddings together without high
quality parallel data such as a dictionary or a sentence-aligned corpus.
However, without post refinement, the performance of these methods' preliminary
mapping is not good, leading to poor performance for typologically distant
languages.
In this paper, we propose a weakly-supervised adversarial training method to
overcome this limitation, based on the intuition that mapping across languages
is better done at the concept level than at the word level. We propose a
concept-based adversarial training method which for most languages improves the
performance of previous unsupervised adversarial methods, especially for
typologically distant language pairs.
| 2,019 | Computation and Language |
An Unsupervised Joint System for Text Generation from Knowledge Graphs
and Semantic Parsing | Knowledge graphs (KGs) can vary greatly from one domain to another. Therefore
supervised approaches to both graph-to-text generation and text-to-graph
knowledge extraction (semantic parsing) will always suffer from a shortage of
domain-specific parallel graph-text data; at the same time, adapting a model
trained on a different domain is often impossible due to little or no overlap
in entities and relations. This situation calls for an approach that (1) does
not need large amounts of annotated data and thus (2) does not need to rely on
domain adaptation techniques to work well in different domains. To this end, we
present the first approach to unsupervised text generation from KGs and show
simultaneously how it can be used for unsupervised semantic parsing. We
evaluate our approach on WebNLG v2.1 and a new benchmark leveraging scene
graphs from Visual Genome. Our system outperforms strong baselines for both
text$\leftrightarrow$graph conversion tasks without any manual adaptation from
one dataset to the other. In additional experiments, we investigate the impact
of using different unsupervised objectives.
| 2,020 | Computation and Language |
Improving Multi-Task Deep Neural Networks via Knowledge Distillation for
Natural Language Understanding | This paper explores the use of knowledge distillation to improve a Multi-Task
Deep Neural Network (MT-DNN) (Liu et al., 2019) for learning text
representations across multiple natural language understanding tasks. Although
ensemble learning can improve model performance, serving an ensemble of large
DNNs such as MT-DNN can be prohibitively expensive. Here we apply the knowledge
distillation method (Hinton et al., 2015) in the multi-task learning setting.
For each task, we train an ensemble of different MT-DNNs (teacher) that
outperforms any single model, and then train a single MT-DNN (student) via
multi-task learning to \emph{distill} knowledge from these ensemble teachers.
We show that the distilled MT-DNN significantly outperforms the original MT-DNN
on 7 out of 9 GLUE tasks, pushing the GLUE benchmark (single model) to 83.7\%
(1.5\% absolute improvement\footnote{ Based on the GLUE leaderboard at
https://gluebenchmark.com/leaderboard as of April 1, 2019.}). The code and
pre-trained models will be made publicly available at
https://github.com/namisan/mt-dnn.
| 2,019 | Computation and Language |
Energy-based Self-attentive Learning of Abstractive Communities for
Spoken Language Understanding | Abstractive community detection is an important spoken language understanding
task, whose goal is to group utterances in a conversation according to whether
they can be jointly summarized by a common abstractive sentence. This paper
provides a novel approach to this task. We first introduce a neural contextual
utterance encoder featuring three types of self-attention mechanisms. We then
train it using the siamese and triplet energy-based meta-architectures.
Experiments on the AMI corpus show that our system outperforms multiple
energy-based and non-energy based baselines from the state-of-the-art. Code and
data are publicly available.
| 2,019 | Computation and Language |
Few-Shot NLG with Pre-Trained Language Model | Neural-based end-to-end approaches to natural language generation (NLG) from
structured data or knowledge are data-hungry, making their adoption for
real-world applications difficult with limited data. In this work, we propose
the new task of \textit{few-shot natural language generation}. Motivated by how
humans tend to summarize tabular data, we propose a simple yet effective
approach and show that it not only demonstrates strong performance but also
provides good generalization across domains. The design of the model
architecture is based on two aspects: content selection from input data and
language modeling to compose coherent sentences, which can be acquired from
prior knowledge. With just 200 training examples, across multiple domains, we
show that our approach achieves very reasonable performance and outperforms
the strongest baseline by an average of over 8.0 BLEU points improvement. Our
code and data can be found at \url{https://github.com/czyssrs/Few-Shot-NLG}
| 2,020 | Computation and Language |
NeuronBlocks: Building Your NLP DNN Models Like Playing Lego | Deep Neural Networks (DNN) have been widely employed in industry to address
various Natural Language Processing (NLP) tasks. However, many engineers find
it a big overhead when they have to choose from multiple frameworks, compare
different types of models, and understand various optimization mechanisms. An
NLP toolkit for DNN models with both generality and flexibility can greatly
improve the productivity of engineers by saving their learning cost and guiding
them to find optimal solutions to their tasks. In this paper, we introduce
NeuronBlocks\footnote{Code: \url{https://github.com/Microsoft/NeuronBlocks}}
\footnote{Demo: \url{https://youtu.be/x6cOpVSZcdo}}, a toolkit encapsulating a
suite of neural network modules as building blocks to construct various DNN
models with complex architecture. This toolkit empowers engineers to build,
train, and test various NLP models through simple configuration of JSON files.
The experiments on several NLP datasets such as GLUE, WikiQA and CoNLL-2003
demonstrate the effectiveness of NeuronBlocks.
| 2,019 | Computation and Language |
PullNet: Open Domain Question Answering with Iterative Retrieval on
Knowledge Bases and Text | We consider open-domain question answering (QA) where answers are drawn from
either a corpus, a knowledge base (KB), or a combination of both of these. We
focus on a setting in which a corpus is supplemented with a large but
incomplete KB, and on questions that require non-trivial (e.g., ``multi-hop'')
reasoning. We describe PullNet, an integrated framework for (1) learning what
to retrieve (from the KB and/or corpus) and (2) reasoning with this
heterogeneous information to find the best answer. PullNet uses an {iterative}
process to construct a question-specific subgraph that contains information
relevant to the question. In each iteration, a graph convolutional network
(graph CNN) is used to identify subgraph nodes that should be expanded using
retrieval (or ``pull'') operations on the corpus and/or KB. After the subgraph
is complete, a similar graph CNN is used to extract the answer from the
subgraph. This retrieve-and-reason process allows us to answer multi-hop
questions using large KBs and corpora. PullNet is weakly supervised, requiring
question-answer pairs but not gold inference paths. Experimentally PullNet
improves over the prior state of the art, and in the setting where a corpus is
used with incomplete KB these improvements are often dramatic. PullNet is also
often superior to prior systems in a KB-only setting or a text-only setting.
| 2,019 | Computation and Language |
Fact Discovery from Knowledge Base via Facet Decomposition | During the past few decades, knowledge bases (KBs) have experienced rapid
growth. Nevertheless, most KBs still suffer from serious incompleteness.
Researchers have proposed many tasks, such as knowledge base completion and relation
prediction, to help build the representation of KBs. However, some issues
remain unsettled when it comes to enriching KBs. Knowledge base completion and
relation prediction assume that we know two elements of a fact triple and we
are going to predict the missing one. This assumption is too restrictive in
practice and prevents these tasks from discovering new facts directly. To address this
issue, we propose a new task, namely, fact discovery from knowledge base. This
task only requires that we know the head entity and the goal is to discover
facts associated with the head entity. To tackle this new problem, we propose a
novel framework that decomposes the discovery problem into several facet
discovery components. We also propose a novel auto-encoder based facet
component to estimate some facets of the fact. Besides, we propose a feedback
learning component to share the information between each facet. We evaluate our
framework using a benchmark dataset and the experimental results show that our
framework achieves promising results. We also conduct extensive analysis of our
framework in discovering different kinds of facts. The source code of this
paper can be obtained from https://github.com/thunlp/FFD.
| 2,019 | Computation and Language |
Good-Enough Compositional Data Augmentation | We propose a simple data augmentation protocol aimed at providing a
compositional inductive bias in conditional and unconditional sequence models.
Under this protocol, synthetic training examples are constructed by taking real
training examples and replacing (possibly discontinuous) fragments with other
fragments that appear in at least one similar environment. The protocol is
model-agnostic and useful for a variety of tasks. Applied to neural
sequence-to-sequence models, it reduces error rate by as much as 87% on
diagnostic tasks from the SCAN dataset and 16% on a semantic parsing task.
Applied to n-gram language models, it reduces perplexity by roughly 1% on small
corpora in several languages.
| 2,020 | Computation and Language |
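The protocol above is easiest to see in a heavily simplified form; the sketch below restricts fragments to single tokens and environments to the immediately adjacent words (the paper allows larger, possibly discontinuous fragments). Two tokens that share at least one environment are swapped elsewhere to create synthetic examples.

```python
from collections import defaultdict
from itertools import combinations

def augment(sentences):
    """Single-token, single-environment simplification of fragment substitution."""
    env_to_words = defaultdict(set)
    for sent in sentences:
        toks = sent.split()
        for i in range(1, len(toks) - 1):
            env_to_words[(toks[i - 1], toks[i + 1])].add(toks[i])

    synthetic = set()
    for words in env_to_words.values():
        for a, b in combinations(sorted(words), 2):
            for sent in sentences:
                toks = sent.split()
                if a in toks:
                    synthetic.add(" ".join(b if t == a else t for t in toks))
                if b in toks:
                    synthetic.add(" ".join(a if t == b else t for t in toks))
    return synthetic - set(sentences)

data = ["the cat sleeps soundly", "the dog sleeps soundly", "a cat eats fish"]
print(augment(data))                              # -> {'a dog eats fish'}
```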
Understanding the Stability of Medical Concept Embeddings | Frequency is one of the major factors for training quality word embeddings.
Several recent works have discussed the stability of word embeddings in the general
domain and suggested factors influencing the stability. In this work, we
conduct a detailed analysis on the stability of concept embeddings in medical
domain, particularly the relation with concept frequency. The analysis reveals
the surprising high stability of low-frequency concepts: low-frequency (<100)
concepts have the same high stability as high-frequency (>1000) concepts. To
develop a deeper understanding of this finding, we propose a new factor, the
noisiness of context words, which influences the stability of medical concept
embeddings, regardless of frequency. We evaluate the proposed factor by showing
the linear correlation with the stability of medical concept embeddings. The
correlations are clear and consistent with various groups of medical concepts.
Based on the linear relations, we make suggestions on ways to adjust the
noisiness of context words for the improvement of stability. Finally, we
demonstrate that the proposed factor extends to the word embedding stability in
general domain.
| 2,023 | Computation and Language |
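Stability here can be operationalized in several ways; one common choice, sketched below under that assumption, is the Jaccard overlap of a concept's k nearest neighbours across embedding spaces trained in separate runs. The random embeddings only illustrate the mechanics.

```python
import numpy as np

def top_k_neighbours(emb, idx, k=10):
    """Indices of the k nearest neighbours of row idx by cosine similarity."""
    sims = emb @ emb[idx] / (np.linalg.norm(emb, axis=1) * np.linalg.norm(emb[idx]) + 1e-9)
    order = np.argsort(-sims)
    return set(order[order != idx][:k])

def stability(emb_a, emb_b, idx, k=10):
    """Jaccard overlap of the neighbour sets of concept idx in two embedding runs."""
    na, nb = top_k_neighbours(emb_a, idx, k), top_k_neighbours(emb_b, idx, k)
    return len(na & nb) / len(na | nb)

rng = np.random.default_rng(0)
run_a = rng.normal(size=(1000, 50))                       # embeddings from run 1
run_b = run_a + rng.normal(scale=0.01, size=(1000, 50))   # a near-identical run 2
print(stability(run_a, run_b, idx=3))                     # high overlap expected
```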
Obfuscation for Privacy-preserving Syntactic Parsing | The goal of homomorphic encryption is to encrypt data such that another party
can operate on it without being explicitly exposed to the content of the
original data. We introduce an idea for a privacy-preserving transformation on
natural language data, inspired by homomorphic encryption. Our primary tool is
{\em obfuscation}, relying on the properties of natural language. Specifically,
a given English text is obfuscated using a neural model that aims to preserve
the syntactic relationships of the original sentence so that the obfuscated
sentence can be parsed instead of the original one. The model works at the word
level, and learns to obfuscate each word separately by changing it into a new
word that has a similar syntactic role. The text obfuscated by our model leads
to better performance on three syntactic parsers (two dependency and one
constituency parsers) in comparison to an upper-bound random substitution
baseline. More specifically, the results demonstrate that as more terms are
obfuscated (by their part of speech), the substitution upper bound
significantly degrades, while the neural model maintains a relatively high
performing parser. All of this is done without much sacrifice of privacy
compared to the random substitution upper bound. We also further analyze the
results, and discover that the substituted words have similar syntactic
properties, but different semantic content, compared to the original words.
| 2,020 | Computation and Language |