Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64, 1.99k-2.02k) | Categories (stringclasses, 1 value)
---|---|---|---|
Constructing Artificial Data for Fine-tuning for Low-Resource Biomedical
Text Tagging with Applications in PICO Annotation | Biomedical text tagging systems are plagued by the dearth of labeled training
data. There have been recent attempts at using pre-trained encoders to deal
with this issue. A pre-trained encoder provides a representation of the input text
which is then fed to task-specific layers for classification. The entire
network is fine-tuned on the labeled data from the target task. Unfortunately,
a low-resource biomedical task often has too few labeled instances for
satisfactory fine-tuning. Also, if the label space is large, it contains few or
no labeled instances for the majority of the labels. Most biomedical tagging
systems treat labels as indexes, ignoring the fact that these labels are often
concepts expressed in natural language e.g. `Appearance of lesion on brain
imaging'. To address these issues, we propose constructing extra labeled
instances using label-text (i.e. label's name) as input for the corresponding
label-index (i.e. label's index). In fact, we propose a number of strategies
for manufacturing multiple artificial labeled instances from a single label.
The network is then fine-tuned on a combination of real and these newly
constructed artificial labeled instances. We evaluate the proposed approach on
an important low-resource biomedical task called \textit{PICO annotation},
which requires tagging raw text describing clinical trials with labels
corresponding to different aspects of the trial i.e. PICO (Population,
Intervention/Control, Outcome) characteristics of the trial. Our empirical
results show that the proposed method achieves a new state-of-the-art
performance for PICO annotation with very significant improvements over
competitive baselines.
| 2,020 | Computation and Language |
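A minimal sketch of the core idea in the abstract above: turning each label's name into extra training instances for fine-tuning. The `label_names` mapping and the template strings are illustrative assumptions, not the paper's exact construction strategies.

```python
# Hypothetical label-index -> label-text mapping (illustrative only).
label_names = {
    0: "Appearance of lesion on brain imaging",
    1: "Age of study population",
}

def artificial_instances(label_names):
    """Build (text, label_index) pairs from label names alone, so that rare
    labels get at least a few labeled examples for fine-tuning."""
    templates = [
        "{name}",                      # label text used verbatim as input
        "This trial reports {name}.",  # label text embedded in a template sentence
    ]
    return [(t.format(name=name.lower()), idx)
            for idx, name in label_names.items()
            for t in templates]

extra = artificial_instances(label_names)
# `extra` would be mixed with the real labeled instances before fine-tuning.
```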
Human-Like Decision Making: Document-level Aspect Sentiment
Classification via Hierarchical Reinforcement Learning | Recently, neural networks have shown promising results on Document-level
Aspect Sentiment Classification (DASC). However, these approaches often offer
little transparency w.r.t. their inner working mechanisms and lack
interpretability. In this paper, to simulate the steps humans take when analyzing aspect
sentiment in a document, we propose a new Hierarchical
Reinforcement Learning (HRL) approach to DASC. This approach incorporates
clause selection and word selection strategies to tackle the data noise problem
in the task of DASC. First, a high-level policy is proposed to select
aspect-relevant clauses and discard noisy clauses. Then, a low-level policy is
proposed to select sentiment-relevant words and discard noisy words inside the
selected clauses. Finally, a sentiment rating predictor is designed to provide
reward signals to guide both clause and word selection. Experimental results
demonstrate the impressive effectiveness of the proposed approach to DASC over
the state-of-the-art baselines.
| 2,019 | Computation and Language |
Text Matters but Speech Influences: A Computational Analysis of
Syntactic Ambiguity Resolution | Analyzing how human beings resolve syntactic ambiguity has long been an issue
of interest in the field of linguistics. It is, at the same time, one of the
most challenging issues for spoken language understanding (SLU) systems as
well. As syntactic ambiguity is intertwined with issues regarding prosody and
semantics, the computational approach toward speech intention identification is
expected to benefit from the observations of the human language processing
mechanism. In this regard, we address the task with attentive recurrent neural
networks that exploit acoustic and textual features simultaneously and reveal
how the modalities interact with each other to derive sentence meaning.
Utilizing a speech corpus recorded on Korean scripts of syntactically ambiguous
utterances, we reveal that co-attention frameworks, namely multi-hop
attention and cross-attention, show significantly superior performance in
disambiguating speech intention. With further analysis, we demonstrate that the
computational models reflect the internal relationship between auditory and
linguistic processes.
| 2,020 | Computation and Language |
On Semi-Supervised Multiple Representation Behavior Learning | We propose a novel paradigm of semi-supervised learning (SSL)--the
semi-supervised multiple representation behavior learning (SSMRBL). SSMRBL aims
to tackle the difficulty of learning a grammar for natural language parsing
where the data are natural language texts and the 'labels' for marking data are
parsing trees and/or grammar rule pieces. We call such 'labels' compound
structured labels, which require substantial work for training. SSMRBL is an
incremental learning process that can learn more than one representation, which
is an appropriate solution for dealing with the scarcity of labeled training data
in the age of big data and with the heavy workload of learning compound
structured labels. We also present a typical example of SSMRBL, concerning
behavior learning in the form of a grammatical approach towards domain-based
multiple text summarization (DBMTS). DBMTS works under the framework of
rhetorical structure theory (RST). SSMRBL includes two representations: text
embedding (for representing information contained in the texts) and grammar
model (for representing parsing as a behavior). The first representation was
learned as embedded digital vectors called impacts in a low dimensional space.
The grammar model was learned in an iterative way. Then an automatic
domain-oriented multi-text summarization approach was proposed based on the two
representations discussed above. Experimental results on the large-scale Chinese
dataset SogouCA indicate that the proposed method achieves good performance
even when only a few labeled texts are used for training, with respect to our
defined automated metrics.
| 2,019 | Computation and Language |
Localization of Fake News Detection via Multitask Transfer Learning | The use of the internet as a fast medium of spreading fake news reinforces
the need for computational tools that combat it. Techniques that train fake
news classifiers exist, but they all assume an abundance of resources including
large labeled datasets and expert-curated corpora, which low-resource languages
may not have. In this work, we make two main contributions: First, we alleviate
resource scarcity by constructing the first expertly-curated benchmark dataset
for fake news detection in Filipino, which we call "Fake News Filipino."
Second, we benchmark Transfer Learning (TL) techniques and show that they can
be used to train robust fake news classifiers from little data, achieving 91%
accuracy on our fake news dataset, reducing the error by 14% compared to
established few-shot baselines. Furthermore, lifting ideas from multitask
learning, we show that augmenting transformer-based transfer techniques with
auxiliary language modeling losses improves their performance by adapting to
writing style. Using this, we improve TL performance by 4-6%, achieving an
accuracy of 96% on our best model. Lastly, we show that our method generalizes
well to different types of news articles, including political news,
entertainment news, and opinion articles.
| 2,020 | Computation and Language |
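As a rough illustration of the multitask idea in the abstract above (a classification objective plus an auxiliary language-modelling loss so the transformer adapts to the target writing style), here is a hedged PyTorch-style loss combination; the weighting factor `lm_weight` and the tensor shapes are assumptions, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def multitask_loss(clf_logits, labels, lm_logits, lm_targets, lm_weight=0.5):
    """Classification loss plus an auxiliary LM loss on the same text."""
    clf_loss = F.cross_entropy(clf_logits, labels)            # [B, n_classes] vs [B]
    lm_loss = F.cross_entropy(                                 # next-token prediction
        lm_logits.view(-1, lm_logits.size(-1)), lm_targets.view(-1))
    return clf_loss + lm_weight * lm_loss
```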
Diversify Your Datasets: Analyzing Generalization via Controlled
Variance in Adversarial Datasets | Phenomenon-specific "adversarial" datasets have been recently designed to
perform targeted stress-tests for particular inference types. Recent work (Liu
et al., 2019a) proposed that such datasets can be utilized for training NLI and
other types of models, often allowing them to learn the phenomenon in focus and
improve on the challenge dataset, indicating a "blind spot" in the original
training data. Yet, although a model can improve in such a training process, it
might still be vulnerable to other challenge datasets targeting the same
phenomenon but drawn from a different distribution, such as having a different
syntactic complexity level. In this work, we extend this method to drive
conclusions about a model's ability to learn and generalize a target phenomenon
rather than to "learn" a dataset, by controlling additional aspects in the
adversarial datasets. We demonstrate our approach on two inference phenomena -
dative alternation and numerical reasoning, elaborating, and in some cases
contradicting, the results of Liu et al. Our methodology enables building
better challenge datasets for creating more robust models, and may yield better
model understanding and subsequent overarching improvements.
| 2,019 | Computation and Language |
A Neural Entity Coreference Resolution Review | Entity Coreference Resolution is the task of resolving all mentions in a
document that refer to the same real-world entity, and is considered one of
the most difficult tasks in natural language understanding. It is of great
importance for downstream natural language processing tasks such as entity
linking, machine translation, summarization, chatbots, etc. This work aims to
give a detailed review of current progress on solving Coreference Resolution
using neural-based approaches. It also provides a detailed appraisal of the
datasets and evaluation metrics in the field, as well as the subtask of Pronoun
Resolution, which has seen various improvements in recent years. We highlight
the advantages and disadvantages of the approaches, the challenges of the task,
the lack of agreed-upon standards in the task and propose a way to further
expand the boundaries of the field.
| 2,021 | Computation and Language |
Domain-agnostic Question-Answering with Adversarial Training | Adapting models to a new domain without fine-tuning is a challenging problem in
deep learning. In this paper, we utilize an adversarial training framework for
domain generalization in the Question Answering (QA) task. Our model consists of a
conventional QA model and a discriminator. The training is performed in the
adversarial manner, where the two models constantly compete, so that the QA model
can learn domain-invariant features. We apply this approach to the MRQA Shared Task
2019 and show better performance compared to the baseline model.
| 2,019 | Computation and Language |
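The adversarial setup sketched above (a QA model competing with a domain discriminator to learn domain-invariant features) is commonly implemented with a gradient-reversal layer. Below is a generic PyTorch sketch of that layer; it is an assumption about one standard implementation, not the authors' exact training procedure.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales and flips gradients in the backward
    pass, so features that help the domain discriminator are discouraged."""
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def domain_invariant_features(features, lambd=1.0):
    # Features flow unchanged to the discriminator, but the QA encoder receives
    # reversed gradients and learns to remove domain-specific information.
    return GradReverse.apply(features, lambd)
```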
Improving Word Representations: A Sub-sampled Unigram Distribution for
Negative Sampling | Word2Vec is the most popular model for word representation and has been
widely investigated in literature. However, its noise distribution for negative
sampling is decided by empirical trials, and its optimality has always been
ignored. We suggest that this distribution is a sub-optimal choice, and propose
to use a sub-sampled unigram distribution for better negative sampling. Our
contributions include: (1) proposing the concept of semantics quantification
and deriving a suitable sub-sampling rate for the proposed distribution
adaptive to different training corpora; (2) demonstrating the advantages of our
approach in both negative sampling and noise contrastive estimation by
extensive evaluation tasks; and (3) proposing a semantics weighted model for
the MSR sentence completion task, resulting in considerable improvements. Our
work not only improves the quality of word vectors but also benefits current
understanding of Word2Vec.
| 2,019 | Computation and Language |
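A small numpy sketch contrasting the classic word2vec noise distribution (unigram counts raised to the 3/4 power) with a sub-sampled unigram distribution, as discussed in the abstract above. The keep-probability formula used here is the standard word2vec sub-sampling rate and stands in for the corpus-adaptive rate derived in the paper.

```python
import numpy as np
from collections import Counter

def noise_distributions(tokens, t=1e-4):
    counts = Counter(tokens)
    vocab = sorted(counts)
    rel = np.array([counts[w] for w in vocab], dtype=float)
    rel /= rel.sum()

    # Classic word2vec negative-sampling distribution: unigram ** 0.75.
    classic = rel ** 0.75
    classic /= classic.sum()

    # Sub-sampled unigram: discount each word by its sub-sampling keep probability,
    # so very frequent words contribute fewer negative samples.
    keep = np.minimum(1.0, np.sqrt(t / rel) + t / rel)
    subsampled = rel * keep
    subsampled /= subsampled.sum()
    return vocab, classic, subsampled

vocab, classic, subsampled = noise_distributions("the cat sat on the the mat".split())
```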
The Czech Court Decisions Corpus (CzCDC): Availability as the First Step | In this paper, we describe the Czech Court Decision Corpus (CzCDC). CzCDC is
a dataset of 237,723 decisions published by the Czech apex (or top-tier)
courts, namely the Supreme Court, the Supreme Administrative Court and the
Constitutional Court. All the decisions were published between 1st January 1993
and 30th September 2018.
Court decisions are available on the webpages of the respective courts or via
commercial databases of legal information. This often forces researchers
interested in these decisions to turn either to the respective court or to a
commercial provider, which leads to delays and additional costs. These are
further exacerbated by the lack of an inter-court standard for the data
format in which courts provide their decisions. Additionally, courts' databases
often lack proper documentation.
Our goal is to make the dataset of court decisions freely available online in a
consistent (plain-text) format to lower the cost associated with obtaining data for
future research. We believe that simplified access to court decisions through
the CzCDC could benefit other researchers.
In this paper, we describe the processing of decisions before their inclusion
into CzCDC and basic statistics of the dataset. This dataset contains plain
texts of court decisions; the texts are not annotated for any grammatical
or syntactic features.
| 2,019 | Computation and Language |
Building Dynamic Knowledge Graphs from Text-based Games | We are interested in learning how to update Knowledge Graphs (KG) from text.
In this preliminary work, we propose a novel Sequence-to-Sequence (Seq2Seq)
architecture to generate elementary KG operations. Furthermore, we introduce a
new dataset for KG extraction built upon text-based game transitions (over 300k
data points). We conduct experiments and discuss the results.
| 2,020 | Computation and Language |
Fine-Tuned Neural Models for Propaganda Detection at the Sentence and
Fragment levels | This paper presents the CUNLP submission for the NLP4IF 2019 shared-task on
FineGrained Propaganda Detection. Our system finished 5th out of 26 teams on
the sentence-level classification task and 5th out of 11 teams on the
fragment-level classification task based on our scores on the blind test set.
We present our models, a discussion of our ablation studies and experiments,
and an analysis of our performance on all eighteen propaganda techniques
present in the corpus of the shared task.
| 2,021 | Computation and Language |
Grammatical Gender, Neo-Whorfianism, and Word Embeddings: A Data-Driven
Approach to Linguistic Relativity | The relation between language and thought has occupied linguists for at least
a century. Neo-Whorfianism, a weak version of the controversial Sapir-Whorf
hypothesis, holds that our thoughts are subtly influenced by the grammatical
structures of our native language. One area of investigation in this vein
focuses on how the grammatical gender of nouns affects the way we perceive the
corresponding objects. For instance, does the fact that key is masculine in
German (der Schl\"ussel), but feminine in Spanish (la llave) change the
speakers' views of those objects? Psycholinguistic evidence presented by
Boroditsky et al. (2003, {\S}4) suggested the answer might be yes: When asked
to produce adjectives that best described a key, German and Spanish speakers
named more stereotypically masculine and feminine ones, respectively. However,
recent attempts to replicate those experiments have failed (Mickan et al.,
2014). In this work, we offer a computational analogue of Boroditsky et al.
(2003, {\S}4)'s experimental design on 9 languages, finding evidence against
neo-Whorfianism.
| 2,019 | Computation and Language |
MRQA 2019 Shared Task: Evaluating Generalization in Reading
Comprehension | We present the results of the Machine Reading for Question Answering (MRQA)
2019 shared task on evaluating the generalization capabilities of reading
comprehension systems. In this task, we adapted and unified 18 distinct
question answering datasets into the same format. Among them, six datasets were
made available for training, six datasets were made available for development,
and the final six were hidden for final evaluation. Ten teams submitted
systems, which explored various ideas including data sampling, multi-task
learning, adversarial training and ensembling. The best system achieved an
average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points
higher than our initial baseline based on BERT.
| 2,019 | Computation and Language |
Fine-grained Fact Verification with Kernel Graph Attention Network | Fact Verification requires fine-grained natural language inference capability
that finds subtle clues to identify claims that are syntactically and semantically
correct but not well supported. This paper presents Kernel Graph Attention
Network (KGAT), which conducts more fine-grained fact verification with
kernel-based attentions. Given a claim and a set of potential evidence
sentences that form an evidence graph, KGAT introduces node kernels, which
better measure the importance of each evidence node, and edge kernels, which
conduct fine-grained evidence propagation in the graph, into Graph Attention
Networks for more accurate fact verification. KGAT achieves a 70.38% FEVER
score and significantly outperforms existing fact verification models on FEVER,
a large-scale benchmark for fact verification. Our analyses illustrate that,
compared to dot-product attentions, the kernel-based attention concentrates
more on relevant evidence sentences and meaningful clues in the evidence graph,
which is the main source of KGAT's effectiveness.
| 2,021 | Computation and Language |
Transformer-based Acoustic Modeling for Hybrid Speech Recognition | We propose and evaluate transformer-based acoustic models (AMs) for hybrid
speech recognition. Several modeling choices are discussed in this work,
including various positional embedding methods and an iterated loss to enable
training deep transformers. We also present a preliminary study of using
limited right context in transformer models, which makes them suitable for
streaming applications. We demonstrate that on the widely used Librispeech
benchmark, our transformer-based AM outperforms the best published hybrid
result by 19% to 26% relative when the standard n-gram language model (LM) is
used. Combined with a neural network LM for rescoring, our proposed approach
achieves state-of-the-art results on Librispeech. Our findings are also
confirmed on a much larger internal dataset.
| 2,020 | Computation and Language |
Word-level Embeddings for Cross-Task Transfer Learning in Speech
Processing | Recent breakthroughs in deep learning often rely on representation learning
and knowledge transfer. In recent years, unsupervised and self-supervised
techniques for learning speech representation were developed to foster
automatic speech recognition. To date, most of these approaches are
task-specific and designed for within-task transfer learning between different
datasets or setups of a particular task. In turn, learning task-independent
representation of speech and cross-task applications of transfer learning
remain less common. Here, we introduce an encoder capturing word-level
representations of speech for cross-task transfer learning. We demonstrate the
application of the pre-trained encoder in four distinct speech and audio
processing tasks: (i) speech enhancement, (ii) language identification, (iii)
speech, noise, and music classification, and (iv) speaker identification. In
each task, we compare the performance of our cross-task transfer learning
approach to task-specific baselines. Our results show that the speech
representation captured by the encoder through the pre-training is transferable
across distinct speech processing tasks and datasets. Notably, even simple
applications of our pre-trained encoder outperformed task-specific methods, or
were comparable, depending on the task.
| 2,021 | Computation and Language |
Automatic Extraction of Personality from Text: Challenges and
Opportunities | In this study, we examined the possibility of extracting personality traits from
text. We created an extensive dataset by having experts annotate personality
traits in a large number of texts from multiple online sources. From these
annotated texts, we selected a sample and made further annotations, ending up with
a large low-reliability dataset and a small high-reliability dataset. We then
used the two datasets to train and test several machine learning models to
extract personality from text, including a language model. Finally, we
evaluated our best models in the wild, on datasets from different domains. Our
results show that the models based on the small high-reliability dataset
performed better (in terms of $\textrm{R}^2$) than models based on the large
low-reliability dataset. Also, the language model based on the small high-reliability
dataset performed better than the random baseline. Finally, and more
importantly, the results showed that our best model did not perform better than the
random baseline when tested in the wild. Taken together, our results show that
determining personality traits from text remains a challenge and that no firm
conclusions can be made on model performance before testing in the wild.
| 2,019 | Computation and Language |
Improving Transformer-based Speech Recognition Using Unsupervised
Pre-training | Speech recognition technologies are gaining enormous popularity in various
industrial applications. However, building a good speech recognition system
usually requires large amounts of transcribed data, which is expensive to
collect. To tackle this problem, an unsupervised pre-training method called
Masked Predictive Coding is proposed, which can be applied for unsupervised
pre-training with a Transformer-based model. Experiments on HKUST show that using
the same training data, we can achieve CER 23.3%, exceeding the best end-to-end
model by over 0.2% absolute CER. With more pre-training data, we can further
reduce the CER to 21.0%, an 11.8% relative CER reduction over the baseline.
| 2,019 | Computation and Language |
Scalable Neural Dialogue State Tracking | A Dialogue State Tracker (DST) is a key component in a dialogue system aiming
at estimating the beliefs of possible user goals at each dialogue turn. Most
current DSTs make use of recurrent neural networks and are based on
complex architectures that manage several aspects of a dialogue, including the
user utterance, the system actions, and the slot-value pairs defined in a
domain ontology. However, the complexity of such neural architectures incurs
considerable latency in dialogue state prediction, which limits the
deployment of the models in real-world applications, particularly when task
scalability (i.e., the number of slots) is a crucial factor. In this paper, we
propose an innovative neural model for dialogue state tracking, named Global
encoder and Slot-Attentive decoders (G-SAT), which can predict the dialogue
state with a very low latency time, while maintaining high-level performance.
We report experiments on three different languages (English, Italian, and
German) of the WoZ2.0 dataset, and show that the proposed approach provides
competitive advantages over state-of-the-art DST systems, both in terms of accuracy
and in terms of time complexity for predictions, being over 15 times faster
than the other systems.
| 2,019 | Computation and Language |
Findings of the NLP4IF-2019 Shared Task on Fine-Grained Propaganda
Detection | We present the shared task on Fine-Grained Propaganda Detection, which was
organized as part of the NLP4IF workshop at EMNLP-IJCNLP 2019. There were two
subtasks. FLC is a fragment-level task that asks for the identification of
propagandist text fragments in a news article and also for the prediction of
the specific propaganda technique used in each such fragment (18-way
classification task). SLC is a sentence-level binary classification task asking
to detect the sentences that contain propaganda. A total of 12 teams submitted
systems for the FLC task, 25 teams did so for the SLC task, and 14 teams
eventually submitted a system description paper. For both subtasks, most
systems managed to beat the baseline by a sizable margin. The leaderboard and
the data from the competition are available at
http://propaganda.qcri.org/nlp4if-shared-task/.
| 2,019 | Computation and Language |
GPU-Accelerated Viterbi Exact Lattice Decoder for Batched Online and
Offline Speech Recognition | We present an optimized weighted finite-state transducer (WFST) decoder
capable of online streaming and offline batch processing of audio using
Graphics Processing Units (GPUs). The decoder is efficient in memory
utilization and input/output (I/O) bandwidth, and uses a novel Viterbi
implementation designed to maximize parallelism. The reduced memory footprint
allows the decoder to process significantly larger graphs than previously
possible, while optimizing I/O increases the number of simultaneous streams
supported. GPU preprocessing of lattice segments enables intermediate lattice
results to be returned to the requestor during streaming inference.
Collectively, the proposed algorithm yields up to a 240x speedup over single
core CPU decoding, and up to 40x faster decoding than the current
state-of-the-art GPU decoder, while returning equivalent results. This decoder
design enables deployment of production-grade ASR models on a large spectrum of
systems, ranging from large data center servers to low-power edge devices.
| 2,020 | Computation and Language |
Depth-Adaptive Transformer | State of the art sequence-to-sequence models for large scale tasks perform a
fixed number of computations for each input sequence regardless of whether it
is easy or hard to process. In this paper, we train Transformer models which
can make output predictions at different stages of the network and we
investigate different ways to predict how much computation is required for a
particular sequence. Unlike dynamic computation in Universal Transformers,
which applies the same set of layers iteratively, we apply different layers at
every step to adjust both the amount of computation as well as the model
capacity. On IWSLT German-English translation our approach matches the accuracy
of a well tuned baseline Transformer while using less than a quarter of the
decoder layers.
| 2,020 | Computation and Language |
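A hedged sketch of the depth-adaptive idea described above: attach an output classifier to every block and stop computing once the prediction is confident enough. The sketch uses self-attention encoder blocks for simplicity (a real decoder would also attend to the source), and the confidence-threshold halting rule and hyperparameters are illustrative assumptions rather than the paper's exact mechanisms.

```python
import torch
import torch.nn as nn

class EarlyExitStack(nn.Module):
    def __init__(self, d_model=256, n_layers=6, vocab=1000, threshold=0.9):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers))
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab) for _ in range(n_layers))
        self.threshold = threshold

    def forward(self, x):                      # x: [B, T, d_model]
        pred, depth = None, 0
        for depth, (layer, head) in enumerate(zip(self.layers, self.heads), 1):
            x = layer(x)
            probs = head(x).softmax(dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.mean() > self.threshold:   # halt early for "easy" sequences
                break
        return pred, depth                     # prediction and layers actually used
```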
Toward estimating personal well-being using voice | Estimating personal well-being draws increasing attention, particularly from
the healthcare and pharmaceutical industries. We propose an approach to estimate
personal well-being in terms of various measurements such as anxiety, sleep
quality and mood using voice. With clinically validated questionnaires to score
those measurements in a self-assessed way, we extract salient features from
voice and train regression models with deep neural networks. Experiments with
the collected database of 219 subjects show promising results in predicting the
well-being related measurements; concordance correlation coefficients (CCC)
between self-assessed scores and predicted scores are 0.41 for anxiety, 0.44
for sleep quality and 0.38 for mood.
| 2,019 | Computation and Language |
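The concordance correlation coefficient (CCC) quoted in the abstract above can be computed with its standard definition; the helper below is a plain numpy implementation for reference.

```python
import numpy as np

def concordance_cc(y_true, y_pred):
    """Concordance correlation coefficient between self-assessed and predicted scores."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mt, mp = y_true.mean(), y_pred.mean()
    vt, vp = y_true.var(), y_pred.var()
    cov = ((y_true - mt) * (y_pred - mp)).mean()
    return 2 * cov / (vt + vp + (mt - mp) ** 2)

# Example: perfectly concordant predictions give CCC = 1.0.
assert abs(concordance_cc([1, 2, 3, 4], [1, 2, 3, 4]) - 1.0) < 1e-9
```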
Universal Decompositional Semantic Parsing | We introduce a transductive model for parsing into Universal Decompositional
Semantics (UDS) representations, which jointly learns to map natural language
utterances into UDS graph structures and annotate the graph with
decompositional semantic attribute scores. We also introduce a strong pipeline
model for parsing into the UDS graph structure, and show that our transductive
parser performs comparably while additionally performing attribute prediction.
By analyzing the attribute prediction errors, we find the model captures
natural relationships between attribute groups.
| 2,020 | Computation and Language |
Robust Neural Machine Translation for Clean and Noisy Speech Transcripts | Neural machine translation models have been shown to achieve high quality when
trained and fed with well structured and punctuated input texts. Unfortunately,
the latter condition is not met in spoken language translation, where the input
is generated by an automatic speech recognition (ASR) system. In this paper, we
study how to adapt a strong NMT system to make it robust to typical ASR errors.
As in our application scenarios transcripts might be post-edited by human
experts, we propose adaptation strategies to train a single system that can
translate either clean or noisy input with no supervision on the input type.
Our experimental results on a public speech translation data set show that
adapting a model on a significant amount of parallel data including ASR
transcripts is beneficial with test data of the same type, but produces a small
degradation when translating clean text. Adapting on both clean and noisy
variants of the same data leads to the best results on both input types.
| 2,019 | Computation and Language |
Capturing Greater Context for Question Generation | Automatic question generation can benefit many applications ranging from
dialogue systems to reading comprehension. While questions are often asked with
respect to long documents, there are many challenges with modeling such long
documents. Many existing techniques generate questions by effectively looking
at one sentence at a time, leading to questions that are easy and not
reflective of the human process of question generation. Our goal is to
incorporate interactions across multiple sentences to generate realistic
questions for long documents. In order to link a broad document context to the
target answer, we represent the relevant context via a multi-stage attention
mechanism, which forms the foundation of a sequence-to-sequence model. We
outperform state-of-the-art methods on question generation on three
question-answering datasets -- SQuAD, MS MARCO and NewsQA.
| 2,019 | Computation and Language |
A Search-based Neural Model for Biomedical Nested and Overlapping Event
Detection | We tackle the nested and overlapping event detection task and propose a novel
search-based neural network (SBNN) structured prediction model that treats the
task as a search problem on a relation graph of trigger-argument structures.
Unlike existing structured prediction tasks such as dependency parsing, the
task aims to detect DAG structures, which constitute events, from the
relation graph. We define actions to construct events and use all the beams in
a beam search to detect all event structures that may be overlapping and
nested. The search process constructs events in a bottom-up manner while
modelling the global properties for nested and overlapping structures
simultaneously using neural networks. We show that the model achieves
performance comparable to the state-of-the-art model Turku Event Extraction
System (TEES) on the BioNLP Cancer Genetics (CG) Shared Task 2013 without the
use of any syntactic or hand-engineered features. Further analyses on the
development set show that our model is more computationally efficient while
yielding higher F1-score performance.
| 2,019 | Computation and Language |
RNN based Incremental Online Spoken Language Understanding | Spoken Language Understanding (SLU) typically comprises an automatic
speech recognition (ASR) module followed by a natural language understanding (NLU)
module. The two modules process signals in a blocking sequential fashion, i.e.,
the NLU often has to wait for the ASR to finish processing on an utterance
basis, potentially leading to high latencies that render the spoken interaction
less natural. In this paper, we propose recurrent neural network (RNN) based
incremental processing towards the SLU task of intent detection. The proposed
methodology offers lower latencies than a typical SLU system, without any
significant reduction in system accuracy. We introduce and analyze different
recurrent neural network architectures for incremental and online processing of
the ASR transcripts and compare them to existing offline systems. A lexical
End-of-Sentence (EOS) detector is proposed for segmenting the stream of
transcripts into sentences for intent classification. Intent detection
experiments are conducted on benchmark ATIS, Snips and Facebook's multilingual
task oriented dialog datasets modified to emulate a continuous incremental
stream of words with no utterance demarcation. We also analyze the prospects of
early intent detection, before EOS, with our proposed system.
| 2,020 | Computation and Language |
Location-Relative Attention Mechanisms For Robust Long-Form Speech
Synthesis | Despite the ability to produce human-level speech for in-domain text,
attention-based end-to-end text-to-speech (TTS) systems suffer from text
alignment failures that increase in frequency for out-of-domain text. We show
that these failures can be addressed using simple location-relative attention
mechanisms that do away with content-based query/key comparisons. We compare
two families of attention mechanisms: location-relative GMM-based mechanisms
and additive energy-based mechanisms. We suggest simple modifications to
GMM-based attention that allow it to align quickly and consistently during
training, and introduce a new location-relative attention mechanism to the
additive energy-based family, called Dynamic Convolution Attention (DCA). We
compare the various mechanisms in terms of alignment speed and consistency
during training, naturalness, and ability to generalize to long utterances, and
conclude that GMM attention and DCA can generalize to very long utterances,
while preserving naturalness for shorter, in-domain utterances.
| 2,020 | Computation and Language |
Deja-vu: Double Feature Presentation and Iterated Loss in Deep
Transformer Networks | Deep acoustic models typically receive features in the first layer of the
network, and process increasingly abstract representations in the subsequent
layers. Here, we propose to feed the input features at multiple depths in the
acoustic model. As our motivation is to allow acoustic models to re-examine
their input features in light of partial hypotheses, we introduce intermediate
model heads and loss functions. We study this architecture in the context of
deep Transformer networks, and we use an attention mechanism over both the
previous layer activations and the input features. To train this model's
intermediate output hypothesis, we apply the objective function at each layer
right before feature re-use. We find that the use of such iterated loss
significantly improves performance by itself, as well as enabling input feature
re-use. We present results on both Librispeech, and a large scale video
dataset, with relative improvements of 10 - 20% for Librispeech and 3.2 - 13%
for videos.
| 2,020 | Computation and Language |
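A hedged sketch of the iterated-loss part of the design above: auxiliary prediction heads are attached at selected depths and their objectives are summed with the final one. Feeding the input features back in at those depths (the "double feature presentation") is omitted here, and the layer count and `head_every` spacing are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IteratedLossEncoder(nn.Module):
    def __init__(self, d_model=256, n_layers=12, n_targets=100, head_every=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers))
        self.head_at = sorted(range(head_every - 1, n_layers, head_every))
        self.heads = nn.ModuleDict(
            {str(i): nn.Linear(d_model, n_targets) for i in self.head_at})

    def forward(self, feats, targets):
        """feats: [B, T, d_model]; targets: [B, T] frame-level labels."""
        x, total_loss = feats, 0.0
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i in self.head_at:              # intermediate ("iterated") objective
                logits = self.heads[str(i)](x)
                total_loss = total_loss + F.cross_entropy(
                    logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        return x, total_loss
```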
Speech-XLNet: Unsupervised Acoustic Model Pretraining For Self-Attention
Networks | Self-attention network (SAN) can benefit significantly from the
bi-directional representation learning through unsupervised pretraining
paradigms such as BERT and XLNet. In this paper, we present an XLNet-like
pretraining scheme "Speech-XLNet" for unsupervised acoustic model pretraining
to learn speech representations with SAN. The pretrained SAN is finetuned under
the hybrid SAN/HMM framework. We conjecture that by shuffling the speech frame
orders, the permutation in Speech-XLNet serves as a strong regularizer to
encourage the SAN to make inferences by focusing on global structures through
its attention weights. In addition, Speech-XLNet also allows the model to
explore the bi-directional contexts for effective speech representation
learning. Experiments on TIMIT and WSJ demonstrate that Speech-XLNet greatly
improves the SAN/HMM performance in terms of both convergence speed and
recognition accuracy compared to the one trained from randomly initialized
weights. Our best systems achieve a relative improvement of 11.9% and 8.3% on
the TIMIT and WSJ tasks respectively. In particular, the best system achieves a
phone error rate (PER) of 13.3% on the TIMIT test set, which, to the best of our
knowledge, is the lowest PER obtained from a single system.
| 2,020 | Computation and Language |
Controlling the Output Length of Neural Machine Translation | The recent advances introduced by neural machine translation (NMT) are
rapidly expanding the application fields of machine translation, as well as
reshaping the quality level to be targeted. In particular, if translations have
to fit some given layout, quality should not only be measured in terms of
adequacy and fluency, but also length. Exemplary cases are the translation of
document files, subtitles, and scripts for dubbing, where the output length
should ideally be as close as possible to the length of the input text. This
paper addresses for the first time, to the best of our knowledge, the problem
of controlling the output length in NMT. We investigate two methods for biasing
the output length with a transformer architecture: i) conditioning the output
to a given target-source length-ratio class and ii) enriching the transformer
positional embedding with length information. Our experiments show that both
methods can induce the network to generate shorter translations, as well as to
acquire interpretable linguistic skills.
| 2,019 | Computation and Language |
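One way to read the second method above (enriching positional embeddings with length information) is to add an encoding of the remaining length budget to the usual position encoding. The sketch below illustrates that reading with standard sinusoidal encodings; the exact formulation in the paper may differ.

```python
import math
import torch

def sinusoid(positions, d_model):
    """Standard sinusoidal encoding for a 1-D tensor of integer positions."""
    pe = torch.zeros(len(positions), d_model)
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float)
                    * (-math.log(10000.0) / d_model))
    pe[:, 0::2] = torch.sin(positions.unsqueeze(1).float() * div)
    pe[:, 1::2] = torch.cos(positions.unsqueeze(1).float() * div)
    return pe

def length_aware_encoding(seq_len, length_budget, d_model=512):
    pos = torch.arange(seq_len)
    remaining = (length_budget - pos).clamp(min=0)
    # Position encoding plus an encoding of how many tokens may still be emitted.
    return sinusoid(pos, d_model) + sinusoid(remaining, d_model)

enc = length_aware_encoding(seq_len=20, length_budget=25)   # shape [20, 512]
```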
XL-Editor: Post-editing Sentences with XLNet | While neural sequence generation models achieve initial success for many NLP
applications, the canonical decoding procedure with left-to-right generation
order (i.e., autoregressive) in one pass cannot reflect the way humans revise a
sentence to obtain a refined result. In this work, we propose
XL-Editor, a novel training framework that enables state-of-the-art generalized
autoregressive pretraining methods, XLNet specifically, to revise a given
sentence by the variable-length insertion probability. Concretely, XL-Editor
can (1) estimate the probability of inserting a variable-length sequence into a
specific position of a given sentence; (2) execute post-editing operations such
as insertion, deletion, and replacement based on the estimated variable-length
insertion probability; (3) complement existing sequence-to-sequence models to
refine the generated sequences. Empirically, we first demonstrate better
post-editing capabilities of XL-Editor over XLNet on the text insertion and
deletion tasks, which validates the effectiveness of our proposed framework.
Furthermore, we extend XL-Editor to the unpaired text style transfer task,
where transferring the target style onto a given sentence can be naturally
viewed as post-editing the sentence into the target style. XL-Editor achieves
significant improvement in style transfer accuracy and also maintains the
coherent semantics of the original sentence, showing the broad applicability of our
method.
| 2,019 | Computation and Language |
Fully Quantized Transformer for Machine Translation | State-of-the-art neural machine translation methods employ massive amounts of
parameters. Drastically reducing computational costs of such methods without
affecting performance has been up to this point unsuccessful. To this end, we
propose FullyQT: an all-inclusive quantization strategy for the Transformer. To
the best of our knowledge, we are the first to show that it is possible to
avoid any loss in translation quality with a fully quantized Transformer.
Indeed, compared to full-precision, our 8-bit models score greater or equal
BLEU on most tasks. Comparing ourselves to all previously proposed methods, we
achieve state-of-the-art quantization results.
| 2,020 | Computation and Language |
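For reference, the core operation behind fully quantized training is usually simulated ("fake") quantization of weights and activations onto an 8-bit grid during the forward pass. The helper below is a generic sketch of uniform fake quantization, not the authors' exact scheme.

```python
import torch

def fake_quantize(x, num_bits=8):
    """Quantize a tensor to a uniform num_bits grid and return the de-quantized
    floats, so downstream computation sees the quantization error."""
    qmin, qmax = 0.0, 2.0 ** num_bits - 1.0
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = qmin - torch.round(x.min() / scale)
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale

w = torch.randn(4, 4)
w_q = fake_quantize(w)      # same shape, values snapped to 256 levels
```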
Does Gender Matter? Towards Fairness in Dialogue Systems | Recently, there have been increasing concerns about the fairness of Artificial
Intelligence (AI) in real-world applications such as computer vision and
recommendations. For example, recognition algorithms in computer vision have been
shown to be unfair to Black people, poorly detecting their faces and inappropriately
identifying them as "gorillas". As one crucial application of AI, dialogue
systems have been extensively applied in our society. They are usually built
with real human conversational data; thus they could inherit fairness
issues that exist in the real world. However, the fairness of dialogue
systems has not been well investigated. In this paper, we perform a pioneering
study about the fairness issues in dialogue systems. In particular, we
construct a benchmark dataset and propose quantitative measures to understand
fairness in dialogue models. Our studies demonstrate that popular dialogue
models show significant prejudice towards different genders and races. Besides,
to mitigate the bias in dialogue systems, we propose two simple but effective
debiasing methods. Experiments show that our methods can reduce the bias in
dialogue systems significantly. The dataset and the implementation are released
to foster fairness research in dialogue systems.
| 2,020 | Computation and Language |
Memory-Augmented Recurrent Networks for Dialogue Coherence | Recent dialogue approaches operate by reading each word in a conversation
history, and aggregating accrued dialogue information into a single state. This
fixed-size vector is not expandable and must maintain a consistent format over
time. Other recent approaches exploit an attention mechanism to extract useful
information from past conversational utterances, but this introduces an
increased computational complexity. In this work, we explore the use of the
Neural Turing Machine (NTM) to provide a more permanent and flexible storage
mechanism for maintaining dialogue coherence. Specifically, we introduce two
separate dialogue architectures based on this NTM design. The first design
features a sequence-to-sequence architecture with two separate NTM modules, one
for each participant in the conversation. The second memory architecture
incorporates a single NTM module, which stores parallel context information for
both speakers. This second design also replaces the sequence-to-sequence
architecture with a neural language model, to allow for longer context of the
NTM and greater understanding of the dialogue history. We report perplexity
performance for both models, and compare them to existing baselines.
| 2,019 | Computation and Language |
Automated Text Summarization for the Enhancement of Public Services | Natural language processing and machine learning algorithms have been shown
to be effective in a variety of applications. In this work, we contribute to
the area of AI adoption in the public sector. We present an automated system
that was used to process textual information, generate important keywords, and
automatically summarize key elements of the Meadville community statements. We
also describe the process of collaboration with My Meadville administrators
during the development of our system. My Meadville, a community initiative
supported by the city of Meadville, conducted a large number of interviews with
the residents of Meadville during community events and transcribed these
interviews into textual data files. Their goal was to uncover the issues of
importance to the Meadville residents in an attempt to enhance public services.
Our AI system cleans and pre-processes the interview data; then, using machine
learning algorithms, it finds important keywords and key excerpts from each
interview. It also provides searching functionality to find excerpts from
relevant interviews based on specific keywords. Our automated system allowed
the city to save over 300 hours of human labor that it would have taken to read
all interviews and highlight important points. Our findings are being used by
the My Meadville initiative to locate important information from the collected data
set for ongoing community enhancement projects, to highlight relevant community
assets, and to assist in identifying the steps to be taken based on the
concerns and areas of improvement identified by the community members.
| 2,019 | Computation and Language |
Estimator Vectors: OOV Word Embeddings based on Subword and Context Clue
Estimates | Semantic representations of words have been successfully extracted from
unlabeled corpora using neural network models like word2vec. These
representations are generally high quality and are computationally inexpensive
to train, making them popular. However, these approaches generally fail to
approximate out-of-vocabulary (OOV) words, a task humans can do quite easily,
using word roots and context clues. This paper proposes a neural network model
that learns high quality word representations, subword representations, and
context clue representations jointly. Learning all three types of
representations together enhances the learning of each, leading to enriched
word vectors, along with strong estimates for OOV words, via the combination of
the corresponding context clue and subword embeddings. Our model, called
Estimator Vectors (EV), learns strong word embeddings and is competitive with
state of the art methods for OOV estimation.
| 2,019 | Computation and Language |
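A small sketch of the estimation step described above: an out-of-vocabulary word's vector is approximated by combining the embeddings of its character n-grams (subwords) and, when available, the embeddings of surrounding context-clue words. The lookups `subword_emb` and `context_emb` are hypothetical, and the combination by simple averaging is an assumption rather than the paper's exact formula.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    padded = f"<{word}>"
    return [padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def estimate_oov_vector(word, subword_emb, context_words=(), context_emb=None):
    """Average subword vectors and context-clue vectors to estimate an OOV embedding."""
    pieces = [subword_emb[g] for g in char_ngrams(word) if g in subword_emb]
    if context_emb is not None:
        pieces += [context_emb[w] for w in context_words if w in context_emb]
    return np.mean(pieces, axis=0) if pieces else None
```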
Question Classification with Deep Contextualized Transformer | The latest work on question-and-answer problems uses the Stanford parse
tree. We build on prior work and develop a new method that handles the
question-and-answer problem with a deep contextualized transformer to manage
some aberrant expressions. We also conduct extensive evaluations on the SQuAD
and SwDA datasets and show significant improvement in QA problem classification
for industry needs. We further investigate the impact of different models on the
accuracy and efficiency of the answers, showing that our new method is more
effective for solving QA problems with higher accuracy.
| 2,021 | Computation and Language |
IPOD: An Industrial and Professional Occupations Dataset and its
Applications to Occupational Data Mining and Analysis | Occupational data mining and analysis is an important task in understanding
today's industry and job market. Various machine learning techniques are
proposed and gradually deployed to improve companies' operations for upstream
tasks, such as employee churn prediction, career trajectory modelling and
automated interviewing. Job title analysis and embedding, as the fundamental
building blocks, are crucial upstream tasks for addressing these occupational data
mining and analysis problems. In this work, we present the Industrial and
Professional Occupations Dataset (IPOD), which consists of over 190,000 job
titles crawled from over 56,000 profiles from Linkedin. We also illustrate the
usefulness of IPOD by addressing two challenging upstream tasks, including: (i)
proposing Title2vec, a contextual job title vector representation using a
bidirectional Language Model (biLM) approach; and (ii) addressing the important
occupational Named Entity Recognition problem using Conditional Random Fields
(CRF) and bidirectional Long Short-Term Memory with CRF (LSTM-CRF). Both CRF
and LSTM-CRF outperform humans and baselines in both exact-match accuracy and F1
scores. The dataset and pre-trained embeddings are available at
https://www.github.com/junhua/ipod.
| 2,020 | Computation and Language |
Opinion aspect extraction in Dutch children's diary entries | Aspect extraction can be used in dialogue systems to understand the topic of
opinionated text. Expressing an empathetic reaction to an opinion can
strengthen the bond between a human and, for example, a robot. The aim of this
study is three-fold: 1. create a new annotated dataset for both aspect
extraction and opinion words for Dutch children's language, 2. acquire aspect
extraction results for this task and 3. improve current results for aspect
extraction in Dutch reviews. This was done by training a deep learning Gated
Recurrent Unit (GRU) model, originally developed for an English review dataset,
on Dutch restaurant review data to classify both opinion words and their
respective aspects. We obtained state-of-the-art performance on the Dutch
restaurant review dataset. Additionally, we acquired aspect extraction results
for the Dutch children's dataset. Since the model was trained on standardised
language, these results are quite promising.
| 2,019 | Computation and Language |
Speaker Adaptive Training using Model Agnostic Meta-Learning | Speaker adaptive training (SAT) of neural network acoustic models learns
models in a way that makes them more suitable for adaptation to test
conditions. Conventionally, model-based speaker adaptive training is performed
by having a set of speaker dependent parameters that are jointly optimised with
speaker independent parameters in order to remove speaker variation. However,
this does not scale well if all neural network weights are to be adapted to the
speaker. In this paper we formulate speaker adaptive training as a
meta-learning task, in which an adaptation process using gradient descent is
encoded directly into the training of the model. We compare our approach with
test-only adaptation of a standard baseline model and a SAT-LHUC model with a
learned speaker adaptation schedule and demonstrate that the meta-learning
approach achieves comparable results.
| 2,019 | Computation and Language |
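A rough first-order MAML-style sketch of the meta-learning formulation above: the model is copied and adapted on a speaker's support utterances, evaluated on that speaker's query utterances, and the query-loss gradients are applied to the shared initialisation. This is a generic first-order approximation under assumed data structures, not the authors' exact training recipe.

```python
import copy
import torch

def meta_speaker_step(model, loss_fn, speaker_batches, meta_opt, inner_lr=0.01):
    """speaker_batches: iterable of ((x_sup, y_sup), (x_qry, y_qry)) per speaker."""
    meta_opt.zero_grad()
    for (x_sup, y_sup), (x_qry, y_qry) in speaker_batches:
        adapted = copy.deepcopy(model)                       # speaker-specific copy
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        inner_opt.zero_grad()
        loss_fn(adapted(x_sup), y_sup).backward()            # speaker adaptation step
        inner_opt.step()
        q_loss = loss_fn(adapted(x_qry), y_qry)              # evaluate adapted model
        grads = torch.autograd.grad(q_loss, list(adapted.parameters()))
        for p, g in zip(model.parameters(), grads):          # first-order meta-gradient
            p.grad = g.clone() if p.grad is None else p.grad + g
    meta_opt.step()
```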
Instance-Based Model Adaptation For Direct Speech Translation | Despite recent technology advancements, the effectiveness of neural
approaches to end-to-end speech-to-text translation is still limited by the
paucity of publicly available training corpora. We tackle this limitation with
a method to improve data exploitation and boost the system's performance at
inference time. Our approach allows us to customize "on the fly" an existing
model to each incoming translation request. At its core, it exploits an
instance selection procedure to retrieve, from a given pool of data, a small
set of samples similar to the input query in terms of latent properties of its
audio signal. The retrieved samples are then used for an instance-specific
fine-tuning of the model. We evaluate our approach in three different
scenarios. In all data conditions (different languages, in/out-of-domain
adaptation), our instance-based adaptation yields coherent performance gains
over static models.
| 2,019 | Computation and Language |
Efficient Dynamic WFST Decoding for Personalized Language Models | We propose a two-layer cache mechanism to speed up dynamic WFST decoding with
personalized language models. The first layer is a public cache that stores
most of the static part of the graph. This is shared globally among all users.
A second layer is a private cache that caches the graph that represents the
personalized language model, which is only shared by the utterances from a
particular user. We also propose two simple yet effective pre-initialization
methods, one based on breadth-first search, and another based on a data-driven
exploration of decoder states using previous utterances. Experiments with a
calling speech recognition task using a personalized contact list demonstrate
that the proposed public cache reduces decoding time by a factor of three
compared to decoding without pre-initialization. Using the private cache
provides additional efficiency gains, reducing the decoding time by a factor of
five.
| 2,019 | Computation and Language |
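A schematic Python sketch of the two-layer cache described above: a globally shared public cache for the static part of the decoding graph and per-user private caches for the personalized language-model part. `expand_state` is a hypothetical callback that expands a graph state on a cache miss; the real decoder operates on WFST data structures rather than Python dictionaries.

```python
class TwoLayerCache:
    def __init__(self, expand_state):
        self.public = {}                # shared by all users (static graph part)
        self.private = {}               # user_id -> {state: arcs} (personalized part)
        self.expand_state = expand_state

    def get_arcs(self, state, user_id=None, personalized=False):
        """Return the expanded arcs for a state, expanding and caching on a miss."""
        cache = self.private.setdefault(user_id, {}) if personalized else self.public
        if state not in cache:
            cache[state] = self.expand_state(state, user_id if personalized else None)
        return cache[state]

# Example usage with a stand-in expansion function.
cache = TwoLayerCache(expand_state=lambda state, user: [(state, "arc")])
arcs = cache.get_arcs(state=0, user_id="alice", personalized=True)
```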
A practical two-stage training strategy for multi-stream end-to-end
speech recognition | The multi-stream paradigm of audio processing, in which several sources are
simultaneously considered, has been an active research area for information
fusion. Our previous study offered a promising direction within end-to-end
automatic speech recognition, where parallel encoders aim to capture diverse
information followed by a stream-level fusion based on attention mechanisms to
combine the different views. However, with an increasing number of streams
resulting in an increasing number of encoders, the previous approach could
require substantial memory and massive amounts of parallel data for joint
training. In this work, we propose a practical two-stage training scheme.
Stage-1 is to train a Universal Feature Extractor (UFE), where encoder outputs
are produced from a single-stream model trained with all data. Stage-2
formulates a multi-stream scheme intending to solely train the attention fusion
module using the UFE features and pretrained components from Stage-1.
Experiments have been conducted on two datasets, DIRHA and AMI, as a
multi-stream scenario. Compared with our previous method, this strategy
achieves relative word error rate reductions of 8.2--32.4%, while consistently
outperforming several conventional combination methods.
| 2,019 | Computation and Language |
Correction of Automatic Speech Recognition with Transformer
Sequence-to-sequence Model | In this work, we introduce a simple yet efficient post-processing model for
automatic speech recognition (ASR). Our model has Transformer-based
encoder-decoder architecture which "translates" ASR model output into
grammatically and semantically correct text. We investigate different
strategies for regularizing and optimizing the model and show that extensive
data augmentation and the initialization with pre-trained weights are required
to achieve good performance. On the LibriSpeech benchmark, our method
demonstrates significant improvement in word error rate over the baseline
acoustic model with greedy decoding, especially on much noisier dev-other and
test-other portions of the evaluation dataset. Our model also outperforms
baseline with 6-gram language model re-scoring and approaches the performance
of re-scoring with Transformer-XL neural language model.
| 2,019 | Computation and Language |
Analyzing ASR pretraining for low-resource speech-to-text translation | Previous work has shown that for low-resource source languages, automatic
speech-to-text translation (AST) can be improved by pretraining an end-to-end
model on automatic speech recognition (ASR) data from a high-resource language.
However, it is not clear what factors --e.g., language relatedness or size of
the pretraining data-- yield the biggest improvements, or whether pretraining
can be effectively combined with other methods such as data augmentation. Here,
we experiment with pretraining on datasets of varying sizes, including
languages related and unrelated to the AST source language. We find that the
best predictor of final AST performance is the word error rate of the
pretrained ASR model, and that differences in ASR/AST performance correlate
with how phonetic information is encoded in the later RNN layers of our model.
We also show that pretraining and data augmentation yield complementary
benefits for AST.
| 2,020 | Computation and Language |
Hierarchical Transformers for Long Document Classification | BERT, which stands for Bidirectional Encoder Representations from
Transformers, is a recently introduced language representation model based upon
the transfer learning paradigm. We extend its fine-tuning procedure to address
one of its major limitations - applicability to inputs longer than a few
hundred words, such as transcripts of human call conversations. Our method is
conceptually simple. We segment the input into smaller chunks and feed each of
them into the base model. Then, we propagate each output through a single
recurrent layer, or another transformer, followed by a softmax activation. We
obtain the final classification decision after the last segment has been
consumed. We show that both BERT extensions are quick to fine-tune and converge
after as little as 1 epoch of training on a small, domain-specific data set. We
successfully apply them in three different tasks involving customer call
satisfaction prediction and topic classification, and obtain a significant
improvement over the baseline models in two of them.
| 2,019 | Computation and Language |
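A hedged PyTorch sketch of the chunk-then-aggregate recipe described above: the document is split into fixed-size chunks, each chunk is encoded by a base encoder, and a recurrent layer pools the chunk vectors before the softmax classifier. The `base_encoder` interface (a BERT-like module assumed to return one vector per chunk), the chunk length, and the shapes are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalDocClassifier(nn.Module):
    def __init__(self, base_encoder, hidden=768, n_classes=5, chunk_len=128):
        super().__init__()
        self.base = base_encoder        # assumed: maps [B, chunk_len] token ids -> [B, hidden]
        self.chunk_len = chunk_len
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):       # token_ids: [B, T], T possibly thousands of tokens
        chunks = token_ids.split(self.chunk_len, dim=1)
        chunk_vecs = torch.stack([self.base(c) for c in chunks], dim=1)  # [B, n_chunks, hidden]
        _, (h_n, _) = self.rnn(chunk_vecs)
        return self.out(h_n[-1])        # decision after the last chunk is consumed
```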
Emergent Properties of Finetuned Language Representation Models | Large, self-supervised transformer-based language representation models have
recently received significant amounts of attention, and have produced
state-of-the-art results across a variety of tasks simply by scaling up
pre-training on larger and larger corpora. Such models usually produce high
dimensional vectors, on top of which additional task-specific layers and
architectural modifications are added to adapt them to specific downstream
tasks. Though there exists ample evidence that such models work well, we aim to
understand what happens when they work well. We analyze the redundancy and
location of information contained in output vectors for one such language
representation model -- BERT. We show empirical evidence that the [CLS]
embedding in BERT contains highly redundant information, and can be compressed
with minimal loss of accuracy, especially for finetuned models, dovetailing
into open threads in the field about the role of over-parameterization in
learning. We also shed light on the existence of specific output dimensions
which alone give very competitive results when compared to using all dimensions
of output vectors.
| 2,019 | Computation and Language |
Relation Module for Non-answerable Prediction on Question Answering | Machine reading comprehension (MRC) has attracted significant amounts of
research attention recently, due to an increase in challenging reading
comprehension datasets. In this paper, we aim to improve an MRC model's ability
to determine whether a question has an answer in a given context (e.g. the
recently proposed SQuAD 2.0 task). Our solution is a relation module that is
adaptable to any MRC model. The relation module consists of both semantic
extraction and relational information. We first extract high level semantics as
objects from both question and context with multi-head self-attentive pooling.
These semantic objects are then passed to a relation network, which generates
relationship scores for each object pair in a sentence. These scores are used
to determine whether a question is non-answerable. We test the relation module
on the SQuAD 2.0 dataset using both BiDAF and BERT models as baseline readers.
We obtain a 1.8% gain in F1 on top of the BiDAF reader, and 1.0% on top of the
BERT base model. These results show the effectiveness of our relation module for
MRC.
| 2,019 | Computation and Language |
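A hedged sketch of the two ingredients named above, multi-head self-attentive pooling into semantic "objects" and a relation network over object pairs; the dimensions and the way pair scores are collapsed into a single answerability logit are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class SelfAttentivePooling(nn.Module):
    """Pool a sequence of token vectors into `heads` semantic 'objects'."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.score = nn.Linear(dim, heads)

    def forward(self, h):                                    # h: (batch, seq, dim)
        attn = torch.softmax(self.score(h), dim=1)           # (batch, seq, heads)
        return torch.einsum("bsh,bsd->bhd", attn, h)         # (batch, heads, dim)

class RelationScorer(nn.Module):
    """Score every pair of pooled objects and aggregate into one answerability logit."""
    def __init__(self, dim):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, objects):                              # (batch, heads, dim)
        b, k, d = objects.shape
        left = objects.unsqueeze(2).expand(b, k, k, d)
        right = objects.unsqueeze(1).expand(b, k, k, d)
        pair_scores = self.g(torch.cat([left, right], dim=-1))  # (batch, k, k, 1)
        return pair_scores.mean(dim=(1, 2))                  # (batch, 1) answerability logit

pool, rel = SelfAttentivePooling(128), RelationScorer(128)
token_states = torch.randn(2, 50, 128)                       # e.g. reader outputs for question+context
logit = rel(pool(token_states))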
GF + MMT = GLF -- From Language to Semantics through LF | These days, vast amounts of knowledge are available online, most of it in
written form. Search engines help us access this knowledge, but aggregating,
relating and reasoning with it is still a predominantly human effort. One of
the key challenges for automated reasoning based on natural-language texts is
the need to extract meaning (semantics) from texts. Natural language
understanding (NLU) systems describe the conversion from a set of natural
language utterances to terms in a particular logic. Tools for the
co-development of grammar and target logic are currently largely missing.
We will describe the Grammatical Logical Framework (GLF), a combination of
two existing frameworks, in which large parts of a symbolic, rule-based NLU
system can be developed and implemented: the Grammatical Framework (GF) and
MMT. GF is a tool for syntactic analysis, generation, and translation with
complex natural language grammars, and MMT can be used to specify logical
systems and to represent knowledge in them. Combining these tools is possible,
because they are based on compatible logical frameworks: Martin-L\"of type
theory and LF. The flexibility of logical frameworks is needed, as NLU research
has not settled on a particular target logic for meaning representation.
Instead, new logics are developed all the time to handle various language
phenomena. GLF allows users to develop the logic and the language parsing
components in parallel, and to connect them for experimentation with the entire
pipeline.
| 2,019 | Computation and Language |
Selective Attention Based Graph Convolutional Networks for Aspect-Level
Sentiment Classification | Aspect-level sentiment classification aims to identify the sentiment polarity
towards a specific aspect term in a sentence. Most current approaches mainly
consider the semantic information by utilizing attention mechanisms to capture
the interactions between the context and the aspect term. In this paper, we
propose to employ graph convolutional networks (GCNs) on the dependency tree to
learn syntax-aware representations of aspect terms. GCNs often show the best
performance with two layers, and deeper GCNs do not bring additional gain due
to over-smoothing problem. However, in some cases, important context words
cannot be reached within two hops on the dependency tree. Therefore we design a
selective attention based GCN block (SA-GCN) to find the most important context
words, and directly aggregate this information into the aspect-term
representation. We conduct experiments on the SemEval 2014 Task 4 datasets. Our
experimental results show that our model outperforms the current
state-of-the-art.
| 2,021 | Computation and Language |
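The abstract above combines GCN layers over the dependency tree with an attention step that selects important context words; the sketch below is one plausible reading of that pipeline, with the top-k selection and final aggregation assumed for illustration rather than taken from the paper.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution over a dependency-tree adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h, adj):                      # h: (batch, n, dim), adj: (batch, n, n)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin(torch.bmm(adj, h) / deg))

def aspect_with_selected_context(h, adj, aspect_idx, top_k=3):
    """Two GCN hops, then attend from the aspect term to pick top-k context words.

    In a real model the GCN layers would be module attributes trained end to end;
    they are built inline here only to keep the sketch short.
    """
    gcn1, gcn2 = GCNLayer(h.size(-1)), GCNLayer(h.size(-1))
    h = gcn2(gcn1(h, adj), adj)
    aspect = h[:, aspect_idx]                                 # (batch, dim)
    scores = torch.einsum("bd,bnd->bn", aspect, h)            # attention scores over all words
    top = scores.topk(top_k, dim=-1).indices
    selected = torch.gather(h, 1, top.unsqueeze(-1).expand(-1, -1, h.size(-1)))
    return aspect + selected.mean(dim=1)                      # aggregate into the aspect representation

tokens = torch.randn(2, 10, 64)
adj = torch.randint(0, 2, (2, 10, 10)).float()                # toy dependency adjacency
rep = aspect_with_selected_context(tokens, adj, aspect_idx=4)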
Combining Acoustics, Content and Interaction Features to Find Hot Spots
in Meetings | Involvement hot spots have been proposed as a useful concept for meeting
analysis and studied off and on for over 15 years. These are regions of
meetings that are marked by high participant involvement, as judged by human
annotators. However, prior work was either not conducted in a formal machine
learning setting, or focused on only a subset of possible meeting features or
downstream applications (such as summarization). In this paper we investigate
to what extent various acoustic, linguistic and pragmatic aspects of the
meetings, both in isolation and jointly, can help detect hot spots. In this
context, the openSMILE toolkit is used to extract features based on
acoustic-prosodic cues, BERT word embeddings are used for encoding the lexical
content, and a variety of statistics based on speech activity are used to
describe the verbal interaction among participants. In experiments on the
annotated ICSI meeting corpus, we find that the lexical model is the most
informative, with incremental contributions from interaction and
acoustic-prosodic model components.
| 2,020 | Computation and Language |
Low-Resource Sequence Labeling via Unsupervised Multilingual
Contextualized Representations | Previous work on cross-lingual sequence labeling tasks either requires
parallel data or bridges the two languages through word-by-word matching. Such
requirements and assumptions are infeasible for most languages, especially for
languages with large linguistic distances, e.g., English and Chinese. In this
work, we propose a Multilingual Language Model with deep semantic Alignment
(MLMA) to generate language-independent representations for cross-lingual
sequence labeling. Our methods require only monolingual corpora with no
bilingual resources at all and take advantage of deep contextualized
representations. Experimental results show that our approach achieves new
state-of-the-art NER and POS performance across European languages, and is also
effective on distant language pairs such as English and Chinese.
| 2,019 | Computation and Language |
ESPnet-TTS: Unified, Reproducible, and Integratable Open Source
End-to-End Text-to-Speech Toolkit | This paper introduces a new end-to-end text-to-speech (E2E-TTS) toolkit named
ESPnet-TTS, which is an extension of the open-source speech processing toolkit
ESPnet. The toolkit supports state-of-the-art E2E-TTS models, including
Tacotron~2, Transformer TTS, and FastSpeech, and also provides recipes inspired
by the Kaldi automatic speech recognition (ASR) toolkit. The recipes are based
on a design unified with the ESPnet ASR recipe, providing high
reproducibility. The toolkit also provides pre-trained models and samples of
all of the recipes so that users can use it as a baseline. Furthermore, the
unified design enables the integration of ASR functions with TTS, e.g.,
ASR-based objective evaluation and semi-supervised learning with both ASR and
TTS models. This paper describes the design of the toolkit and experimental
evaluation in comparison with other toolkits. The experimental results show
that our models can achieve state-of-the-art performance comparable to the
other latest toolkits, resulting in a mean opinion score (MOS) of 4.25 on the
LJSpeech dataset. The toolkit is publicly available at
https://github.com/espnet/espnet.
| 2,020 | Computation and Language |
Pun-GAN: Generative Adversarial Network for Pun Generation | In this paper, we focus on the task of generating a pun sentence given a pair
of word senses. A major challenge for pun generation is the lack of a large-scale
pun corpus to guide supervised learning. To remedy this, we propose an
adversarial generative network for pun generation (Pun-GAN), which does not
require any pun corpus. It consists of a generator to produce pun sentences,
and a discriminator to distinguish between the generated pun sentences and the
real sentences with specific word senses. The output of the discriminator is
then used as a reward to train the generator via reinforcement learning,
encouraging it to produce pun sentences that can support two word senses
simultaneously. Experiments show that the proposed Pun-GAN can generate
sentences that are more ambiguous and diverse in both automatic and human
evaluation.
| 2,019 | Computation and Language |
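A schematic of the reinforcement-learning loop described above, in which the discriminator's score on a sampled sentence serves as the generator's reward; the models are stubs, only the generator update is shown (the discriminator would be trained in alternation), and the single-sample REINFORCE estimate is a simplification.

import torch
import torch.nn as nn

vocab, hidden = 1000, 64
embed = nn.Embedding(vocab, hidden)
generator = nn.GRUCell(hidden, hidden)
out_proj = nn.Linear(hidden, vocab)
discriminator = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())   # stub pun/non-pun scorer

opt = torch.optim.Adam(list(generator.parameters()) + list(out_proj.parameters())
                       + list(embed.parameters()), lr=1e-3)

state = torch.zeros(1, hidden)
log_probs, token = [], torch.zeros(1, dtype=torch.long)
for _ in range(10):                                  # sample a short sentence token by token
    state = generator(embed(token), state)
    dist = torch.distributions.Categorical(logits=out_proj(state))
    token = dist.sample()
    log_probs.append(dist.log_prob(token))

# Reward: how pun-like the sampled sentence looks to the discriminator
# (here approximated by scoring the final hidden state).
reward = discriminator(state).detach().squeeze()
loss = -(reward * torch.stack(log_probs).sum())      # REINFORCE: maximize expected reward
opt.zero_grad()
loss.backward()
opt.step()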
Wasserstein distances for evaluating cross-lingual embeddings | Word embeddings are high dimensional vector representations of words that
capture their semantic similarity in the vector space. There exist several
algorithms for learning such embeddings both for a single language as well as
for several languages jointly. In this work we propose to evaluate collections
of embeddings by adapting downstream natural language tasks to the optimal
transport framework. We show how the family of Wasserstein distances can be
used to solve cross-lingual document retrieval and the cross-lingual document
classification problems. We argue for the advantages of this approach compared
to more traditional evaluation methods for embeddings, such as bilingual lexicon
induction. Our experimental results suggest that using Wasserstein distances on
these problems outperforms several strong baselines and performs on par with
state-of-the-art models.
| 2,019 | Computation and Language |
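As a concrete illustration of applying optimal transport to word embeddings, the sketch below computes an entropy-regularized (Sinkhorn) approximation of the Wasserstein distance between two documents' word-vector clouds; the exact solver, cost function, and weighting used in the paper may differ.

import numpy as np

def sinkhorn_distance(x, y, reg=0.1, iters=200):
    """Entropy-regularized Wasserstein distance between two clouds of word vectors."""
    a = np.full(len(x), 1.0 / len(x))                 # uniform mass over words in document 1
    b = np.full(len(y), 1.0 / len(y))                 # uniform mass over words in document 2
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)  # pairwise distances
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(iters):                            # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]                # approximate transport plan
    return float((plan * cost).sum())

rng = np.random.default_rng(0)
doc_en = rng.normal(size=(12, 300))   # embeddings of words in an English document
doc_de = rng.normal(size=(15, 300))   # embeddings of words in a (mapped) German document
print(sinkhorn_distance(doc_en, doc_de))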
Diversifying Topic-Coherent Response Generation for Natural Multi-turn
Conversations | Although response generation (RG) diversification for single-turn dialogs has
been well developed, it is less investigated for natural multi-turn
conversations. Besides, past work focused on diversifying responses without
considering topic coherence to the context, producing uninformative replies. In
this paper, we propose the Topic-coherent Hierarchical Recurrent
Encoder-Decoder model (THRED) to diversify the generated responses without
deviating from the contextual topics for multi-turn conversations. Overall, we
build a sequence-to-sequence network (Seq2Seq) to model multi-turn conversations,
and then resort to the Latent Variable Hierarchical Recurrent
Encoder-Decoder model (VHRED) to learn the global contextual distribution of
dialogs. Besides, we construct a dense topic matrix which implies word-level
correlations of the conversation corpora. The topic matrix is used to learn
local topic distribution of the contextual utterances. By incorporating both
the global contextual distribution and the local topic distribution, THRED
produces both diversified and topic-coherent replies. In addition, we propose
an explicit metric (\emph{TopicDiv}) to measure the topic divergence between
the post and generated response, and we also propose an overall metric
combining the diversification metric (\emph{Distinct}) and \emph{TopicDiv}. We
evaluate our model comparing with three baselines (Seq2Seq, HRED and VHRED) on
two real-world corpora, respectively, and demonstrate its outstanding
performance in both diversification and topic coherence.
| 2,019 | Computation and Language |
Syntax-Enhanced Self-Attention-Based Semantic Role Labeling | As a fundamental NLP task, semantic role labeling (SRL) aims to discover the
semantic roles for each predicate within one sentence. This paper investigates
how to incorporate syntactic knowledge into the SRL task effectively. We
present different approaches of encoding the syntactic information derived from
dependency trees of different quality and representations; we propose a
syntax-enhanced self-attention model and compare it with other two strong
baseline methods; and we conduct experiments with newly published deep
contextualized word representations as well. The experiment results demonstrate
that with proper incorporation of the high quality syntactic information, our
model achieves a new state-of-the-art performance for the Chinese SRL task on
the CoNLL-2009 dataset.
| 2,019 | Computation and Language |
Promoting the Knowledge of Source Syntax in Transformer NMT Is Not
Needed | The utility of linguistic annotation in neural machine translation seemed to
have been established by past papers. Those experiments were, however, limited to
recurrent sequence-to-sequence architectures and relatively small data
settings. We focus on the state-of-the-art Transformer model and use comparably
larger corpora. Specifically, we try to promote the knowledge of source-side
syntax using multi-task learning either through simple data manipulation
techniques or through a dedicated model component. In particular, we train one
of the Transformer attention heads to produce the source-side dependency tree. Overall,
our results cast some doubt on the utility of multi-task setups with linguistic
information. The data manipulation techniques, recommended in previous works,
prove ineffective in large data settings. The treatment of self-attention as
dependencies seems much more promising: it helps in translation and reveals
that Transformer model can very easily grasp the syntactic structure. An
important but curious result is, however, that identical gains are obtained by
using trivial "linear trees" instead of true dependencies. The reason for the
gain thus may not be coming from the added linguistic knowledge but from some
simpler regularizing effect we induced on self-attention matrices.
| 2,019 | Computation and Language |
Rethinking Exposure Bias In Language Modeling | Exposure bias describes the phenomenon that a language model trained under
the teacher forcing schema may perform poorly at the inference stage when its
predictions are conditioned on its previous predictions unseen from the
training corpus. Recently, several generative adversarial networks (GANs) and
reinforcement learning (RL) methods have been introduced to alleviate this
problem. Nonetheless, a common issue in RL and GANs training is the sparsity of
reward signals. In this paper, we adopt two simple strategies, multi-range
reinforcing and multi-entropy sampling, to amplify and denoise the reward
signal. Our model improves over competing models with regard to
BLEU scores and road exam, a new metric we designed to measure the robustness
against exposure bias in language models.
| 2,020 | Computation and Language |
Interpretable Text Classification Using CNN and Max-pooling | Deep neural networks have been widely used in text classification. However,
it is hard to interpret these neural models due to their complicated mechanisms. In
this work, we study the interpretability of a variant of the typical text
classification model which is based on a convolutional operation and a max-pooling
layer. Two mechanisms, convolution attribution and n-gram feature analysis, are
proposed to analyse the processing procedure of the CNN model. The
interpretability of the model is reflected by providing posterior
interpretation for neural network predictions. In addition, a multi-sentence
strategy is proposed to enable the model to be used in multi-sentence situations
without loss of performance or interpretability. We evaluate the performance
of the model on several classification tasks and justify the interpretable
performance with some case studies.
| 2,019 | Computation and Language |
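One simple reading of "convolution attribution" with max-pooling is to trace each pooled feature back to the n-gram window that produced its maximum activation, as sketched below; this is an illustrative mechanism, not necessarily the paper's exact procedure.

import torch
import torch.nn as nn

embed_dim, n_filters, kernel = 50, 8, 3
emb = nn.Embedding(500, embed_dim)
conv = nn.Conv1d(embed_dim, n_filters, kernel_size=kernel)

tokens = torch.randint(0, 500, (1, 20))             # one sentence of 20 token ids
feats = conv(emb(tokens).transpose(1, 2))           # (1, n_filters, 18) n-gram activations
pooled, argmax = feats.max(dim=-1)                  # max-pooling keeps one position per filter

# Attribute each pooled feature back to the n-gram that produced it.
for f in range(n_filters):
    start = int(argmax[0, f])
    print(f"filter {f}: max activation from tokens {start}..{start + kernel - 1}")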
Healthcare NER Models Using Language Model Pretraining | In this paper, we present our approach to extracting structured information
from unstructured Electronic Health Records (EHR) [2] which can be used to, for
example, study adverse drug reactions in patients due to chemicals in their
products. Our solution uses a combination of Natural Language Processing (NLP)
techniques and a web-based annotation tool to optimize the performance of a
custom Named Entity Recognition (NER) [1] model trained on a limited amount of
EHR training data. This work was presented at the first Health Search and Data
Mining Workshop (HSDM 2020) [26]. We showcase a combination of tools and
techniques leveraging the recent advancements in NLP aimed at targeting domain
shifts by applying transfer learning and language model pre-training techniques
[3]. We present a comparison of our technique to the current popular approaches
and show the effective increase in performance of the NER model and the
reduction in time to annotate data. A key observation of the results presented
is that the F1 score of the model (0.734) trained with our approach on just 50%
of the available training data outperforms the F1 score of the blank spaCy model
without language model component (0.704) trained with 100% of the available
training data. We also demonstrate an annotation tool to minimize domain expert
time and the manual effort required to generate such a training dataset.
Further, we plan to release the annotated dataset as well as the pre-trained
model to the community to support further research on medical health records.
| 2,020 | Computation and Language |
A context sensitive real-time Spell Checker with language adaptability | We present a novel language adaptable spell checking system which detects
spelling errors and suggests context sensitive corrections in real-time. We
show that our system can be extended to new languages with minimal
language-specific processing. The available literature mostly discusses spell
checkers for English, but there are no publicly available systems which can be
extended to work for other languages out of the box. Most of the systems do not
work in real-time. We explain the process of generating a language's word
dictionary and n-gram probability dictionaries using Wikipedia-articles data
and manually curated video subtitles. We present the results of generating a
list of suggestions for a misspelled word. We also propose three approaches to
create noisy channel datasets of real-world typographic errors. We compare our
system with industry-accepted spell checker tools for 11 languages. Finally, we
show the performance of our system on synthetic datasets for 24 languages.
| 2,019 | Computation and Language |
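A toy noisy-channel corrector in the spirit described above: candidates within one edit are ranked by a word-frequency prior combined with an n-gram context score. The counts here are placeholders; the real system would use the Wikipedia- and subtitle-derived dictionaries mentioned in the abstract.

from collections import Counter

word_counts = Counter({"the": 500, "their": 120, "there": 150, "they": 200})
bigram_counts = Counter({("over", "there"): 30, ("over", "their"): 2})

def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings one edit away (deletes, transposes, replaces, inserts)."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in alphabet]
    inserts = [l + c + r for l, r in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def correct(word, prev_word):
    """Rank known candidates by a unigram prior times a bigram context score."""
    candidates = [w for w in edits1(word) | {word} if w in word_counts]
    def score(w):
        return word_counts[w] * (1 + bigram_counts[(prev_word, w)])
    return max(candidates, key=score) if candidates else word

print(correct("thier", prev_word="over"))    # -> 'their' with this toy data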
Conversational Emotion Analysis via Attention Mechanisms | Different from emotion recognition in individual utterances, we propose a
multimodal learning framework using relations and dependencies among the
utterances for conversational emotion analysis. The attention mechanism is
applied to the fusion of the acoustic and lexical features. Then these fusion
representations are fed into the self-attention based bi-directional gated
recurrent unit (GRU) layer to capture long-term contextual information. To
imitate real interaction patterns of different speakers, speaker embeddings are
also utilized as additional inputs to distinguish the speaker identities during
conversational dialogs. To verify the effectiveness of the proposed method, we
conduct experiments on the IEMOCAP database. Experimental results demonstrate
that our method shows an absolute 2.42% performance improvement over the
state-of-the-art strategies.
| 2,019 | Computation and Language |
Predicting In-game Actions from Interviews of NBA Players | Sports competitions are widely researched in computer and social science,
with the goal of understanding how players act under uncertainty. While there
is an abundance of computational work on player metrics prediction based on
past performance, very few attempts to incorporate out-of-game signals have
been made. Specifically, it was previously unclear whether linguistic signals
gathered from players' interviews can add information which does not appear in
performance metrics. To bridge that gap, we define text classification tasks of
predicting deviations from mean in NBA players' in-game actions, which are
associated with strategic choices, player behavior and risk, using their choice
of language prior to the game. We collected a dataset of transcripts from key
NBA players' pre-game interviews and their in-game performance metrics,
totalling 5,226 interview-metric pairs. We design neural models for players'
action prediction based on increasingly more complex aspects of the language
signals in their open-ended interviews. Our models can make their predictions
based on the textual signal alone, or on a combination with signals from
past-performance metrics. Our text-based models outperform strong baselines
trained on performance metrics only, demonstrating the importance of language
usage for action prediction. Moreover, the models that employ both textual
input and past-performance metrics produced the best results. Finally, as
neural networks are notoriously difficult to interpret, we propose a method for
gaining further insight into what our models have learned. Particularly, we
present an LDA-based analysis, where we interpret model predictions in terms of
correlated topics. We find that our best performing textual model is most
associated with topics that are intuitively related to each prediction task and
that better models yield higher correlation with more informative topics.
| 2,020 | Computation and Language |
\'UFAL MRPipe at MRP 2019: UDPipe Goes Semantic in the Meaning
Representation Parsing Shared Task | We present a system description of our contribution to the CoNLL 2019 shared
task, Cross-Framework Meaning Representation Parsing (MRP 2019). The proposed
architecture is our first attempt towards a semantic parsing extension of the
UDPipe 2.0, a lemmatization, POS tagging and dependency parsing pipeline.
For the MRP 2019, which features five formally and linguistically different
approaches to meaning representation (DM, PSD, EDS, UCCA and AMR), we propose a
uniform, language and framework agnostic graph-to-graph neural network
architecture. Without any knowledge about the graph structure, and specifically
without any linguistically or framework motivated features, our system
implicitly models the meaning representation graphs.
After fixing a human error (we used an earlier, incorrect version of the provided
test set analyses), our submission would score third in the competition
evaluation. The source code of our system is available at
https://github.com/ufal/mrpipe-conll2019.
| 2,019 | Computation and Language |
Cross-Lingual Vision-Language Navigation | Commanding a robot to navigate with natural language instructions is a
long-term goal for grounded language understanding and robotics. However,
previous studies on vision-language navigation (VLN) have focused on English as
the dominant language. To go beyond English and serve people speaking different
languages, we collect a bilingual Room-to-Room (BL-R2R) dataset, extending the
original benchmark with new Chinese instructions. Based on this newly
introduced dataset, we study how an agent can be trained on existing English
instructions but navigate effectively with another language under a zero-shot
learning scenario. Without any training data of the target language, our model
shows competitive results even compared to a model with full access to the
target language training data. Moreover, we investigate the transferring
ability of our model when given a certain amount of target language training
data.
| 2,020 | Computation and Language |
Detecting gender differences in perception of emotion in crowdsourced
data | Do men and women perceive emotions differently? Popular convictions place
women as more emotionally perceptive than men. Empirical findings, however,
remain inconclusive. Most prior studies focus on visual modalities. In
addition, almost all of the studies are limited to experiments within
controlled environments. The generalizability and scalability of these studies have
not been sufficiently established. In this paper, we study the differences in
perception of emotion between genders from speech data in the wild, annotated
through crowdsourcing. While we limit ourselves to a single modality (i.e.
speech), our framework is applicable to studies of emotion perception from all
such loosely annotated data in general. Our paper addresses multiple serious
challenges related to making statistically viable conclusions from crowdsourced
data. Overall, the contributions of this paper are twofold: a reliable novel
framework for perceptual studies from crowdsourced data; and the demonstration
of statistically significant differences in speech-based emotion perception
between genders.
| 2,019 | Computation and Language |
Comparison of Quality Indicators in User-generated Content Using Social
Media and Scholarly Text | Predicting the quality of a text document is a critical task when presented
with the problem of measuring the performance of a document before its release.
In this work, we evaluate various features including those extracted from the
text content (textual) and those describing higher-level characteristics of the
text (meta) features that are not directly available from the text, and show
how these features inform prediction of document quality in different ways.
Moreover, we also compare our methods on both social user-generated data such
as tweets, and scholarly user-generated data such as academic articles, showing
how the same features differently influence prediction of quality across these
disparate domains.
| 2,019 | Computation and Language |
Multi-Document Summarization with Determinantal Point Processes and
Contextualized Representations | Emerged as one of the best performing techniques for extractive
summarization, determinantal point processes select the most probable set of
sentences to form a summary according to a probability measure defined by
modeling sentence prominence and pairwise repulsion. Traditionally, these
aspects are modelled using shallow and linguistically informed features, but
the rise of deep contextualized representations raises an interesting question
of whether, and to what extent, contextualized representations can be used to
improve DPP modeling. Our findings suggest that, despite the success of deep
representations, it remains necessary to combine them with surface indicators
for effective identification of summary sentences.
| 2,019 | Computation and Language |
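For reference, extractive DPP selection of the kind discussed above can be approximated greedily: with a kernel built from per-sentence quality scores and pairwise similarity, each step adds the sentence that most increases the determinant. The quality scores and sentence vectors below are random stand-ins for the learned features.

import numpy as np

def greedy_dpp_summary(quality, embeddings, k=3):
    """Greedy MAP selection under L_ij = q_i * sim(i, j) * q_j: prominence plus repulsion."""
    norms = np.linalg.norm(embeddings, axis=1)
    sim = (embeddings @ embeddings.T) / np.outer(norms, norms)
    L = np.outer(quality, quality) * sim
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(quality)):
            if i in selected:
                continue
            idx = selected + [i]
            gain = np.linalg.det(L[np.ix_(idx, idx)])   # determinant rewards diverse, high-quality sets
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
sent_vecs = rng.normal(size=(10, 128))        # e.g. contextualized sentence representations
sent_quality = rng.uniform(0.5, 1.0, size=10) # e.g. surface-feature prominence scores
print(greedy_dpp_summary(sent_quality, sent_vecs, k=3))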
Capacity, Bandwidth, and Compositionality in Emergent Language Learning | Many recent works have discussed the propensity, or lack thereof, for
emergent languages to exhibit properties of natural languages. A favorite in
the literature is learning compositionality. We note that most of those works
have focused on communicative bandwidth as being of primary importance. While
important, it is not the only contributing factor. In this paper, we
investigate the learning biases that affect the efficacy and compositionality
of emergent languages. Our foremost contribution is to explore how capacity of
a neural network impacts its ability to learn a compositional language. We
additionally introduce a set of evaluation metrics with which we analyze the
learned languages. Our hypothesis is that there should be a specific range of
model capacity and channel bandwidth that induces compositional structure in
the resulting language and consequently encourages systematic generalization.
While we empirically see evidence for the bottom of this range, we curiously do
not find evidence for the top part of the range and believe that this is an
open question for the community.
| 2,020 | Computation and Language |
An Empirical Study of Efficient ASR Rescoring with Transformers | Neural language models (LMs) have been shown to significantly outperform
classical n-gram LMs for language modeling due to their superior abilities to
model long-range dependencies in text and handle data sparsity problems.
Recently, well-configured deep Transformers have exhibited superior performance
over shallow stacks of recurrent neural network layers for language modeling.
However, these state-of-the-art deep Transformer models were mostly engineered
to be deep with high model capacity, which makes them computationally inefficient
and challenging to deploy in large-scale real-world applications.
Therefore, it is important to develop Transformer LMs that have relatively
small model sizes, while still retaining good performance of those much larger
models. In this paper, we conduct an empirical study on training
Transformers with small parameter sizes in the context of ASR rescoring. By
combining techniques including subword units, adaptive softmax, large-scale
model pre-training, and knowledge distillation, we show that we are able to
successfully train small Transformer LMs with significant relative word error
rate reductions (WERR) through n-best rescoring. In particular, our experiments
on a video speech recognition dataset show that we are able to achieve WERRs
ranging from 6.46% to 7.17% while using only 5.5% to 11.9% of the parameters of
the well-known large GPT model [1], whose WERR with rescoring on the same
dataset is 7.58%.
| 2,019 | Computation and Language |
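N-best rescoring itself is straightforward to sketch: each hypothesis's first-pass ASR score is interpolated with a language-model score and the list is re-ranked. The LM here is a lookup stub and the interpolation weight is arbitrary; in the setting above, a distilled Transformer LM would supply the real scores.

import math

def lm_log_prob(sentence, lm_scores):
    """Stand-in for a small Transformer LM: look up a per-sentence log-probability."""
    return lm_scores.get(sentence, -100.0)

def rescore_nbest(nbest, lm_scores, lm_weight=0.5):
    """Combine the first-pass ASR score with the LM score and re-rank the hypotheses."""
    rescored = [(hyp, asr_score + lm_weight * lm_log_prob(hyp, lm_scores))
                for hyp, asr_score in nbest]
    return sorted(rescored, key=lambda x: x[1], reverse=True)

nbest = [("i red the book", -4.1), ("i read the book", -4.3)]     # (hypothesis, ASR log-score)
lm_scores = {"i read the book": -9.0, "i red the book": -14.0}    # toy LM log-probabilities
print(rescore_nbest(nbest, lm_scores)[0][0])                      # -> 'i read the book'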
A Survey on Recent Advances in Named Entity Recognition from Deep
Learning models | Named Entity Recognition (NER) is a key component in NLP systems for question
answering, information retrieval, relation extraction, etc. NER systems have
been studied and developed widely for decades, but accurate systems using deep
neural networks (NN) have only been introduced in the last few years. We
present a comprehensive survey of deep neural network architectures for NER,
and contrast them with previous approaches to NER based on feature engineering
and other supervised or semi-supervised learning algorithms. Our results
highlight the improvements achieved by neural networks, and show how
incorporating some of the lessons learned from past work on feature-based NER
systems can yield further improvements.
| 2,019 | Computation and Language |
Machine Translation from Natural Language to Code using Long-Short Term
Memory | Making computer programming languages more understandable and accessible to
humans is a longstanding problem. From assembly language to present-day
object-oriented programming, new concepts have been introduced to make programming easier so that a
programmer can focus on the logic and the architecture rather than the code and
language itself. To go a step further in this journey of removing the
human-computer language barrier, this paper proposes a machine learning approach
using Recurrent Neural Network (RNN) and Long-Short Term Memory (LSTM) to
convert human language into programming language code. The programmer will
write expressions for codes in layman's language, and the machine learning
model will translate it to the targeted programming language. The proposed
approach yields results with 74.40% accuracy. This can be further improved by
incorporating additional techniques, which are also discussed in this paper.
| 2,019 | Computation and Language |
QASC: A Dataset for Question Answering via Sentence Composition | Composing knowledge from multiple pieces of texts is a key challenge in
multi-hop question answering. We present a multi-hop reasoning dataset,
Question Answering via Sentence Composition(QASC), that requires retrieving
facts from a large corpus and composing them to answer a multiple-choice
question. QASC is the first dataset to offer two desirable properties: (a) the
facts to be composed are annotated in a large corpus, and (b) the decomposition
into these facts is not evident from the question itself. The latter makes
retrieval challenging as the system must introduce new concepts or relations in
order to discover potential decompositions. Further, the reasoning model must
then learn to identify valid compositions of these retrieved facts using
common-sense reasoning. To help address these challenges, we provide annotation
for supporting facts as well as their composition. Guided by these annotations,
we present a two-step approach to mitigate the retrieval challenges. We use
other multiple-choice datasets as additional training data to strengthen the
reasoning model. Our proposed approach improves over current state-of-the-art
language models by 11% (absolute). The reasoning and retrieval problems,
however, remain unsolved, as this model still lags 20% behind human
performance.
| 2,020 | Computation and Language |
A Unified MRC Framework for Named Entity Recognition | The task of named entity recognition (NER) is normally divided into nested
NER and flat NER depending on whether named entities are nested or not. Models
are usually separately developed for the two tasks, since sequence labeling
models, the most widely used backbone for flat NER, are only able to assign a
single label to a particular token, which is unsuitable for nested NER where a
token may be assigned several labels.
In this paper, we propose a unified framework that is capable of handling
both flat and nested NER tasks. Instead of treating the task of NER as a
sequence labeling problem, we propose to formulate it as a machine reading
comprehension (MRC) task. For example, extracting entities with the
\textsc{per} label is formalized as extracting answer spans to the question
"{\it which person is mentioned in the text?}". This formulation naturally
tackles the entity overlapping issue in nested NER: the extraction of two
overlapping entities for different categories requires answering two
independent questions. Additionally, since the query encodes informative prior
knowledge, this strategy facilitates the process of entity extraction, leading
to better performance for not only nested NER, but also flat NER.
We conduct experiments on both {\em nested} and {\em flat} NER datasets.
Experimental results demonstrate the effectiveness of the proposed formulation.
We are able to achieve a substantial performance boost over current SOTA
models on nested NER datasets, i.e., +1.28, +2.55, +5.44, +6.37, respectively
on ACE04, ACE05, GENIA and KBP17, along with SOTA results on flat NER datasets,
i.e.,+0.24, +1.95, +0.21, +1.49 respectively on English CoNLL 2003, English
OntoNotes 5.0, Chinese MSRA, Chinese OntoNotes 4.0.
| 2,022 | Computation and Language |
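The query-based formulation above can be illustrated as a data transformation plus a span reader: one (query, context) example per entity type, answered independently. The PER query follows the abstract's example; the ORG query and the gazetteer-based reader are hypothetical stand-ins for the trained MRC model.

def ner_as_mrc_examples(context, entity_queries):
    """Turn one sentence into one MRC example per entity type, as in the formulation above."""
    return [{"query": q, "context": context, "label": label}
            for label, q in entity_queries.items()]

entity_queries = {
    "PER": "which person is mentioned in the text?",
    "ORG": "which organization is mentioned in the text?",   # hypothetical query wording
}
examples = ner_as_mrc_examples("Ada Lovelace worked with Babbage in London.", entity_queries)

# Each example would be fed to a span-extraction reader; overlapping entities of
# different types are recovered by answering the independent queries separately.
def toy_span_reader(example, gazetteer):
    """Stand-in for an MRC reader: return character spans of known names for the queried label."""
    spans = []
    for name in gazetteer.get(example["label"], []):
        start = example["context"].find(name)
        if start >= 0:
            spans.append((start, start + len(name)))
    return spans

gazetteer = {"PER": ["Ada Lovelace", "Babbage"], "ORG": []}
for ex in examples:
    print(ex["label"], toy_span_reader(ex, gazetteer))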
Generating a Common Question from Multiple Documents using Multi-source
Encoder-Decoder Models | Ambiguous user queries in search engines result in the retrieval of documents
that often span multiple topics. One potential solution is for the search
engine to generate multiple refined queries, each of which relates to a subset
of the documents spanning the same topic. A preliminary step towards this goal
is to generate a question that captures common concepts of multiple documents.
We propose a new task of generating a common question from multiple documents and
present a simple variant of an existing multi-source encoder-decoder framework,
called the Multi-Source Question Generator (MSQG). We first train an RNN-based
single encoder-decoder generator from (single document, question) pairs. At
test time, given multiple documents, the 'Distribute' step of our MSQG model
predicts target word distributions for each document using the trained model.
The 'Aggregate' step aggregates these distributions to generate a common
question. This simple yet effective strategy significantly outperforms several
existing baseline models applied to the new task when evaluated using automated
metrics and human judgments on the MS-MARCO-QA dataset.
| 2,019 | Computation and Language |
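A rough sketch of the Distribute/Aggregate decoding loop described above, assuming access to a trained single-document generator that returns a next-word distribution; the toy distribution function and greedy decoding are simplifications of the actual model.

import numpy as np

def aggregate_step(per_document_distributions):
    """Average the per-document next-word distributions into one distribution ('Aggregate')."""
    return np.mean(per_document_distributions, axis=0)

def generate_common_question(documents, predict_distribution, vocab, max_len=8):
    """Greedy decoding where each step averages the distributions predicted for every document."""
    question = []
    for _ in range(max_len):
        dists = [predict_distribution(doc, question) for doc in documents]   # 'Distribute'
        next_id = int(np.argmax(aggregate_step(dists)))
        question.append(vocab[next_id])
        if vocab[next_id] == "<eos>":
            break
    return " ".join(question)

# Toy stand-in for the trained single-document question generator.
vocab = ["what", "is", "a", "transformer", "<eos>"]
def predict_distribution(doc, prefix):
    rng = np.random.default_rng(sum(map(ord, doc)) + len(prefix))
    p = rng.uniform(size=len(vocab))
    return p / p.sum()

print(generate_common_question(("doc one", "doc two"), predict_distribution, vocab))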
Attention Optimization for Abstractive Document Summarization | Attention plays a key role in the improvement of sequence-to-sequence-based
document summarization models. To obtain a powerful attention mechanism that helps
reproduce the most salient information and avoid repetition, we augment
the vanilla attention model from both local and global aspects. We propose an
attention refinement unit paired with local variance loss to impose supervision
on the attention model at each decoding step, and a global variance loss to
optimize the attention distributions of all decoding steps from the global
perspective. The results on the CNN/Daily Mail dataset verify the
effectiveness of our methods.
| 2,019 | Computation and Language |
The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and
Cross-Lingual Transfer for Inflection | The SIGMORPHON 2019 shared task on cross-lingual transfer and contextual
analysis in morphology examined transfer learning of inflection between 100
language pairs, as well as contextual lemmatization and morphosyntactic
description in 66 languages. The first task evolves past years' inflection
tasks by examining transfer of morphological inflection knowledge from a
high-resource language to a low-resource language. This year also presents a
new second challenge on lemmatization and morphological feature analysis in
context. All submissions featured a neural component and built on either this
year's strong baselines or highly ranked systems from previous years' shared
tasks. Every participating team improved in accuracy over the baselines for the
inflection task (though not Levenshtein distance), and every team in the
contextual analysis task improved on both state-of-the-art neural and
non-neural baselines.
| 2,019 | Computation and Language |
L2RS: A Learning-to-Rescore Mechanism for Automatic Speech Recognition | Modern Automatic Speech Recognition (ASR) systems primarily rely on scores
from an Acoustic Model (AM) and a Language Model (LM) to rescore the N-best
lists. With the abundance of recent natural language processing advances, the
information utilized by current ASR for evaluating the linguistic and semantic
legitimacy of the N-best hypotheses is rather limited. In this paper, we
propose a novel Learning-to-Rescore (L2RS) mechanism, which is specialized for
utilizing a wide range of textual information from the state-of-the-art NLP
models and automatically deciding their weights to rescore the N-best lists for
ASR systems. Specifically, we incorporate features including BERT sentence
embedding, topic vector, and perplexity scores produced by n-gram LM, topic
modeling LM, BERT LM and RNNLM to train a rescoring model. We conduct extensive
experiments based on a public dataset, and experimental results show that L2RS
outperforms not only traditional rescoring methods but also its deep neural
network counterparts by a substantial improvement of 20.67% in terms of
NDCG@10. L2RS paves the way for developing more effective rescoring models for
ASR.
| 2,019 | Computation and Language |
Stem-driven Language Models for Morphologically Rich Languages | Neural language models (LMs) have been shown to benefit significantly from
enhancing word vectors with subword-level information, especially for
morphologically rich languages. This has been mainly tackled by providing
subword-level information as an input; using subword units in the output layer
has been far less explored. In this work, we propose LMs that are cognizant of
the underlying stems in each word. We derive stems for words using a simple
unsupervised technique for stem identification. We experiment with different
architectures involving multi-task learning and mixture models over words and
stems. We focus on four morphologically complex languages -- Hindi, Tamil,
Kannada and Finnish -- and observe significant perplexity gains with using our
stem-driven LMs when compared with other competitive baseline models.
| 2,019 | Computation and Language |
SpeechBERT: An Audio-and-text Jointly Learned Language Model for
End-to-end Spoken Question Answering | While various end-to-end models for spoken language understanding tasks have
been explored recently, this paper is probably the first known attempt to
challenge the very difficult task of end-to-end spoken question answering
(SQA). Learning from the very successful BERT model for various text processing
tasks, here we proposed an audio-and-text jointly learned SpeechBERT model.
This model outperformed the conventional approach of cascading ASR with the
following text question answering (TQA) model on datasets including ASR errors
in answer spans, because the end-to-end model was shown to be able to extract
information out of audio data before ASR produced errors. When ensembling the
proposed end-to-end model with the cascade architecture, even better
performance was achieved. In addition to the potential of end-to-end SQA, the
SpeechBERT can also be considered for many other spoken language understanding
tasks just as BERT for many text processing tasks.
| 2,020 | Computation and Language |
Meta-Learning with Dynamic-Memory-Based Prototypical Network for
Few-Shot Event Detection | Event detection (ED), a sub-task of event extraction, involves identifying
triggers and categorizing event mentions. Existing methods primarily rely upon
supervised learning and require large-scale labeled event datasets which are
unfortunately not readily available in many real-life applications. In this
paper, we consider and reformulate the ED task with limited labeled data as a
Few-Shot Learning problem. We propose a Dynamic-Memory-Based Prototypical
Network (DMB-PN), which exploits Dynamic Memory Network (DMN) to not only learn
better prototypes for event types, but also produce more robust sentence
encodings for event mentions. Differing from vanilla prototypical networks,
which simply compute event prototypes by averaging and consume event
mentions only once, our model is more robust and capable of distilling contextual
information from event mentions multiple times due to the multi-hop
mechanism of DMNs. The experiments show that DMB-PN not only deals with sample
scarcity better than a series of baseline models but also performs more
robustly when the variety of event types is relatively large and the instance
quantity is extremely small.
| 2,023 | Computation and Language |
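For contrast with the dynamic-memory model, the vanilla prototypical-network baseline mentioned above is easy to sketch: prototypes are per-class averages of support encodings, and each query mention is assigned to the nearest prototype. The encodings below are random placeholders and the DMN component is omitted.

import torch

def prototypes(support_encodings, support_labels, num_classes):
    """Average the support encodings per event type to obtain one prototype per class."""
    return torch.stack([support_encodings[support_labels == c].mean(dim=0)
                        for c in range(num_classes)])

def classify(query_encodings, protos):
    """Assign each query mention to the nearest prototype by Euclidean distance."""
    dists = torch.cdist(query_encodings, protos)   # (n_query, n_classes)
    return dists.argmin(dim=-1)

torch.manual_seed(0)
support = torch.randn(10, 64)                                  # encodings of 10 labelled event mentions
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])          # 5 event types, 2 shots each
queries = torch.randn(3, 64)
print(classify(queries, prototypes(support, labels, num_classes=5)))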
Improving Diarization Robustness using Diversification, Randomization
and the DOVER Algorithm | Speaker diarization based on bottom-up clustering of speech segments by
acoustic similarity is often highly sensitive to the choice of hyperparameters,
such as the initial number of clusters and feature weighting. Optimizing these
hyperparameters is difficult and often not robust across different data sets.
We recently proposed the DOVER algorithm for combining multiple diarization
hypotheses by voting. Here we propose to mitigate the robustness problem in
diarization by using DOVER to average across different parameter choices. We
also investigate the combination of diverse outputs obtained by following
different merge choices pseudo-randomly in the course of clustering, thereby
mitigating the greediness of best-first clustering. We show on two conference
meeting data sets drawn from NIST evaluations that the proposed methods indeed
yield more robust, and in several cases overall improved, results.
| 2,020 | Computation and Language |
Exploring Multilingual Syntactic Sentence Representations | We study methods for learning sentence embeddings with syntactic structure.
We focus on methods of learning syntactic sentence-embeddings by using a
multilingual parallel-corpus augmented by Universal Parts-of-Speech tags. We
evaluate the quality of the learned embeddings by examining sentence-level
nearest neighbours and functional dissimilarity in the embedding space. We also
evaluate the ability of the method to learn syntactic sentence-embeddings for
low-resource languages and demonstrate strong evidence for transfer learning.
Our results show that syntactic sentence-embeddings can be learned while using
less training data, fewer model parameters, and resulting in better evaluation
metrics than state-of-the-art language models.
| 2,019 | Computation and Language |
DENS: A Dataset for Multi-class Emotion Analysis | We introduce a new dataset for multi-class emotion analysis from long-form
narratives in English. The Dataset for Emotions of Narrative Sequences (DENS)
was collected from both classic literature available on Project Gutenberg and
modern online narratives available on Wattpad, annotated using Amazon
Mechanical Turk. A number of statistics and baseline benchmarks are provided
for the dataset. Of the tested techniques, we find that the fine-tuning of a
pre-trained BERT model achieves the best results, with an average micro-F1
score of 60.4%. Our results show that the dataset provides a novel opportunity
in emotion analysis that requires moving beyond existing sentence-level
techniques.
| 2,019 | Computation and Language |
Measuring Conversational Fluidity in Automated Dialogue Agents | We present an automated evaluation method to measure fluidity in
conversational dialogue systems. The method combines various state of the art
Natural Language tools into a classifier, and human ratings on these dialogues
to train an automated judgment model. Our experiments show that the results are
an improvement on existing metrics for measuring fluidity.
| 2,019 | Computation and Language |
Evaluation of Sentence Representations in Polish | Methods for learning sentence representations have been actively developed in
recent years. However, the lack of pre-trained models and datasets annotated at
the sentence level has been a problem for low-resource languages such as Polish
which led to less interest in applying these methods to language-specific
tasks. In this study, we introduce two new Polish datasets for evaluating
sentence embeddings and provide a comprehensive evaluation of eight sentence
representation methods including Polish and multilingual models. We consider
classic word embedding models, recently developed contextual embeddings and
multilingual sentence encoders, showing strengths and weaknesses of specific
approaches. We also examine different methods of aggregating word vectors into
a single sentence vector.
| 2,020 | Computation and Language |
On the Cross-lingual Transferability of Monolingual Representations | State-of-the-art unsupervised multilingual models (e.g., multilingual BERT)
have been shown to generalize in a zero-shot cross-lingual setting. This
generalization ability has been attributed to the use of a shared subword
vocabulary and joint training across multiple languages giving rise to deep
multilingual abstractions. We evaluate this hypothesis by designing an
alternative approach that transfers a monolingual model to new languages at the
lexical level. More concretely, we first train a transformer-based masked
language model on one language, and transfer it to a new language by learning a
new embedding matrix with the same masked language modeling objective, freezing
parameters of all other layers. This approach does not rely on a shared
vocabulary or joint training. However, we show that it is competitive with
multilingual BERT on standard cross-lingual classification benchmarks and on a
new Cross-lingual Question Answering Dataset (XQuAD). Our results contradict
common beliefs about the basis of the generalization ability of multilingual
models and suggest that deep monolingual models learn some abstractions that
generalize across languages. We also release XQuAD as a more comprehensive
cross-lingual benchmark, which comprises 240 paragraphs and 1190
question-answer pairs from SQuAD v1.1 translated into ten languages by
professional translators.
| 2,021 | Computation and Language |
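The lexical-transfer procedure described above amounts to swapping in a new embedding matrix and training only it, with all other layers frozen; the toy masked LM and loss below are illustrative (no actual masking is performed) and are not the authors' implementation.

import torch
import torch.nn as nn

class TinyMaskedLM(nn.Module):
    """A toy masked LM: embeddings, a small transformer encoder, and a tied output projection."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, ids):
        h = self.body(self.emb(ids))
        return h @ self.emb.weight.t()       # logits over the (current) vocabulary

model = TinyMaskedLM(vocab_size=1000)        # assume this was trained on language 1

# Transfer to a new language: swap in a fresh embedding matrix and freeze everything else.
model.emb = nn.Embedding(1200, 64)           # new-language vocabulary
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("emb") # only the new embeddings remain trainable

optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
ids = torch.randint(0, 1200, (2, 16))
loss = nn.functional.cross_entropy(model(ids).view(-1, 1200), ids.view(-1))  # toy MLM-style loss
loss.backward()
optimizer.step()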
Current Limitations in Cyberbullying Detection: on Evaluation Criteria,
Reproducibility, and Data Scarcity | The detection of online cyberbullying has seen an increase in societal
importance, popularity in research, and available open data. Nevertheless,
while computational power and affordability of resources continue to increase,
the access restrictions on high-quality data limit the applicability of
state-of-the-art techniques. Consequently, much of the recent research uses
small, heterogeneous datasets, without a thorough evaluation of applicability.
In this paper, we further illustrate these issues, as we (i) evaluate many
publicly available resources for this task and demonstrate difficulties with
data collection. These predominantly yield small datasets that fail to capture
the required complex social dynamics and impede direct comparison of progress.
We (ii) conduct an extensive set of experiments that indicate a general lack of
cross-domain generalization of classifiers trained on these sources, and openly
provide this framework to replicate and extend our evaluation criteria.
Finally, we (iii) present an effective crowdsourcing method: simulating
real-life bullying scenarios in a lab setting generates plausible data that can
be effectively used to enrich real data. This largely circumvents the
restrictions on data that can be collected, and increases classifier
performance. We believe these contributions can aid in improving the empirical
practices of future research in the field.
| 2,021 | Computation and Language |
Exploring Author Context for Detecting Intended vs Perceived Sarcasm | We investigate the impact of using author context on textual sarcasm
detection. We define author context as the embedded representation of their
historical posts on Twitter and suggest neural models that extract these
representations. We experiment with two tweet datasets, one labelled manually
for sarcasm, and the other via tag-based distant supervision. We achieve
state-of-the-art performance on the second dataset, but not on the one labelled
manually, indicating a difference between intended sarcasm, captured by distant
supervision, and perceived sarcasm, captured by manual labelling.
| 2,019 | Computation and Language |
FineText: Text Classification via Attention-based Language Model
Fine-tuning | Training deep neural networks from scratch on natural language processing
(NLP) tasks requires a significant amount of manually labeled text and
substantial time to converge, requirements that customers usually cannot
satisfy. In this paper, we aim to develop an effective transfer learning
algorithm by fine-tuning a pre-trained language model. The goal is to provide
expressive and convenient-to-use feature extractors for downstream NLP tasks,
and achieve improvement in terms of accuracy, data efficiency, and
generalization to new domains. Therefore, we propose an attention-based
fine-tuning algorithm that automatically selects relevant contextualized
features from the pre-trained language model and uses those features on
downstream text classification tasks. We test our methods on six widely-used
benchmarking datasets, and achieve new state-of-the-art performance on all of
them. Moreover, we then introduce an alternative multi-task learning approach,
which is an end-to-end algorithm given the pre-trained model. By doing
multi-task learning, one can largely reduce the total training time by trading
off some classification accuracy.
| 2,019 | Computation and Language |
Yall should read this! Identifying Plurality in Second-Person Personal
Pronouns in English Texts | Distinguishing between singular and plural "you" in English is a challenging
task which has potential for downstream applications, such as machine
translation or coreference resolution. While formal written English does not
distinguish between these cases, other languages (such as Spanish), as well as
other dialects of English (via phrases such as "yall"), do make this
distinction. We make use of this to obtain distantly-supervised labels for the
task at a large scale in two domains. Following this, we train a model to
distinguish between the single/plural you, finding that although in-domain
training achieves reasonable accuracy (over 77%), there is still a lot of room
for improvement, especially in the domain-transfer scenario, which proves
extremely challenging. Our code and data are publicly available.
| 2,019 | Computation and Language |
Latent Suicide Risk Detection on Microblog via Suicide-Oriented Word
Embeddings and Layered Attention | Although the detection of suicidal ideation on social media has made great
progress in recent years, posts in which people express themselves implicitly or
contrary to their real feelings remain an obstacle that prevents detectors from
reaching satisfactory performance. Inspired by the hidden "tree holes"
phenomenon on microblogs, where people at suicide risk tend to disclose their
real inner feelings and thoughts in the microblog space of authors who have
committed suicide, we explore the use of tree holes to enhance microblog-based
suicide risk detection from the following two perspectives. (1) We build
suicide-oriented word embeddings based on tree hole contents to strengthen the
sensitivity to suicide-related lexicons and context. (2) A two-layered
attention mechanism is deployed to capture intermittently changing points in an
individual's open blog stream, partially revealing their inner emotional world.
Our experimental results show that
with suicide-oriented word embeddings and attention, microblog-based suicide
risk detection can achieve over 91\% accuracy. A large-scale well-labelled
suicide data set is also reported in the paper.
| 2,019 | Computation and Language |
Disinformation Detection: A review of linguistic feature selection and
classification models in news veracity assessments | Over the past couple of years, the topic of "fake news" and its influence
over people's opinions has become a growing cause for concern. Although the
spread of disinformation on the Internet is not a new phenomenon, the
widespread use of social media has exacerbated its effects, providing more
channels for dissemination and the potential to "go viral." Nowhere was this
more evident than during the 2016 United States Presidential Election. Although
the current of disinformation spread via trolls, bots, and hyperpartisan media
outlets likely reinforced existing biases rather than sway undecided voters,
the effects of this deluge of disinformation are by no means trivial. The
consequences range in severity from an overall distrust in news media, to an
ill-informed citizenry, and in extreme cases, provocation of violent action. It
is clear that human ability to discern lies from truth is flawed at best. As
such, greater attention has been given towards applying machine learning
approaches to detect deliberately deceptive news articles. This paper looks at
the work that has already been done in this area.
| 2,019 | Computation and Language |
ViGGO: A Video Game Corpus for Data-To-Text Generation in Open-Domain
Conversation | The uptake of deep learning in natural language generation (NLG) led to the
release of both small and relatively large parallel corpora for training neural
models. The existing data-to-text datasets are, however, aimed at task-oriented
dialogue systems, and often thus limited in diversity and versatility. They are
typically crowdsourced, with much of the noise left in them. Moreover, current
neural NLG models do not take full advantage of large training data, and due to
their strong generalizing properties produce sentences that look template-like
regardless. We therefore present a new corpus of 7K samples, which (1) is clean
despite being crowdsourced, (2) has utterances of 9 generalizable and
conversational dialogue act types, making it more suitable for open-domain
dialogue systems, and (3) explores the domain of video games, which is new to
dialogue systems despite having excellent potential for supporting rich
conversations.
| 2,019 | Computation and Language |
SoulMate: Short-text author linking through Multi-aspect
temporal-textual embedding | Linking authors of short-text contents has important usages in many
applications, including Named Entity Recognition (NER) and human community
detection. However, certain challenges lie ahead. Firstly, the input short-text
contents are noisy, ambiguous, and do not follow the grammatical rules.
Secondly, traditional text mining methods fail to effectively extract concepts
through words and phrases. Thirdly, the textual contents are temporally skewed,
which can affect the semantic understanding by multiple time facets. Finally,
using the complementary knowledge-bases makes the results biased to the content
of the external database and deviates the understanding and interpretation away
from the real nature of the given short text corpus. To overcome these
challenges, we devise a neural network-based temporal-textual framework that
generates the tightly connected author subgraphs from microblog short-text
contents. Our approach, on the one hand, computes the relevance score (edge
weight) between the authors by considering a portmanteau of contents and
concepts, and on the other hand, employs a stack-wise graph cutting algorithm
to extract the communities of the related authors. Experimental results show
that compared to other knowledge-centered competitors, our multi-aspect vector
space model can achieve a higher performance in linking short-text authors.
Additionally, given the author linking task, the more comprehensive the dataset
is, the higher the significance of the extracted concepts will be.
| 2,019 | Computation and Language |
Word-level Textual Adversarial Attacking as Combinatorial Optimization | Adversarial attacks are carried out to reveal the vulnerability of deep
neural networks. Textual adversarial attacking is challenging because text is
discrete and a small perturbation can bring significant change to the original
input. Word-level attacking, which can be regarded as a combinatorial
optimization problem, is a well-studied class of textual attack methods.
However, existing word-level attack models are far from perfect, largely
because unsuitable search space reduction methods and inefficient optimization
algorithms are employed. In this paper, we propose a novel attack model, which
incorporates the sememe-based word substitution method and particle swarm
optimization-based search algorithm to solve the two problems separately. We
conduct exhaustive experiments to evaluate our attack model by attacking BiLSTM
and BERT on three benchmark datasets. Experimental results demonstrate that our
model consistently achieves much higher attack success rates and crafts more
high-quality adversarial examples as compared to baseline methods. Also,
further experiments show our model has higher transferability and can bring
more robustness enhancement to victim models by adversarial training. All the
code and data of this paper can be obtained on
https://github.com/thunlp/SememePSO-Attack.
| 2,020 | Computation and Language |