Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64) | Categories (1 class: Computation and Language) |
---|---|---|---|
Improving Pre-Trained Multilingual Models with Vocabulary Expansion | Recently, pre-trained language models have achieved remarkable success in a
broad range of natural language processing tasks. However, in a multilingual
setting, it is extremely resource-consuming to pre-train a deep language model
over large-scale corpora for each language. Instead of exhaustively
pre-training monolingual language models independently, an alternative solution
is to pre-train a powerful multilingual deep language model over large-scale
corpora in hundreds of languages. However, the vocabulary size for each
language in such a model is relatively small, especially for low-resource
languages. This limitation inevitably hinders the performance of these
multilingual models on tasks such as sequence labeling, wherein in-depth
token-level or sentence-level understanding is essential.
In this paper, inspired by previous methods designed for monolingual
settings, we investigate two approaches (i.e., joint mapping and mixture
mapping) based on the pre-trained multilingual BERT model for addressing the
out-of-vocabulary (OOV) problem on a variety of tasks, including part-of-speech
tagging, named entity recognition, machine translation quality estimation, and
machine reading comprehension. Experimental results show that using mixture
mapping is more promising. To the best of our knowledge, this is the first work
that attempts to address and discuss the OOV issue in multilingual settings.
| 2,019 | Computation and Language |
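To make the mixture-mapping idea above concrete, here is a minimal numpy sketch in which an out-of-vocabulary word's embedding is approximated as a mixture of its subword-piece embeddings. The tokenizer, vocabulary, and uniform weighting are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def mixture_map_oov(word, subword_tokenize, subword_vocab, subword_embeddings):
    """Approximate an OOV word vector as a (here uniform) mixture of the
    embeddings of its subword pieces; returns a zero vector if none are known."""
    pieces = subword_tokenize(word)                      # e.g. ["vocab", "##ulary"]
    ids = [subword_vocab[p] for p in pieces if p in subword_vocab]
    if not ids:
        return np.zeros(subword_embeddings.shape[1])
    weights = np.full(len(ids), 1.0 / len(ids))          # uniform mixture weights
    return weights @ subword_embeddings[ids]             # (dim,)
```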
Multi-Modal Citizen Science: From Disambiguation to Transcription of
Classical Literature | The engagement of citizens in research projects, including Digital
Humanities projects, has risen in prominence in recent years. This type of
engagement not only leads to incidental learning of participants but also
indicates the added value of corpus enrichment via different types of
annotations undertaken by users generating so-called smart texts. Our work
focuses on the continuous task of adding new layers of annotation to Classical
Literature. We aim to provide more extensive tools for readers of smart texts,
enhancing their reading comprehension and at the same time empowering
language learning by introducing intellectual tasks, i.e., linking, tagging,
and disambiguation. The current study adds a new mode of annotation, audio
annotations, to the extensively annotated corpus of poetry by the Persian poet
Hafiz. By proposing tasks with three different difficulty levels, we estimate
the users' ability to provide correct annotations in order to rate their
answers in further stages of the project, where no ground truth data is
available. While proficiency in Persian is beneficial, annotators with no
knowledge of Persian are also able to add annotations to the corpus.
| 2,019 | Computation and Language |
End-to-End Code-Switching ASR for Low-Resourced Language Pairs | Despite the significant progress in end-to-end (E2E) automatic speech
recognition (ASR), E2E ASR for low resourced code-switching (CS) speech has not
been well studied. In this work, we describe an E2E ASR pipeline for the
recognition of CS speech in which a low-resourced language is mixed with a high
resourced language. Low-resourcedness in acoustic data hinders the performance
of E2E ASR systems more severely than conventional ASR systems. To mitigate
this problem in the transcription of archives with code-switching Frisian-Dutch
speech, we integrate a designated decoding scheme and perform rescoring with
neural network-based language models to enable better utilization of the
available textual resources. We first incorporate a multi-graph decoding
approach which creates parallel search spaces for the monolingual and mixed
recognition tasks to maximize the utilization of the textual resources from
each language. Further, language model rescoring is performed using a recurrent
neural network pre-trained with cross-lingual embedding and further adapted
with the limited amount of in-domain CS text. The ASR experiments demonstrate
the effectiveness of the described techniques in improving the recognition
performance of an E2E CS ASR system in a low-resourced scenario.
| 2,019 | Computation and Language |
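As a rough illustration of the language-model rescoring step mentioned above (not the paper's exact recipe), the sketch below reranks an n-best list of ASR hypotheses by interpolating each first-pass score with a neural language model score; `lm_score` is a placeholder for the adapted RNN language model.

```python
def rescore_nbest(nbest, lm_score, lm_weight=0.5):
    """nbest: list of (hypothesis, first_pass_score) pairs.
    Returns the hypothesis with the best interpolated score."""
    rescored = [(hyp, score + lm_weight * lm_score(hyp)) for hyp, score in nbest]
    return max(rescored, key=lambda pair: pair[1])[0]

# Toy usage with a stand-in LM that simply penalizes length.
nbest = [("it is frisian", -12.3), ("it is freezing", -12.9)]
best = rescore_nbest(nbest, lm_score=lambda h: -0.1 * len(h.split()))
```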
On the use of BERT for Neural Machine Translation | Exploiting large pretrained models for various NMT tasks has gained a lot of
visibility recently. In this work we study how BERT pretrained models could be
exploited for supervised Neural Machine Translation. We compare various ways to
integrate a pretrained BERT model with an NMT model and study the impact of the
monolingual data used for BERT training on the final translation quality. We
use WMT-14 English-German, IWSLT15 English-German and IWSLT14 English-Russian
datasets for these experiments. In addition to standard task test set
evaluation, we perform evaluation on out-of-domain test sets and noise injected
test sets, in order to assess how BERT pretrained representations affect model
robustness.
| 2,019 | Computation and Language |
Improving Semantic Parsing with Neural Generator-Reranker Architecture | Semantic parsing is the problem of deriving machine interpretable meaning
representations from natural language utterances. Neural models with
encoder-decoder architectures have recently achieved substantial improvements
over traditional methods. Although neural semantic parsers appear to have
relatively high recall using large beam sizes, there is room for improvement
with respect to one-best precision. In this work, we propose a
generator-reranker architecture for semantic parsing. The generator produces a
list of potential candidates and the reranker, which consists of a
pre-processing step for the candidates followed by a novel critic network,
reranks these candidates based on the similarity between each candidate and the
input sentence. We show the advantages of this approach along with how it
improves the parsing performance through extensive analysis. We evaluate our
model on three semantic parsing datasets (GEO, ATIS, and OVERNIGHT). The
overall architecture achieves state-of-the-art results on all three
datasets.
| 2,019 | Computation and Language |
Automatically Learning Data Augmentation Policies for Dialogue Tasks | Automatic data augmentation (AutoAugment) (Cubuk et al., 2019) searches for
optimal perturbation policies via a controller trained using performance
rewards of a sampled policy on the target task, hence reducing data-level model
bias. While the algorithm is powerful, their work has focused on computer
vision tasks, where it is comparatively easy to apply imperceptible
perturbations without changing an image's semantic meaning. In our work, we
adapt AutoAugment to automatically discover effective perturbation policies for
natural language processing (NLP) tasks such as dialogue generation. We start
with a pool of atomic operations that apply subtle semantic-preserving
perturbations to the source inputs of a dialogue task (e.g., different POS-tag
types of stopword dropout, grammatical errors, and paraphrasing). Next, we
allow the controller to learn more complex augmentation policies by searching
over the space of the various combinations of these atomic operations.
Moreover, we also explore conditioning the controller on the source inputs of
the target task, since certain strategies may not apply to inputs that do not
contain that strategy's required linguistic features. Empirically, we
demonstrate that both our input-agnostic and input-aware controllers discover
useful data augmentation policies, and achieve significant improvements over
the previous state-of-the-art, including models trained on manually-designed policies.
| 2,019 | Computation and Language |
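As one concrete example of the atomic operations mentioned above, here is a hedged sketch of a stopword-dropout perturbation; the stopword list, drop probability, and the absence of POS-tag conditioning are all simplifying assumptions.

```python
import random

# Illustrative stopword list; the paper conditions dropout on POS-tag types.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is"}

def stopword_dropout(tokens, p=0.1, rng=random):
    """Drop each stopword token with probability p, a subtle
    semantics-preserving perturbation of the source input."""
    return [t for t in tokens if t.lower() not in STOPWORDS or rng.random() >= p]

print(stopword_dropout("i would like to book a table for two".split(), p=0.5))
```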
Decomposing Textual Information For Style Transfer | This paper focuses on latent representations that could effectively decompose
different aspects of textual information. Using a framework of style transfer
for texts, we propose several empirical methods to assess information
decomposition quality. We validate these methods with several state-of-the-art
textual style transfer methods. Higher quality of information decomposition
corresponds to higher performance in terms of bilingual evaluation understudy
(BLEU) between output and human-written reformulations.
| 2,019 | Computation and Language |
Part of speech tagging for code switched data | We address the problem of Part of Speech tagging (POS) in the context of
linguistic code switching (CS). CS is the phenomenon where a speaker switches
between two languages or variants of the same language within or across
utterances, known as intra-sentential or inter-sentential CS, respectively.
Processing CS data, especially intra-sentential CS, is challenging for
state-of-the-art monolingual NLP technology, since such technology is geared
toward the processing of one language at a time. In this paper we explore
multiple strategies of applying state of the art POS taggers to CS data. We
investigate the landscape in two CS language pairs, Spanish-English and Modern
Standard Arabic-Arabic dialects. We compare the use of two POS taggers vs. a
unified tagger trained on CS data. Our results show that applying a machine
learning framework using two state of the art POS taggers achieves better
performance compared to all other approaches that we investigate.
| 2,019 | Computation and Language |
WASA: A Web Application for Sequence Annotation | Data annotation is an important and necessary task for all NLP applications.
Designing and implementing a web-based application that enables many annotators
to annotate and enter their input into one central database is not a trivial
task. These kinds of web-based applications require a consistent and robust
backup for the underlying database and support to enhance the efficiency and
speed of the annotation. Also, they need to ensure that the annotations are
stored with a minimal amount of redundancy in order to take advantage of the
available resources (e.g., storage space). In this paper, we introduce WASA, a
web-based annotation system for managing large-scale multilingual Code
Switching (CS) data annotation. Although WASA has the ability to perform the
annotation for any token sequence with arbitrary tag sets, we will focus on how
WASA is used for CS annotation. The system supports concurrent annotation,
handles multiple encodings, allows for several levels of management control,
and enables quality control measures while seamlessly reporting annotation
statistics from various perspectives and at different levels of granularity.
Moreover, the system is integrated with a robust language-specific data
preprocessing tool to enhance the speed and efficiency of the annotation. We
describe the annotation and the administration interfaces as well as the
backend engine.
| 2,018 | Computation and Language |
Creating a Large Multi-Layered Representational Repository of Linguistic
Code Switched Arabic Data | We present our effort to create a large Multi-Layered representational
repository of Linguistic Code-Switched Arabic data. The process involves
developing clear annotation standards and guidelines, streamlining the
annotation process, and implementing quality control measures. We used two main
protocols for annotation: in-lab gold annotations and crowdsourced
annotations. We developed a web-based annotation tool to facilitate the
management of the annotation process. The current version of the repository
contains a total of 886,252 tokens that are tagged into one of sixteen
code-switching tags. The data exhibits code switching between Modern Standard
Arabic and Egyptian Dialectal Arabic representing three data genres: Tweets,
commentaries, and discussion fora. The overall Inter-Annotator Agreement is
93.1%.
| 2,019 | Computation and Language |
Overview for the Second Shared Task on Language Identification in
Code-Switched Data | We present an overview of the second shared task on language identification
in code-switched data. For the shared task, we had code-switched data from two
different language pairs: Modern Standard Arabic-Dialectal Arabic (MSA-DA) and
Spanish-English (SPA-ENG). We had a total of nine participating teams, with all
teams submitting a system for SPA-ENG and four submitting for MSA-DA. Through
evaluation, we found that once again language identification is more difficult
for the language pair that is more closely related. We also found that this
year's systems performed better overall than the systems from the previous
shared task, indicating overall progress in the state of the art for this task.
| 2,019 | Computation and Language |
OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction | OpenNRE is an open-source and extensible toolkit that provides a unified
framework to implement neural models for relation extraction (RE).
Specifically, by implementing typical RE methods, OpenNRE not only allows
developers to train custom models to extract structured relational facts from
plain text but also supports quick model validation for researchers.
Besides, OpenNRE provides various functional RE modules based on both
TensorFlow and PyTorch to maintain sufficient modularity and extensibility,
making it easy to incorporate new models into the framework. Besides
the toolkit, we also release an online system that supports real-time extraction
without any training or deployment. Meanwhile, the online system can extract
facts in various scenarios and align the extracted facts to Wikidata,
which may benefit various downstream knowledge-driven applications (e.g.,
information retrieval and question answering). More details of the toolkit and
online system can be obtained from http://github.com/thunlp/OpenNRE.
| 2,019 | Computation and Language |
Attention-based method for categorizing different types of online
harassment language | In the era of social media and networking platforms, Twitter has been plagued
by abuse and harassment toward users, specifically women. Monitoring content
that includes sexism and sexual harassment is easier in traditional media than
on online social media platforms like Twitter because of the large amount of
user-generated content on these platforms. Research on the automated detection
of content containing sexual or racist harassment is therefore an important
issue and could be the basis for removing that content or flagging it for human
evaluation. Previous studies have focused on collecting data about sexism and
racism in very broad terms. However, few studies focus on the different types
of online harassment using natural language processing techniques. In this
work, we present a multi-attention-based approach for the detection of
different types of harassment in tweets. Our approach is based on recurrent
neural networks and, in particular, uses a deep, classification-specific
multi-attention mechanism. Moreover, we tackle the problem of imbalanced data
using a back-translation method. Finally, we present a comparison between
different approaches based on recurrent neural networks.
| 2,020 | Computation and Language |
Integrated Triaging for Fast Reading Comprehension | Although according to several benchmarks automatic machine reading
comprehension (MRC) systems have recently reached super-human performance, less
attention has been paid to their computational efficiency. However, efficiency
is of crucial importance for training and deployment in real world
applications. This paper introduces Integrated Triaging, a framework that
prunes almost all context in early layers of a network, leaving the remaining
(deep) layers to scan only a tiny fraction of the full corpus. This pruning
drastically increases the efficiency of MRC models and further prevents the
later layers from overfitting to prevalent short paragraphs in the training
set. Our framework is extremely flexible and naturally applicable to a wide
variety of models. Our experiments on the doc-SQuAD and TriviaQA tasks demonstrate
its effectiveness in consistently improving both speed and quality of several
diverse MRC models.
| 2,019 | Computation and Language |
The Source-Target Domain Mismatch Problem in Machine Translation | While we live in an increasingly interconnected world, different places still
exhibit strikingly different cultures and many events we experience in our
everyday life pertain only to the specific place we live in. As a result,
people often talk about different things in different parts of the world. In
this work we study the effect of local context in machine translation and
postulate that particularly in low resource settings this causes the domains of
the source and target language to greatly mismatch, as the two languages are
often spoken in regions of the world that are farther apart, with more distinctive
cultural traits and unrelated local events. We first formalize the concept of
source-target domain mismatch, propose a metric to quantify it, and provide
empirical evidence corroborating our intuition that organic text produced by
people speaking very different languages exhibits the most dramatic
differences. We conclude with an empirical study of how source-target domain
mismatch affects training of machine translation systems for low resource
language pairs. In particular, we find that it severely affects
back-translation, but the degradation can be alleviated by combining
back-translation with self-training and by increasing the relative amount of
target side monolingual data.
| 2,020 | Computation and Language |
Generalized Zero-shot ICD Coding | The International Classification of Diseases (ICD) is a list of
classification codes for diagnoses. Automatic ICD coding is in high demand
as the manual coding can be labor-intensive and error-prone. It is a
multi-label text classification task with extremely long-tailed label
distribution, making it difficult to perform fine-grained classification on
both frequent and zero-shot codes at the same time. In this paper, we propose a
latent feature generation framework for generalized zero-shot ICD coding, where
we aim to improve the prediction on codes that have no labeled data without
compromising the performance on seen codes. Our framework generates pseudo
features conditioned on the ICD code descriptions and exploits the ICD code
hierarchical structure. To guarantee the semantic consistency between the
generated features and real features, we reconstruct the keywords in the input
documents that are related to the conditioned ICD codes. To the best of our
knowledge, this work is the first to propose an adversarial
generative model for generalized zero-shot learning on multi-label text
classification. Extensive experiments demonstrate the effectiveness of our
approach. On the public MIMIC-III dataset, our methods improve the F1 score
from nearly 0 to 20.91% for the zero-shot codes, and increase the AUC score by
3% (absolute improvement) from previous state of the art. We also show that the
framework improves the performance on few-shot codes.
| 2,019 | Computation and Language |
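A minimal PyTorch sketch of the core idea above: a generator produces pseudo features conditioned on an ICD code-description embedding, so that a downstream classifier can also be trained on zero-shot codes. All module sizes, names, and the simple noise-concatenation design are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CodeConditionedGenerator(nn.Module):
    """Generates pseudo document features for an ICD code from noise plus
    an embedding of the code's textual description."""
    def __init__(self, desc_dim=300, noise_dim=100, feat_dim=512):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(desc_dim + noise_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, desc_emb):
        noise = torch.randn(desc_emb.size(0), self.noise_dim, device=desc_emb.device)
        return self.net(torch.cat([desc_emb, noise], dim=-1))

# Usage: generate training features for codes with no labeled documents.
gen = CodeConditionedGenerator()
unseen_desc = torch.randn(8, 300)       # stand-in description embeddings
pseudo_feats = gen(unseen_desc)         # (8, 512) features for a classifier
```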
Translation, Sentiment and Voices: A Computational Model to Translate
and Analyse Voices from Real-Time Video Calling | With the internet quickly becoming easily accessible to many, voice calling over
the internet is steadily gaining momentum. Individuals have been engaging in video
communication across the world in different languages. The decade saw the
emergence of language translation using neural networks as well. With more data
being generated in audio and visual forms, analysing such information has become
both a need and a challenge for many researchers from academia and
industry. The availability of video chat corpora is limited as organizations
protect user privacy and ensure data security. For this reason, an audio-visual
communication system (VidALL) has been developed and audio-speeches were
extracted. To understand human nature while answering a video call, an analysis
was conducted where polarity and vocal intensity were considered as parameters.
Simultaneously, a translation model using a neural approach was developed to
translate English sentences to French. Simple RNN-based and Embedded-RNN based
models were used for the translation model. BLEU score and target sentence
comparators were used to check sentence correctness. Embedded-RNN showed an
accuracy of 88.71 percent and predicted correct sentences. A key finding
suggests that polarity is a good estimator for understanding human emotion.
| 2,019 | Computation and Language |
Towards Zero-resource Cross-lingual Entity Linking | Cross-lingual entity linking (XEL) grounds named entities in a source
language to an English Knowledge Base (KB), such as Wikipedia. XEL is
challenging for most languages because of limited availability of requisite
resources. However, much previous work on XEL has been on simulated settings
that actually use significant resources (e.g. source language Wikipedia,
bilingual entity maps, multilingual embeddings) that are unavailable in truly
low-resource languages. In this work, we first examine the effect of these
resource assumptions and quantify how much the availability of these resources
affects overall quality of existing XEL systems. Next, we propose three
improvements to both entity candidate generation and disambiguation that make
better use of the limited data we do have in resource-scarce scenarios. With
experiments on four extremely low-resource languages, we show that our model
results in gains of 6-23% in end-to-end linking accuracy.
| 2,019 | Computation and Language |
Towards Automatic Bot Detection in Twitter for Health-related Tasks | With the increasing use of social media data for health-related research, the
credibility of the information from this source has been questioned as the
posts may originate from automated accounts or "bots". While automatic bot
detection approaches have been proposed, there are none that have been
evaluated on users posting health-related information. In this paper, we extend
an existing bot detection system and customize it for health-related research.
Using a dataset of Twitter users, we first show that the system, which was
designed for political bot detection, underperforms when applied to
health-related Twitter users. We then incorporate additional features and a
statistical machine learning classifier to significantly improve bot detection
performance. Our approach obtains F_1 scores of 0.7 for the "bot" class,
representing improvements of 0.339. Our approach is customizable and
generalizable for bot detection in other health-related social media cohorts.
| 2,019 | Computation and Language |
A Pilot Study for Chinese SQL Semantic Parsing | The task of semantic parsing is highly useful for dialogue and question
answering systems. Many datasets have been proposed to map natural language
text into SQL, among which the recent Spider dataset provides cross-domain
samples with multiple tables and complex queries. We build a Spider dataset for
Chinese, which is currently a low-resource language in this task area.
Interesting research questions arise from the uniqueness of the language, which
requires word segmentation, and also from the fact that SQL keywords and
columns of DB tables are typically written in English. We compare character-
and word-based encoders for a semantic parser, and different embedding schemes.
Results show that the word-based semantic parser is subject to segmentation errors
and cross-lingual word embeddings are useful for text-to-SQL.
| 2,019 | Computation and Language |
Controllable Data Synthesis Method for Grammatical Error Correction | Due to the lack of parallel data for the current Grammatical Error Correction
(GEC) task, models based on the sequence-to-sequence framework cannot be adequately
trained to obtain higher performance. We propose two data synthesis methods
which can control the error rate and the ratio of error types on synthetic
data. The first approach is to corrupt each word in the monolingual corpus with
a fixed probability, including replacement, insertion and deletion. Another
approach is to train error generation models and then filter the decoding
results of the models. The experiments on different synthetic data show that
an error rate of 40% and equal ratios of error types improve model
performance the most. Finally, we synthesize about 100 million training examples and
achieve performance comparable to the state of the art, which uses twice as
much data as we use.
| 2,021 | Computation and Language |
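A hedged sketch of the first synthesis approach described above: each word in a clean monolingual sentence is corrupted with a fixed probability by replacement, insertion, or deletion, yielding (noisy, clean) training pairs. The vocabulary, probability, and uniform choice among operations are illustrative assumptions.

```python
import random

def corrupt(tokens, vocab, p=0.4, rng=random):
    """Corrupt each token with probability p via a random replacement,
    insertion, or deletion, producing the noisy side of a GEC training pair."""
    noisy = []
    for tok in tokens:
        if rng.random() < p:
            op = rng.choice(["replace", "insert", "delete"])
            if op == "replace":
                noisy.append(rng.choice(vocab))
            elif op == "insert":
                noisy.extend([tok, rng.choice(vocab)])
            # "delete": drop the token entirely
        else:
            noisy.append(tok)
    return noisy

# Example: pair the corrupted sentence with the original clean sentence.
clean = "she went to the store".split()
print(corrupt(clean, vocab=["a", "the", "go", "store"]))
```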
Recent Advances in End-to-End Spoken Language Understanding | This work investigates spoken language understanding (SLU) systems in the
scenario when the semantic information is extracted directly from the speech
signal by means of a single end-to-end neural network model. Two SLU tasks are
considered: named entity recognition (NER) and semantic slot filling (SF). For
these tasks, in order to improve the model performance, we explore various
techniques including speaker adaptation, a modification of the connectionist
temporal classification (CTC) training criterion, and sequential pretraining.
| 2,019 | Computation and Language |
Language-Agnostic Syllabification with Neural Sequence Labeling | The identification of syllables within phonetic sequences is known as
syllabification. This task is thought to play an important role in natural
language understanding, speech production, and the development of speech
recognition systems. The concept of the syllable is cross-linguistic, though
formal definitions are rarely agreed upon, even within a language. In response,
data-driven syllabification methods have been developed to learn from
syllabified examples. These methods often employ classical machine learning
sequence labeling models. In recent years, recurrence-based neural networks
have been shown to perform increasingly well for sequence labeling tasks such
as named entity recognition (NER), part of speech (POS) tagging, and chunking.
We present a novel approach to the syllabification problem which leverages
modern neural network techniques. Our network is constructed with long
short-term memory (LSTM) cells, a convolutional component, and a conditional
random field (CRF) output layer. Existing syllabification approaches are rarely
evaluated across multiple language families. To demonstrate cross-linguistic
generalizability, we show that the network is competitive with state of the art
systems in syllabifying English, Dutch, Italian, French, Manipuri, and Basque
datasets.
| 2,019 | Computation and Language |
A Simple and Effective Model for Answering Multi-span Questions | Models for reading comprehension (RC) commonly restrict their output space to
the set of all single contiguous spans from the input, in order to alleviate
the learning problem and avoid the need for a model that generates text
explicitly. However, forcing an answer to be a single span can be restrictive,
and some recent datasets also include multi-span questions, i.e., questions
whose answer is a set of non-contiguous spans in the text. Naturally, models
that return single spans cannot answer these questions. In this work, we
propose a simple architecture for answering multi-span questions by casting the
task as a sequence tagging problem, namely, predicting for each input token
whether it should be part of the output or not. Our model substantially
improves performance on span extraction questions from DROP and Quoref by 9.9
and 5.5 EM points respectively.
| 2,020 | Computation and Language |
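To illustrate the sequence-tagging formulation above, here is a small sketch that recovers multiple answer spans from per-token tags; the BIO tagging scheme is an assumption (an IO scheme would work similarly).

```python
def decode_spans(tokens, tags):
    """Recover answer spans from per-token BIO tags ('B', 'I', 'O')."""
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:
            current.append(tok)
        else:
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["Alice", "and", "Bob", "founded", "Acme"]
tags   = ["B",     "O",   "B",   "O",       "O"]
print(decode_spans(tokens, tags))  # ['Alice', 'Bob']
```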
Augmenting Non-Collaborative Dialog Systems with Explicit Semantic and
Strategic Dialog History | We study non-collaborative dialogs, where two agents have a conflict of
interest but must strategically communicate to reach an agreement (e.g.,
negotiation). This setting poses new challenges for modeling dialog history
because the dialog's outcome relies not only on the semantic intent, but also
on tactics that convey the intent. We propose to model both semantic and tactic
history using finite state transducers (FSTs). Unlike RNNs, FSTs can explicitly
represent dialog history through all the states traversed, facilitating
interpretability of dialog structure. We train FSTs on a set of strategies and
tactics used in negotiation dialogs. The trained FSTs show plausible tactic
structure and can be generalized to other non-collaborative domains (e.g.,
persuasion). We evaluate the FSTs by incorporating them in an automated
negotiating system that attempts to sell products and a persuasion system that
persuades people to donate to a charity. Experiments show that explicitly
modeling both semantic and tactic history is an effective way to improve both
dialog policy planning and generation performance.
| 2,019 | Computation and Language |
A Dynamic Strategy Coach for Effective Negotiation | Negotiation is a complex activity involving strategic reasoning, persuasion,
and psychology. An average person is often far from an expert in negotiation.
Our goal is to assist humans to become better negotiators through a
machine-in-the-loop approach that combines the machine's advantage at data-driven
decision-making with the human's language generation ability. We consider a
bargaining scenario where a seller and a buyer negotiate the price of an item
for sale through a text-based dialog. Our negotiation coach monitors messages
between them and recommends tactics in real time to the seller to get a better
deal (e.g., "reject the proposal and propose a price", "talk about your
personal experience with the product"). The best strategy and tactics largely
depend on the context (e.g., the current price, the buyer's attitude).
Therefore, we first identify a set of negotiation tactics, then learn to
predict the best strategy and tactics in a given dialog context from a set of
human-human bargaining dialogs. Evaluation on human-human dialogs shows that
our coach increases the profits of the seller by almost 60%.
| 2,019 | Computation and Language |
Generating Diverse Story Continuations with Controllable Semantics | We propose a simple and effective modeling framework for controlled
generation of multiple, diverse outputs. We focus on the setting of generating
the next sentence of a story given its context. As controllable dimensions, we
consider several sentence attributes, including sentiment, length, predicates,
frames, and automatically-induced clusters. Our empirical results demonstrate:
(1) our framework is accurate in terms of generating outputs that match the
target control values; (2) our model yields increased maximum metric scores
compared to standard n-best list generation via beam search; (3) controlling
generation with semantic frames leads to a stronger combination of diversity
and quality than other control variables as measured by automatic metrics. We
also conduct a human evaluation to assess the utility of providing multiple
suggestions for creative writing, demonstrating promising results for the
potential of controllable, diverse generation in a collaborative writing
system.
| 2,020 | Computation and Language |
Regressing Word and Sentence Embeddings for Regularization of Neural
Machine Translation | In recent years, neural machine translation (NMT) has become the dominant
approach in automated translation. However, like many other deep learning
approaches, NMT suffers from overfitting when the amount of training data is
limited. This is a serious issue for low-resource language pairs and many
specialized translation domains that are inherently limited in the amount of
available supervised data. For this reason, in this paper we propose regressing
word (ReWE) and sentence (ReSE) embeddings at training time as a way to
regularize NMT models and improve their generalization. During training, our
models are trained to jointly predict categorical (words in the vocabulary) and
continuous (word and sentence embeddings) outputs. An extensive set of
experiments over four language pairs of variable training set size has shown
that ReWE and ReSE can outperform strong state-of-the-art baseline models, with
an improvement that is larger for smaller training sets (e.g., up to +5.15 BLEU
points in Basque-English translation). Visualizations of the decoder's output
space show that the proposed regularizers improve the clustering of unique
words, facilitating correct predictions. In a final experiment on unsupervised
NMT, we show that ReWE and ReSE are also able to improve the quality of machine
translation when no parallel data are available.
| 2,019 | Computation and Language |
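A rough PyTorch sketch of the regularization idea above: alongside the usual cross-entropy over the vocabulary, the decoder also regresses the target word embedding, and the two losses are combined. The loss weighting and the cosine distance are illustrative assumptions, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def rewe_loss(logits, pred_embs, target_ids, emb_table, lam=0.2):
    """Cross-entropy over the vocabulary plus a word-embedding regression
    term (ReWE-style); lam balances the two objectives."""
    ce = F.cross_entropy(logits, target_ids)
    target_embs = emb_table[target_ids]                        # gold word embeddings
    reg = 1.0 - F.cosine_similarity(pred_embs, target_embs).mean()
    return ce + lam * reg

# Toy shapes: batch of 4 target tokens, vocab of 100, embeddings of size 64.
logits = torch.randn(4, 100)
pred_embs = torch.randn(4, 64)
target_ids = torch.tensor([3, 17, 42, 99])
emb_table = torch.randn(100, 64)
print(rewe_loss(logits, pred_embs, target_ids, emb_table))
```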
A Critique of the Smooth Inverse Frequency Sentence Embeddings | We critically review the smooth inverse frequency sentence embedding method
of Arora, Liang, and Ma (2017), and show inconsistencies in its setup,
derivation, and evaluation.
| 2,019 | Computation and Language |
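For reference, a compact numpy sketch of the smooth inverse frequency (SIF) method under critique: word vectors are averaged with weights a/(a + p(w)) and the projection onto the first principal component of the sentence matrix is removed. The smoothing constant and the unknown-word handling here are common defaults stated as assumptions, not details drawn from the critique itself.

```python
import numpy as np

def sif_embeddings(sentences, word_vecs, word_prob, a=1e-3):
    """SIF: weighted average of word vectors (weight a/(a + p(w))), then
    remove the projection onto the first principal component."""
    dim = len(next(iter(word_vecs.values())))
    rows = []
    for sent in sentences:
        vecs = [a / (a + word_prob.get(w, 1e-6)) * word_vecs[w]
                for w in sent if w in word_vecs]
        rows.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
    X = np.stack(rows)
    u = np.linalg.svd(X, full_matrices=False)[2][0]  # first right singular vector
    return X - np.outer(X @ u, u)
```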
Embeddings for DNN speaker adaptive training | In this work, we investigate the use of embeddings for speaker-adaptive
training of DNNs (DNN-SAT) focusing on a small amount of adaptation data per
speaker. DNN-SAT can be viewed as learning a mapping from each embedding to
transformation parameters that are applied to the shared parameters of the DNN.
We investigate different approaches to applying these transformations, and find
that with a good training strategy, a multi-layer adaptation network applied to
all hidden layers is no more effective than a single linear layer acting on the
embeddings to transform the input features. In the second part of our work, we
evaluate different embeddings (i-vectors, x-vectors and deep CNN embeddings) in
an additional speaker recognition task in order to gain insight into what
should characterize an embedding for DNN-SAT. We find that the speaker
recognition performance of a given representation is not correlated with its ASR
performance; in fact, the ability to capture more speech attributes than just
speaker identity was the most important characteristic of the embeddings for
efficient DNN-SAT ASR. Our best models achieved relative WER gains of 4% and 9%
over DNN baselines using speaker-level cepstral mean normalisation (CMN), and a
fully speaker-independent model, respectively.
| 2,019 | Computation and Language |
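A small PyTorch sketch of the single-layer variant discussed above, in which a speaker embedding (e.g., an i-vector) is mapped by one linear layer to a per-speaker shift of the acoustic input features; the additive form and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EmbeddingAdaptation(nn.Module):
    """Maps a speaker embedding to a per-speaker shift of the input features
    (one variant of embedding-based speaker-adaptive training)."""
    def __init__(self, emb_dim=100, feat_dim=40):
        super().__init__()
        self.to_shift = nn.Linear(emb_dim, feat_dim)

    def forward(self, feats, spk_emb):
        # feats: (batch, time, feat_dim); spk_emb: (batch, emb_dim)
        return feats + self.to_shift(spk_emb).unsqueeze(1)

feats = torch.randn(2, 50, 40)
ivec = torch.randn(2, 100)
adapted = EmbeddingAdaptation()(feats, ivec)   # same shape, speaker-shifted
```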
A Hybrid Persian Sentiment Analysis Framework: Integrating Dependency
Grammar Based Rules and Deep Neural Networks | Social media hold valuable, vast and unstructured information on public
opinion that can be utilized to improve products and services. The automatic
analysis of such data, however, requires a deep understanding of natural
language. Current sentiment analysis approaches are mainly based on word
co-occurrence frequencies, which are inadequate in most practical cases. In
this work, we propose a novel hybrid framework for concept-level sentiment
analysis in the Persian language that integrates linguistic rules and deep
learning to optimize polarity detection. When a pattern is triggered, the
framework allows sentiments to flow from words to concepts based on symbolic
dependency relations. When no pattern is triggered, the framework switches to
its subsymbolic counterpart and leverages deep neural networks (DNN) to perform
the classification. The proposed framework outperforms state-of-the-art
approaches (including support vector machine and logistic regression) and DNN
classifiers (long short-term memory and convolutional neural networks) by
margins of 10-15% and 3-4%, respectively, using benchmark Persian product and
hotel reviews corpora.
| 2,019 | Computation and Language |
On the Importance of the Kullback-Leibler Divergence Term in Variational
Autoencoders for Text Generation | Variational Autoencoders (VAEs) are known to suffer from learning
uninformative latent representations of the input due to issues such as
approximated posterior collapse, or entanglement of the latent space. We impose
an explicit constraint on the Kullback-Leibler (KL) divergence term inside the
VAE objective function. While the explicit constraint naturally avoids
posterior collapse, we use it to further understand the significance of the KL
term in controlling the information transmitted through the VAE channel. Within
this framework, we explore different properties of the estimated posterior
distribution, and highlight the trade-off between the amount of information
encoded in a latent code during training, and the generative capacity of the
model.
| 2,019 | Computation and Language |
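A hedged PyTorch sketch of one way to impose an explicit constraint on the KL term: instead of the standard ELBO, penalize the deviation of the KL divergence from a target rate. The penalty form and target value are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch

def constrained_vae_loss(recon_loss, mu, logvar, target_kl=15.0, beta=10.0):
    """Reconstruction loss plus a penalty keeping the KL divergence close to
    a target value (one way to explicitly constrain the KL term)."""
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
    return recon_loss + beta * (kl - target_kl).abs(), kl

# Toy example with a 16-dimensional Gaussian posterior for a batch of 8.
mu, logvar = torch.randn(8, 16), torch.randn(8, 16)
loss, kl = constrained_vae_loss(torch.tensor(42.0), mu, logvar)
```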
A Closer Look at Data Bias in Neural Extractive Summarization Models | In this paper, we take stock of the current state of summarization datasets
and explore how different factors of datasets influence the generalization
behaviour of neural extractive summarization models. Specifically, we first
propose several properties of datasets, which matter for the generalization of
summarization models. Then we build the connection between priors residing in
datasets and model designs, analyzing how different properties of datasets
influence the choices of model structure design and training methods. Finally,
by taking a typical dataset as an example, we rethink the process of the model
design based on the experience of the above analysis. We demonstrate that when
we have a deep understanding of the characteristics of datasets, a simple
approach can bring significant improvements to the existing state-of-the-art
model.
| 2,019 | Computation and Language |
Retrieval-based Goal-Oriented Dialogue Generation | Most research on dialogue has focused either on dialogue generation for
open-ended chit-chat or on state tracking for goal-directed dialogue. In this
work, we explore a hybrid approach to goal-oriented dialogue generation that
combines retrieval from past history with a hierarchical, neural
encoder-decoder architecture. We evaluate this approach in the customer support
domain using the Multiwoz dataset (Budzianowski et al., 2018). We show that
adding this retrieval step to a hierarchical, neural encoder-decoder
architecture leads to significant improvements, including responses that are
rated more appropriate and fluent by human evaluators. Finally, we compare our
retrieval-based model to various semantically conditioned models explicitly
using past dialog act information, and find that our proposed model is
competitive with the current state of the art (Chen et al., 2019), while not
requiring explicit labels about past machine acts.
| 2,019 | Computation and Language |
Incremental processing of noisy user utterances in the spoken language
understanding task | The state-of-the-art neural network architectures make it possible to create
spoken language understanding systems with high quality and fast processing
time. One major challenge for real-world applications is the high latency of
these systems caused by triggered actions with high execution times. If an
action can be separated into subactions, the reaction time of the systems can
be improved through incremental processing of the user utterance and starting
subactions while the utterance is still being uttered. In this work, we present
a model-agnostic method to achieve high quality in processing incrementally
produced partial utterances. Based on clean and noisy versions of the ATIS
dataset, we show how to use our method to create datasets for building low-latency
natural language understanding components. We get improvements of up to 47.91
absolute percentage points in the metric F1-score.
| 2,019 | Computation and Language |
Towards Diverse Paraphrase Generation Using Multi-Class Wasserstein GAN | Paraphrase generation is an important and challenging natural language
processing (NLP) task. In this work, we propose a deep generative model to
generate paraphrases with diversity. Our model is based on an encoder-decoder
architecture. An additional transcoder is used to convert a sentence into its
paraphrasing latent code. The transcoder takes an explicit pattern embedding
variable as a condition, so diverse paraphrases can be generated by sampling on
the pattern embedding variable. We use a Wasserstein GAN to align the
distributions of the real and generated paraphrase samples. We propose a
multi-class extension to the Wasserstein GAN, which allows our generative model
to learn from both positive and negative samples. The generated paraphrase
distribution is forced to get closer to the positive real distribution, and be
pushed away from the negative distribution in Wasserstein distance. We test our
model on two datasets with both automatic metrics and human evaluation. Results
show that our model can generate fluent and reliable paraphrase samples that
outperform state-of-the-art results, while also providing reasonable variability
and diversity.
| 2,019 | Computation and Language |
Automatic Fact-guided Sentence Modification | Online encyclopedias like Wikipedia contain large amounts of text that need
frequent corrections and updates. The new information may contradict existing
content in encyclopedias. In this paper, we focus on rewriting such dynamically
changing articles. This is a challenging constrained generation task, as the
output must be consistent with the new information and fit into the rest of the
existing document. To this end, we propose a two-step solution: (1) We identify
and remove the contradicting components in a target text for a given claim,
using a neutralizing stance model; (2) We expand the remaining text to be
consistent with the given claim, using a novel two-encoder sequence-to-sequence
model with copy attention. Applied to a Wikipedia fact update dataset, our
method successfully generates updated sentences for new claims, achieving the
highest SARI score. Furthermore, we demonstrate that generating synthetic data
through such rewritten sentences can successfully augment the FEVER
fact-checking training dataset, leading to a relative error reduction of 13%.
| 2,019 | Computation and Language |
The Universal Decompositional Semantics Dataset and Decomp Toolkit | We present the Universal Decompositional Semantics (UDS) dataset (v1.0),
which is bundled with the Decomp toolkit (v0.1). UDS1.0 unifies five
high-quality, decompositional semantics-aligned annotation sets within a single
semantic graph specification---with graph structures defined by the predicative
patterns produced by the PredPatt tool and real-valued node and edge attributes
constructed using sophisticated normalization procedures. The Decomp toolkit
provides a suite of Python 3 tools for querying UDS graphs using SPARQL. Both
UDS1.0 and Decomp0.1 are publicly available at http://decomp.io.
| 2,019 | Computation and Language |
Simple and Effective Paraphrastic Similarity from Parallel Translations | We present a model and methodology for learning paraphrastic sentence
embeddings directly from bitext, removing the time-consuming intermediate step
of creating paraphrase corpora. Further, we show that the resulting model can
be applied to cross-lingual tasks where it both outperforms and is orders of
magnitude faster than more complex state-of-the-art baselines.
| 2,019 | Computation and Language |
Semantic Graph Parsing with Recurrent Neural Network DAG Grammars | Semantic parses are directed acyclic graphs (DAGs), so semantic parsing
should be modeled as graph prediction. But predicting graphs presents difficult
technical challenges, so it is simpler and more common to predict the
linearized graphs found in semantic parsing datasets using well-understood
sequence models. The cost of this simplicity is that the predicted strings may
not be well-formed graphs. We present recurrent neural network DAG grammars, a
graph-aware sequence model that ensures only well-formed graphs while
sidestepping many difficulties in graph prediction. We test our model on the
Parallel Meaning Bank---a multilingual semantic graphbank. Our approach yields
competitive results in English and establishes the first results for German,
Italian and Dutch.
| 2,019 | Computation and Language |
Multi-Head Attention with Diversity for Learning Grounded Multilingual
Multimodal Representations | With the aim of promoting and understanding the multilingual version of image
search, we leverage visual object detection and propose a model with diverse
multi-head attention to learn grounded multilingual multimodal representations.
Specifically, our model attends to different types of textual semantics in two
languages and visual objects for fine-grained alignments between sentences and
images. We introduce a new objective function which explicitly encourages
attention diversity to learn an improved visual-semantic embedding space. We
evaluate our model in the German-Image and English-Image matching tasks on the
Multi30K dataset, and in the Semantic Textual Similarity task with the English
descriptions of visual content. Results show that our model yields a
significant performance gain over other methods in all of the three tasks.
| 2,019 | Computation and Language |
Lexical Features Are More Vulnerable, Syntactic Features Have More
Predictive Power | Understanding the vulnerability of linguistic features extracted from noisy
text is important for both developing better health text classification models
and for interpreting vulnerabilities of natural language models. In this paper,
we investigate how generic language characteristics, such as syntax or the
lexicon, are impacted by artificial text alterations. The vulnerability of
features is analysed from two perspectives: (1) the level of feature value
change, and (2) the level of change of feature predictive power as a result of
text modifications. We show that lexical features are more sensitive to text
modifications than syntactic ones. However, we also demonstrate that these
smaller changes of syntactic features have a stronger influence on
classification performance downstream, compared to the impact of changes to
lexical features. Results are validated across three datasets representing
different text-classification tasks, with different levels of lexical and
syntactic complexity of both conversational and written language.
| 2,019 | Computation and Language |
Interrogating the Explanatory Power of Attention in Neural Machine
Translation | Attention models have become a crucial component in neural machine
translation (NMT). They are often implicitly or explicitly used to justify the
model's decision in generating a specific token but it has not yet been
rigorously established to what extent attention is a reliable source of
information in NMT. To evaluate the explanatory power of attention for NMT, we
examine the possibility of yielding the same prediction but with counterfactual
attention models that modify crucial aspects of the trained attention model.
Using these counterfactual attention mechanisms we assess the extent to which
they still preserve the generation of function and content words in the
translation process. Compared to a state of the art attention model, our
counterfactual attention models produce 68% of function words and 21% of
content words in our German-English dataset. Our experiments demonstrate that
attention models by themselves cannot reliably explain the decisions made by an
NMT model.
| 2,019 | Computation and Language |
Specializing Word Embeddings (for Parsing) by Information Bottleneck | Pre-trained word embeddings like ELMo and BERT contain rich syntactic and
semantic information, resulting in state-of-the-art performance on various
tasks. We propose a very fast variational information bottleneck (VIB) method
to nonlinearly compress these embeddings, keeping only the information that
helps a discriminative parser. We compress each word embedding to either a
discrete tag or a continuous vector. In the discrete version, our automatically
compressed tags form an alternative tag set: we show experimentally that our
tags capture most of the information in traditional POS tag annotations, but
our tag sequences can be parsed more accurately at the same level of tag
granularity. In the continuous version, we show experimentally that moderately
compressing the word embeddings by our method yields a more accurate parser in
8 of 9 languages, unlike simple dimensionality reduction.
| 2,019 | Computation and Language |
Writing habits and telltale neighbors: analyzing clinical concept usage
patterns with sublanguage embeddings | Natural language processing techniques are being applied to increasingly
diverse types of electronic health records, and can benefit from in-depth
understanding of the distinguishing characteristics of medical document types.
We present a method for characterizing the usage patterns of clinical concepts
among different document types, in order to capture semantic differences beyond
the lexical level. By training concept embeddings on clinical documents of
different types and measuring the differences in their nearest neighborhood
structures, we are able to measure divergences in concept usage while
correcting for noise in embedding learning. Experiments on the MIMIC-III corpus
demonstrate that our approach captures clinically-relevant differences in
concept usage and provides an intuitive way to explore semantic characteristics
of clinical document collections.
| 2,019 | Computation and Language |
Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word
Representations | Contextualized word representations are able to give different
representations for the same word in different contexts, and they have been
shown to be effective in downstream natural language processing tasks, such as
question answering, named entity recognition, and sentiment analysis. However,
evaluation on word sense disambiguation (WSD) in prior work shows that using
contextualized word representations does not outperform the state-of-the-art
approach that makes use of non-contextualized word embeddings. In this paper,
we explore different strategies of integrating pre-trained contextualized word
representations and our best strategy achieves accuracies exceeding the best
prior published accuracies by significant margins on multiple benchmark WSD
datasets. We make the source code available at
https://github.com/nusnlp/contextemb-wsd.
| 2,020 | Computation and Language |
Analyzing Sentence Fusion in Abstractive Summarization | While recent work in abstractive summarization has resulted in higher scores
in automatic metrics, there is little understanding of how these systems
combine information taken from multiple document sentences. In this paper, we
analyze the outputs of five state-of-the-art abstractive summarizers, focusing
on summary sentences that are formed by sentence fusion. We ask assessors to
judge the grammaticality, faithfulness, and method of fusion for summary
sentences. Our analysis reveals that system sentences are mostly grammatical,
but often fail to remain faithful to the original article.
| 2,019 | Computation and Language |
Multilingual End-to-End Speech Translation | In this paper, we propose a simple yet effective framework for multilingual
end-to-end speech translation (ST), in which speech utterances in source
languages are directly translated to the desired target languages with a
universal sequence-to-sequence architecture. While multilingual models have been
shown to be useful for automatic speech recognition (ASR) and machine
translation (MT), this is the first time they are applied to the end-to-end ST
problem. We show the effectiveness of multilingual end-to-end ST in two
scenarios: one-to-many and many-to-many translations with publicly available
data. We experimentally confirm that multilingual end-to-end ST models
significantly outperform bilingual ones in both scenarios. The generalization
of multilingual training is also evaluated in a transfer learning scenario to a
very low-resource language pair. All of our codes and the database are publicly
available to encourage further research in this emergent multilingual ST topic.
| 2,019 | Computation and Language |
Bad Form: Comparing Context-Based and Form-Based Few-Shot Learning in
Distributional Semantic Models | Word embeddings are an essential component in a wide range of natural
language processing applications. However, distributional semantic models are
known to struggle when only a small number of context sentences are available.
Several methods have been proposed to obtain higher-quality vectors for these
words, leveraging both this context information and sometimes the word forms
themselves through a hybrid approach. We show that the current tasks do not
suffice to evaluate models that use word-form information, as such models can
easily leverage word forms in the training data that are related to word forms
in the test data. We introduce 3 new tasks, allowing for a more balanced
comparison between models. Furthermore, we show that hyperparameters that have
largely been ignored in previous work can consistently improve the performance
of both baseline and advanced models, achieving a new state of the art on 4 out
of 6 tasks.
| 2,019 | Computation and Language |
When and Why is Document-level Context Useful in Neural Machine
Translation? | Document-level context has received a lot of attention for compensating for the limitations of neural
machine translation (NMT) of isolated sentences. However, recent advances in
document-level NMT focus on sophisticated integration of the context,
explaining its improvement with only a few selected examples or targeted test
sets. We extensively quantify the causes of improvements by a document-level
model in general test sets, clarifying the limit of the usefulness of
document-level context in NMT. We show that most of the improvements are not
interpretable as utilizing the context. We also show that a minimal encoding is
sufficient for context modeling and that very long context is not helpful for
NMT.
| 2,019 | Computation and Language |
TMLab: Generative Enhanced Model (GEM) for adversarial attacks | We present our Generative Enhanced Model (GEM) that we used to create samples
awarded the first prize in the FEVER 2.0 Breakers Task. GEM is an extended
language model developed upon the GPT-2 architecture. The addition of a novel target
vocabulary input to the already existing context input enabled controlled text
generation. The training procedure resulted in creating a model that inherited
the knowledge of pretrained GPT-2, and therefore was ready to generate
natural-like English sentences in the task domain with some additional control.
As a result, GEM generated malicious claims that mixed facts from various
articles, so it became difficult to classify their truthfulness.
| 2,019 | Computation and Language |
Grammatical Error Correction in Low-Resource Scenarios | Grammatical error correction in English is a long studied problem with many
existing systems and datasets. However, there has been only limited research
on error correction of other languages. In this paper, we present a new dataset
AKCES-GEC on grammatical error correction for Czech. We then make experiments
on Czech, German and Russian and show that when utilizing a synthetic parallel
corpus, a Transformer neural machine translation model can reach new
state-of-the-art results on these datasets. AKCES-GEC is published under CC
BY-NC-SA 4.0 license at https://hdl.handle.net/11234/1-3057 and the source code
of the GEC model is available at
https://github.com/ufal/low-resource-gec-wnut2019.
| 2,019 | Computation and Language |
Application of Low-resource Machine Translation Techniques to
Russian-Tatar Language Pair | Neural machine translation is the current state-of-the-art in machine
translation. Although it is successful in a resource-rich setting, its
applicability for low-resource language pairs is still debatable. In this
paper, we explore the effect of different techniques to improve machine
translation quality when a parallel corpus is as small as 324 000 sentences,
taking as an example the previously unexplored Russian-Tatar language pair. We
apply such techniques as transfer learning and semi-supervised learning to the
base Transformer model, and empirically show that the resulting models improve
Russian to Tatar and Tatar to Russian translation quality by +2.57 and +3.66
BLEU, respectively.
| 2,019 | Computation and Language |
A Survey of Methods to Leverage Monolingual Data in Low-resource Neural
Machine Translation | Neural machine translation has become the state-of-the-art for language pairs
with large parallel corpora. However, the quality of machine translation for
low-resource languages leaves much to be desired. There are several approaches
to mitigate this problem, such as transfer learning, semi-supervised and
unsupervised learning techniques. In this paper, we review the existing
methods, where the main idea is to exploit the power of monolingual data,
which, compared to parallel, is usually easier to obtain and significantly
greater in amount.
| 2,019 | Computation and Language |
Latent-Variable Generative Models for Data-Efficient Text Classification | Generative classifiers offer potential advantages over their discriminative
counterparts, namely in the areas of data efficiency, robustness to data shift
and adversarial examples, and zero-shot learning (Ng and Jordan, 2002; Yogatama
et al., 2017; Lewis and Fan, 2019). In this paper, we improve generative text
classifiers by introducing discrete latent variables into the generative story,
and explore several graphical model configurations. We parameterize the
distributions using standard neural architectures used in conditional language
modeling and perform learning by directly maximizing the log marginal
likelihood via gradient-based optimization, which avoids the need to do
expectation-maximization. We empirically characterize the performance of our
models on six text classification datasets. The choice of where to include the
latent variable has a significant impact on performance, with the strongest
results obtained when using the latent variable as an auxiliary conditioning
variable in the generation of the textual input. This model consistently
outperforms both the generative and discriminative classifiers in small-data
settings. We analyze our model by using it for controlled generation, finding
that the latent variable captures interpretable properties of the data, even
with very small training sets.
| 2,019 | Computation and Language |
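The training objective described in the entry above, maximizing the log marginal likelihood over a discrete latent variable with direct gradient-based optimization, can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the tensor shapes and the assumption of a small latent space (so the sum over z is exact) are mine.

```python
import torch

def log_marginal_likelihood(log_p_y, log_p_z_given_y, log_p_x_given_yz):
    """Exact marginalization over a small discrete latent variable z.

    log_p_y:          (batch,)    log p(y) for the gold label
    log_p_z_given_y:  (batch, K)  log p(z | y), K discrete latent values
    log_p_x_given_yz: (batch, K)  log p(x | y, z) summed over tokens
    Returns (batch,) log p(x, y) = log p(y) + log sum_z p(z|y) p(x|y,z).
    """
    joint_over_z = log_p_z_given_y + log_p_x_given_yz          # (batch, K)
    return log_p_y + torch.logsumexp(joint_over_z, dim=-1)     # (batch,)

# Hypothetical usage: maximize log p(x, y) directly with gradient descent,
# avoiding expectation-maximization, as the abstract above describes.
# loss = -log_marginal_likelihood(log_p_y, log_p_z_given_y, log_p_x_given_yz).mean()
```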
Global Voices: Crossing Borders in Automatic News Summarization | We construct Global Voices, a multilingual dataset for evaluating
cross-lingual summarization methods. We extract social-network descriptions of
Global Voices news articles to cheaply collect evaluation data for into-English
and from-English summarization in 15 languages. In particular, for the
into-English summarization task, we crowd-source a high-quality evaluation
dataset based on guidelines that emphasize accuracy, coverage, and
understandability. To ensure the quality of this dataset, we collect human
ratings to filter out bad summaries, and conduct a human survey, which
shows that the remaining summaries are preferred over the social-network
summaries. We study the effect of translation quality in cross-lingual
summarization, comparing a translate-then-summarize approach with several
baselines. Our results highlight the limitations of the ROUGE metric that are
overlooked in monolingual summarization. Our dataset is available for download
at https://forms.gle/gpkJDT6RJWHM1Ztz9 .
| 2,020 | Computation and Language |
MMM: Multi-stage Multi-task Learning for Multi-choice Reading
Comprehension | Machine Reading Comprehension (MRC) for question answering (QA), which aims
to answer a question given the relevant context passages, is an important way
to test the ability of intelligent systems to understand human language.
Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it
often requires more advanced reading comprehension skills such as logical
reasoning, summarization, and arithmetic operations, compared to the extractive
counterpart where answers are usually spans of text within given passages.
Moreover, most existing MCQA datasets are small in size, making the learning
task even harder. We introduce MMM, a Multi-stage Multi-task learning framework
for Multi-choice reading comprehension. Our method involves two sequential
stages: a coarse-tuning stage using out-of-domain datasets and a multi-task
learning stage using a larger in-domain dataset to help the model generalize better
with limited data. Furthermore, we propose a novel multi-step attention network
(MAN) as the top-level classifier for this task. We demonstrate MMM
significantly advances the state-of-the-art on four representative MCQA
datasets.
| 2,019 | Computation and Language |
Machine Translation for Machines: the Sentiment Classification Use Case | We propose a neural machine translation (NMT) approach that, instead of
pursuing adequacy and fluency ("human-oriented" quality criteria), aims to
generate translations that are best suited as input to a natural language
processing component designed for a specific downstream task (a
"machine-oriented" criterion). Towards this objective, we present a
reinforcement learning technique based on a new candidate sampling strategy,
which exploits the results obtained on the downstream task as weak feedback.
Experiments in sentiment classification of Twitter data in German and Italian
show that feeding an English classifier with machine-oriented translations
significantly improves its performance. Classification results outperform those
obtained with translations produced by general-purpose NMT models as well as by
an approach based on reinforcement learning. Moreover, our results on both
languages approximate the classification accuracy computed on gold standard
English tweets.
| 2,019 | Computation and Language |
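A rough policy-gradient skeleton of the idea in the entry above: sampled translations are rewarded by the downstream sentiment classifier's score for the gold label, used as weak feedback. The paper's specific candidate sampling strategy is not reproduced here; `nmt_model.sample` and `classifier` are hypothetical interfaces, and the mean-reward baseline is an assumption added for variance reduction.

```python
import torch

def machine_oriented_rl_step(nmt_model, classifier, src_batch, labels, optimizer):
    """One REINFORCE-style step: reward = downstream classifier probability of
    the gold label on the sampled translation. nmt_model.sample is assumed to
    return sampled target sequences plus their per-token log-probabilities."""
    samples, log_probs = nmt_model.sample(src_batch)         # hypothetical API
    with torch.no_grad():
        probs = classifier(samples)                          # (batch, n_classes)
        reward = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
        baseline = reward.mean()                             # simple variance reduction
    loss = -((reward - baseline) * log_probs.sum(dim=1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```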
Dialogue Transformers | We introduce a dialogue policy based on a transformer architecture, where the
self-attention mechanism operates over the sequence of dialogue turns. Recent
work has used hierarchical recurrent neural networks to encode multiple
utterances in a dialogue context, but we argue that a pure self-attention
mechanism is more suitable. By default, an RNN assumes that every item in a
sequence is relevant for producing an encoding of the full sequence, but a
single conversation can consist of multiple overlapping discourse segments as
speakers interleave multiple topics. A transformer picks which turns to include
in its encoding of the current dialogue state, and is naturally suited to
selectively ignoring or attending to dialogue history. We compare the
performance of the Transformer Embedding Dialogue (TED) policy to an LSTM and
to the REDP, which was specifically designed to overcome this limitation of
RNNs.
| 2,020 | Computation and Language |
Detecting Alzheimer's Disease by estimating attention and elicitation
path through the alignment of spoken picture descriptions with the picture
prompt | Cognitive decline is a sign of Alzheimer's disease (AD), and there is
evidence that tracking a person's eye movement, using eye tracking devices, can
be used for the automatic identification of early signs of cognitive decline.
However, such devices are expensive and may not be easy to use for people with
cognitive problems. In this paper, we present a new way of capturing similar
visual features, by using the speech of people describing the Cookie Theft
picture - a common cognitive testing task - to identify regions in the picture
prompt that will have caught the speaker's attention and elicited their speech.
After aligning the automatically recognised words with different regions of the
picture prompt, we extract information inspired by eye tracking metrics such as
coordinates of the area of interests (AOI)s, time spent in AOI, time to reach
the AOI, and the number of AOI visits. Using the DementiaBank dataset we train
a binary classifier (AD vs. healthy control) using 10-fold cross-validation and
achieve an 80% F1-score using the timing information from the forced alignments
of the automatic speech recogniser (ASR); this drops to around 72% when using the
timing information from the ASR outputs.
| 2,019 | Computation and Language |
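The eye-tracking-inspired features described in the entry above (time to reach an area of interest, dwell time, number of visits) can be computed from word timings roughly as sketched below. This is a simplified illustration; the word-to-region mapping and the notion of a "visit" are assumptions, not the paper's exact feature definitions.

```python
from collections import defaultdict

def aoi_features(aligned_words, word_to_aoi):
    """aligned_words: list of (word, start_sec, end_sec) from forced alignment.
    word_to_aoi: hypothetical mapping from a normalized word to a picture region.
    Returns per-AOI dwell time, time of first mention, and number of visits."""
    dwell = defaultdict(float)
    first_reach = {}
    visits = defaultdict(int)
    prev_aoi = None
    for word, start, end in aligned_words:
        aoi = word_to_aoi.get(word.lower())
        if aoi is None:
            prev_aoi = None
            continue
        dwell[aoi] += end - start
        first_reach.setdefault(aoi, start)
        if aoi != prev_aoi:          # a new "visit" begins when the AOI changes
            visits[aoi] += 1
        prev_aoi = aoi
    return {"dwell": dict(dwell), "time_to_reach": first_reach, "visits": dict(visits)}

# Hypothetical usage: word_to_aoi = {"cookie": "jar", "jar": "jar", "sink": "sink"}
```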
BillSum: A Corpus for Automatic Summarization of US Legislation | Automatic summarization methods have been studied on a variety of domains,
including news and scientific articles. Yet, legislation has not previously
been considered for this task, despite US Congress and state governments
releasing tens of thousands of bills every year. In this paper, we introduce
BillSum, the first dataset for summarization of US Congressional and California
state bills (https://github.com/FiscalNote/BillSum). We explain the properties
of the dataset that make it more challenging to process than other domains.
Then, we benchmark extractive methods that consider neural sentence
representations and traditional contextual features. Finally, we demonstrate
that models built on Congressional bills can be used to summarize California
bills, thus, showing that methods developed on this dataset can transfer to
states without human-written summaries.
| 2,019 | Computation and Language |
Type-aware Convolutional Neural Networks for Slot Filling | The slot filling task aims at extracting answers for queries about entities
from text, such as "Who founded Apple". In this paper, we focus on the relation
classification component of a slot filling system. We propose type-aware
convolutional neural networks to benefit from the mutual dependencies between
entity and relation classification. In particular, we explore different ways of
integrating the named entity types of the relation arguments into a neural
network for relation classification, including a joint training and a
structured prediction approach. To the best of our knowledge, this is the first
study on type-aware neural networks for slot filling. The type-aware models
lead to the best results of our slot filling pipeline. Joint training performs
comparably to structured prediction. To understand the impact of the different
components of the slot filling pipeline, we perform a recall analysis, a manual
error analysis and several ablation studies. Such analyses are of particular
importance to other slot filling researchers since the official slot filling
evaluations only assess pipeline outputs. The analyses show that especially
coreference resolution and our convolutional neural networks have a large
positive impact on the final performance of the slot filling pipeline. The
presented models, the source code of our system, as well as our coreference
resource are publicly available.
| 2,019 | Computation and Language |
Better Document-Level Machine Translation with Bayes' Rule | We show that Bayes' rule provides an effective mechanism for creating
document translation models that can be learned from only parallel sentences
and monolingual documents---a compelling benefit as parallel documents are not
always available. In our formulation, the posterior probability of a candidate
translation is the product of the unconditional (prior) probability of the
candidate output document and the "reverse translation probability" of
translating the candidate output back into the source language. Our proposed
model uses a powerful autoregressive language model as the prior on target
language documents, but it assumes that each sentence is translated
independently from the target to the source language. Crucially, at test time,
when a source document is observed, the document language model prior induces
dependencies between the translations of the source sentences in the posterior.
The model's independence assumption not only enables efficient use of available
data, but it additionally admits a practical left-to-right beam-search
algorithm for carrying out inference. Experiments show that our model benefits
from using cross-sentence context in the language model, and it outperforms
existing document translation approaches.
| 2,020 | Computation and Language |
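The factorization described in the entry above can be written compactly as follows; this restates the abstract (a document-level language-model prior over the target, with sentence-independent reverse translation), with notation chosen here purely for illustration.

```latex
% Posterior over a candidate target document y = (y_1, ..., y_n)
% given a source document x = (x_1, ..., x_n):
p(y \mid x) \;\propto\;
\underbrace{p_{\mathrm{LM}}(y)}_{\text{target-document prior}}
\;\prod_{i=1}^{n} \underbrace{p(x_i \mid y_i)}_{\text{sentence-level reverse translation}}
```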
DyKgChat: Benchmarking Dialogue Generation Grounding on Dynamic
Knowledge Graphs | Data-driven, knowledge-grounded neural conversation models are capable of
generating more informative responses. However, these models have not yet
demonstrated that they can zero-shot adapt to updated, unseen knowledge graphs.
This paper proposes a new task of applying dynamic knowledge graphs in
neural conversation models and presents a novel TV series conversation corpus
(DyKgChat) for the task. Our new task and corpus aid in understanding the
influence of dynamic knowledge graphs on response generation. Also, we propose
a preliminary model that selects an output from two networks at each time step:
a sequence-to-sequence model (Seq2Seq) and a multi-hop reasoning model, in
order to support dynamic knowledge graphs. To benchmark this new task and
evaluate the capability of adaptation, we introduce several evaluation metrics
and the experiments show that our proposed approach outperforms previous
knowledge-grounded conversation models. The proposed corpus and model can
motivate future research directions.
| 2,019 | Computation and Language |
Essentia: Mining Domain-Specific Paraphrases with Word-Alignment Graphs | Paraphrases are important linguistic resources for a wide variety of NLP
applications. Many techniques for automatic paraphrase mining from general
corpora have been proposed. While these techniques are successful at
discovering generic paraphrases, they often fail to identify domain-specific
paraphrases (e.g., {staff, concierge} in the hospitality domain). This is
because current techniques are often based on statistical methods, while
domain-specific corpora are too small to fit statistical methods. In this
paper, we present an unsupervised graph-based technique to mine paraphrases
from a small set of sentences that roughly share the same topic or intent. Our
system, Essentia, relies on word-alignment techniques to create a
word-alignment graph that merges and organizes tokens from input sentences. The
resulting graph is then used to generate candidate paraphrases. We demonstrate
that our system obtains high-quality paraphrases, as evaluated by crowd
workers. We further show that the majority of the identified paraphrases are
domain-specific and thus complement existing paraphrase databases.
| 2,019 | Computation and Language |
Learning to estimate label uncertainty for automatic radiology report
parsing | Bootstrapping labels from radiology reports has become the scalable
alternative to provide inexpensive ground truth for medical imaging. Because of
the domain specific nature, state-of-the-art report labeling tools are
predominantly rule-based. These tools, however, typically yield a binary 0 or 1
prediction that indicates the presence or absence of abnormalities. These hard
targets are then used as ground truth to train image models in the downstream,
forcing models to express a high degree of certainty even in cases where
specificity is low. This could negatively impact the statistical efficiency of
image models. We address such an issue by training a Bidirectional Long-Short
Term Memory Network to augment heuristic-based discrete labels of X-ray reports
from all body regions and achieve performance comparable to or better than
domain-specific NLP, but with additional uncertainty estimates which enable
finer downstream image model training.
| 2,019 | Computation and Language |
State-of-the-Art Speech Recognition Using Multi-Stream Self-Attention
With Dilated 1D Convolutions | Self-attention has been a huge success for many downstream tasks in NLP,
which led to exploration of applying self-attention to speech problems as well.
The efficacy of self-attention in speech applications, however, has not yet been
fully realized, since it is challenging to handle highly correlated speech frames in
the context of self-attention. In this paper we propose a new neural network
model architecture, namely multi-stream self-attention, to address the issue
and thus make the self-attention mechanism more effective for speech recognition.
The proposed model architecture consists of parallel streams of self-attention
encoders, and each stream has layers of 1D convolutions with dilated kernels
whose dilation rates are unique to each stream, followed by a self-attention
layer. The self-attention mechanism in each stream pays attention to only one
resolution of input speech frames and the attentive computation can be more
efficient. In a later stage, outputs from all the streams are concatenated and then
linearly projected to the final embedding. By stacking the proposed
multi-stream self-attention encoder blocks and rescoring the resultant lattices
with neural network language models, we achieve the word error rate of 2.2% on
the test-clean dataset of the LibriSpeech corpus, the best number reported thus
far on the dataset.
| 2,019 | Computation and Language |
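The architecture in the entry above can be sketched roughly as follows. This is a simplified illustration only (no residual connections, layer normalization, or lattice rescoring); the dimensions, number of convolutions, and dilation rates are hypothetical, and `batch_first` attention assumes PyTorch 1.9 or newer.

```python
import torch
import torch.nn as nn

class SelfAttentionStream(nn.Module):
    """One stream: dilated 1D convolutions with a single dilation rate,
    followed by a self-attention layer, as described in the abstract above."""
    def __init__(self, dim, dilation, n_convs=3, kernel=3, heads=4):
        super().__init__()
        pad = dilation * (kernel - 1) // 2          # keeps the time length unchanged
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel, padding=pad, dilation=dilation)
            for _ in range(n_convs))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                            # x: (batch, time, dim)
        h = x.transpose(1, 2)                        # Conv1d expects (batch, dim, time)
        for conv in self.convs:
            h = torch.relu(conv(h))
        h = h.transpose(1, 2)
        out, _ = self.attn(h, h, h)
        return out

class MultiStreamEncoderBlock(nn.Module):
    """Parallel streams with unique dilation rates, concatenated and projected."""
    def __init__(self, dim, dilations=(1, 2, 3)):
        super().__init__()
        self.streams = nn.ModuleList(SelfAttentionStream(dim, d) for d in dilations)
        self.proj = nn.Linear(dim * len(dilations), dim)

    def forward(self, x):
        return self.proj(torch.cat([s(x) for s in self.streams], dim=-1))
```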
Speech-to-speech Translation between Untranscribed Unknown Languages | In this paper, we explore a method for training speech-to-speech translation
tasks without any transcription or linguistic supervision. Our proposed method
consists of two steps. First, we train and generate discrete representations
with unsupervised term discovery using a discrete quantized autoencoder. Second,
we train a sequence-to-sequence model that directly maps the source language
speech to the target language's discrete representation. Our proposed method
can directly generate target speech without any auxiliary or pre-training steps
with a source or target transcription. To the best of our knowledge, this is
the first work that performed pure speech-to-speech translation between
untranscribed unknown languages.
| 2,019 | Computation and Language |
Abstractive Dialog Summarization with Semantic Scaffolds | The demand for abstractive dialog summary is growing in real-world
applications. For example, customer service centers or hospitals would like to
summarize customer service interactions and doctor-patient interactions. However,
few researchers have explored abstractive summarization on dialogs due to the lack
of suitable datasets. We propose an abstractive dialog summarization dataset
based on MultiWOZ. If we directly apply previous state-of-the-art document
summarization methods on dialogs, there are two significant drawbacks: the
informative entities such as restaurant names are difficult to preserve, and
the contents from different dialog domains are sometimes mismatched. To address
these two drawbacks, we propose the Scaffold Pointer Network (SPNet) to utilize the
existing annotation on speaker role, semantic slot and dialog domain. SPNet
incorporates these semantic scaffolds for dialog summarization. Since ROUGE
cannot capture the two drawbacks mentioned, we also propose a new evaluation
metric that considers critical informative entities in the text. On MultiWOZ,
our proposed SPNet outperforms state-of-the-art abstractive summarization
methods on all the automatic and human evaluation metrics.
| 2,019 | Computation and Language |
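The entity-focused evaluation idea mentioned in the entry above can be illustrated with a generic sketch like the one below. This is not the paper's metric; it only shows the general notion of checking whether critical informative entities (e.g. annotated slot values such as restaurant names) survive in the generated summary.

```python
from typing import List

def critical_entity_score(generated: str, reference_entities: List[str]) -> float:
    """Illustrative only: fraction of critical informative entities that appear
    verbatim in the generated summary. A sketch of the idea, not the exact
    metric defined in the paper."""
    if not reference_entities:
        return 1.0
    gen = generated.lower()
    hits = sum(1 for entity in reference_entities if entity.lower() in gen)
    return hits / len(reference_entities)

# Hypothetical usage:
# critical_entity_score("booked a table at pizza hut for 7pm", ["Pizza Hut", "7pm"])  # -> 1.0
```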
BookQA: Stories of Challenges and Opportunities | We present a system for answering questions based on the full text of books
(BookQA), which first selects book passages given a question at hand, and then
uses a memory network to reason and predict an answer. To improve
generalization, we pretrain our memory network using artificial questions
generated from book sentences. We experiment with the recently published
NarrativeQA corpus, on the subset of Who questions, which expect book
characters as answers. We experimentally show that BERT-based retrieval and
pretraining improve over baseline results significantly. At the same time, we
confirm that NarrativeQA is a highly challenging data set, and that there is a
need for novel research in order to achieve high-precision BookQA results. We
analyze some of the bottlenecks of the current approach, and we argue that more
research is needed on text representation, retrieval of relevant passages, and
reasoning, including commonsense knowledge.
| 2,019 | Computation and Language |
Clinical Text Generation through Leveraging Medical Concept and
Relations | With a neural sequence generation model, this study aims to develop a method
of writing patient clinical texts given a brief medical history. As a
proof of concept, we demonstrate that medical concept embeddings can be put to
use in clinical text generation. Our model was based on the
Sequence-to-Sequence architecture and trained with a large set of de-identified
clinical text data. The quantitative results show that our concept embedding
method decreased the perplexity of the baseline architecture. We also discuss
results from a human evaluation performed by medical doctors.
| 2,019 | Computation and Language |
Exploiting BERT for End-to-End Aspect-based Sentiment Analysis | In this paper, we investigate the modeling power of contextualized embeddings
from pre-trained language models, e.g. BERT, on the E2E-ABSA task.
Specifically, we build a series of simple yet insightful neural baselines to
deal with E2E-ABSA. The experimental results show that even with a simple
linear classification layer, our BERT-based architecture can outperform
state-of-the-art works. In addition, we standardize the comparative study by
consistently utilizing a hold-out validation dataset for model selection, a practice
largely ignored by previous works. Therefore, our work can serve as a
BERT-based benchmark for E2E-ABSA.
| 2,019 | Computation and Language |
Hierarchical Multi-Task Natural Language Understanding for Cross-domain
Conversational AI: HERMIT NLU | We present a new neural architecture for wide-coverage Natural Language
Understanding in Spoken Dialogue Systems. We develop a hierarchical multi-task
architecture, which delivers a multi-layer representation of sentence meaning
(i.e., Dialogue Acts and Frame-like structures). The architecture is a
hierarchy of self-attention mechanisms and BiLSTM encoders followed by CRF
tagging layers. We describe a variety of experiments, showing that our approach
obtains promising results on a dataset annotated with Dialogue Acts and Frame
Semantics. Moreover, we demonstrate its applicability to a different, publicly
available NLU dataset annotated with domain-specific intents and corresponding
semantic roles, providing overall performance higher than state-of-the-art
tools such as RASA, Dialogflow, LUIS, and Watson. For example, we show an
average 4.45% improvement in entity tagging F-score over Rasa, Dialogflow and
LUIS.
| 2,019 | Computation and Language |
A CCG-based Compositional Semantics and Inference System for
Comparatives | Comparative constructions play an important role in natural language
inference. However, attempts to study semantic representations and logical
inferences for comparatives from the computational perspective are not well
developed, due to the complexity of their syntactic structures and inference
patterns. In this study, using a framework based on Combinatory Categorial
Grammar (CCG), we present a compositional semantics that maps various
comparative constructions in English to semantic representations and introduces
an inference system that effectively handles logical inference with
comparatives, including those involving numeral adjectives, antonyms, and
quantification. We evaluate the performance of our system on the FraCaS test
suite and show that the system can handle a variety of complex logical
inferences with comparatives.
| 2,019 | Computation and Language |
SummAE: Zero-Shot Abstractive Text Summarization using Length-Agnostic
Auto-Encoders | We propose an end-to-end neural model for zero-shot abstractive text
summarization of paragraphs, and introduce a benchmark task, ROCSumm, based on
ROCStories, a subset for which we collected human summaries. In this task,
five-sentence stories (paragraphs) are summarized with one sentence, using
human summaries only for evaluation. We show results for extractive and human
baselines to demonstrate a large abstractive gap in performance. Our model,
SummAE, consists of a denoising auto-encoder that embeds sentences and
paragraphs in a common space, from which either can be decoded. Summaries for
paragraphs are generated by decoding a sentence from the paragraph
representations. We find that traditional sequence-to-sequence auto-encoders
fail to produce good summaries and describe how specific architectural choices
and pre-training techniques can significantly improve performance,
outperforming extractive baselines. The data, training, evaluation code, and
best model weights are open-sourced.
| 2,019 | Computation and Language |
Neural Word Decomposition Models for Abusive Language Detection | User generated text on social media often suffers from undesired
characteristics, including hate speech, abusive language, and insults, that are
targeted to attack or abuse a specific group of people. Such text is often
written differently from traditional text such as news, involving either
explicit mention of abusive words, obfuscated words, and typographical errors, or
implicit abuse, i.e., indicating or targeting negative stereotypes. Thus,
processing this text poses several robustness challenges when we apply natural
language processing techniques developed for traditional text. For example,
using word or token based models to process such text can treat two spelling
variants of a word as two different words. Following recent work, we analyze
how character, subword and byte pair encoding (BPE) models can help address some of
the challenges posed by user generated text. In our work, we analyze the
effectiveness of each of the above techniques, compare and contrast various
word decomposition techniques when used in combination with others. We
experiment with finetuning large pretrained language models, and demonstrate
their robustness to domain shift by studying Wikipedia attack, toxicity and
Twitter hate speech datasets.
| 2,019 | Computation and Language |
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and
lighter | As Transfer Learning from large-scale pre-trained models becomes more
prevalent in Natural Language Processing (NLP), operating these large models in
on-the-edge and/or under constrained computational training or inference
budgets remains challenging. In this work, we propose a method to pre-train a
smaller general-purpose language representation model, called DistilBERT, which
can then be fine-tuned with good performances on a wide range of tasks like its
larger counterparts. While most prior work investigated the use of distillation
for building task-specific models, we leverage knowledge distillation during
the pre-training phase and show that it is possible to reduce the size of a
BERT model by 40%, while retaining 97% of its language understanding
capabilities and being 60% faster. To leverage the inductive biases learned by
larger models during pre-training, we introduce a triple loss combining
language modeling, distillation and cosine-distance losses. Our smaller, faster
and lighter model is cheaper to pre-train and we demonstrate its capabilities
for on-device computations in a proof-of-concept experiment and a comparative
on-device study.
| 2,020 | Computation and Language |
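The triple loss named in the entry above (language modeling, distillation, and a cosine-distance term) can be sketched as below. The loss weights and the temperature are placeholders, not the released training configuration.

```python
import torch
import torch.nn.functional as F

def distillation_triple_loss(student_logits, teacher_logits, mlm_labels,
                             student_hidden, teacher_hidden, T=2.0,
                             alpha_ce=5.0, alpha_mlm=2.0, alpha_cos=1.0):
    """Combines (i) soft-target distillation with temperature T, (ii) the usual
    masked language modeling loss, and (iii) a cosine loss aligning student and
    teacher hidden states."""
    # (i) distillation: KL between softened teacher and student distributions
    ce_distill = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                          F.softmax(teacher_logits / T, dim=-1),
                          reduction="batchmean") * (T ** 2)
    # (ii) masked LM cross-entropy (positions without a mask label carry -100)
    mlm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                          mlm_labels.view(-1), ignore_index=-100)
    # (iii) cosine embedding loss pulling hidden states together
    target = torch.ones(student_hidden.size(0) * student_hidden.size(1),
                        device=student_hidden.device)
    cos = F.cosine_embedding_loss(student_hidden.view(-1, student_hidden.size(-1)),
                                  teacher_hidden.view(-1, teacher_hidden.size(-1)),
                                  target)
    return alpha_ce * ce_distill + alpha_mlm * mlm + alpha_cos * cos
```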
Cracking the Contextual Commonsense Code: Understanding Commonsense
Reasoning Aptitude of Deep Contextual Representations | Pretrained deep contextual representations have advanced the state-of-the-art
on various commonsense NLP tasks, but we lack a concrete understanding of the
capability of these models. Thus, we investigate and challenge several aspects
of BERT's commonsense representation abilities. First, we probe BERT's ability
to classify various object attributes, demonstrating that BERT shows a strong
ability in encoding various commonsense features in its embedding space, but is
still deficient in many areas. Next, we show that, by augmenting BERT's
pretraining data with additional data related to the deficient attributes, we
are able to improve performance on a downstream commonsense reasoning task
while using a minimal amount of data. Finally, we develop a method of
fine-tuning knowledge graphs embeddings alongside BERT and show the continued
importance of explicit knowledge graphs.
| 2,019 | Computation and Language |
Identifying Nuances in Fake News vs. Satire: Using Semantic and
Linguistic Cues | The blurry line between nefarious fake news and protected-speech satire has
been a notorious struggle for social media platforms. Further to the efforts of
reducing exposure to misinformation on social media, purveyors of fake news
have begun to masquerade as satire sites to avoid being demoted. In this work,
we address the challenge of automatically classifying fake news versus satire.
Previous work has studied whether fake news and satire can be distinguished
based on language differences. Contrary to fake news, satire stories are
usually humorous and carry some political or social message. We hypothesize
that these nuances could be identified using semantic and linguistic cues.
Consequently, we train a machine learning method using semantic representation,
with a state-of-the-art contextual language model, and with linguistic features
based on textual coherence metrics. Empirical evaluation attests to the merits
of our approach compared to the language-based baseline and sheds light on the
nuances between fake news and satire. As avenues for future work, we consider
studying additional linguistic features related to the humor aspect, and
enriching the data with current news events, to help identify a political or
social message.
| 2,019 | Computation and Language |
Linking artificial and human neural representations of language | What information from an act of sentence understanding is robustly
represented in the human brain? We investigate this question by comparing
sentence encoding models on a brain decoding task, where the sentence that an
experimental participant has seen must be predicted from the fMRI signal evoked
by the sentence. We take a pre-trained BERT architecture as a baseline sentence
encoding model and fine-tune it on a variety of natural language understanding
(NLU) tasks, asking which lead to improvements in brain-decoding performance.
We find that none of the sentence encoding tasks tested yield significant
increases in brain decoding performance. Through further task ablations and
representational analyses, we find that tasks which produce syntax-light
representations yield significant improvements in brain decoding performance.
Our results constrain the space of NLU models that could best account for human
neural representations of language, but also suggest limits on the possibility
of decoding fine-grained syntactic information from fMRI human neuroimaging.
| 2,019 | Computation and Language |
Extracting UMLS Concepts from Medical Text Using General and
Domain-Specific Deep Learning Models | Entity recognition is a critical first step to a number of clinical NLP
applications, such as entity linking and relation extraction. We present the
first attempt to apply state-of-the-art entity recognition approaches on a
newly released dataset, MedMentions. This dataset contains over 4000 biomedical
abstracts, annotated for UMLS semantic types. In comparison to existing
datasets, MedMentions contains a far greater number of entity types, and thus
represents a more challenging but realistic scenario in a real-world setting.
We explore a number of relevant dimensions, including the use of contextual
versus non-contextual word embeddings, general versus domain-specific
unsupervised pre-training, and different deep learning architectures. We
contrast our results against the well-known i2b2 2010 entity recognition
dataset, and propose a new method to combine general and domain-specific
information. While producing a state-of-the-art result for the i2b2 2010 task
(F1 = 0.90), our results on MedMentions are significantly lower (F1 = 0.63),
suggesting there is still plenty of opportunity for improvement on this new
data.
| 2,019 | Computation and Language |
Neural Zero-Inflated Quality Estimation Model For Automatic Speech
Recognition System | The performance of automatic speech recognition (ASR) systems is usually
evaluated with the word error rate (WER) metric when manually transcribed
data are provided; such data are, however, expensive to obtain in real
scenarios. In addition, the empirical distribution of WER for most ASR systems
usually tends to put a significant mass near zero, making it difficult to
simulate with a single continuous distribution. In order to address the two
issues of ASR quality estimation (QE), we propose a novel neural zero-inflated
model to predict the WER of the ASR result without transcripts. We design a
neural zero-inflated beta regression on top of a bidirectional transformer
language model conditional on speech features (speech-BERT). We adopt the
pre-training strategy of token level mask language modeling for speech-BERT as
well, and further fine-tune with our zero-inflated layer for the mixture of
discrete and continuous outputs. The experimental results show that our
approach achieves better performance on WER prediction in the metrics of
Pearson and MAE, compared with most existing quality estimation algorithms for
ASR or machine translation.
| 2,020 | Computation and Language |
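The zero-inflated Beta likelihood described in the entry above can be written as a loss roughly as follows. This sketches only the output layer's negative log-likelihood, not the full speech-BERT model; the parameterization (a mixing weight plus Beta shape parameters per utterance) is an assumption consistent with the abstract.

```python
import torch
from torch.distributions import Beta

def zero_inflated_beta_nll(pi_zero, alpha, beta, wer, eps=1e-6):
    """Negative log-likelihood of a zero-inflated Beta over WER in [0, 1):
    with probability pi_zero the WER is exactly 0, otherwise it follows
    Beta(alpha, beta). All arguments are per-utterance tensors predicted by
    the regression head."""
    is_zero = (wer <= 0).float()
    # clamp keeps log_prob finite; the term is masked out for the zero cases
    beta_logp = Beta(alpha, beta).log_prob(wer.clamp(min=eps, max=1 - eps))
    logp = is_zero * torch.log(pi_zero + eps) \
         + (1 - is_zero) * (torch.log(1 - pi_zero + eps) + beta_logp)
    return -logp.mean()
```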
Hitachi at MRP 2019: Unified Encoder-to-Biaffine Network for
Cross-Framework Meaning Representation Parsing | This paper describes the proposed system of the Hitachi team for the
Cross-Framework Meaning Representation Parsing (MRP 2019) shared task. In this
shared task, the participating systems were asked to predict nodes, edges and
their attributes for five frameworks, each with different order of
"abstraction" from input tokens. We proposed a unified encoder-to-biaffine
network for all five frameworks, which effectively incorporates a shared
encoder to extract rich input features, decoder networks to generate anchorless
nodes in UCCA and AMR, and biaffine networks to predict edges. Our system was
ranked fifth with a macro-averaged MRP F1 score of 0.7604, and outperformed
the baseline unified transition-based MRP. Furthermore, post-evaluation
experiments showed that we can boost the performance of the proposed system by
incorporating multi-task learning, whereas the baseline could not. These results imply
the efficacy of incorporating the biaffine network into the shared architecture for
MRP, and that learning heterogeneous meaning representations at once can boost
system performance.
| 2,019 | Computation and Language |
Data-Efficient Goal-Oriented Conversation with Dialogue Knowledge
Transfer Networks | Goal-oriented dialogue systems are now being widely adopted in industry where
it is of key importance to maintain a rapid prototyping cycle for new products
and domains. Data-driven dialogue system development has to be adapted to meet
this requirement --- therefore, reducing the amount of data and annotations
necessary for training such systems is a central research problem.
In this paper, we present the Dialogue Knowledge Transfer Network (DiKTNet),
a state-of-the-art approach to goal-oriented dialogue generation which only
uses a few example dialogues (i.e. few-shot learning), none of which has to be
annotated. We achieve this by performing a 2-stage training. Firstly, we
perform unsupervised dialogue representation pre-training on a large source of
goal-oriented dialogues in multiple domains, the MetaLWOz corpus. Secondly, at
the transfer stage, we train DiKTNet using this representation together with 2
other textual knowledge sources with different levels of generality: ELMo
encoder and the main dataset's source domains.
Our main dataset is the Stanford Multi-Domain dialogue corpus. We evaluate
our model on it in terms of BLEU and Entity F1 scores, and show that our
approach significantly and consistently improves upon a series of baseline
models as well as over the previous state-of-the-art dialogue generation model,
ZSDG. The improvement upon the latter --- up to 10% in Entity F1 and an
average of 3% in BLEU score --- is achieved using only the equivalent of 10% of
ZSDG's in-domain training data.
| 2,019 | Computation and Language |
Topic-aware Pointer-Generator Networks for Summarizing Spoken
Conversations | Due to the lack of publicly available resources, conversation summarization
has received far less attention than text summarization. As the purpose of
conversations is to exchange information between at least two interlocutors,
key information about a certain topic is often scattered and spanned across
multiple utterances and turns from different speakers. This phenomenon is more
pronounced during spoken conversations, where speech characteristics such as
backchanneling and false-starts might interrupt the topical flow. Moreover,
topic diffusion and (intra-utterance) topic drift are also more common in
human-to-human conversations. Such linguistic characteristics of dialogue
topics make sentence-level extractive summarization approaches used in spoken
documents ill-suited for summarizing conversations. Pointer-generator networks
have effectively demonstrated their strength at integrating extractive and
abstractive capabilities through neural modeling in text summarization. To the
best of our knowledge, to date no one has adopted them for summarizing
conversations. In this work, we propose a topic-aware architecture to exploit
the inherent hierarchical structure in conversations to further adapt the
pointer-generator model. Our approach significantly outperforms competitive
baselines, achieves more efficient learning outcomes, and attains more robust
performance.
| 2,019 | Computation and Language |
TexTrolls: Identifying Russian Trolls on Twitter from a Textual
Perspective | Newly emerging suspicious online users, usually called trolls, are
one of the main sources of hateful, fake, and deceptive online messages. Some
agendas utilize these harmful users to spread incitement tweets and, as a
consequence, deceive the audience. The challenge in detecting such
accounts is that they conceal their identities, which makes them hard to identify
in social media using only their social
network information. Therefore, in this paper, we propose a text-based approach
to detect the online trolls such as those that were discovered during the US
2016 presidential elections. Our approach is mainly based on textual features
which utilize thematic information, and profiling features to identify the
accounts from their way of writing tweets. We derive the thematic information
in an unsupervised way and show that coupling it with the textual features
enhances the performance of the proposed model. In addition, we find that the
proposed profiling features perform best compared to the textual features.
| 2,019 | Computation and Language |
Mapping (Dis-)Information Flow about the MH17 Plane Crash | Digital media enables not only fast sharing of information, but also
disinformation. One prominent case of an event leading to circulation of
disinformation on social media is the MH17 plane crash. Studies analysing the
spread of information about this event on Twitter have focused on small,
manually annotated datasets, or used proxies for data annotation. In this work,
we examine to what extent text classifiers can be used to label data for
subsequent content analysis, in particular we focus on predicting pro-Russian
and pro-Ukrainian Twitter content related to the MH17 plane crash. Even though
we find that a neural classifier improves over a hashtag based baseline,
labeling pro-Russian and pro-Ukrainian content with high precision remains a
challenging problem. We provide an error analysis underlining the difficulty of
the task and identify factors that might help improve classification in future
work. Finally, we show how the classifier can facilitate the annotation task
for human annotators.
| 2,019 | Computation and Language |
Can Sentiment Analysis Reveal Structure in a Plotless Novel? | Modernist novels are thought to break with traditional plot structure. In
this paper, we test this theory by applying Sentiment Analysis to one of the
most famous modernist novels, To the Lighthouse by Virginia Woolf. We first
assess Sentiment Analysis in light of the critique that it cannot adequately
account for literary language: we use a unique statistical comparison to
demonstrate that even simple lexical approaches to Sentiment Analysis are
surprisingly effective. We then use the Syuzhet.R package to explore
similarities and differences across modeling methods. This comparative
approach, when paired with literary close reading, can offer interpretive
clues. To our knowledge, we are the first to undertake a hybrid model that
fully leverages the strengths of both computational analysis and close reading.
This hybrid model raises new questions for the literary critic, such as how to
interpret relative versus absolute emotional valence and how to take into
account subjective identification. Our finding is that while To the Lighthouse
does not replicate a plot centered around a traditional hero, it does reveal an
underlying emotional structure distributed between characters - what we term a
distributed heroine model. This finding is innovative in the field of modernist
and narrative studies and demonstrates that a hybrid method can yield
significant discoveries.
| 2,020 | Computation and Language |
Towards Understanding of Medical Randomized Controlled Trials by
Conclusion Generation | Randomized controlled trials (RCTs) represent the paramount evidence of
clinical medicine. Using machines to interpret the massive number of RCTs has
the potential to aid clinical decision-making. We propose an RCT conclusion
generation task from the PubMed 200k RCT sentence classification dataset to
examine the effectiveness of sequence-to-sequence models on understanding RCTs.
We first build a pointer-generator baseline model for conclusion generation.
Then we fine-tune the state-of-the-art GPT-2 language model, which is
pre-trained with general domain data, for this new medical domain task. Both
automatic and human evaluation show that our GPT-2 fine-tuned models achieve
improved quality and correctness in the generated conclusions compared to the
baseline pointer-generator model. Further inspection points out the limitations
of this current approach and future directions to explore.
| 2,019 | Computation and Language |
Complex networks based word embeddings | Most of the time, the first step to learn word embeddings is to build a word
co-occurrence matrix. As such matrices are equivalent to graphs, complex
networks theory can naturally be used to deal with such data. In this paper, we
consider applying community detection, a main tool of this field, to the
co-occurrence matrix corresponding to a huge corpus. Community structure is
used as a way to reduce the dimensionality of the initial space. Using this
community structure, we propose a method to extract word embeddings that are
comparable to the state-of-the-art approaches.
| 2,019 | Computation and Language |
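A toy sketch of the idea in the entry above is shown below: build a word co-occurrence graph, detect communities, and represent each word by its co-occurrence mass over communities (one dimension per community). This is one plausible instantiation for illustration only; the paper's corpus scale and exact embedding construction are not reproduced here.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def community_embeddings(sentences, window=2):
    """sentences: iterable of token lists. Returns {word: vector}, where each
    dimension is the word's total co-occurrence weight into one community."""
    g = nx.Graph()
    for tokens in sentences:
        for i, w in enumerate(tokens):
            for v in tokens[i + 1:i + 1 + window]:
                if w != v:
                    prev = g.get_edge_data(w, v, {"weight": 0})["weight"]
                    g.add_edge(w, v, weight=prev + 1)
    communities = list(greedy_modularity_communities(g, weight="weight"))
    embeddings = {}
    for w in g.nodes:
        vec = []
        for com in communities:
            vec.append(sum(g[w][v]["weight"] for v in g.neighbors(w) if v in com))
        embeddings[w] = vec
    return embeddings
```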
Modeling Color Terminology Across Thousands of Languages | There is an extensive history of scholarship into what constitutes a "basic"
color term, as well as a broadly attested acquisition sequence of basic color
terms across many languages, as articulated in the seminal work of Berlin and
Kay (1969). This paper employs a set of diverse measures on massively
cross-linguistic data to operationalize and critique the Berlin and Kay color
term hypotheses. Collectively, the 14 empirically-grounded computational
linguistic metrics we design---as well as their aggregation---correlate
strongly with both the Berlin and Kay basic/secondary color term partition
(gamma=0.96) and their hypothesized universal acquisition sequence. The
measures and results provide further empirical evidence from computational
linguistics in support of their claims, as well as additional nuance: they
suggest treating the partition as a spectrum instead of a dichotomy.
| 2,019 | Computation and Language |
Semi-Supervised Generative Modeling for Controllable Speech Synthesis | We present a novel generative model that combines state-of-the-art neural
text-to-speech (TTS) with semi-supervised probabilistic latent variable models.
By providing partial supervision to some of the latent variables, we are able
to force them to take on consistent and interpretable purposes, which
previously hasn't been possible with purely unsupervised TTS models. We
demonstrate that our model is able to reliably discover and control important
but rarely labelled attributes of speech, such as affect and speaking rate,
with as little as 1% (30 minutes) supervision. Even at such low supervision
levels we do not observe a degradation of synthesis quality compared to a
state-of-the-art baseline. Audio samples are available on the web.
| 2,019 | Computation and Language |
Character Feature Engineering for Japanese Word Segmentation | On word segmentation problems, machine learning architecture engineering
often draws attention. The problem representation itself, however, has remained
almost static as either word lattice ranking or character sequence tagging, for
at least two decades. The latter of-ten shows stronger predictive power than
the former for out-of-vocabulary (OOV) issue. When the issue escalating to
rapid adaptation, which is a common scenario for industrial applications,
active learning of partial annotations or re-training with additional lexical
re-sources is usually applied, however, from a somewhat word-based perspective.
Not only it is uneasy for end-users to comply with linguistically consistent
word boundary decisions, but also the risk/cost of forking models permanently
with estimated weights is seldom affordable. To overcome the obstacle, this
work provides an alternative, which uses linguistic intuition about character
compositions, such that a sophisticated feature set and its derived scheme can
enable dynamic lexicon expansion with the model remaining intact. Experiment
results suggest that the proposed solution, with or without external lexemes,
performs competitively in terms of F1 score and OOV recall across various
datasets.
| 2,019 | Computation and Language |
Distilling BERT into Simple Neural Networks with Unlabeled Transfer Data | Recent advances in pre-training huge models on large amounts of text through
self supervision have obtained state-of-the-art results in various natural
language processing tasks. However, these huge and expensive models are
difficult to use in practice for downstream tasks. Some recent efforts use
knowledge distillation to compress these models. However, we see a gap between
the performance of the smaller student models as compared to that of the large
teacher. In this work, we leverage large amounts of in-domain unlabeled
transfer data in addition to a limited amount of labeled training instances to
bridge this gap for distilling BERT. We show that simple RNN based student
models, even with hard distillation, can perform on par with the huge teachers
given the transfer set. The student performance can be further improved with
soft distillation and leveraging teacher intermediate representations. We show
that our student models can compress the huge teacher by up to 26x while still
matching or even marginally exceeding the teacher performance in low-resource
settings with a small amount of labeled data. Additionally, for the multilingual
extension of this work with XtremeDistil (Mukherjee and Hassan Awadallah,
2020), we demonstrate massive distillation of multilingual BERT-like teacher
models by up to 35x in terms of parameter compression and 51x in terms of
latency speedup for batch inference while retaining 95% of its F1-score for NER
over 41 languages.
| 2,020 | Computation and Language |
DialectGram: Detecting Dialectal Variation at Multiple Geographic
Resolutions | Several computational models have been developed to detect and analyze
dialect variation in recent years. Most of these models assume a predefined set
of geographical regions over which they detect and analyze dialectal variation.
However, dialect variation occurs at multiple levels of geographic resolution,
ranging from cities within a state to states within a country and countries
across continents. In this work, we propose a model that enables
detection of dialectal variation at multiple levels of geographic resolution
obviating the need for an a-priori definition of the resolution level. Our method,
DialectGram, learns dialect-sensitive word embeddings while being agnostic of
the geographic resolution. Specifically, it only requires one-time training and
enables analysis of dialectal variation at a chosen resolution post-hoc -- a
significant departure from prior models which need to be re-trained whenever
the pre-defined set of regions changes. Furthermore, DialectGram explicitly
models senses, thus enabling one to estimate the proportion of each sense usage
in any given region. Finally, we quantitatively evaluate our model against
other baselines on a new evaluation dataset DialectSim (in English) and show
that DialectGram can effectively model linguistic variation.
| 2,020 | Computation and Language |
Multi-level Gated Recurrent Neural Network for Dialog Act Classification | In this paper we focus on the problem of dialog act (DA) labelling. This
problem has recently attracted a lot of attention as it is an important
sub-part of an automatic question answering system, which is currently in great
demand. Traditional methods tend to see this problem as a sequence labelling
task and deal with it by applying classifiers with rich features. Most of the
current neural network models still omit the sequential information in the
conversation. Hence, we apply a novel multi-level gated recurrent neural
network (GRNN) with non-textual information to predict the DA tag. Our model
not only utilizes textual information, but also makes use of non-textual and
contextual information. In comparison, our model shows a significant
improvement over previous work on the Switchboard Dialog Act (SWDA) task by over
6%.
| 2,019 | Computation and Language |
Modeling Confidence in Sequence-to-Sequence Models | Recently, significant improvements have been achieved in various natural
language processing tasks using neural sequence-to-sequence models. While
aiming for the best generation quality is important, ultimately it is also
necessary to develop models that can assess the quality of their output.
In this work, we propose to use the similarity between training and test
conditions as a measure for models' confidence. We investigate methods solely
using the similarity as well as methods combining it with the posterior
probability. While traditionally only target tokens are annotated with
confidence measures, we also investigate methods to annotate source tokens with
confidence. By learning an internal alignment model, we can significantly
improve confidence projection over using state-of-the-art external alignment
tools. We evaluate the proposed methods on downstream confidence estimation for
machine translation (MT). We show improvements on segment-level confidence
estimation as well as on confidence estimation for source tokens. In addition,
we show that the same methods can also be applied to other tasks using
sequence-to-sequence models. On the automatic speech recognition (ASR) task, we
are able to find 60% of the errors by looking at 20% of the data.
| 2,019 | Computation and Language |
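The confidence measure described in the entry above (similarity between training and test conditions) can be sketched as below. The abstract does not specify the exact similarity function, so a k-nearest-neighbor cosine similarity over encoder states is used here purely for illustration; it is one plausible instantiation, not the authors' method.

```python
import numpy as np

def similarity_confidence(test_states, train_states, k=5):
    """Score each test-time encoder state by its similarity to the k nearest
    training-time encoder states (higher = closer to training conditions,
    hence more confident).

    test_states:  (n_test, dim) array
    train_states: (n_train, dim) array
    Returns (n_test,) confidence scores in [-1, 1]."""
    def normalize(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    t, tr = normalize(test_states), normalize(train_states)
    sims = t @ tr.T                          # (n_test, n_train) cosine similarities
    topk = np.sort(sims, axis=1)[:, -k:]     # k most similar training states
    return topk.mean(axis=1)
```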
Template-free Data-to-Text Generation of Finnish Sports News | News articles such as sports game reports are often thought to closely follow
the underlying game statistics, but in practice they contain a notable amount
of background knowledge, interpretation, insight into the game, and quotes that
are not present in the official statistics. This poses a challenge for
automated data-to-text news generation with real-world news corpora as training
data. We report on the development of a corpus of Finnish ice hockey news,
edited to be suitable for training of end-to-end news generation methods, as
well as demonstrate generation of text, which was judged by journalists to be
relatively close to a viable product. The new dataset and system source code
are available for research purposes at
https://github.com/scoopmatic/finnish-hockey-news-generation-paper.
| 2,019 | Computation and Language |
Detecting Deception in Political Debates Using Acoustic and Textual
Features | We present work on deception detection, where, given a spoken claim, we aim
to predict its factuality. While previous work in the speech community has
relied on recordings from staged setups where people were asked to tell the
truth or to lie and their statements were recorded, here we use real-world
political debates. Thanks to the efforts of fact-checking organizations, it is
possible to obtain annotations for statements in the context of a political
discourse as true, half-true, or false. Starting with such data from the
CLEF-2018 CheckThat! Lab, which was limited to text, we performed alignment to
the corresponding videos, thus producing a multimodal dataset. We further
developed a multimodal deep-learning architecture for the task of deception
detection, which yielded sizable improvements over the state of the art for the
CLEF-2018 Lab task 2. Our experiments show that the use of the acoustic signal
consistently helped to improve the performance compared to using textual and
metadata features only, based on several different evaluation measures. We
release the new dataset to the research community, hoping to help advance the
overall field of multimodal deception detection.
| 2,019 | Computation and Language |
Predicting the Role of Political Trolls in Social Media | We investigate the political roles of "Internet trolls" in social media.
Political trolls, such as the ones linked to the Russian Internet Research
Agency (IRA), have recently gained enormous attention for their ability to sway
public opinion and even influence elections. Analysis of the online traces of
trolls has shown different behavioral patterns, which target different slices
of the population. However, this analysis is manual and labor-intensive, thus
making it impractical as a first-response tool for newly-discovered troll
farms. In this paper, we show how to automate this analysis by using machine
learning in a realistic setting. In particular, we show how to classify trolls
according to their political role ---left, news feed, right--- by using
features extracted from social media, i.e., Twitter, in two scenarios: (i) in a
traditional supervised learning scenario, where labels for trolls are
available, and (ii) in a distant supervision scenario, where labels for trolls
are not available, and we rely on more-commonly-available labels for news
outlets mentioned by the trolls. Technically, we leverage the community
structure and the text of the messages in the online social network of trolls
represented as a graph, from which we extract several types of learned
representations, i.e.,~embeddings, for the trolls. Experiments on the "IRA
Russian Troll" dataset show that our methodology improves over the
state-of-the-art in the first scenario, while providing a compelling case for
the second scenario, which has not been explored in the literature thus far.
| 2,019 | Computation and Language |