Titles (string, lengths 6-220) | Abstracts (string, lengths 37-3.26k) | Years (int64, 1.99k-2.02k) | Categories (string, 1 class) |
---|---|---|---|
Voice command generation using Progressive Wavegans | Generative Adversarial Networks (GANs) have become exceedingly popular in a
wide range of data-driven research fields, due in part to their success in
image generation. Their ability to generate new samples, often from only a
small amount of input data, makes them an exciting research tool in areas with
limited data resources. One less-explored application of GANs is the synthesis
of speech and audio samples. Herein, we propose a set of extensions to the
WaveGAN paradigm, a recently proposed approach for sound generation using GANs.
The aim of these extensions - preprocessing, Audio-to-Audio generation, skip
connections and progressive structures - is to improve the human likeness of
synthetic speech samples. Scores from listening tests with 30 volunteers
demonstrated a moderate improvement (Cohen's d coefficient of 0.65) in human
likeness using the proposed extensions compared to the original WaveGAN
approach.
| 2,019 | Computation and Language |
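For reference, the Cohen's d reported above is the standardized difference between the mean listening-test scores of the extended and baseline systems. A minimal statement of the statistic, assuming the usual pooled standard deviation (the abstract does not spell out the exact pooling used):

```latex
d = \frac{\bar{x}_{\mathrm{ext}} - \bar{x}_{\mathrm{base}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\, s_1^2 + (n_2 - 1)\, s_2^2}{n_1 + n_2 - 2}}
```

By the usual convention, d near 0.5 is a medium effect and d near 0.8 a large one, so the reported 0.65 sits between the two.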
Un duel probabiliste pour départager deux présidents (LIA @
DEFT'2005) | We present a set of probabilistic models applied to binary classification as
defined in the DEFT'05 challenge. The challenge consisted of a mixture of two
different problems in Natural Language Processing: author identification
(a sequence of François Mitterrand's sentences might have been inserted
into a speech by Jacques Chirac) and thematic break detection (the subjects
addressed by the two authors are supposed to be different). Markov chains,
Bayes models and an adaptive process were used to identify the authorship
of these sequences. A probabilistic model of the internal coherence of speeches
was also employed to identify thematic breaks. Adding this model was
shown to improve the results. A comparison with different approaches
demonstrates the superiority of a strategy that combines learning, coherence and
adaptation. Applied to the DEFT'05 test data, the results in terms of precision
(0.890), recall (0.955) and F-score (0.925) are very promising.
| 2,007 | Computation and Language |
Deep Text-to-Speech System with Seq2Seq Model | Recent trends in neural network based text-to-speech/speech synthesis
pipelines have employed recurrent Seq2seq architectures that can synthesize
realistic-sounding speech directly from text characters. These systems, however,
have complex architectures and take a substantial amount of time to train. We
introduce several modifications to these Seq2seq architectures that allow for
faster training while also reducing the complexity of the model architecture.
We show that our proposed model can achieve
attention alignment much faster than previous architectures and that good audio
quality can be achieved with a model that's much smaller in size. Sample audio
available at https://soundcloud.com/gary-wang-23/sets/tts-samples-for-cmpt-419.
| 2,019 | Computation and Language |
Neutron: An Implementation of the Transformer Translation Model and its
Variants | The Transformer translation model is easier to parallelize and provides
better performance compared to recurrent seq2seq models, which makes it popular
in both industry and the research community. In this work, we implement Neutron,
covering the Transformer model and several of its variants from recent
research. It is highly optimized, easy to modify, and provides comparable
performance with interesting features while keeping readability.
| 2,020 | Computation and Language |
Automatic Classification of Pathology Reports using TF-IDF Features | A pathology report is arguably one of the most important documents in
medicine, containing interpretive information about the visual findings from a
patient's biopsy sample. Each pathology report has a retention period of up to
20 years after the treatment of a patient. Cancer registries process and encode
high volumes of free-text pathology reports for surveillance of cancer and
tumor diseases all across the world. In spite of the extremely valuable
information they hold, pathology reports are not used in any systematic way to
facilitate computational pathology. Therefore, in this study, we investigate
automated machine-learning techniques to identify/predict the primary diagnosis
(based on ICD-O code) from pathology reports. We performed experiments by
extracting the TF-IDF features from the reports and classifying them using
three different methods---SVM, XGBoost, and Logistic Regression. We constructed
a new dataset with 1,949 pathology reports arranged into 37 ICD-O categories,
collected from four different primary sites, namely lung, kidney, thymus, and
testis. The reports were manually transcribed into text format after collecting
them as PDF files from NCI Genomic Data Commons public dataset. We subsequently
pre-processed the reports by removing irrelevant textual artifacts produced by
OCR software. The highest classification accuracy we achieved was 92% using the
XGBoost classifier on TF-IDF feature vectors; the linear SVM scored 87%
accuracy. Furthermore, the study shows that TF-IDF vectors are suitable for
highlighting the important keywords within a report, which can be helpful for
cancer research and the diagnostic workflow. The results are encouraging in
demonstrating the potential of machine learning methods for classification and
encoding of pathology reports.
| 2,019 | Computation and Language |
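A minimal sketch of the TF-IDF-plus-classifier setup described in the pathology-report abstract above, assuming scikit-learn; the toy reports, labels, and the use of primary site as the target are placeholders rather than the paper's 1,949-report, 37-class ICD-O data, and XGBoost or a linear SVM could be swapped in for the final estimator.

```python
# Minimal sketch: TF-IDF features + a linear classifier for report classification.
# Toy placeholder reports and labels; not the paper's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "lung adenocarcinoma identified in the biopsy specimen",
    "squamous cell carcinoma of the lung, moderately differentiated",
    "clear cell renal cell carcinoma, Fuhrman grade 2",
    "papillary renal cell carcinoma involving the kidney",
]
labels = ["lung", "lung", "kidney", "kidney"]  # primary site as a stand-in for ICD-O codes

model = make_pipeline(
    TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),  # unigram + bigram TF-IDF features
    LogisticRegression(max_iter=1000),                    # XGBoost or LinearSVC could replace this
)
model.fit(reports, labels)
print(model.predict(["biopsy shows renal cell carcinoma"]))
```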
Complexity-entropy analysis at different levels of organization in
written language | Written language is complex. A written text can be considered an attempt to
convey a meaningful message that ends up constrained by language rules,
dependent on context, and highly redundant in its use of resources. Despite all
these constraints, unpredictability is an essential element of natural
language. Here we present the use of entropic measures to assess the balance
between predictability and surprise in written text. In short, it is possible
to measure innovation and context preservation in a document. It is shown that
this can also be done at the different levels of organization of a text. The
type of analysis presented is reasonably general, and can also be used to
analyze the same balance in other complex messages such as DNA, where a
hierarchy of organizational levels is known to exist.
| 2,019 | Computation and Language |
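A minimal sketch of one such entropic measure, plain Shannon entropy of the symbol distribution computed at two levels of organization (characters and words); the toy string is illustrative and the paper's specific complexity-entropy estimators are not reproduced here.

```python
# Shannon entropy of a text at two levels of organization (characters and words).
# A simple illustration of "predictability vs. surprise"; not the paper's exact estimator.
import math
from collections import Counter

def shannon_entropy(symbols):
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

text = "written language is complex and written language is redundant"
print("character-level entropy (bits/symbol):", shannon_entropy(list(text)))
print("word-level entropy (bits/word):", shannon_entropy(text.split()))
```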
The emergence of number and syntax units in LSTM language models | Recent work has shown that LSTMs trained on a generic language modeling
objective capture syntax-sensitive generalizations such as long-distance number
agreement. We have however no mechanistic understanding of how they accomplish
this remarkable feat. Some have conjectured it depends on heuristics that do
not truly take hierarchical structure into account. We present here a detailed
study of the inner mechanics of number tracking in LSTMs at the single neuron
level. We discover that long-distance number information is largely managed by
two `number units'. Importantly, the behaviour of these units is partially
controlled by other units independently shown to track syntactic structure. We
conclude that LSTMs are, to some extent, implementing genuinely syntactic
processing mechanisms, paving the way to a more general understanding of
grammatical encoding in LSTMs.
| 2,019 | Computation and Language |
An Exploration of State-of-the-art Methods for Offensive Language
Detection | We provide a comprehensive investigation of different custom and
off-the-shelf architectures as well as different approaches to generating
feature vectors for offensive language detection. We also show that these
approaches work well on small and noisy datasets such as the Offensive
Language Identification Dataset (OLID), so it should be possible to use them
for other applications.
| 2,019 | Computation and Language |
A Multilingual Encoding Method for Text Classification and Dialect
Identification Using Convolutional Neural Network | This thesis presents a language-independent text classification model by
introducing two new encoding methods, "BUNOW" and "BUNOC", used for feeding the
raw text data into a new spatial CNN architecture with vertical and horizontal
convolutional processes, instead of commonly used methods such as one-hot vectors or
word representations (i.e., word2vec) with a temporal CNN architecture. The
proposed model can be classified as a hybrid word-character model in its
methodology: it consumes less memory by using fewer neural
network parameters, as in character-level representations, while
providing much faster computation with shallower networks, as in word-level
representations. Promising results are achieved compared to state-of-the-art
models on two morphologically different benchmark datasets, one for Arabic
and one for English.
| 2,019 | Computation and Language |
Cloze-driven Pretraining of Self-attention Networks | We present a new approach for pretraining a bi-directional transformer model
that provides significant performance gains across a variety of language
understanding problems. Our model solves a cloze-style word reconstruction
task, where each word is ablated and must be predicted given the rest of the
text. Experiments demonstrate large performance gains on GLUE and new state of
the art results on NER as well as constituency parsing benchmarks, consistent
with the concurrently introduced BERT model. We also present a detailed
analysis of a number of factors that contribute to effective pretraining,
including data domain and size, model capacity, and variations on the cloze
objective.
| 2,019 | Computation and Language |
Hybrid Approaches for our Participation to the n2c2 Challenge on Cohort
Selection for Clinical Trials | Objective: Natural language processing can help minimize human intervention
in identifying patients meeting eligibility criteria for clinical trials, but
there is still a long way to go to obtain a general and systematic approach
that is useful for researchers. We describe two methods taking a step in this
direction and present their results obtained during the n2c2 challenge on
cohort selection for clinical trials. Materials and Methods: The first method
is a weakly supervised method using an unlabeled corpus (MIMIC) to build a
silver standard, by producing semi-automatically a small and very precise set
of rules to detect some samples of positive and negative patients. This silver
standard is then used to train a traditional supervised model. The second
method is a terminology-based approach where a medical expert selects the
appropriate concepts, and a procedure is defined to search the terms and check
the structural or temporal constraints. Results: On the n2c2 dataset containing
annotated data about 13 selection criteria on 288 patients, we obtained an
overall F1-measure of 0.8969, which is the third best result out of 45
participant teams, with no statistically significant difference with the
best-ranked team. Discussion: Both approaches obtained very encouraging results
and apply to different types of criteria. The weakly supervised method requires
explicit descriptions of positive and negative examples in some reports. The
terminology-based method is very efficient when medical concepts carry most of
the relevant information. Conclusion: It is unlikely that much more annotated
data will be soon available for the task of identifying a wide range of patient
phenotypes. One must focus on weakly or non-supervised learning methods using
both structured and unstructured data and relying on a comprehensive
representation of the patients.
| 2,020 | Computation and Language |
CVIT-MT Systems for WAT-2018 | This document describes the machine translation system used in the
submissions of IIIT-Hyderabad CVIT-MT for the WAT-2018 English-Hindi
translation task. Performance is evaluated on the associated corpus provided by
the organizers. We experimented with convolutional sequence to sequence
architectures. We also train with additional data obtained through
backtranslation.
| 2,019 | Computation and Language |
compare-mt: A Tool for Holistic Comparison of Language Generation
Systems | In this paper, we describe compare-mt, a tool for holistic analysis and
comparison of the results of systems for language generation tasks such as
machine translation. The main goal of the tool is to give the user a high-level
and coherent view of the salient differences between systems that can then be
used to guide further analysis or system improvement. It implements a number of
tools to do so, such as analysis of accuracy of generation of particular types
of words, bucketed histograms of sentence accuracies or counts based on salient
characteristics, and extraction of characteristic $n$-grams for each system. It
also has a number of advanced features such as use of linguistic labels, source
side data, or comparison of log likelihoods for probabilistic models, and also
aims to be easily extensible by users to new types of analysis. The code is
available at https://github.com/neulab/compare-mt
| 2,019 | Computation and Language |
Natural Language Generation at Scale: A Case Study for Open Domain
Question Answering | Current approaches to Natural Language Generation (NLG) for dialog mainly
focus on domain-specific, task-oriented applications (e.g. restaurant booking)
using limited ontologies (up to 20 slot types), usually without considering the
previous conversation context. Furthermore, these approaches require large
amounts of data for each domain, and do not benefit from examples that may be
available for other domains. This work explores the feasibility of applying
statistical NLG to scenarios requiring larger ontologies, such as multi-domain
dialog applications or open-domain question answering (QA) based on knowledge
graphs. We model NLG through an Encoder-Decoder framework using a large dataset
of interactions between real-world users and a conversational agent for
open-domain QA. First, we investigate the impact of increasing the number of
slot types on the generation quality and experiment with different partitions
of the QA data with progressively larger ontologies (up to 369 slot types).
Second, we perform multi-task learning experiments between open-domain QA and
task-oriented dialog, and benchmark our model on a popular NLG dataset.
Moreover, we experiment with using the conversational context as an additional
input to improve response generation quality. Our experiments show the
feasibility of learning statistical NLG models for open-domain QA with larger
ontologies.
| 2,019 | Computation and Language |
Aligning Biomedical Metadata with Ontologies Using Clustering and
Embeddings | The metadata about scientific experiments published in online repositories
have been shown to suffer from a high degree of representational
heterogeneity---there are often many ways to represent the same type of
information, such as a geographical location via its latitude and longitude. To
harness the potential that metadata have for discovering scientific data, it is
crucial that they be represented in a uniform way that can be queried
effectively. One step toward uniformly-represented metadata is to normalize the
multiple, distinct field names used in metadata (e.g., lat lon, lat and long)
to describe the same type of value. To that end, we present a new method based
on clustering and embeddings (i.e., vector representations of words) to align
metadata field names with ontology terms. We apply our method to biomedical
metadata by generating embeddings for terms in biomedical ontologies from the
BioPortal repository. We carried out a comparative study between our method and
the NCBO Annotator, which revealed that our method yields more and
substantially better alignments between metadata and ontology terms.
| 2,020 | Computation and Language |
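A minimal sketch of the alignment idea in the abstract above: represent metadata field names and ontology terms as vectors, then map each field name to its nearest ontology term by cosine similarity. The tiny hand-written vectors stand in for the BioPortal-derived embeddings, and the clustering step is reduced here to a single nearest-neighbour lookup.

```python
# Toy alignment of metadata field names to ontology terms via embedding similarity.
# Hand-written 3-d vectors stand in for learned embeddings.
import numpy as np

field_embeddings = {
    "lat lon": np.array([0.9, 0.1, 0.0]),
    "lat":     np.array([0.8, 0.2, 0.1]),
    "tissue":  np.array([0.0, 0.1, 0.9]),
}
ontology_embeddings = {
    "geographic location": np.array([1.0, 0.0, 0.0]),
    "tissue type":         np.array([0.0, 0.0, 1.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for field, f_vec in field_embeddings.items():
    best = max(ontology_embeddings, key=lambda term: cosine(f_vec, ontology_embeddings[term]))
    print(f"{field!r} -> {best!r}")
```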
When redundancy is useful: A Bayesian approach to 'overinformative'
referring expressions | Referring is one of the most basic and prevalent uses of language. How do
speakers choose from the wealth of referring expressions at their disposal?
Rational theories of language use have come under attack for decades for not
being able to account for the seemingly irrational overinformativeness
ubiquitous in referring expressions. Here we present a novel production model
of referring expressions within the Rational Speech Act framework that treats
speakers as agents that rationally trade off cost and informativeness of
utterances. Crucially, we relax the assumption that informativeness is computed
with respect to a deterministic Boolean semantics, in favor of a
non-deterministic continuous semantics. This innovation allows us to capture a
large number of seemingly disparate phenomena within one unified framework: the
basic asymmetry in speakers' propensity to overmodify with color rather than
size; the increase in overmodification in complex scenes; the increase in
overmodification with atypical features; and the increase in specificity in
nominal reference as a function of typicality. These findings cast a new light
on the production of referring expressions: rather than being wastefully
overinformative, reference is usefully redundant.
| 2,019 | Computation and Language |
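A minimal Rational Speech Act speaker, to make the "rationally trade off cost and informativeness" claim above concrete. The literal listener, the soft (non-Boolean) semantics values, the utterance costs, and the rationality parameter alpha below are toy choices for illustration, not the paper's fitted model.

```python
# Toy RSA speaker: P(utterance | referent) proportional to exp(alpha * (log L0(referent | utterance) - cost)).
import math

objects = ["big blue pin", "small blue pin"]
utterances = {"blue pin": 0.0, "big blue pin": 0.1}  # utterance -> production cost

# Non-deterministic ("continuous") semantics: how applicable each utterance is to each object.
semantics = {
    ("blue pin", "big blue pin"): 0.95, ("blue pin", "small blue pin"): 0.95,
    ("big blue pin", "big blue pin"): 0.95, ("big blue pin", "small blue pin"): 0.05,
}

def literal_listener(utt):
    scores = {o: semantics[(utt, o)] for o in objects}   # uniform prior over objects
    z = sum(scores.values())
    return {o: s / z for o, s in scores.items()}

def speaker(referent, alpha=5.0):
    util = {u: alpha * (math.log(literal_listener(u)[referent]) - cost)
            for u, cost in utterances.items()}
    z = sum(math.exp(v) for v in util.values())
    return {u: math.exp(v) / z for u, v in util.items()}

print(speaker("big blue pin"))  # the more specific utterance wins despite its extra cost
```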
Simple, Fast, Accurate Intent Classification and Slot Labeling for
Goal-Oriented Dialogue Systems | With the advent of conversational assistants, like Amazon Alexa, Google Now,
etc., dialogue systems are gaining a lot of traction, especially in industrial
settings. These systems typically consist of a Spoken Language Understanding
component which, in turn, comprises two tasks - Intent Classification (IC)
and Slot Labeling (SL). Generally, these two tasks are modeled jointly
to achieve the best performance. However, this joint modeling adds to model
obfuscation. In this work, we first design a framework for a modularization of
the joint IC-SL task to enhance architecture transparency. Then, we explore a
number of self-attention, convolutional, and recurrent models, contributing a
large-scale analysis of modeling paradigms for IC+SL across two datasets.
Finally, using this framework, we propose a class of 'label-recurrent' models
that are otherwise non-recurrent, with a 10-dimensional representation of the label
history, and show that our proposed systems are easy to interpret, highly
accurate (achieving over 30% error reduction in SL over the state-of-the-art on
the Snips dataset), as well as fast, at 2x the inference and 2/3 to 1/2 the
training time of comparable recurrent models, thus giving an edge in critical
real-world systems.
| 2,019 | Computation and Language |
Contextual Compositionality Detection with External Knowledge Bases
and Word Embeddings | When the meaning of a phrase cannot be inferred from the individual meanings
of its words (e.g., hot dog), that phrase is said to be non-compositional.
Automatic compositionality detection in multi-word phrases is critical in any
application of semantic processing, such as search engines; failing to detect
non-compositional phrases can hurt system effectiveness notably. Existing
research treats phrases as either compositional or non-compositional in a
deterministic manner. In this paper, we operationalize the viewpoint that
compositionality is contextual rather than deterministic, i.e., that whether a
phrase is compositional or non-compositional depends on its context. For
example, the phrase `green card' is compositional when referring to a green
colored card, whereas it is non-compositional when meaning permanent residence
authorization. We address the challenge of detecting this type of contextual
compositionality as follows: given a multi-word phrase, we enrich the word
embedding representing its semantics with evidence about its global context
(terms it often collocates with) as well as its local context (narratives where
that phrase is used, which we call usage scenarios). We further extend this
representation with information extracted from external knowledge bases. The
resulting representation incorporates both localized context and more general
usage of the phrase and allows us to detect its compositionality in a
non-deterministic and contextual way. Empirical evaluation of our model on a
dataset of phrase compositionality, manually collected by crowdsourcing
contextual compositionality assessments, shows that our model outperforms
state-of-the-art baselines notably on detecting phrase compositionality.
| 2,019 | Computation and Language |
Left-to-Right Dependency Parsing with Pointer Networks | We propose a novel transition-based algorithm that straightforwardly parses
sentences from left to right by building $n$ attachments, with $n$ being the
length of the input sentence. Similarly to the recent stack-pointer parser by
Ma et al. (2018), we use the pointer network framework that, given a word, can
directly point to a position from the sentence. However, our left-to-right
approach is simpler than the original top-down stack-pointer parser (not
requiring a stack) and reduces the transition sequence length by half, from $2n-1$
actions to $n$. This results in a quadratic non-projective parser that runs
twice as fast as the original while achieving the best accuracy to date on the
English PTB dataset (96.04% UAS, 94.43% LAS) among fully-supervised
single-model dependency parsers, and improves over the former top-down
transition system in the majority of languages tested.
| 2,019 | Computation and Language |
Decay-Function-Free Time-Aware Attention to Context and Speaker
Indicator for Spoken Language Understanding | To capture salient contextual information for spoken language understanding
(SLU) of a dialogue, we propose time-aware models that automatically learn the
latent time-decay function of the history without a manual time-decay function.
We also propose a method to identify and label the current speaker to improve
the SLU accuracy. In experiments on the benchmark dataset used in Dialog State
Tracking Challenge 4, the proposed models achieved significantly higher F1
scores than the state-of-the-art contextual models. Finally, we analyze the
effectiveness of the introduced models in detail. The analysis demonstrates
that the proposed methods were individually effective in improving SLU accuracy.
| 2,019 | Computation and Language |
Probing the Need for Visual Context in Multimodal Machine Translation | Current work on multimodal machine translation (MMT) has suggested that the
visual modality is either unnecessary or only marginally beneficial. We posit
that this is a consequence of the very simple, short and repetitive sentences
used in the only available dataset for the task (Multi30K), rendering the
source text sufficient as context. In the general case, however, we believe
that it is possible to combine visual and textual information in order to
ground translations. In this paper we probe the contribution of the visual
modality to state-of-the-art MMT models by conducting a systematic analysis
where we partially deprive the models of source-side textual context. Our
results show that under limited textual context, models are capable of
leveraging the visual input to generate better translations. This contradicts
the current belief that MMT models disregard the visual modality because of
either the quality of the image features or the way they are integrated into
the model.
| 2,019 | Computation and Language |
Combination of multiple Deep Learning architectures for Offensive
Language Detection in Tweets | This report contains the details regarding our submission to the OffensEval
2019 (SemEval 2019 - Task 6). The competition was based on the Offensive
Language Identification Dataset. We first discuss the details of the classifier
implemented and the type of input data used and pre-processing performed. We
then move on to critically evaluating our performance. We achieved
macro-average F1-scores of 0.76, 0.68, and 0.54 for Task A, Task B,
and Task C respectively, which we believe reflects the level of sophistication of the
models implemented. Finally, we discuss the difficulties encountered
and possible improvements for the future.
| 2,019 | Computation and Language |
Russian Language Datasets in the Digital Humanities Domain and Their
Evaluation with Word Embeddings | In this paper, we present Russian language datasets in the digital humanities
domain for the evaluation of word embedding techniques or similar language
modeling and feature learning algorithms. The datasets are split into two task
types, word intrusion and word analogy, and contain 31362 task units in total.
The characteristics of the tasks and datasets are that they build upon small,
domain-specific corpora, and that the datasets contain a high number of named
entities. The datasets were created manually for two fantasy novel book series
("A Song of Ice and Fire" and "Harry Potter"). We provide baseline evaluations
with popular word embedding models trained on the book corpora for the given
tasks, both for the Russian and English language versions of the datasets.
Finally, we compare and analyze the results and discuss specifics of the Russian
language with regard to the problem setting.
| 2,019 | Computation and Language |
Selective Attention for Context-aware Neural Machine Translation | Despite the progress made in sentence-level NMT, current systems still fall
short of achieving fluent, good-quality translation for a full document. Recent
works in context-aware NMT consider only a few previous sentences as context
and may not scale to entire documents. To this end, we propose a novel and
scalable top-down approach to hierarchical attention for context-aware NMT
which uses sparse attention to selectively focus on relevant sentences in the
document context and then attends to key words in those sentences. We also
propose single-level attention approaches based on sentence or word-level
information in the context. The document-level context representation, produced
from these attention modules, is integrated into the encoder or decoder of the
Transformer model depending on whether we use monolingual or bilingual context.
Our experiments and evaluation on English-German datasets in different document
MT settings show that our selective attention approach not only significantly
outperforms context-agnostic baselines but also surpasses context-aware
baselines in most cases.
| 2,019 | Computation and Language |
Bidirectional Recurrent Models for Offensive Tweet Classification | In this paper we propose four deep recurrent architectures to tackle the task
of offensive tweet detection as well as further classification into targeting
and subject of said targeting. Our architectures are based on LSTMs and GRUs:
we present a simple bidirectional LSTM as a baseline system and then further
increase the complexity of the models by adding convolutional layers and
implementing a split-process-merge architecture with LSTM and GRU as
processors. Multiple pre-processing techniques were also investigated. The
validation F1-score results from each model are presented for the three
subtasks as well as the final F1-score performance on the private competition
test set. It was found that model complexity did not necessarily yield better
results. Our best-performing model was also the simplest, a bidirectional LSTM;
closely followed by a two-branch bidirectional LSTM and GRU architecture.
| 2,019 | Computation and Language |
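A minimal bidirectional-LSTM text classifier of the kind used as the baseline above, sketched with Keras; the vocabulary size, sequence length, layer sizes, and the binary offensive/not-offensive setup are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a bidirectional LSTM tweet classifier (binary offensive / not offensive).
# Hyperparameters are illustrative; input is assumed to be padded integer token ids.
import tensorflow as tf

vocab_size, max_len = 20000, 60

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),
    tf.keras.layers.Embedding(vocab_size, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(padded_token_ids, labels, validation_split=0.1, epochs=3)
```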
Linguistic Knowledge and Transferability of Contextual Representations | Contextual word representations derived from large-scale neural language
models are successful across a diverse set of NLP tasks, suggesting that they
encode useful and transferable features of language. To shed light on the
linguistic knowledge they capture, we study the representations produced by
several recent pretrained contextualizers (variants of ELMo, the OpenAI
transformer language model, and BERT) with a suite of seventeen diverse probing
tasks. We find that linear models trained on top of frozen contextual
representations are competitive with state-of-the-art task-specific models in
many cases, but fail on tasks requiring fine-grained linguistic knowledge
(e.g., conjunct identification). To investigate the transferability of
contextual word representations, we quantify differences in the transferability
of individual layers within contextualizers, especially between recurrent
neural networks (RNNs) and transformers. For instance, higher layers of RNNs
are more task-specific, while transformer layers do not exhibit the same
monotonic trend. In addition, to better understand what makes contextual word
representations transferable, we compare language model pretraining with eleven
supervised pretraining tasks. For any given task, pretraining on a closely
related task yields better performance than language model pretraining (which
is better on average) when the pretraining dataset is fixed. However, language
model pretraining on more data gives the best results.
| 2,019 | Computation and Language |
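A minimal sketch of the probing setup described above: freeze a contextualizer, take its per-token vectors as fixed features, and train only a linear classifier on top. The feature extraction is abstracted into a placeholder function (random vectors here), since the paper probes several different contextualizers.

```python
# Linear probing of frozen contextual representations (sketch).
# `contextual_vectors` is a placeholder for ELMo/BERT/etc. token vectors; here it returns random ones.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def contextual_vectors(tokens, dim=768):
    """Stand-in for a frozen contextualizer: returns one vector per token."""
    return rng.normal(size=(len(tokens), dim))

sentences = [["The", "cats", "sleep"], ["A", "cat", "sleeps"]]
pos_tags  = [["DET", "NOUN", "VERB"], ["DET", "NOUN", "VERB"]]

X = np.vstack([contextual_vectors(s) for s in sentences])   # frozen features
y = [tag for tags in pos_tags for tag in tags]               # per-token labels

probe = LogisticRegression(max_iter=1000).fit(X, y)           # only the linear probe is trained
print("train accuracy:", accuracy_score(y, probe.predict(X)))
```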
RAP-Net: Recurrent Attention Pooling Networks for Dialogue Response
Selection | Response selection has been an emerging research topic due to the growing
interest in dialogue modeling, where the goal of the task is to select an
appropriate response for continuing dialogues. To further push the end-to-end
dialogue model toward real-world scenarios, the seventh Dialog System
Technology Challenge (DSTC7) proposed a challenging track based on real chatlog
datasets. The competition focuses on dialogue modeling with several advanced
characteristics: (1) natural language diversity, (2) capability of precisely
selecting a proper response from a large set of candidates or the scenario
without any correct answer, and (3) knowledge grounding. This paper introduces
recurrent attention pooling networks (RAP-Net), a novel framework for response
selection, which can well estimate the relevance between the dialogue contexts
and the candidates. The proposed RAP-Net is shown to be effective and can be
generalized across different datasets and settings in the DSTC7 experiments.
| 2,019 | Computation and Language |
Learning Multi-Level Information for Dialogue Response Selection by
Highway Recurrent Transformer | With the increasing research interest in dialogue response generation, there
is an emerging branch formulating this task as selecting next sentences, where
given the partial dialogue contexts, the goal is to determine the most probable
next sentence. Following the recent success of the Transformer model, this
paper proposes (1) a new variant of attention mechanism based on multi-head
attention, called highway attention, and (2) a recurrent model based on the
Transformer and the proposed highway attention, called the Highway Recurrent
Transformer. Experiments on the response selection task in the seventh Dialog
System Technology Challenge (DSTC7) show the capability of the proposed model
of modeling both utterance-level and dialogue-level information; the
effectiveness of each module is further analyzed as well.
| 2,019 | Computation and Language |
SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in
Social Media (OffensEval) | We present the results and the main findings of SemEval-2019 Task 6 on
Identifying and Categorizing Offensive Language in Social Media (OffensEval).
The task was based on a new dataset, the Offensive Language Identification
Dataset (OLID), which contains over 14,000 English tweets. It featured three
sub-tasks. In sub-task A, the goal was to discriminate between offensive and
non-offensive posts. In sub-task B, the focus was on the type of offensive
content in the post. Finally, in sub-task C, systems had to detect the target
of the offensive posts. OffensEval attracted a large number of participants and
it was one of the most popular tasks in SemEval-2019. In total, about 800 teams
signed up to participate in the task, and 115 of them submitted results, which
we present and analyze in this report.
| 2,019 | Computation and Language |
Recent advances in conversational NLP : Towards the standardization of
Chatbot building | Dialogue systems have recently become essential in our lives. Their use is
becoming more and more fluid and easy over time. This is largely due to
the improvements made in the NLP and AI fields. In this paper, we try to provide an
overview of the current state of the art of dialogue systems, their categories
and the different approaches to building them. We end with a discussion that
compares all the techniques and analyzes the strengths and weaknesses of each.
Finally, we present an opinion piece suggesting that research be oriented
towards the standardization of dialogue system building.
| 2,019 | Computation and Language |
Low Resource Text Classification with ULMFit and Backtranslation | In computer vision, virtually every state-of-the-art deep learning system is
trained with data augmentation. In text classification, however, data
augmentation is less widely practiced because it must be performed before
training and risks introducing label noise. We augment the IMDB movie reviews
dataset with examples generated by two families of techniques: random token
perturbations introduced by Wei and Zou [2019] and backtranslation --
translating to a second language then back to English. In low resource
environments, backtranslation generates significant improvement on top of the
state-of-the-art ULMFit model. A ULMFit model pretrained on wikitext103 and
then fine-tuned on only 50 IMDB examples and 500 synthetic examples generated
by backtranslation achieves 80.6% accuracy, an 8.1% improvement over the
augmentation-free baseline with only 9 minutes of additional training time.
Random token perturbations do not yield any improvements but incur equivalent
computational cost. The benefit of training with backtranslated examples
decreases with the size of the available training data. On the full dataset,
neither augmentation technique improves upon ULMFit's state of the art
performance. We address this by using backtranslations as a form of test time
augmentation as well as ensembling ULMFit with other models, and achieve small
improvements.
| 2,019 | Computation and Language |
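A minimal sketch of backtranslation as a data-augmentation step, as described above; `translate` is a hypothetical hook for whatever MT system is available (the abstract does not prescribe one), and the stand-in translator exists only so the sketch runs.

```python
# Backtranslation augmentation (sketch): English -> pivot language -> English.
# `translate` is a hypothetical hook; plug in any real MT system or service.
from typing import Callable, List

def backtranslate(texts: List[str],
                  translate: Callable[[str, str, str], str],
                  pivot: str = "fr") -> List[str]:
    """Return paraphrased copies of `texts` obtained via a round trip through `pivot`."""
    augmented = []
    for text in texts:
        pivot_text = translate(text, "en", pivot)             # en -> pivot
        augmented.append(translate(pivot_text, pivot, "en"))  # pivot -> en
    return augmented

# Toy stand-in translator so the sketch runs; a real MT model would go here.
def fake_translate(text: str, src: str, tgt: str) -> str:
    return text if tgt == "en" else text.upper()

reviews = ["this movie was surprisingly good"]
print(backtranslate(reviews, fake_translate))
```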
A Type-coherent, Expressive Representation as an Initial Step to
Language Understanding | A growing interest in tasks involving language understanding by the NLP
community has led to the need for effective semantic parsing and inference.
Modern NLP systems use semantic representations that do not quite fulfill the
nuanced needs for language understanding: adequately modeling language
semantics, enabling general inferences, and being accurately recoverable. This
document describes underspecified logical forms (ULF) for Episodic Logic (EL),
which is an initial form for a semantic representation that balances these
needs. ULFs fully resolve the semantic type structure while leaving issues such
as quantifier scope, word sense, and anaphora unresolved; they provide a
starting point for further resolution into EL, and enable certain structural
inferences without further resolution. This document also presents preliminary
results of creating a hand-annotated corpus of ULFs for the purpose of training
a precise ULF parser, showing a three-person pairwise interannotator agreement
of 0.88 on confident annotations. We hypothesize that a divide-and-conquer
approach to semantic parsing starting with derivation of ULFs will lead to
semantic analyses that do justice to subtle aspects of linguistic meaning, and
will enable construction of more accurate semantic parsers.
| 2,019 | Computation and Language |
An end-to-end Neural Network Framework for Text Clustering | Unsupervised text clustering is one of the major tasks in natural
language processing (NLP) and remains a difficult and complex problem.
Conventional \mbox{methods} generally treat this task using separated steps,
including text representation learning and clustering the representations. As
an improvement, neural methods have also been introduced for continuous
representation learning to address the sparsity problem. However, the
multi-step process still deviates from the unified optimization target.
In particular, the second clustering step is generally performed with conventional
methods such as k-Means. We propose a pure neural framework for text clustering
in an end-to-end manner. It jointly learns the text representation and the
clustering model. Our model works well when the context can be obtained, which
is nearly always the case in the field of NLP. We evaluate our method
on two widely used benchmarks: IMDB movie reviews for
sentiment classification and $20$-Newsgroup for topic categorization. Despite
its simplicity, experiments show the model outperforms previous clustering
methods by a large margin. Furthermore, the model is also verified on the English
wiki dataset as a large corpus.
| 2,019 | Computation and Language |
LINSPECTOR: Multilingual Probing Tasks for Word Representations | Despite an ever growing number of word representation models introduced for a
large number of languages, there is a lack of a standardized technique to
provide insights into what is captured by these models. Such insights would
help the community to get an estimate of the downstream task performance, as
well as to design more informed neural architectures, while avoiding extensive
experimentation which requires substantial computational resources not all
researchers have access to. A recent development in NLP is to use simple
classification tasks, also called probing tasks, that test for a single
linguistic feature such as part-of-speech. Existing studies mostly focus on
exploring the linguistic information encoded by the continuous representations
of English text. However, from a typological perspective the morphologically
poor English is rather an outlier: the information encoded by the word order
and function words in English is often stored on a morphological level in other
languages. To address this, we introduce 15 type-level probing tasks such as
case marking, possession, word length, morphological tag count and pseudoword
identification for 24 languages. We present a reusable methodology for creation
and evaluation of such tests in a multilingual setting. We then present
experiments on several diverse multilingual word embedding models, in which we
relate the probing task performance for a diverse set of languages to a range
of five classic NLP tasks: POS-tagging, dependency parsing, semantic role
labeling, named entity recognition and natural language inference. We find that
a number of probing tests have significantly high positive correlation to the
downstream tasks, especially for morphologically rich languages. We show that
our tests can be used to explore word embeddings or black-box neural models for
linguistic cues in a multilingual setting.
| 2,019 | Computation and Language |
Data Augmentation via Dependency Tree Morphing for Low-Resource
Languages | Neural NLP systems achieve high scores in the presence of sizable training
datasets. Lack of such datasets leads to poor system performance in the case of
low-resource languages. We present two simple text augmentation techniques
using dependency trees, inspired by image processing. We crop sentences by
removing dependency links, and we rotate sentences by moving the tree fragments
around the root. We apply these techniques to augment the training sets of
low-resource languages in the Universal Dependencies project. We implement a
character-level sequence tagging model and evaluate the augmented datasets on the
part-of-speech tagging task. We show that crop and rotate provide improvements
over models trained with non-augmented data for the majority of the languages,
especially for languages with rich case marking systems.
| 2,019 | Computation and Language |
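A minimal sketch of the "crop" operation described above: keep the root together with one chosen dependent subtree and drop the rest. Tokens are represented as (id, form, head) triples in the style of a CoNLL-U file; the toy sentence is illustrative, and the "rotate" operation (reordering subtrees around the root) would act on the same structure.

```python
# "Crop" augmentation on a toy dependency tree: keep the root plus one chosen subtree.
# Tokens are (id, form, head) with head=0 marking the root, as in CoNLL-U.
tokens = [
    (1, "The", 2), (2, "dog", 3), (3, "chased", 0), (4, "the", 5), (5, "cat", 3),
]

def subtree_ids(tokens, head_id):
    """All token ids dominated by head_id (including head_id itself)."""
    ids = {head_id}
    changed = True
    while changed:
        changed = False
        for tid, _, head in tokens:
            if head in ids and tid not in ids:
                ids.add(tid)
                changed = True
    return ids

def crop(tokens, keep_dependent):
    root_id = next(tid for tid, _, head in tokens if head == 0)
    keep = subtree_ids(tokens, keep_dependent) | {root_id}
    return [t for t in tokens if t[0] in keep]

# Keep only the subject subtree "The dog" together with the root "chased".
print([form for _, form, _ in crop(tokens, keep_dependent=2)])  # ['The', 'dog', 'chased']
```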
Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing
Auxiliary Sentence | Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained
opinion polarity towards a specific aspect, is a challenging subtask of
sentiment analysis (SA). In this paper, we construct an auxiliary sentence from
the aspect and convert ABSA to a sentence-pair classification task, such as
question answering (QA) and natural language inference (NLI). We fine-tune the
pre-trained model from BERT and achieve new state-of-the-art results on
SentiHood and SemEval-2014 Task 4 datasets.
| 2,019 | Computation and Language |
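A minimal sketch of the auxiliary-sentence construction described above: each (sentence, aspect) pair is turned into a sentence pair that a BERT-style pair classifier can consume. The QA-style question template and the toy review are illustrative; the paper explores several template variants.

```python
# Convert aspect-based sentiment analysis into sentence-pair classification (sketch).
# Each review is paired with an auxiliary sentence built from the target aspect.
def build_pairs(review, aspects):
    pairs = []
    for target, aspect in aspects:
        auxiliary = f"what do you think of the {aspect} of {target} ?"  # QA-style template
        pairs.append((auxiliary, review))  # (sentence A, sentence B) for a BERT pair classifier
    return pairs

review = "LOC1 is central London so extremely expensive."
aspects = [("LOC1", "price"), ("LOC1", "transit-location")]

for a, b in build_pairs(review, aspects):
    print(f"[CLS] {a} [SEP] {b} [SEP]")
```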
Pre-trained Language Model Representations for Language Generation | Pre-trained language model representations have been successful in a wide
range of language understanding tasks. In this paper, we examine different
strategies to integrate pre-trained representations into sequence to sequence
models and apply them to neural machine translation and abstractive
summarization. We find that pre-trained representations are most effective when
added to the encoder network which slows inference by only 14%. Our experiments
in machine translation show gains of up to 5.3 BLEU in a simulated
resource-poor setup. While returns diminish with more labeled data, we still
observe improvements when millions of sentence-pairs are available. Finally, on
abstractive summarization we achieve a new state of the art on the full text
version of CNN/DailyMail.
| 2,019 | Computation and Language |
Knowledge-Grounded Response Generation with Deep Attentional
Latent-Variable Model | End-to-end dialogue generation has achieved promising results without using
handcrafted features and attributes specific for each task and corpus. However,
one of the fatal drawbacks of such approaches is that they are unable to
generate informative utterances, which limits their usage in some real-world
conversational applications. This paper attempts to generate diverse and
informative responses with a variational generation model, which contains a
joint attention mechanism conditioning on the information from both dialogue
contexts and extra knowledge.
| 2,019 | Computation and Language |
Competence-based Curriculum Learning for Neural Machine Translation | Current state-of-the-art NMT systems use large neural networks that are not
only slow to train, but also often require many heuristics and optimization
tricks, such as specialized learning rate schedules and large batch sizes. This
is undesirable as it requires extensive hyperparameter tuning. In this paper,
we propose a curriculum learning framework for NMT that reduces training time,
reduces the need for specialized heuristics or large batch sizes, and results
in overall better performance. Our framework consists of a principled way of
deciding which training samples are shown to the model at different times
during training, based on the estimated difficulty of a sample and the current
competence of the model. Filtering training samples in this manner prevents the
model from getting stuck in bad local optima, making it converge faster and
reach a better solution than the common approach of uniformly sampling training
examples. Furthermore, the proposed method can be easily applied to existing
NMT models by simply modifying their input data pipelines. We show that our
framework can help improve the training time and the performance of both
recurrent neural network models and Transformers, achieving up to a 70%
decrease in training time, while at the same time obtaining accuracy
improvements of up to 2.2 BLEU.
| 2,019 | Computation and Language |
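A minimal sketch of the sampling rule described above: at step t, only examples whose estimated difficulty falls below the model's current competence are eligible for a batch. The square-root competence schedule below is one schedule discussed in this line of work; treat its exact form, the warm-up length, and the length-based difficulty scores as assumptions rather than the paper's definitive choices.

```python
# Competence-based curriculum filtering (sketch).
# difficulty(x) in [0, 1] (e.g. a sentence-length or rarity percentile);
# competence(t) grows from c0 to 1 over T warm-up steps.
import math
import random

def competence(t, T=10000, c0=0.1):
    """Square-root competence schedule rising from c0 to 1.0 by step T (assumed form)."""
    return min(1.0, math.sqrt(t * (1 - c0 ** 2) / T + c0 ** 2))

def sample_batch(examples, difficulties, t, batch_size=4):
    c = competence(t)
    eligible = [x for x, d in zip(examples, difficulties) if d <= c]
    return random.sample(eligible, min(batch_size, len(eligible)))

examples = [f"sentence_{i}" for i in range(20)]
difficulties = [i / 20 for i in range(20)]           # pretend difficulty = length percentile
print(sample_batch(examples, difficulties, t=500))   # early: only easy examples eligible
print(sample_batch(examples, difficulties, t=9000))  # late: almost everything eligible
```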
Expanding the Text Classification Toolbox with Cross-Lingual Embeddings | Most work in text classification and Natural Language Processing (NLP)
focuses on English or a handful of other languages that have text corpora of
hundreds of millions of words. This is creating a new version of the digital
divide: the artificial intelligence (AI) divide. Transfer-based approaches,
such as Cross-Lingual Text Classification (CLTC) - the task of categorizing
texts written in different languages into a common taxonomy - are a promising
solution to the emerging AI divide. Recent work on CLTC has focused on
demonstrating the benefits of using bilingual word embeddings as features,
relegating the CLTC problem to a mere benchmark based on a simple averaged
perceptron.
In this paper, we explore more extensively and systematically two flavors of
the CLTC problem: news topic classification and textual churn intent detection
(TCID) in social media. In particular, we test the hypothesis that embeddings
with context are more effective, by multi-tasking the learning of multilingual
word embeddings and text classification; we explore neural architectures for
CLTC; and we move from bi- to multi-lingual word embeddings. For all
architectures, types of word embeddings and datasets, we notice a consistent
gain trend in favor of multilingual joint training, especially for
low-resourced languages.
| 2,019 | Computation and Language |
Relation extraction between the clinical entities based on the shortest
dependency path based LSTM | Owing to the exponential rise in electronic medical records, information
extraction in this domain has become an important area of research in recent
years. Relation extraction between medical concepts such as medical
problems, treatments, and tests is also one of the most important tasks in
this area. In this paper, we present an efficient relation extraction system
based on the shortest dependency path (SDP) generated from the dependency
parsed tree of the sentence. Instead of relying on many handcrafted features
and the whole sequence of tokens present in a sentence, our system relies only
on the SDP between the target entities. For every pair of entities, the system
takes only the words in the SDP, their dependency labels, Part-of-Speech
information and the types of the entities as the input. We develop a dependency
parser for extracting dependency information. We perform our experiments on the
benchmark i2b2 dataset for clinical relation extraction challenge 2010.
Experimental results show that our system outperforms the existing systems.
| 2,019 | Computation and Language |
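A minimal sketch of extracting the shortest dependency path (SDP) between two entity heads, using networkx over an undirected view of the parse; the hand-written toy arcs below stand in for output from the dependency parser the paper develops.

```python
# Shortest dependency path between two entities, over a hand-written toy parse.
import networkx as nx

# (child, head) arcs for: "The patient was given aspirin for chest pain"
arcs = [("The", "patient"), ("patient", "given"), ("was", "given"),
        ("aspirin", "given"), ("for", "pain"), ("chest", "pain"), ("pain", "given")]

graph = nx.Graph()   # undirected, so paths can go both up and down the tree
graph.add_edges_from(arcs)

sdp = nx.shortest_path(graph, source="aspirin", target="pain")
print(sdp)           # ['aspirin', 'given', 'pain'] -- the tokens fed to the relation classifier
```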
Argument Mining for Understanding Peer Reviews | Peer-review plays a critical role in the scientific writing and publication
ecosystem. To assess the efficiency and efficacy of the reviewing process, one
essential element is to understand and evaluate the reviews themselves. In this
work, we study the content and structure of peer reviews under the argument
mining framework, through automatically detecting (1) argumentative
propositions put forward by reviewers, and (2) their types (e.g., evaluating
the work or making suggestions for improvement). We first collect 14.2K reviews
from major machine learning and natural language processing venues. 400 reviews
are annotated with 10,386 propositions and corresponding types of Evaluation,
Request, Fact, Reference, or Quote. We then train state-of-the-art proposition
segmentation and classification models on the data to evaluate their utilities
and identify new challenges for this new domain, motivating future directions
for argument mining. Further experiments show that proposition usage varies
across venues in amount, type, and topic.
| 2,019 | Computation and Language |
End-to-End Learning Using Cycle Consistency for Image-to-Caption
Transformations | So far, research to generate captions from images has been carried out from
the viewpoint that a caption holds sufficient information for an image. If it
is possible to generate an image that is close to the input image from a
generated caption, i.e., if it is possible to generate a natural language
caption containing sufficient information to reproduce the image, then the
caption is considered to be faithful to the image. To make such regeneration
possible, learning using the cycle-consistency loss is effective. In this
study, we propose a method of generating captions by learning end-to-end mutual
transformations between images and texts. To evaluate our method, we perform
comparative experiments with and without the cycle consistency. The results are
evaluated by an automatic evaluation and crowdsourcing, demonstrating that our
proposed method is effective.
| 2,019 | Computation and Language |
Connecting Language and Knowledge with Heterogeneous Representations for
Neural Relation Extraction | Knowledge Bases (KBs) require constant updating to reflect changes to the
world they represent. For general purpose KBs, this is often done through
Relation Extraction (RE), the task of predicting KB relations expressed in text
mentioning entities known to the KB. One way to improve RE is to use KB
Embeddings (KBE) for link prediction. However, despite clear connections
between RE and KBE, little has been done toward properly unifying these models
systematically. We help close the gap with a framework that unifies the
learning of RE and KBE models leading to significant improvements over the
state-of-the-art in RE. The code is available at
https://github.com/billy-inn/HRERE.
| 2,019 | Computation and Language |
Aligning Vector-spaces with Noisy Supervised Lexicons | The problem of learning to translate between two vector spaces given a set of
aligned points arises in several application areas of NLP. Current solutions
assume that the lexicon which defines the alignment pairs is noise-free. We
consider the case where the set of aligned points is allowed to contain an
amount of noise, in the form of incorrect lexicon pairs, and show that this
arises in practice by analyzing the edited dictionaries after the cleaning
process. We demonstrate that such noise substantially degrades the accuracy of
the learned translation when using current methods. We propose a model that
accounts for noisy pairs. This is achieved by introducing a generative model
with a compatible iterative EM algorithm. The algorithm jointly learns the
noise level in the lexicon, finds the set of noisy pairs, and learns the
mapping between the spaces. We demonstrate the effectiveness of our proposed
algorithm on two alignment problems: bilingual word embedding translation, and
mapping between diachronic embedding spaces for recovering the semantic shifts
of words across time periods.
| 2,019 | Computation and Language |
Computational and Robotic Models of Early Language Development: A Review | We review computational and robotics models of early language learning and
development. We first explain why and how these models are used to understand
better how children learn language. We argue that they provide concrete
theories of language learning as a complex dynamic system, complementing
traditional methods in psychology and linguistics. We review different modeling
formalisms, grounded in techniques from machine learning and artificial
intelligence such as Bayesian and neural network approaches. We then discuss
their role in understanding several key mechanisms of language development:
cross-situational statistical learning, embodiment, situated social
interaction, intrinsically motivated learning, and cultural evolution. We
conclude by discussing future challenges for research, including modeling of
large-scale empirical data about language acquisition in real-world
environments.
Keywords: Early language learning, Computational and robotic models, machine
learning, development, embodiment, social interaction, intrinsic motivation,
self-organization, dynamical systems, complexity.
| 2,019 | Computation and Language |
Fine-tune BERT for Extractive Summarization | BERT, a pre-trained Transformer model, has achieved ground-breaking
performance on multiple NLP tasks. In this paper, we describe BERTSUM, a simple
variant of BERT, for extractive summarization. Our system is the state of the
art on the CNN/Dailymail dataset, outperforming the previous best-performing
system by 1.65 on ROUGE-L. The code to reproduce our results is available at
https://github.com/nlpyang/BertSum
| 2,019 | Computation and Language |
dpUGC: Learn Differentially Private Representation for User Generated
Contents | This paper first proposes a simple yet efficient generalized approach to
apply differential privacy to text representation (i.e., word embeddings). Based
on it, we propose a user-level approach to learn a personalized differentially
private word embedding model on user generated contents (UGC). To the best of our
knowledge, this is the first work on learning a user-level differentially private
word embedding model from text for sharing. The proposed approaches protect the
privacy of the individual from re-identification, and in particular provide a better
trade-off between privacy and data utility on UGC data for sharing. The experimental
results show that the trained embedding models are applicable to classic
text analysis tasks (e.g., regression). Moreover, the proposed approaches for
learning differentially private embedding models are both framework- and
data-independent, which facilitates deployment and sharing. The source code is
available at https://github.com/sonvx/dpText.
| 2,019 | Computation and Language |
Recognizing Arrow Of Time In The Short Stories | Recognizing the arrow of time in short stories is a challenging task: given
only two paragraphs, determining which comes first and which comes next is
difficult even for humans. In this paper, we have collected and curated a
novel dataset for tackling this challenging task. We have shown that a
pre-trained BERT architecture achieves reasonable accuracy on the task, and
outperforms RNN-based architectures.
| 2,019 | Computation and Language |
On Measuring Social Biases in Sentence Encoders | The Word Embedding Association Test shows that GloVe and word2vec word
embeddings exhibit human-like implicit biases based on gender, race, and other
social constructs (Caliskan et al., 2017). Meanwhile, research on learning
reusable text representations has begun to explore sentence-level texts, with
some sentence encoders seeing enthusiastic adoption. Accordingly, we extend the
Word Embedding Association Test to measure bias in sentence encoders. We then
test several sentence encoders, including state-of-the-art methods such as ELMo
and BERT, for the social biases studied in prior work and two important biases
that are difficult or impossible to test at the word level. We observe mixed
results including suspicious patterns of sensitivity that suggest the test's
assumptions may not hold in general. We conclude by proposing directions for
future work on measuring bias in sentence encoders.
| 2,019 | Computation and Language |
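For concreteness, the association test being extended above computes an effect size over cosine similarities between target and attribute embeddings. A minimal version of that computation is sketched below, with random vectors standing in for real word or sentence encodings and the set names given only as examples.

```python
# WEAT-style effect size over embeddings (sketch); random vectors replace real encodings.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
X = rng.normal(size=(8, dim))   # target set X (e.g. career terms or sentences)
Y = rng.normal(size=(8, dim))   # target set Y (e.g. family terms or sentences)
A = rng.normal(size=(8, dim))   # attribute set A (e.g. male terms)
B = rng.normal(size=(8, dim))   # attribute set B (e.g. female terms)

def cos(u, M):
    return (M @ u) / (np.linalg.norm(M, axis=1) * np.linalg.norm(u))

def s(w, A, B):
    """Differential association of one vector w with the two attribute sets."""
    return cos(w, A).mean() - cos(w, B).mean()

assoc_X = np.array([s(x, A, B) for x in X])
assoc_Y = np.array([s(y, A, B) for y in Y])
effect_size = (assoc_X.mean() - assoc_Y.mean()) / np.concatenate([assoc_X, assoc_Y]).std(ddof=1)
print("effect size:", effect_size)  # near 0 for random vectors; larger magnitude indicates bias
```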
Neural Grammatical Error Correction with Finite State Transducers | Grammatical error correction (GEC) is one of the areas in natural language
processing in which purely neural models have not yet superseded more
traditional symbolic models. Hybrid systems combining phrase-based statistical
machine translation (SMT) and neural sequence models are currently among the
most effective approaches to GEC. However, both SMT and neural
sequence-to-sequence models require large amounts of annotated data. Language
model based GEC (LM-GEC) is a promising alternative which does not rely on
annotated training data. We show how to improve LM-GEC by applying modelling
techniques based on finite state transducers. We report further gains by
rescoring with neural language models. We show that our methods developed for
LM-GEC can also be used with SMT systems if annotated training data is
available. Our best system outperforms the best published result on the
CoNLL-2014 test set, and achieves far better relative improvements over the SMT
baselines than previous hybrid systems.
| 2,019 | Computation and Language |
Diversifying Reply Suggestions using a Matching-Conditional Variational
Autoencoder | We consider the problem of diversifying automated reply suggestions for a
commercial instant-messaging (IM) system (Skype). Our conversation model is a
standard matching based information retrieval architecture, which consists of
two parallel encoders to project messages and replies into a common feature
representation. During inference, we select replies from a fixed response set
using nearest neighbors in the feature space. To diversify responses, we
formulate the model as a generative latent variable model with Conditional
Variational Auto-Encoder (M-CVAE). We propose a constrained-sampling approach
to make the variational inference in M-CVAE efficient for our production
system. In offline experiments, M-CVAE consistently increased diversity by
~30-40% without significant impact on relevance. This translated to a 5% gain
in click-rate in our online production system.
| 2,019 | Computation and Language |
Federated Learning Of Out-Of-Vocabulary Words | We demonstrate that a character-level recurrent neural network is able to
learn out-of-vocabulary (OOV) words under federated learning settings, for the
purpose of expanding the vocabulary of a virtual keyboard for smartphones
without exporting sensitive text to servers. High-frequency words can be
sampled from the trained generative model by drawing from the joint posterior
directly. We study the feasibility of the approach in two settings: (1) using
simulated federated learning on a publicly available non-IID per-user dataset
from a popular social networking website, (2) using federated learning on data
hosted on user mobile devices. The model achieves good recall and precision
compared to ground-truth OOV words in setting (1). With (2) we demonstrate the
practicality of this approach by showing that we can learn meaningful OOV words
with good character-level prediction accuracy and cross entropy loss.
| 2,019 | Computation and Language |
Reinforcement Learning Based Text Style Transfer without Parallel
Training Corpus | Text style transfer rephrases a text from a source style (e.g., informal) to
a target style (e.g., formal) while keeping its original meaning. Despite the
success existing works have achieved using a parallel corpus for the two
styles, transferring text style has proven significantly more challenging when
there is no parallel training corpus. In this paper, we address this challenge
by using a reinforcement-learning-based generator-evaluator architecture. Our
generator employs an attention-based encoder-decoder to transfer a sentence
from the source style to the target style. Our evaluator is an adversarially
trained style discriminator with semantic and syntactic constraints that score
the generated sentence for style, meaning preservation, and fluency.
Experimental results on two different style transfer tasks (sentiment transfer
and formality transfer) show that our model outperforms state-of-the-art
approaches. Furthermore, we perform a manual evaluation that demonstrates the
effectiveness of the proposed method using subjective metrics of generated text
quality.
| 2,019 | Computation and Language |
Document Similarity for Texts of Varying Lengths via Hidden Topics | Measuring similarity between texts is an important task for several
applications. Available approaches to measure document similarity are
inadequate for document pairs that have non-comparable lengths, such as a long
document and its summary. This is because of the lexical, contextual and the
abstraction gaps between a long document of rich details and its concise
summary of abstract information. In this paper, we present a document matching
approach to bridge this gap, by comparing the texts in a common space of hidden
topics. We evaluate the matching algorithm on two matching tasks and find that
it consistently and widely outperforms strong baselines. We also highlight the
benefits of incorporating domain knowledge to text matching.
| 2,019 | Computation and Language |
SciBERT: A Pretrained Language Model for Scientific Text | Obtaining large-scale annotated data for NLP tasks in the scientific domain
is challenging and expensive. We release SciBERT, a pretrained language model
based on BERT (Devlin et al., 2018) to address the lack of high-quality,
large-scale labeled scientific data. SciBERT leverages unsupervised pretraining
on a large multi-domain corpus of scientific publications to improve
performance on downstream scientific NLP tasks. We evaluate on a suite of tasks
including sequence tagging, sentence classification and dependency parsing,
with datasets from a variety of scientific domains. We demonstrate
statistically significant improvements over BERT and achieve new
state-of-the-art results on several of these tasks. The code and pretrained
models are available at https://github.com/allenai/scibert/.
| 2,019 | Computation and Language |
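For readers who want to try the released checkpoints, a minimal sketch of loading SciBERT with the Hugging Face transformers library; the model identifier is the published SciBERT checkpoint name, and the example sentence is just an illustrative input:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "allenai/scibert_scivocab_uncased"   # released SciBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("The tumor suppressor p53 regulates apoptosis.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
cls_vector = outputs.last_hidden_state[:, 0]   # [CLS] token representation
print(cls_vector.shape)                        # torch.Size([1, 768])
```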
A Silver Standard Corpus of Human Phenotype-Gene Relations | Human phenotype-gene relations are fundamental to fully understand the origin
of some phenotypic abnormalities and their associated diseases. Biomedical
literature is the most comprehensive source of these relations; however, we
need Relation Extraction tools to automatically recognize them. Most of these
tools require an annotated corpus and to the best of our knowledge, there is no
corpus available annotated with human phenotype-gene relations. This paper
presents the Phenotype-Gene Relations (PGR) corpus, a silver standard corpus of
human phenotype and gene annotations and their relations. The corpus consists
of 1712 abstracts, 5676 human phenotype annotations, 13835 gene annotations,
and 4283 relations. We generated this corpus using Named-Entity Recognition
tools, whose results were partially evaluated by eight curators, obtaining a
precision of 87.01%. Using the corpus, we were able to obtain promising
results with two state-of-the-art deep learning tools, namely a precision of
78.05%. The PGR corpus was made publicly available to the research
community.
| 2,019 | Computation and Language |
Language Model Adaptation for Language and Dialect Identification of
Text | This article describes an unsupervised language model adaptation approach
that can be used to enhance the performance of language identification methods.
The approach is applied to a current version of the HeLI language
identification method, which is now called HeLI 2.0. We describe the HeLI 2.0
method in detail. The resulting system is evaluated using the datasets from the
German dialect identification and Indo-Aryan language identification shared
tasks of the VarDial workshops 2017 and 2018. The new approach with language
identification provides considerably higher F1-scores than the previous HeLI
method or the other systems which participated in the shared tasks. The results
indicate that unsupervised language model adaptation should be considered as an
option in all language identification tasks, especially in those where
encountering out-of-domain data is likely.
| 2,019 | Computation and Language |
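HeLI itself relies on word- and character-n-gram language models with a specific back-off and adaptation scheme; the sketch below is not HeLI but illustrates the underlying idea of scoring a text against per-language character n-gram models (the sample data and smoothing constant are assumptions):

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    text = f" {text.lower()} "
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def train(samples_by_lang, n=3):
    models = {}
    for lang, texts in samples_by_lang.items():
        counts = Counter(g for t in texts for g in char_ngrams(t, n))
        total = sum(counts.values())
        models[lang] = {g: c / total for g, c in counts.items()}
    return models

def identify(text, models, n=3, unseen=1e-6):
    # pick the language whose n-gram model gives the highest log-probability
    return max(models, key=lambda lang: sum(
        math.log(models[lang].get(g, unseen)) for g in char_ngrams(text, n)))

models = train({"de": ["ich bin hier", "das ist gut"],
                "en": ["i am here", "that is good"]})
print(identify("das ist sehr gut", models))   # -> "de"
```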
A New Approach for Semi-automatic Building and Extending a Multilingual
Terminology Thesaurus | This paper describes a new system for semi-automatically building, extending
and managing a terminological thesaurus---a multilingual terminology dictionary
enriched with relationships between the terms themselves to form a thesaurus.
The system radically enhances the workflow of current terminology expert
groups, where most of the editing decisions still come from introspection. The
presented system supplements the lexicographic process with natural language
processing techniques, which are seamlessly integrated into the
thesaurus editing environment. The system's methodology and the resulting
thesaurus are closely connected to new domain corpora in the six languages
involved. They are used for term usage examples as well as for the automatic
extraction of new candidate terms. The terminological thesaurus is now
accessible via a web-based application, which a) presents rich detailed
information on each term, b) visualizes term relations, and c) displays
real-life usage examples of the term in the domain-related documents and in the
context-based similar terms. Furthermore, the specialized corpora are used to
detect candidate translations of terms from the central language (Czech) to the
other languages (English, French, German, Russian and Slovak) as well as to
detect broader Czech terms, which help to place new terms in the actual
thesaurus hierarchy. This project has been realized as a terminological
thesaurus of land surveying, but the presented tools and methodology are
reusable for other terminology domains.
| 2,019 | Computation and Language |
A Probabilistic Generative Model of Linguistic Typology | In the principles-and-parameters framework, the structural features of
languages depend on parameters that may be toggled on or off, with a single
parameter often dictating the status of multiple features. The implied
covariance between features inspires our probabilisation of this line of
linguistic inquiry---we develop a generative model of language based on
exponential-family matrix factorisation. By modelling all languages and
features within the same architecture, we show how structural similarities
between languages can be exploited to predict typological features with
near-perfect accuracy, outperforming several baselines on the task of
predicting held-out features. Furthermore, we show that language embeddings
pre-trained on monolingual text allow for generalisation to unobserved
languages. This finding has clear practical and also theoretical implications:
the results confirm what linguists have hypothesised, i.e.~that there are
significant correlations between typological features and languages.
| 2,019 | Computation and Language |
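The exponential-family matrix factorisation described above can be illustrated, for binary typological features, as logistic matrix factorisation trained by gradient descent; the sizes, learning rate and toy data below are assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
L, F, K = 40, 25, 8                              # languages, binary features, latent dim
X = (rng.random((L, F)) < 0.5).astype(float)     # toy feature matrix (1 = feature present)
U = 0.01 * rng.standard_normal((L, K))           # language embeddings
V = 0.01 * rng.standard_normal((F, K))           # feature embeddings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    P = sigmoid(U @ V.T)                         # predicted Bernoulli parameters
    G = P - X                                    # gradient of the negative log-likelihood
    dU, dV = G @ V / F, G.T @ U / L
    U -= lr * dU
    V -= lr * dV

print("mean reconstruction accuracy:", ((sigmoid(U @ V.T) > 0.5) == X).mean())
```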
Deep Learning and Word Embeddings for Tweet Classification for Crisis
Response | Traditional tweet classification models for crisis response focus on
convolutional layers and domain-specific word embeddings. In this paper, we
study the application of different neural networks with general-purpose and
domain-specific word embeddings to investigate their ability to improve the
performance of tweet classification models. We evaluate four tweet
classification models on the CrisisNLP dataset and obtain comparable results,
which indicate that general-purpose word embeddings such as GloVe can be used
instead of domain-specific word embeddings, especially with a Bi-LSTM, which
reported the highest performance with an F1 score of 62.04%.
| 2,019 | Computation and Language |
ner and pos when nothing is capitalized | For those languages which use it, capitalization is an important signal for
the fundamental NLP tasks of Named Entity Recognition (NER) and Part of Speech
(POS) tagging. In fact, it is such a strong signal that model performance on
these tasks drops sharply in common lowercased scenarios, such as noisy web
text or machine translation outputs. In this work, we perform a systematic
analysis of solutions to this problem, modifying only the casing of the train
or test data using lowercasing and truecasing methods. While prior work and
first impressions might suggest training a caseless model, or using a truecaser
at test time, we show that the most effective strategy is a concatenation of
cased and lowercased training data, producing a single model with high
performance on both cased and uncased text. As shown in our experiments, this
result holds across tasks and input representations. Finally, we show that our
proposed solution gives an 8% F1 improvement in mention detection on noisy
out-of-domain Twitter data.
| 2,019 | Computation and Language |
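The strategy the abstract above finds most effective, concatenating cased and lowercased copies of the training data, is easy to reproduce as a preprocessing step; the toy NER example below is illustrative only:

```python
def augment_with_lowercase(dataset):
    """Return the original sentences plus a lowercased copy of each,
    keeping the token-level labels unchanged."""
    augmented = list(dataset)
    for tokens, labels in dataset:
        augmented.append(([t.lower() for t in tokens], labels))
    return augmented

train_data = [(["Obama", "visited", "Paris"], ["B-PER", "O", "B-LOC"])]
for tokens, labels in augment_with_lowercase(train_data):
    print(tokens, labels)
```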
On Attribution of Recurrent Neural Network Predictions via Additive
Decomposition | RNN models have achieved the state-of-the-art performance in a wide range of
text mining tasks. However, these models are often regarded as black-boxes and
are criticized due to the lack of interpretability. In this paper, we enhance
the interpretability of RNNs by providing interpretable rationales for RNN
predictions. Nevertheless, interpreting RNNs is a challenging problem. Firstly,
unlike existing methods that rely on local approximation, we aim to provide
rationales that are more faithful to the decision making process of RNN models.
Secondly, a flexible interpretation method should be able to assign
contribution scores to text segments of varying lengths, instead of only to
individual words. To tackle these challenges, we propose a novel attribution
method, called REAT, to provide interpretations to RNN predictions. REAT
decomposes the final prediction of an RNN into the additive contributions of each
word in the input text. This additive decomposition enables REAT to further
obtain phrase-level attribution scores. In addition, REAT is generally
applicable to various RNN architectures, including GRU, LSTM and their
bidirectional versions. Experimental results demonstrate the faithfulness and
interpretability of the proposed attribution method. Comprehensive analysis
shows that our attribution method could unveil the useful linguistic knowledge
captured by RNNs. Some analysis further demonstrates our method could be
utilized as a debugging tool to examine the vulnerability and failure reasons
of RNNs, which may lead to several promising future directions to promote
generalization ability of RNNs.
| 2,019 | Computation and Language |
CSS10: A Collection of Single Speaker Speech Datasets for 10 Languages | We describe our development of CSS10, a collection of single speaker speech
datasets for ten languages. It is composed of short audio clips from LibriVox
audiobooks and their aligned texts. To validate its quality we train two neural
text-to-speech models on each dataset. Subsequently, we conduct Mean Opinion
Score tests on the synthesized speech samples. We make our datasets,
pre-trained models, and test resources publicly available. We hope they will be
used for future speech tasks.
| 2,019 | Computation and Language |
Grammatical Error Correction and Style Transfer via Zero-shot
Monolingual Translation | Both grammatical error correction and text style transfer can be viewed as
monolingual sequence-to-sequence transformation tasks, but the scarcity of
directly annotated data for either task makes them infeasible for most
languages. We present an approach that does both tasks within the same trained
model, and only uses regular language parallel data, without requiring
error-corrected or style-adapted texts. We apply our model to three languages
and present a thorough evaluation on both tasks, showing that the model is
reliable for a number of error types and style transfer aspects.
| 2,019 | Computation and Language |
Multilevel Text Normalization with Sequence-to-Sequence Networks and
Multisource Learning | We define multilevel text normalization as sequence-to-sequence processing
that transforms naturally noisy text into a sequence of normalized units of
meaning (morphemes) in three steps: 1) writing normalization, 2) lemmatization,
3) canonical segmentation. These steps are traditionally considered separate
NLP tasks, with diverse solutions, evaluation schemes and data sources. We
exploit the fact that all these tasks involve sub-word sequence-to-sequence
transformation to propose a systematic solution for all of them using neural
encoder-decoder technology. The specific challenge that we tackle in this paper
is integrating the traditional know-how on separate tasks into the neural
sequence-to-sequence framework to improve the state of the art. We address this
challenge by enriching the general framework with mechanisms that allow
processing the information on multiple levels of text organization (characters,
morphemes, words, sentences) in combination with structural information
(multilevel language model, part-of-speech) and heterogeneous sources (text,
dictionaries). We show that our solution consistently improves on the current
methods in all three steps. In addition, we analyze the performance of our
system to show the specific contribution of the integrating components to the
overall improvement.
| 2,019 | Computation and Language |
Does My Rebuttal Matter? Insights from a Major NLP Conference | Peer review is a core element of the scientific process, particularly in
conference-centered fields such as ML and NLP. However, only few studies have
evaluated its properties empirically. Aiming to fill this gap, we present a
corpus that contains over 4k reviews and 1.2k author responses from ACL-2018.
We quantitatively and qualitatively assess the corpus. This includes a pilot
study on paper weaknesses given by reviewers and on quality of author
responses. We then focus on the role of the rebuttal phase, and propose a novel
task to predict after-rebuttal (i.e., final) scores from initial reviews and
author responses. Although author responses do have a marginal (and
statistically significant) influence on the final scores, especially for
borderline papers, our results suggest that a reviewer's final score is largely
determined by her initial score and the distance to the other reviewers'
initial scores. In this context, we discuss the conformity bias inherent to
peer reviewing, a bias that has largely been overlooked in previous research.
We hope our analyses will help better assess the usefulness of the rebuttal
phase in NLP conferences.
| 2,019 | Computation and Language |
Learning semantic sentence representations from visually grounded
language without lexical knowledge | Current approaches to learning semantic representations of sentences often
use prior word-level knowledge. The current study aims to leverage visual
information in order to capture sentence level semantics without the need for
word embeddings. We use a multimodal sentence encoder trained on a corpus of
images with matching text captions to produce visually grounded sentence
embeddings. Deep Neural Networks are trained to map the two modalities to a
common embedding space such that for an image the corresponding caption can be
retrieved and vice versa. We show that our model achieves results comparable to
the current state-of-the-art on two popular image-caption retrieval benchmark
data sets: MSCOCO and Flickr8k. We evaluate the semantic content of the
resulting sentence embeddings using the data from the Semantic Textual
Similarity benchmark task and show that the multimodal embeddings correlate
well with human semantic similarity judgements. The system achieves
state-of-the-art results on several of these benchmarks, which shows that a
system trained solely on multimodal data, without assuming any word
representations, is able to capture sentence level semantics. Importantly, this
result shows that we do not need prior knowledge of lexical level semantics in
order to model sentence level semantics. These findings demonstrate the
importance of visual information in semantics.
| 2,019 | Computation and Language |
Structural Neural Encoders for AMR-to-text Generation | AMR-to-text generation is a problem recently introduced to the NLP community,
in which the goal is to generate sentences from Abstract Meaning Representation
(AMR) graphs. Sequence-to-sequence models can be used to this end by converting
the AMR graphs to strings. Approaching the problem while working directly with
graphs requires the use of graph-to-sequence models that encode the AMR graph
into a vector representation. Such encoding has been shown to be beneficial in
the past, and unlike sequential encoding, it allows us to explicitly capture
reentrant structures in the AMR graphs. We investigate the extent to which
reentrancies (nodes with multiple parents) have an impact on AMR-to-text
generation by comparing graph encoders to tree encoders, where reentrancies are
not preserved. We show that improvements in the treatment of reentrancies and
long-range dependencies contribute to higher overall scores for graph encoders.
Our best model achieves 24.40 BLEU on LDC2015E86, outperforming the state of
the art by 1.1 points and 24.54 BLEU on LDC2017T10, outperforming the state of
the art by 1.24 points.
| 2,019 | Computation and Language |
Using Monolingual Data in Neural Machine Translation: a Systematic Study | Neural Machine Translation (MT) has radically changed the way systems are
developed. A major difference with the previous generation (Phrase-Based MT) is
the way monolingual target data, which often abounds, is used in these two
paradigms. While Phrase-Based MT can seamlessly integrate very large language
models trained on billions of sentences, the best option for Neural MT
developers seems to be the generation of artificial parallel data through
\textsl{back-translation} - a technique that fails to fully take advantage of
existing datasets. In this paper, we conduct a systematic study of
back-translation, comparing alternative uses of monolingual data, as well as
multiple data generation procedures. Our findings confirm that back-translation
is very effective and give new explanations as to why this is the case. We also
introduce new data simulation techniques that are almost as effective, yet much
cheaper to implement.
| 2,019 | Computation and Language |
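Back-translation, as studied above, amounts to translating monolingual target-side text back into the source language and pairing the synthetic source with the original target; the callable below is a hypothetical stand-in for a reverse-direction translation model:

```python
def back_translate(mono_target_sentences, translate_tgt_to_src):
    """Build synthetic (source, target) training pairs from monolingual
    target-language data. `translate_tgt_to_src` is any target->source
    translation function (hypothetical placeholder here)."""
    synthetic = []
    for tgt in mono_target_sentences:
        src = translate_tgt_to_src(tgt)   # synthetic, possibly noisy source side
        synthetic.append((src, tgt))      # target side stays human-written
    return synthetic

# toy usage with a dummy "translator"
pairs = back_translate(["guten morgen", "vielen dank"],
                       translate_tgt_to_src=lambda s: f"<src of: {s}>")
print(pairs)
```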
Text Processing Like Humans Do: Visually Attacking and Shielding NLP
Systems | Visual modifications to text are often used to obfuscate offensive comments
in social media (e.g., "!d10t") or as a writing style ("1337" in "leet speak"),
among other scenarios. We consider this as a new type of adversarial attack in
NLP, a setting to which humans are very robust, as our experiments with both
simple and more difficult visual input perturbations demonstrate. We then
investigate the impact of visual adversarial attacks on current NLP systems on
character-, word-, and sentence-level tasks, showing that both neural and
non-neural models are, in contrast to humans, extremely sensitive to such
attacks, suffering performance decreases of up to 82\%. We then explore three
shielding methods---visual character embeddings, adversarial training, and
rule-based recovery---which substantially improve the robustness of the models.
However, the shielding methods still fall behind performances achieved in
non-attack scenarios, which demonstrates the difficulty of dealing with visual
attacks.
| 2,020 | Computation and Language |
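A minimal sketch of the kind of visual perturbation studied above, substituting characters with visually similar ones; the substitution table is a tiny illustrative subset, not the visual embeddings or attack budget used in the paper:

```python
import random

HOMOGLYPHS = {                     # tiny illustrative table of look-alikes
    "a": ["@", "à"], "e": ["3", "é"], "i": ["1", "í"],
    "o": ["0", "ö"], "s": ["5", "$"], "l": ["1", "|"],
}

def visually_perturb(text, p=0.5, seed=0):
    rng = random.Random(seed)
    chars = []
    for ch in text:
        subs = HOMOGLYPHS.get(ch.lower())
        chars.append(rng.choice(subs) if subs and rng.random() < p else ch)
    return "".join(chars)

print(visually_perturb("this is an idiotic comment"))
```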
Visualization and Interpretation of Latent Spaces for Controlling
Expressive Speech Synthesis through Audio Analysis | The field of Text-to-Speech has seen huge improvements in recent years,
benefiting from deep learning techniques. Producing realistic speech is now
possible. As a consequence, research on controlling expressiveness, i.e.,
generating speech in different styles or manners, has attracted increasing
attention lately. Systems able to control style have been developed and show
impressive results. However, the control parameters often consist of latent
variables and remain complex to interpret. In this paper, we
analyze and compare different latent spaces and obtain an interpretation of
their influence on expressive speech. This will make it possible to build
controllable speech synthesis systems with understandable behaviour.
| 2,019 | Computation and Language |
An Improved Approach for Semantic Graph Composition with CCG | This paper builds on previous work using Combinatory Categorial Grammar (CCG)
to derive a transparent syntax-semantics interface for Abstract Meaning
Representation (AMR) parsing. We define new semantics for the CCG combinators
that is better suited to deriving AMR graphs. In particular, we define
relation-wise alternatives for the application and composition combinators:
these require that the two constituents being combined overlap in one AMR
relation. We also provide a new semantics for type raising, which is necessary
for certain constructions. Using these mechanisms, we suggest an analysis of
eventive nouns, which present a challenge for deriving AMR graphs. Our
theoretical analysis will facilitate future work on robust and transparent AMR
parsing using CCG.
| 2,019 | Computation and Language |
A Large-Scale Multi-Length Headline Corpus for Analyzing
Length-Constrained Headline Generation Model Evaluation | Browsing news articles on multiple devices is now possible. The lengths of
news article headlines have precise upper bounds, dictated by the size of the
display of the relevant device or interface. Therefore, controlling the length
of headlines is essential when applying the task of headline generation to news
production. However, because there is no corpus of headlines of multiple
lengths for a given article, previous research on controlling output length in
headline generation has not discussed whether the system outputs could be
adequately evaluated without multiple references of different lengths. In this
paper, we introduce two corpora, which are Japanese News Corpus (JNC) and
JApanese MUlti-Length Headline Corpus (JAMUL), to confirm the validity of
previous evaluation settings. The JNC provides common supervision data for
headline generation. The JAMUL is a large-scale evaluation dataset for
headlines of three different lengths composed by professional editors. We
report new findings on these corpora; for example, although the longest length
reference summary can appropriately evaluate the existing methods controlling
output length, this evaluation setting has several problems.
| 2,019 | Computation and Language |
A dataset for resolving referring expressions in spoken dialogue via
contextual query rewrites (CQR) | We present Contextual Query Rewrite (CQR), a dataset for multi-domain
task-oriented spoken dialogue systems that is an extension of the Stanford
dialog corpus (Eric et al., 2017a). While previous approaches have addressed
the issue of diverse schemas by learning candidate transformations (Naik et
al., 2018), we instead model the reference resolution task as a user query
reformulation task, where the dialog state is serialized into a natural
language query that can be executed by the downstream spoken language
understanding system. In this paper, we describe our methodology for creating
the query reformulation extension to the dialog corpus, and present an initial
set of experiments to establish a baseline for the CQR task. We have released
the corpus to the public [1] to support further research in this area.
| 2,019 | Computation and Language |
Sogou Machine Reading Comprehension Toolkit | Machine reading comprehension has been intensively studied in recent years,
and neural network-based models have shown dominant performance. In this
paper, we present the Sogou Machine Reading Comprehension (SMRC) toolkit, which
enables the fast and efficient development of modern machine
comprehension models, including both published models and original prototypes.
To achieve this goal, the toolkit provides dataset readers, a flexible
preprocessing pipeline, necessary neural network components, and built-in
models, which make the whole process of data preparation, model construction,
and training easier.
| 2,019 | Computation and Language |
Mining Discourse Markers for Unsupervised Sentence Representation
Learning | Current state of the art systems in NLP heavily rely on manually annotated
datasets, which are expensive to construct. Very little work adequately
exploits unannotated data -- such as discourse markers between sentences --
mainly because of data sparseness and ineffective extraction methods. In the
present work, we propose a method to automatically discover sentence pairs with
relevant discourse markers, and apply it to massive amounts of data. Our
resulting dataset contains 174 discourse markers with at least 10k examples
each, even for rare markers such as coincidentally or amazingly. We use the
resulting data as supervision for learning transferable sentence embeddings. In
addition, we show that even though sentence representation learning through
prediction of discourse markers yields state of the art results across
different transfer tasks, it is not clear that our models made use of the
semantic relation between sentences, thus leaving room for further
improvements. Our datasets are publicly available
(https://github.com/synapse-developpement/Discovery)
| 2,019 | Computation and Language |
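The discovery step described above boils down to finding adjacent sentence pairs in which the second sentence opens with a known discourse marker; the marker list and sample text below are illustrative assumptions:

```python
import re

MARKERS = ["however", "for example", "in contrast", "coincidentally", "amazingly"]
PATTERN = re.compile(r"^(%s)\b[\s,]*" % "|".join(MARKERS), re.IGNORECASE)

def mine_pairs(sentences):
    """Yield (first_sentence, second_sentence_without_marker, marker)
    for adjacent sentence pairs linked by a discourse marker."""
    for s1, s2 in zip(sentences, sentences[1:]):
        m = PATTERN.match(s2)
        if m:
            yield s1, s2[m.end():], m.group(1).lower()

doc = ["The model is tiny.", "However, it matches the larger baseline.",
       "Training takes an hour."]
print(list(mine_pairs(doc)))
```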
Imbalanced Sentiment Classification Enhanced with Discourse Marker | Imbalanced data commonly exists in the real world, especially in
sentiment-related corpora, making it difficult to train a classifier to
distinguish latent sentiment in text data. We observe that humans often express
transitional emotion between two adjacent discourses with discourse markers
like "but", "though", "while", etc., and that the head discourse and the tail
discourse usually indicate opposite emotional tendencies. Based on this
observation, we propose a novel plug-and-play method, which first samples
discourses according to transitional discourse markers and then validates
sentimental polarities with the help of a pretrained attention-based model. Our
method increases sample diversity and can serve as an upstream preprocessing
step for data augmentation. We conduct experiments on three public sentiment
datasets with several frequently used algorithms. Results show that our method
is consistently effective, even in highly imbalanced scenarios, and can easily
be integrated with oversampling methods to boost the performance on imbalanced
sentiment classification.
| 2,019 | Computation and Language |
Handling Noisy Labels for Robustly Learning from Self-Training Data for
Low-Resource Sequence Labeling | In this paper, we address the problem of effectively self-training neural
networks in a low-resource setting. Self-training is frequently used to
automatically increase the amount of training data. However, in a low-resource
scenario, it is less effective due to unreliable annotations created using
self-labeling of unlabeled data. We propose to combine self-training with noise
handling on the self-labeled data. Directly estimating noise on the combined
clean training set and self-labeled data can lead to corruption of the clean
data and hence, performs worse. Thus, we propose the Clean and Noisy Label
Neural Network which trains on clean and noisy self-labeled data simultaneously
by explicitly modelling clean and noisy labels separately. In our experiments
on Chunking and NER, this approach performs more robustly than the baselines.
Complementary to this explicit approach, noise can also be handled implicitly
with the help of an auxiliary learning task. When combined with such a
complementary approach, our method is more beneficial than other baseline
methods, and together they provide the best performance overall.
| 2,019 | Computation and Language |
Train, Sort, Explain: Learning to Diagnose Translation Models | Evaluating translation models is a trade-off between effort and detail. On
the one end of the spectrum there are automatic count-based methods such as
BLEU, on the other end linguistic evaluations by humans, which arguably are
more informative but also require a disproportionately high effort. To narrow
the spectrum, we propose a general approach on how to automatically expose
systematic differences between human and machine translations to human experts.
Inspired by adversarial settings, we train a neural text classifier to
distinguish human from machine translations. A classifier that performs and
generalizes well after training should recognize systematic differences between
the two classes, which we uncover with neural explainability methods. Our
proof-of-concept implementation, DiaMaT, is open source. Applied to a dataset
translated by a state-of-the-art neural Transformer model, DiaMaT achieves a
classification accuracy of 75% and exposes meaningful differences between
humans and the Transformer, amidst the current discussion about human parity.
| 2,019 | Computation and Language |
Distilling Task-Specific Knowledge from BERT into Simple Neural Networks | In the natural language processing literature, neural networks are becoming
increasingly deep and complex. The recent poster child of this trend is the
deep language representation model, which includes BERT, ELMo, and GPT. These
developments have led to the conviction that previous-generation, shallower
neural networks for language understanding are obsolete. In this paper,
however, we demonstrate that rudimentary, lightweight neural networks can still
be made competitive without architecture changes, external training data, or
additional input features. We propose to distill knowledge from BERT, a
state-of-the-art language representation model, into a single-layer BiLSTM, as
well as its siamese counterpart for sentence-pair tasks. Across multiple
datasets in paraphrasing, natural language inference, and sentiment
classification, we achieve comparable results with ELMo, while using roughly
100 times fewer parameters and 15 times less inference time.
| 2,019 | Computation and Language |
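One common way to realize the distillation described above is to regress the student's logits onto the teacher's while also fitting the gold labels; the loss below is such a sketch (the paper's exact objective and weighting may differ), with toy tensors standing in for the BiLSTM student and BERT teacher:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5):
    """Blend hard-label cross-entropy with an MSE term that pulls the
    student's logits toward the teacher's."""
    ce = F.cross_entropy(student_logits, labels)
    mse = F.mse_loss(student_logits, teacher_logits)
    return alpha * ce + (1.0 - alpha) * mse

student_logits = torch.randn(4, 2, requires_grad=True)   # toy student outputs
teacher_logits = torch.randn(4, 2)                        # toy teacher logits
labels = torch.tensor([0, 1, 1, 0])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```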
Resilient Combination of Complementary CNN and RNN Features for Text
Classification through Attention and Ensembling | State-of-the-art methods for text classification include several distinct
steps of pre-processing, feature extraction and post-processing. In this work,
we focus on end-to-end neural architectures and show that the best performance
in text classification is obtained by combining information from different
neural modules. Concretely, we combine convolution, recurrent and attention
modules with ensemble methods and show that they are complementary. We
introduce ECGA, an end-to-end go-to architecture for novel text classification
tasks. We prove that it is efficient and robust, as it attains or surpasses the
state-of-the-art on varied datasets, including both low and high data regimes.
| 2,019 | Computation and Language |
Modeling Acoustic-Prosodic Cues for Word Importance Prediction in Spoken
Dialogues | Prosodic cues in conversational speech aid listeners in discerning a message.
We investigate whether acoustic cues in spoken dialogue can be used to identify
the importance of individual words to the meaning of a conversation turn.
Individuals who are Deaf and Hard of Hearing often rely on real-time captions
in live meetings. Word error rate, a traditional metric for evaluating
automatic speech recognition, fails to capture that some words are more
important for a system to transcribe correctly than others. We present and
evaluate neural architectures that use acoustic features for 3-class word
importance prediction. Our model performs competitively against
state-of-the-art text-based word-importance prediction models, and it
demonstrates particular benefits when operating on imperfect ASR output.
| 2,019 | Computation and Language |
In Search of Meaning: Lessons, Resources and Next Steps for
Computational Analysis of Financial Discourse | We critically assess mainstream accounting and finance research applying
methods from computational linguistics (CL) to study financial discourse. We
also review common themes and innovations in the literature and assess the
incremental contributions of work applying CL methods over manual content
analysis. Key conclusions emerging from our analysis are: (a) accounting and
finance research is behind the curve in terms of CL methods generally and word
sense disambiguation in particular; (b) implementation issues mean the proposed
benefits of CL are often less pronounced than proponents suggest; (c)
structural issues limit practical relevance; and (d) CL methods and high
quality manual analysis represent complementary approaches to analyzing
financial discourse. We describe four CL tools that have yet to gain traction
in mainstream AF research but which we believe offer promising ways to enhance
the study of meaning in financial discourse. The four tools are named entity
recognition (NER), summarization, semantics and corpus linguistics.
| 2,019 | Computation and Language |
Acoustically Grounded Word Embeddings for Improved Acoustics-to-Word
Speech Recognition | Direct acoustics-to-word (A2W) systems for end-to-end automatic speech
recognition are simpler to train, and more efficient to decode with, than
sub-word systems. However, A2W systems can have difficulties at training time
when data is limited, and at decoding time when recognizing words outside the
training vocabulary. To address these shortcomings, we investigate the use of
recently proposed acoustic and acoustically grounded word embedding techniques
in A2W systems. The idea is based on treating the final pre-softmax weight
matrix of an A2W recognizer as a matrix of word embedding vectors, and using an
externally trained set of word embeddings to improve the quality of this
matrix. In particular we introduce two ideas: (1) Enforcing similarity at
training time between the external embeddings and the recognizer weights, and
(2) using the word embeddings at test time for predicting out-of-vocabulary
words. Our word embedding model is acoustically grounded, that is it is learned
jointly with acoustic embeddings so as to encode the words' acoustic-phonetic
content; and it is parametric, so that it can embed any arbitrary (potentially
out-of-vocabulary) sequence of characters. We find that both techniques improve
the performance of an A2W recognizer on conversational telephone speech.
| 2,019 | Computation and Language |
A General FOFE-net Framework for Simple and Effective Question Answering
over Knowledge Bases | Question answering over knowledge base (KB-QA) has recently become a popular
research topic in NLP. One popular way to solve the KB-QA problem is to make
use of a pipeline of several NLP modules, including entity discovery and
linking (EDL) and relation detection. Recent success on KB-QA task usually
involves complex network structures with sophisticated heuristics. Inspired by
a previous work that builds a strong KB-QA baseline, we propose a simple but
general neural model composed of fixed-size ordinally forgetting encoding
(FOFE) and deep neural networks, called FOFE-net to solve KB-QA problem at
different stages. For evaluation, we use two popular KB-QA datasets,
SimpleQuestions and WebQSP, and a newly created dataset, FreebaseQA. The
experimental results show that FOFE-net performs well on KB-QA subtasks, entity
discovery and linking (EDL) and relation detection, and in turn pushes the
overall KB-QA system to achieve strong results on all datasets.
| 2,019 | Computation and Language |
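The fixed-size ordinally forgetting encoding (FOFE) at the heart of FOFE-net compresses a variable-length token sequence into one vector via the recursion z_t = alpha * z_{t-1} + one_hot(w_t); a minimal sketch, where the vocabulary size and forgetting factor are illustrative:

```python
import numpy as np

def fofe_encode(token_ids, vocab_size, alpha=0.7):
    """Fixed-size Ordinally Forgetting Encoding:
    z_t = alpha * z_{t-1} + one_hot(w_t)."""
    z = np.zeros(vocab_size)
    for tok in token_ids:
        z *= alpha          # earlier tokens are exponentially down-weighted
        z[tok] += 1.0
    return z

print(fofe_encode([2, 0, 3, 0], vocab_size=5, alpha=0.7))
```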
Attention-Augmented End-to-End Multi-Task Learning for Emotion
Prediction from Speech | Despite the increasing research interest in end-to-end learning systems for
speech emotion recognition, conventional systems either suffer from the
overfitting due in part to the limited training data, or do not explicitly
consider the different contributions of automatically learnt representations
for a specific task. In this contribution, we propose a novel end-to-end
framework which is enhanced by learning other auxiliary tasks and an attention
mechanism. That is, we jointly train an end-to-end network with several
different but related emotion prediction tasks, i.e., arousal, valence, and
dominance predictions, to extract more robust representations shared among
various tasks than traditional systems with the hope that it is able to relieve
the overfitting problem. Meanwhile, an attention layer is implemented on top of
the layers for each task, with the aim to capture the contribution distribution
of different segment parts for each individual task. To evaluate the
effectiveness of the proposed system, we conducted a set of experiments on the
widely used database IEMOCAP. The empirical results show that the proposed
systems significantly outperform corresponding baseline systems.
| 2,019 | Computation and Language |
Train One Get One Free: Partially Supervised Neural Network for Bug
Report Duplicate Detection and Clustering | Tracking user reported bugs requires considerable engineering effort in going
through many repetitive reports and assigning them to the correct teams. This
paper proposes a neural architecture that can jointly (1) detect if two bug
reports are duplicates, and (2) aggregate them into latent topics. Leveraging
the assumption that learning the topic of a bug is a sub-task for detecting
duplicates, we design a loss function that can jointly perform both tasks but
needs supervision for only duplicate classification, achieving topic clustering
in an unsupervised fashion. We use a two-step attention module that uses
self-attention for topic clustering and conditional attention for duplicate
detection. We study the characteristics of two types of real world datasets
that have been marked for duplicate bugs by engineers and by non-technical
annotators. The results demonstrate that our model not only can outperform
state-of-the-art methods for duplicate classification on both cases, but can
also learn meaningful latent clusters without additional supervision.
| 2,019 | Computation and Language |
A framework for fake review detection in online consumer electronics
retailers | The impact of online reviews on businesses has grown significantly during
recent years, and has become crucial in determining business success in a wide
array of sectors, ranging from restaurants and hotels to e-commerce. Unfortunately, some
users use unethical means to improve their online reputation by writing fake
reviews of their businesses or competitors. Previous research has addressed
fake review detection in a number of domains, such as product or business
reviews in restaurants and hotels. However, in spite of its economical
interest, the domain of consumer electronics businesses has not yet been
thoroughly studied. This article proposes a feature framework for detecting
fake reviews that has been evaluated in the consumer electronics domain. The
contributions are fourfold: (i) Construction of a dataset for classifying fake
reviews in the consumer electronics domain in four different cities based on
scraping techniques; (ii) definition of a feature framework for fake review
detection; (iii) development of a fake review classification method based on
the proposed framework and (iv) evaluation and analysis of the results for each
of the cities under study. We have reached an 82% F-Score on the classification
task, and the AdaBoost classifier has proven to be the best one by statistical
means, according to the Friedman test.
| 2,019 | Computation and Language |
Frowning Frodo, Wincing Leia, and a Seriously Great Friendship: Learning
to Classify Emotional Relationships of Fictional Characters | The development of a fictional plot is centered around characters who closely
interact with each other forming dynamic social networks. In literature
analysis, such networks have mostly been analyzed without particular relation
types or focusing on roles which the characters take with respect to each
other. We argue that an important aspect for the analysis of stories and their
development is the emotion between characters. In this paper, we combine these
aspects into a unified framework to classify emotional relationships of
fictional characters. We formalize it as a new task and describe the annotation
of a corpus, based on fan-fiction short stories. The extraction pipeline which
we propose consists of character identification (which we treat as given by an
oracle here) and the relation classification. For the latter, we provide
results using several approaches previously proposed for relation
identification with neural methods. The best result of 0.45 F1 is achieved with
a GRU with character position indicators on the task of predicting undirected
emotion relations in the associated social network graph.
| 2,019 | Computation and Language |
Towards Knowledge-Based Personalized Product Description Generation in
E-commerce | Quality product descriptions are critical for providing competitive customer
experience in an e-commerce platform. An accurate and attractive description
not only helps customers make an informed decision but also improves the
likelihood of purchase. However, crafting a successful product description is
tedious and highly time-consuming. Due to its importance, automating the
product description generation has attracted considerable interest from both
research and industrial communities. Existing methods mainly use templates or
statistical methods, and their performance could be rather limited. In this
paper, we explore a new way to generate the personalized product description by
combining the power of neural networks and knowledge base. Specifically, we
propose a KnOwledge Based pErsonalized (or KOBE) product description generation
model in the context of e-commerce. In KOBE, we extend the encoder-decoder
framework, the Transformer, to a sequence modeling formulation using
self-attention. In order to make the description both informative and
personalized, KOBE considers a variety of important factors during text
generation, including product aspects, user categories, and knowledge base,
etc. Experiments on real-world datasets demonstrate that the proposed method
outperforms the baseline on various metrics. KOBE can achieve an improvement
of 9.7% over state-of-the-arts in terms of BLEU. We also present several case
studies as the anecdotal evidence to further prove the effectiveness of the
proposed approach. The framework has been deployed in Taobao, the largest
online e-commerce platform in China.
| 2,019 | Computation and Language |
Re-Ranking Words to Improve Interpretability of Automatically Generated
Topics | Topic models, such as LDA, are widely used in Natural Language Processing.
Making their output interpretable is an important area of research with
applications to areas such as the enhancement of exploratory search interfaces
and the development of interpretable machine learning models. Conventionally,
topics are represented by their n most probable words; however, these
representations are often difficult for humans to interpret. This paper
explores the re-ranking of topic words to generate more interpretable topic
representations. A range of approaches are compared and evaluated in two
experiments. The first uses crowdworkers to associate topics represented by
different word rankings with related documents. The second experiment is an
automatic approach based on a document retrieval task applied on multiple
domains. Results in both experiments demonstrate that re-ranking words improves
topic interpretability and that the most effective re-ranking schemes were
those which combine information about the importance of words both within
topics and their relative frequency in the entire corpus. In addition, close
correlation between the results of the two evaluation approaches suggests that
the automatic method proposed here could be used to evaluate re-ranking methods
without the need for human judgements.
| 2,019 | Computation and Language |
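One family of re-ranking schemes of the kind compared above mixes a word's within-topic probability with its lift over the background corpus frequency; the weighting scheme and toy numbers below are assumptions, not the paper's exact formulation:

```python
import numpy as np

def rerank_topic_words(p_word_given_topic, p_word_in_corpus, vocab, lam=0.6, top_n=5):
    """Score = lam * log p(w|t) + (1 - lam) * log [p(w|t) / p(w)],
    balancing within-topic importance against overall corpus frequency."""
    pt = np.asarray(p_word_given_topic, dtype=float)
    pc = np.asarray(p_word_in_corpus, dtype=float)
    score = lam * np.log(pt + 1e-12) + (1 - lam) * np.log(pt / (pc + 1e-12) + 1e-12)
    return [vocab[i] for i in np.argsort(-score)[:top_n]]

vocab = ["game", "team", "said", "season", "year", "coach"]
print(rerank_topic_words([.20, .18, .25, .12, .15, .10],
                         [.01, .01, .20, .01, .15, .005], vocab))
```

In this toy example the frequent but uninformative word "said" is demoted even though it has the highest within-topic probability.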
Integrating Semantic Knowledge to Tackle Zero-shot Text Classification | Insufficient or even unavailable training data of emerging classes is a big
challenge of many classification tasks, including text classification.
Recognising text documents of classes that have never been seen in the learning
stage, so-called zero-shot text classification, is therefore difficult and only
limited previous works tackled this problem. In this paper, we propose a
two-phase framework together with data augmentation and feature augmentation to
solve this problem. Four kinds of semantic knowledge (word embeddings, class
descriptions, class hierarchy, and a general knowledge graph) are incorporated
into the proposed framework to deal with instances of unseen classes
effectively. Experimental results show that each phase, as well as the
combination of the two phases, achieves the best overall accuracy compared
with baselines and recent
approaches in classifying real-world texts under the zero-shot scenario.
| 2,019 | Computation and Language |
Keyphrase Generation: A Text Summarization Struggle | Authors' keyphrases assigned to scientific articles are essential for
recognizing content and topic aspects. Most of the proposed supervised and
unsupervised methods for keyphrase generation are unable to produce terms that
are valuable but do not appear in the text. In this paper, we explore the
possibility of considering the keyphrase string as an abstractive summary of
the title and the abstract. First, we collect, process and release a large
dataset of scientific paper metadata that contains 2.2 million records. Then we
experiment with popular text summarization neural architectures. Despite using
advanced deep learning models, large quantities of data and many days of
computation, our systematic evaluation on four test datasets reveals that the
explored text summarization methods could not produce better keyphrases than
the simpler unsupervised methods, or the existing supervised ones.
| 2,019 | Computation and Language |
Structured Minimally Supervised Learning for Neural Relation Extraction | We present an approach to minimally supervised relation extraction that
combines the benefits of learned representations and structured learning, and
accurately predicts sentence-level relation mentions given only
proposition-level supervision from a KB. By explicitly reasoning about missing
data during learning, our approach enables large-scale training of 1D
convolutional neural networks while mitigating the issue of label noise
inherent in distant supervision. Our approach achieves state-of-the-art results
on minimally supervised sentential relation extraction, outperforming a number
of baselines, including a competitive approach that uses the attention layer of
a purely neural model.
| 2,019 | Computation and Language |
ANA at SemEval-2019 Task 3: Contextual Emotion detection in
Conversations through hierarchical LSTMs and BERT | This paper describes the system submitted by ANA Team for the SemEval-2019
Task 3: EmoContext. We propose a novel Hierarchical LSTMs for Contextual
Emotion Detection (HRLCE) model. It classifies the emotion of an utterance
given its conversational context. The results show that, in this task, our
HRLCE outperforms the most recent state-of-the-art text classification
framework: BERT. We combine the results generated by BERT and HRLCE to achieve
an overall score of 0.7709 which ranked 5th on the final leader board of the
competition among 165 Teams.
| 2,019 | Computation and Language |
Distant Supervision Relation Extraction with Intra-Bag and Inter-Bag
Attentions | This paper presents a neural relation extraction method to deal with the
noisy training data generated by distant supervision. Previous studies mainly
focus on sentence-level de-noising by designing neural networks with intra-bag
attentions. In this paper, both intra-bag and inter-bag attentions are
considered in order to deal with the noise at sentence-level and bag-level
respectively. First, relation-aware bag representations are calculated by
weighting sentence embeddings using intra-bag attentions. Here, each possible
relation is utilized as the query for attention calculation instead of only
using the target relation in conventional methods. Furthermore, the
representation of a group of bags in the training set which share the same
relation label is calculated by weighting bag representations using a
similarity-based inter-bag attention module. Finally, a bag group is utilized
as a training sample when building our relation extractor. Experimental results
on the New York Times dataset demonstrate the effectiveness of our proposed
intra-bag and inter-bag attention modules. Our method also achieves better
relation extraction accuracy than state-of-the-art methods on this dataset.
| 2,019 | Computation and Language |
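The intra-bag attention described above can be sketched as a softmax-weighted pooling of a bag's sentence embeddings, with a relation embedding serving as the query; the shapes and toy vectors below are assumptions:

```python
import numpy as np

def intra_bag_attention(sentence_embs, relation_query):
    """Pool one bag's sentence embeddings into a single relation-aware
    bag representation using softmax attention weights."""
    scores = sentence_embs @ relation_query           # (num_sentences,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ sentence_embs                    # (embed_dim,)

rng = np.random.default_rng(0)
bag = rng.standard_normal((4, 16))      # 4 sentences mentioning the same entity pair
relation = rng.standard_normal(16)      # query vector for one candidate relation
print(intra_bag_attention(bag, relation).shape)       # (16,)
```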
Linguistic generalization and compositionality in modern artificial
neural networks | In the last decade, deep artificial neural networks have achieved astounding
performance in many natural language processing tasks. Given the high
productivity of language, these models must possess effective generalization
abilities. It is widely assumed that humans handle linguistic productivity by
means of algebraic compositional rules: Are deep networks similarly
compositional? After reviewing the main innovations characterizing current deep
language processing networks, I discuss a set of studies suggesting that deep
networks are capable of subtle grammar-dependent generalizations, but also that
they do not rely on systematic compositional rules. I argue that the intriguing
behaviour of these devices (still awaiting a full understanding) should be of
interest to linguists and cognitive scientists, as it offers a new perspective
on possible computational strategies to deal with linguistic productivity
beyond rule-based compositionality, and it might lead to new insights into the
less systematic generalization patterns that also appear in natural language.
| 2,019 | Computation and Language |
Machine translation considering context information using
Encoder-Decoder model | In the task of machine translation, context information is one of the
important factors, but models that take context information into account have
not been well established. This paper proposes a new model that can integrate
context information into translation. We build the model on the Encoder-Decoder
architecture: when translating the current sentence, the model integrates the
output of the encoder for the preceding sentence with that of the encoder for
the current sentence. The model can thus take context information into account,
and its score is higher than that of the existing model.
| 2,019 | Computation and Language |
Modeling Drug-Disease Relations with Linguistic and Knowledge Graph
Constraints | FDA drug labels are rich sources of information about drugs and drug-disease
relations, but their complexity makes them challenging texts to analyze in
isolation. To overcome this, we situate these labels in two health knowledge
graphs: one built from precise structured information about drugs and diseases,
and another built entirely from a database of clinical narrative texts using
simple heuristic methods. We show that Probabilistic Soft Logic models defined
over these graphs are superior to text-only and relation-only variants, and
that the clinical narratives graph delivers exceptional results with little
manual effort. Finally, we release a new dataset of drug labels with
annotations for five distinct drug-disease relations.
| 2,019 | Computation and Language |