Titles | Abstracts | Years | Categories |
---|---|---|---|
Difficulty Controllable Generation of Reading Comprehension Questions | We investigate the difficulty levels of questions in reading comprehension
datasets such as SQuAD, and propose a new question generation setting, named
Difficulty-controllable Question Generation (DQG). Taking as input a sentence
in the reading comprehension paragraph and some of its text fragments (i.e.,
answers) that we want to ask questions about, a DQG method needs to generate
questions each of which has a given text fragment as its answer, while the
generation is controlled by specified difficulty labels---the output
questions should satisfy the specified difficulty as much as possible. To solve
this task, we propose an end-to-end framework to generate questions of
designated difficulty levels by exploring a few important intuitions. For
evaluation, we prepared the first dataset of reading comprehension questions
with difficulty labels. The results show that the questions generated by our
framework not only have better quality under metrics such as BLEU, but also
comply with the specified difficulty labels.
| 2019 | Computation and Language |
Paired Comparison Sentiment Scores | The method of paired comparisons is an established method in psychology. In
this article, it is applied to obtain continuous sentiment scores for words
from comparisons made by test persons. We created an initial lexicon with
$n=199$ German words from a two-fold all-pair comparison experiment with ten
different test persons. From the probabilistic models taken into account, the
logistic model showed the best agreement with the results of the comparison
experiment. The initial lexicon can then be used in different ways. One is to
create special purpose sentiment lexica through the addition of arbitrary words
that are compared with some of the initial words by test persons. A
cross-validation experiment suggests that only about 18 two-fold comparisons
are necessary to estimate the score of a new, yet unknown word, provided the
words it is compared against are selected by a modification of a method by
Silverstein & Farrell.
Another application of the initial lexicon is the evaluation of automatically
created corpus-based lexica. By such an evaluation, we compared the
corpus-based lexica SentiWS, SenticNet, and SentiWordNet, of which SenticNet 4
performed best. This technical report is a corrected and extended version of a
presentation made at the ICDM Sentire workshop in 2016.
| 2018 | Computation and Language |
Revisiting the Hierarchical Multiscale LSTM | Hierarchical Multiscale LSTM (Chung et al., 2016a) is a state-of-the-art
language model that learns interpretable structure from character-level input.
Such models can provide fertile ground for (cognitive) computational
linguistics studies. However, the high complexity of the architecture, training
procedure and implementations might hinder its applicability. We provide a
detailed reproduction and ablation study of the architecture, shedding light on
some of the potential caveats of re-purposing complex deep-learning
architectures. We further show that simplifying certain aspects of the
architecture can in fact improve its performance. We also investigate the
linguistic units (segments) learned by various levels of the model, and argue
that their quality does not correlate with the overall performance of the model
on language modeling.
| 2018 | Computation and Language |
Linguistic Characteristics of Censorable Language on SinaWeibo | This paper investigates censorship from a linguistic perspective. We collect
a corpus of censored and uncensored posts on a number of topics, and build a
classifier that predicts censorship decisions independently of discussion
topic. Our investigation reveals that the strongest linguistic indicator of
censored content in our corpus is its readability.
| 2018 | Computation and Language |
Enriching Knowledge Bases with Counting Quantifiers | Information extraction traditionally focuses on extracting relations between
identifiable entities, such as <Monterey, locatedIn, California>. Yet, texts
often also contain counting information, stating that a subject is in a
specific relation with a number of objects, without mentioning the objects
themselves, for example, "California is divided into 58 counties". Such
counting quantifiers can help in a variety of tasks such as query answering or
knowledge base curation, but are neglected by prior work. This paper develops
the first full-fledged system for extracting counting information from text,
called CINEX. We employ distant supervision using fact counts from a knowledge
base as training seeds, and develop novel techniques for dealing with several
challenges: (i) non-maximal training seeds due to the incompleteness of
knowledge bases, (ii) sparse and skewed observations in text sources, and (iii)
high diversity of linguistic patterns. Experiments with five human-evaluated
relations show that CINEX can achieve 60% average precision for extracting
counting information. In a large-scale experiment, we demonstrate the potential
for knowledge base enrichment by applying CINEX to 2,474 frequent relations in
Wikidata. CINEX can assert the existence of 2.5M facts for 110 distinct
relations, which is 28% more than the existing Wikidata facts for these
relations.
| 2018 | Computation and Language |
IAM at CLEF eHealth 2018: Concept Annotation and Coding in French Death
Certificates | In this paper, we describe the approach and results for our participation in
the task 1 (multilingual information extraction) of the CLEF eHealth 2018
challenge. We addressed the task of automatically assigning ICD-10 codes to
French death certificates. We used a dictionary-based approach using materials
provided by the task organizers. The terms of the ICD-10 terminology were
normalized, tokenized and stored in a tree data structure. The Levenshtein
distance was used to detect typos. Frequent abbreviations were handled with a
small, manually created list. Our system achieved an F-score of 0.786
(precision: 0.794, recall: 0.779). These scores were substantially higher than
the average score of the systems that participated in the challenge.
| 2018 | Computation and Language |
Universal Transformers | Recurrent neural networks (RNNs) sequentially process data by updating their
state with each new data point, and have long been the de facto choice for
sequence modeling tasks. However, their inherently sequential computation makes
them slow to train. Feed-forward and convolutional architectures have recently
been shown to achieve superior results on some sequence modeling tasks such as
machine translation, with the added advantage that they concurrently process
all inputs in the sequence, leading to easy parallelization and faster training
times. Despite these successes, however, popular feed-forward sequence models
like the Transformer fail to generalize in many simple tasks that recurrent
models handle with ease, e.g. copying strings or even simple logical inference
when the string or formula lengths exceed those observed at training time. We
propose the Universal Transformer (UT), a parallel-in-time self-attentive
recurrent sequence model which can be cast as a generalization of the
Transformer model and which addresses these issues. UTs combine the
parallelizability and global receptive field of feed-forward sequence models
like the Transformer with the recurrent inductive bias of RNNs. We also add a
dynamic per-position halting mechanism and find that it improves accuracy on
several tasks. In contrast to the standard Transformer, under certain
assumptions, UTs can be shown to be Turing-complete. Our experiments show that
UTs outperform standard Transformers on a wide range of algorithmic and
language understanding tasks, including the challenging LAMBADA language
modeling task where UTs achieve a new state of the art, and machine translation
where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De
dataset.
| 2019 | Computation and Language |
Seq2Seq2Sentiment: Multimodal Sequence to Sequence Models for Sentiment
Analysis | Multimodal machine learning is a core research area spanning the language,
visual and acoustic modalities. The central challenge in multimodal learning
involves learning representations that can process and relate information from
multiple modalities. In this paper, we propose two methods for unsupervised
learning of joint multimodal representations using sequence to sequence
(Seq2Seq) methods: a \textit{Seq2Seq Modality Translation Model} and a
\textit{Hierarchical Seq2Seq Modality Translation Model}. We also explore
multiple different variations on the multimodal inputs and outputs of these
seq2seq models. Our experiments on multimodal sentiment analysis using the
CMU-MOSI dataset indicate that our methods learn informative multimodal
representations that outperform the baselines and achieve improved performance
on multimodal sentiment analysis, specifically in the bimodal case, where our
model improves the F1 score by twelve points. We also discuss future
directions for multimodal Seq2Seq methods.
| 2018 | Computation and Language |
A Dialogue Annotation Scheme for Weight Management Chat using the
Trans-Theoretical Model of Health Behavior Change | In this study we collect and annotate human-human role-play dialogues in the
domain of weight management. There are two roles in the conversation: the
"seeker" who is looking for ways to lose weight and the "helper" who provides
suggestions to help the "seeker" in their weight loss journey. The chat
dialogues collected are then annotated with a novel annotation scheme inspired
by a popular health behavior change theory called "trans-theoretical model of
health behavior change". We also build classifiers to automatically predict the
annotation labels used in our corpus. We find that classification accuracy
improves when oracle segmentations of the interlocutors' sentences are provided
compared to directly classifying unsegmented sentences.
| 2018 | Computation and Language |
Towards Understanding End-of-trip Instructions in a Taxi Ride Scenario | We introduce a dataset containing human-authored descriptions of target
locations in an "end-of-trip in a taxi ride" scenario. We describe our data
collection method and a novel annotation scheme that supports understanding of
such descriptions of target locations. Our dataset contains target location
descriptions for both synthetic and real-world images as well as visual
annotations (ground truth labels, dimensions of vehicles and objects,
coordinates of the target location, distance and direction of the target
location from vehicles and objects) that can be used in various visual and
language tasks. We also perform a pilot experiment on how the corpus could be
applied to visual reference resolution in this domain.
| 2018 | Computation and Language |
An improved neural network model for joint POS tagging and dependency
parsing | We propose a novel neural network model for joint part-of-speech (POS)
tagging and dependency parsing. Our model extends the well-known BIST
graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating
a BiLSTM-based tagging component to produce automatically predicted POS tags
for the parser. On the benchmark English Penn treebank, our model obtains
strong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+%
absolute improvements to the BIST graph-based parser, and also obtaining a
state-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental
results on parsing 61 "big" Universal Dependencies treebanks from raw texts
show that our model outperforms the baseline UDPipe (Straka and Strakov\'a,
2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS
score. In addition, with our model, we also obtain state-of-the-art downstream
task scores for biomedical event extraction and opinion analysis applications.
Our code is available together with all pre-trained models at:
https://github.com/datquocnguyen/jPTDP
| 2019 | Computation and Language |
UniParse: A universal graph-based parsing toolkit | This paper describes the design and use of the graph-based parsing framework
and toolkit UniParse, released as an open-source python software package.
As a framework, UniParse streamlines research prototyping, development
and evaluation of graph-based dependency parsing architectures. UniParse does
this by enabling highly efficient, sufficiently independent, easily readable,
and easily extensible implementations for all dependency parser components. We
distribute the toolkit with ready-made configurations as re-implementations of
all current state-of-the-art first-order graph-based parsers, including even
more efficient Cython implementations of both encoders and decoders, as well as
the required specialised loss functions.
| 2018 | Computation and Language |
JeSemE: A Website for Exploring Diachronic Changes in Word Meaning and
Emotion | We here introduce a substantially extended version of JeSemE, an interactive
website for visually exploring computationally derived time-variant information
on word meanings and lexical emotions assembled from five large diachronic text
corpora. JeSemE is designed for scholars in the (digital) humanities as an
alternative to consulting manually compiled, printed dictionaries for such
information (if available at all). This tool uniquely combines state-of-the-art
distributional semantics with a nuanced model of human emotions, two
information streams we deem beneficial for a data-driven interpretation of
texts in the humanities.
| 2020 | Computation and Language |
Linear Transformations for Cross-lingual Semantic Textual Similarity | Cross-lingual semantic textual similarity systems estimate the degree of the
meaning similarity between two sentences, each in a different language.
State-of-the-art algorithms usually employ machine translation and combine a
vast number of features, making the approach strongly supervised, resource-rich, and
difficult to use for poorly-resourced languages.
In this paper, we study linear transformations, which project monolingual
semantic spaces into a shared space using bilingual dictionaries. We propose a
novel transformation, which builds on the best ideas from prior works. We
experiment with unsupervised techniques for sentence similarity based only on
semantic spaces, and we show that they can be significantly improved by word
weighting. Our transformation outperforms other methods and together with word
weighting leads to very promising results on several datasets in different
languages.
| 2018 | Computation and Language |
Cross-lingual Word Analogies using Linear Transformations between
Semantic Spaces | We generalize the word analogy task across languages, to provide a new
intrinsic evaluation method for cross-lingual semantic spaces. We experiment
with six languages within different language families, including English,
German, Spanish, Italian, Czech, and Croatian. State-of-the-art monolingual
semantic spaces are transformed into a shared space using dictionaries of word
translations. We compare several linear transformations and rank them for
experiments with monolingual (no transformation), bilingual (one semantic space
is transformed to another), and multilingual (all semantic spaces are
transformed onto English space) versions of semantic spaces. We show that
tested linear transformations preserve relationships between words (word
analogies) and lead to impressive results. We achieve average accuracy of
51.1%, 43.1%, and 38.2% for monolingual, bilingual, and multilingual semantic
spaces, respectively.
| 2018 | Computation and Language |
Tracking the Evolution of Words with Time-reflective Text
Representations | More than 80% of today's data is unstructured in nature, and these
unstructured datasets evolve over time. A large part of these datasets are text
documents generated by media outlets, scholarly articles in digital libraries,
findings from scientific and professional communities, and social media. Vector
space models were developed to analyze text data using data mining and machine
learning algorithms. While ample vector space models exist for text data, the
evolutionary aspect of ever-changing text corpora is still missing in
vector-based representations. The advent of word embeddings has enabled us to
create a contextual vector space, but these embeddings fail to adequately
capture the temporal aspects of the feature space. This paper presents an
approach to include temporal aspects in feature spaces. The inclusion of the
time aspect in the feature space provides vectors for every natural language
element, such as words or entities, at every timestamp. Such temporal word
vectors allow us to track how the meaning of a word changes over time, by
studying the changes in its neighborhood. Moreover, a time-reflective text
representation will pave the way to a new set of text analytic abilities
involving time series for text collections. In this paper, we present a
time-reflective vector space model for temporal text data that is able to
capture short and long-term changes in the meaning of words. We compare our
approach with the limited literature on dynamic embeddings. We present
qualitative and quantitative evaluations using the tracking of semantic
evolution as the target application.
| 2019 | Computation and Language |
Recurrent Neural Networks in Linguistic Theory: Revisiting Pinker and
Prince (1988) and the Past Tense Debate | Can advances in NLP help advance cognitive modeling? We examine the role of
artificial neural networks, the current state of the art in many common NLP
tasks, by returning to a classic case study. In 1986, Rumelhart and McClelland
famously introduced a neural architecture that learned to transduce English
verb stems to their past tense forms. Shortly thereafter, Pinker & Prince
(1988) presented a comprehensive rebuttal of many of Rumelhart and McClelland's
claims. Much of the force of their attack centered on the empirical inadequacy
of the Rumelhart and McClelland (1986) model. Today, however, that model is
severely outmoded. We show that the Encoder-Decoder network architectures used
in modern NLP systems obviate most of Pinker and Prince's criticisms without
requiring any simplification of the past tense mapping problem. We suggest that
the empirical performance of modern networks warrants a re-examination of their
utility in linguistic and cognitive modeling.
| 2019 | Computation and Language |
Ultra-Fine Entity Typing | We introduce a new entity typing task: given a sentence with an entity
mention, the goal is to predict a set of free-form phrases (e.g. skyscraper,
songwriter, or criminal) that describe appropriate types for the target entity.
This formulation allows us to use a new type of distant supervision at large
scale: head words, which indicate the type of the noun phrases they appear in.
We show that these ultra-fine types can be crowd-sourced, and introduce new
evaluation sets that are much more diverse and fine-grained than existing
benchmarks. We present a model that can predict open types, and is trained
using a multitask objective that pools our new head-word supervision with prior
supervision from entity linking. Experimental results demonstrate that our
model is effective in predicting entity types at varying granularity; it
achieves state of the art performance on an existing fine-grained entity typing
benchmark, and sets baselines for our newly-introduced datasets. Our data and
model can be downloaded from: http://nlp.cs.washington.edu/entity_type
| 2018 | Computation and Language |
A Multi-sentiment-resource Enhanced Attention Network for Sentiment
Classification | Deep learning approaches for sentiment classification do not fully exploit
sentiment linguistic knowledge. In this paper, we propose a
Multi-sentiment-resource Enhanced Attention Network (MEAN) to alleviate the
problem by integrating three kinds of sentiment linguistic knowledge (i.e.,
sentiment lexicon, negation words, intensity words) into the deep neural
network via attention mechanisms. By using various types of sentiment
resources, MEAN utilizes sentiment-relevant information from different
representation subspaces, which makes it more effective to capture the overall
semantics of the sentiment, negation and intensity words for sentiment
prediction. The experimental results demonstrate that MEAN has robust
superiority over strong competitors.
| 2018 | Computation and Language |
Multi-task dialog act and sentiment recognition on Mastodon | Because of license restrictions, it often becomes impossible to strictly
reproduce most research results on Twitter data already a few months after the
creation of the corpus. This situation worsens gradually as time passes and
tweets become inaccessible. This is a critical issue for reproducible and
accountable research on social media. We partly solve this challenge by
annotating a new Twitter-like corpus from an alternative large social medium
with licenses that are compatible with reproducible experiments: Mastodon. We
manually annotate both dialogues and sentiments on this corpus, and train a
multi-task hierarchical recurrent network on joint sentiment and dialog act
recognition. We experimentally demonstrate that transfer learning may be
efficiently achieved between both tasks, and further analyze some specific
correlations between sentiments and dialogues on social media. Both the
annotated corpus and deep network are released with an open-source license.
| 2018 | Computation and Language |
Hierarchical Losses and New Resources for Fine-grained Entity Typing and
Linking | Extraction from raw text to a knowledge base of entities and fine-grained
types is often cast as prediction into a flat set of entity and type labels,
neglecting the rich hierarchies over types and entities contained in curated
ontologies. Previous attempts to incorporate hierarchical structure have
yielded little benefit and are restricted to shallow ontologies. This paper
presents new methods using real and complex bilinear mappings for integrating
hierarchical information, yielding substantial improvement over flat
predictions in entity linking and fine-grained entity typing, and achieving new
state-of-the-art results for end-to-end models on the benchmark FIGER dataset.
We also present two new human-annotated datasets containing wide and deep
hierarchies which we will release to the community to encourage further
research in this direction: MedMentions, a collection of PubMed abstracts in
which 246k mentions have been mapped to the massive UMLS ontology; and TypeNet,
which aligns Freebase types with the WordNet hierarchy to obtain nearly 2k
entity types. In experiments on all three datasets we show substantial gains
from hierarchy-aware training.
| 2018 | Computation and Language |
New/s/leak 2.0 - Multilingual Information Extraction and Visualization
for Investigative Journalism | In recent years, investigative journalism has been confronted with two major
challenges: 1) vast amounts of unstructured data originating from large text
collections such as leaks or answers to Freedom of Information requests, and 2)
multi-lingual data due to intensified global cooperation and communication in
politics, business and civil society. Faced with these challenges, journalists
are increasingly cooperating in international networks. To support such
collaborations, we present new/s/leak 2.0, the new version of our open-source
software for content-based searching of leaks. It includes three novel main
features: 1) automatic language detection and language-dependent information
extraction for 40 languages, 2) entity and keyword visualization for efficient
exploration, and 3) decentralized deployment for analysis of confidential data from
various formats. We illustrate the new analysis capabilities with an exemplary
case study.
| 2018 | Computation and Language |
Deep Enhanced Representation for Implicit Discourse Relation Recognition | Implicit discourse relation recognition is a challenging task: without
explicit connectives, predicting the relation requires an understanding of the
text spans and cannot easily be derived from surface features of the input
sentence pairs. Thus, properly representing the text is crucial to this task.
In this paper, we propose a model augmented with differently grained text
representations, including character, subword, word,
sentence, and sentence pair levels. The proposed deeper model is evaluated on
the benchmark treebank and, to the best of our knowledge for the first time,
achieves state-of-the-art accuracy greater than 48% in 11-way classification
and an $F_1$ score greater than 50% in 4-way classification.
| 2018 | Computation and Language |
Low-Resource Text Classification using Domain-Adversarial Learning | Deep learning techniques have recently been shown to be successful in many
natural language processing tasks, forming state-of-the-art systems. They require,
however, a large amount of annotated data which is often missing. This paper
explores the use of domain-adversarial learning as a regularizer to avoid
overfitting when training domain invariant features for deep, complex neural
networks in low-resource and zero-resource settings in new target domains or
languages. In the case of new languages, we show that monolingual word vectors can
be directly used for training without prealignment. Their projection into a
common space can be learned ad hoc at training time, reaching the final
performance of pretrained multilingual word vectors.
| 2020 | Computation and Language |
Recurrent Stacking of Layers for Compact Neural Machine Translation
Models | In neural machine translation (NMT), the most common practice is to stack a
number of recurrent or feed-forward layers in the encoder and the decoder. As a
result, the addition of each new layer improves the translation quality
significantly. However, this also leads to a significant increase in the number
of parameters. In this paper, we propose to share parameters across all the
layers thereby leading to a recurrently stacked NMT model. We empirically show
that the translation quality of a model that recurrently stacks a single layer
6 times is comparable to the translation quality of a model that stacks 6
separate layers. We also show that using pseudo-parallel corpora by
back-translation leads to further significant improvements in translation
quality.
| 2018 | Computation and Language |
Syllabification by Phone Categorization | Syllables play an important role in speech synthesis, speech recognition, and
spoken document retrieval. A novel, low cost, and language agnostic approach to
dividing words into their corresponding syllables is presented. A hybrid
genetic algorithm constructs a categorization of phones optimized for
syllabification. This categorization is used on top of a hidden Markov model
sequence classifier to find syllable boundaries. The technique shows promising
preliminary results when trained and tested on English words.
| 2018 | Computation and Language |
Concept-Based Embeddings for Natural Language Processing | In this work, we focus on effectively leveraging and integrating information
from the concept level as well as the word level by projecting concepts and
words into a lower-dimensional space while retaining the most critical
semantics. In a broad
context of opinion understanding system, we investigate the use of the fused
embedding for several core NLP tasks: named entity detection and
classification, automatic speech recognition reranking, and targeted sentiment
analysis.
| 2018 | Computation and Language |
WordNet-Based Information Retrieval Using Common Hypernyms and Combined
Features | Text search based on lexical matching of keywords is not satisfactory due to
polysemous and synonymous words. Semantic search that exploits word meanings,
in general, improves search performance. In this paper, we survey WordNet-based
information retrieval systems, which employ a word sense disambiguation method
to process queries and documents. The problem is that in many cases a word has
more than one possible direct sense, and picking only one of them may give a
wrong sense for the word. Moreover, the previous systems use only word forms to
represent word senses and their hypernyms. We propose a novel approach that
uses the most specific common hypernym of the remaining undisambiguated
multi-senses of a word, as well as combined WordNet features to represent word
meanings. Experiments on a benchmark dataset show that, in terms of the MAP
measure, our search engine is 17.7% better than the lexical search, and at
least 9.4% better than all surveyed search systems using WordNet.
Keywords: ontology, word sense disambiguation, semantic annotation, semantic
search.
| 2018 | Computation and Language |
LATE Ain'T Earley: A Faster Parallel Earley Parser | We present the LATE algorithm, an asynchronous variant of the Earley
algorithm for parsing context-free grammars. The Earley algorithm is naturally
task-based, but is difficult to parallelize because of dependencies between the
tasks. The LATE algorithm uses additional data structures to
maintain information about the state of the parse so that work items may be
processed in any order. This property allows the LATE algorithm to be sped up
using task parallelism. We show that the LATE algorithm can achieve a 120x
speedup over the Earley algorithm on a natural language task.
| 2023 | Computation and Language |
The EcoLexicon English Corpus as an open corpus in Sketch Engine | The EcoLexicon English Corpus (EEC) is a 23.1-million-word corpus of
contemporary environmental texts. It was compiled by the LexiCon research group
for the development of EcoLexicon (Faber, Leon-Arauz & Reimerink 2016; San
Martin et al. 2017), a terminological knowledge base on the environment. It is
available as an open corpus in the well-known corpus query system Sketch Engine
(Kilgarriff et al. 2014), which means that any user, even without a
subscription, can freely access and query the corpus. In this paper, the EEC is
introduced by describing how it was built and compiled and how it can be
queried and exploited, based both on the functionalities provided by Sketch
Engine and on the parameters in which the texts in the EEC are classified.
| 2018 | Computation and Language |
Neural Chinese Word Segmentation with Dictionary Knowledge | Chinese word segmentation (CWS) is an important task for Chinese NLP.
Recently, many neural network based methods have been proposed for CWS.
However, these methods require a large number of labeled sentences for model
training, and usually cannot utilize the useful information in Chinese
dictionaries. In this paper, we propose two methods to exploit the dictionary
information for CWS. The first one is based on pseudo labeled data generation,
and the second one is based on multi-task learning. The experimental results on
two benchmark datasets validate that our approach can effectively improve the
performance of Chinese word segmentation, especially when training data is
insufficient.
| 2018 | Computation and Language |
A Fast-Converged Acoustic Modeling for Korean Speech Recognition: A
Preliminary Study on Time Delay Neural Network | In this paper, a time delay neural network (TDNN) based acoustic model is
proposed to implement a fast-converged acoustic modeling for Korean speech
recognition. The TDNN has an advantage in fast convergence when the amount of
training data is limited, owing to subsampling, which excludes duplicated weights.
The TDNN showed an absolute improvement of 2.12% in terms of character error
rate compared to feed forward neural network (FFNN) based modelling for Korean
speech corpora. The proposed model converged 1.67 times faster than a
FFNN-based model did.
| 2018 | Computation and Language |
Theme-weighted Ranking of Keywords from Text Documents using Phrase
Embeddings | Keyword extraction is a fundamental task in natural language processing that
facilitates mapping of documents to a concise set of representative single and
multi-word phrases. Keywords from text documents are primarily extracted using
supervised and unsupervised approaches. In this paper, we present an
unsupervised technique that uses a combination of theme-weighted personalized
PageRank algorithm and neural phrase embeddings for extracting and ranking
keywords. We also introduce an efficient way of processing text documents and
training phrase embeddings using existing techniques. We share an evaluation
dataset derived from an existing dataset that is used for choosing the
underlying embedding model. The evaluations for ranked keyword extraction are
performed on two benchmark datasets comprising short abstracts (Inspec) and
long scientific papers (SemEval 2010), and our technique is shown to produce
results better than the state-of-the-art systems.
| 2018 | Computation and Language |
Using Textual Summaries to Describe a Set of Products | When customers are faced with the task of making a purchase in an unfamiliar
product domain, it might be useful to provide them with an overview of the
product set to help them understand what they can expect. In this paper we
present and evaluate a method to summarise sets of products in natural
language, focusing on the price range, common product features across the set,
and product features that impact on price. In our study, participants reported
that they found our summaries useful, but we found no evidence that the
summaries influenced the selections made by participants.
| 2018 | Computation and Language |
Don't get Lost in Negation: An Effective Negation Handled Dialogue Acts
Prediction Algorithm for Twitter Customer Service Conversations | In the last several years, Twitter has been adopted by companies as an
alternative platform for interacting with customers to address their concerns.
With the abundance of such unconventional conversation resources, the push for
developing effective virtual agents is stronger than ever. To address this
challenge, a better understanding of such customer service conversations is
required. Lately, several works have proposed novel taxonomies of fine-grained
dialogue acts and developed algorithms for the automatic detection of these
acts. The outcomes of these works provide stepping stones for the ultimate
goal of building efficient and effective virtual agents. But none of these
works incorporates negation handling into the proposed algorithms. In this
work, we developed an SVM-based dialogue act
prediction algorithm for Twitter customer service conversations where negation
handling is an integral part of the end-to-end solution. For negation handling,
we propose several efficient heuristics as well as adopt recent
state-of-the-art third-party machine learning based solutions. Empirically, we
show the model's performance gain when handling negation compared to when we
do not. Our
experiments show that for the informal text such as tweets, the heuristic-based
approach is more effective.
| 2018 | Computation and Language |
LSTMs with Attention for Aggression Detection | In this paper, we describe the system submitted for the shared task on
Aggression Identification in Facebook posts and comments by the team Nishnik.
Previous works demonstrate that LSTMs have achieved remarkable performance in
natural language processing tasks. We deploy an LSTM model with an attention
unit over it. Our system ranks 6th and 4th in the Hindi subtasks for Facebook
comments and for generalized social media data, respectively, and 17th and
10th in the corresponding English subtasks.
| 2018 | Computation and Language |
Low-Resource Contextual Topic Identification on Speech | In topic identification (topic ID) on real-world unstructured audio, an audio
instance of variable topic shifts is first broken into sequential segments, and
each segment is independently classified. We first present a general purpose
method for topic ID on spoken segments in low-resource languages, using a
cascade of universal acoustic modeling, translation lexicons to English, and
English-language topic classification. Next, instead of classifying each
segment independently, we demonstrate that exploring the contextual
dependencies across sequential segments can provide large improvements. In
particular, we propose an attention-based contextual model which is able to
leverage the contexts in a selective manner. We test both our contextual and
non-contextual models on four LORELEI languages; on all but one, our
attention-based contextual model significantly outperforms the
context-independent models.
| 2018 | Computation and Language |
Hierarchical Multitask Learning for CTC-based Speech Recognition | Previous work has shown that neural encoder-decoder speech recognition can be
improved with hierarchical multitask learning, where auxiliary tasks are added
at intermediate layers of a deep encoder. We explore the effect of hierarchical
multitask learning in the context of connectionist temporal classification
(CTC)-based speech recognition, and investigate several aspects of this
approach. Consistent with previous work, we observe performance improvements on
telephone conversational speech recognition (specifically the Eval2000 test
sets) when training a subword-level CTC model with an auxiliary phone loss at
an intermediate layer. We analyze the effects of a number of experimental
variables (like interpolation constant and position of the auxiliary loss
function), performance in lower-resource settings, and the relationship between
pretraining and multitask learning. We observe that the hierarchical multitask
approach improves over standard multitask training in our higher-data
experiments, while in the low-resource settings standard multitask training
works well. The best results are obtained by combining hierarchical multitask
learning and pretraining, which improves word error rates by 3.4% absolute on
the Eval2000 test sets.
| 2019 | Computation and Language |
Chinese Poetry Generation with Flexible Styles | Research has shown that sequence-to-sequence neural models, particularly
those with the attention mechanism, can successfully generate classical Chinese
poems. However, neural models are not capable of generating poems that match
specific styles, such as the impulsive style of Li Bai, a famous poet in the
Tang Dynasty. This work proposes a memory-augmented neural model to enable the
generation of style-specific poetry. The key idea is a memory structure that
stores how poems with a desired style were generated by humans, and uses
similar fragments to adjust the generation. We demonstrate that the proposed
algorithm generates poems with flexible styles, including styles of a
particular era and an individual poet.
| 2018 | Computation and Language |
Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing | Robust dialogue belief tracking is a key component in maintaining good
quality dialogue systems. The tasks that dialogue systems are trying to solve
are becoming increasingly complex, requiring scalability to multi domain,
semantically rich dialogues. However, most current approaches have difficulty
scaling up with domains because of the dependency of the model parameters on
the dialogue ontology. In this paper, a novel approach is introduced that fully
utilizes semantic similarity between dialogue utterances and the ontology
terms, allowing the information to be shared across domains. The evaluation is
performed on a recently collected multi-domain dialogue dataset, one order of
magnitude larger than currently available corpora. Our model demonstrates great
capability in handling multi-domain dialogues, simultaneously outperforming
existing state-of-the-art models in single-domain dialogue tracking tasks.
| 2018 | Computation and Language |
Power Networks: A Novel Neural Architecture to Predict Power Relations | Can language analysis reveal the underlying social power relations that exist
between participants of an interaction? Prior work within NLP has shown promise
in this area, but the performance of automatically predicting power relations
using NLP analysis of social interactions remains wanting. In this paper, we
present a novel neural architecture that captures manifestations of power
within individual emails which are then aggregated in an order-preserving way
in order to infer the direction of power between pairs of participants in an
email thread. We obtain an accuracy of 80.4%, a 10.1% improvement over
state-of-the-art methods, in this task. We further apply our model to the task
of predicting power relations between individuals based on the entire set of
messages exchanged between them; here also, our model significantly outperforms
the 70.0% accuracy obtained using prior state-of-the-art techniques, reaching an
accuracy of 83.0%.
| 2018 | Computation and Language |
Using semantic clustering to support situation awareness on Twitter: The
case of World Views | In recent years, situation awareness has been recognised as a critical part
of effective decision making, in particular for crisis management. One way to
extract value and allow for better situation awareness is to develop a system
capable of analysing a dataset of multiple posts, and clustering consistent
posts into different views or stories (or, world views). However, this can be
challenging as it requires an understanding of the data, including determining
what is consistent data, and what data corroborates other data. Attempting to
address these problems, this article proposes Subject-Verb-Object Semantic
Suffix Tree Clustering (SVOSSTC) and a system to support it, with a special
focus on Twitter content. The novelty and value of SVOSSTC is its emphasis on
utilising the Subject-Verb-Object (SVO) typology in order to construct
semantically consistent world views, in which individuals---particularly those
involved in crisis response---might achieve an enhanced picture of a situation
from social media data. To evaluate our system and its ability to provide
enhanced situation awareness, we tested it against existing approaches,
including human data analysis, using a variety of real-world scenarios. The
results indicated a noteworthy degree of evidence (e.g., in cluster granularity
and meaningfulness) affirming the suitability and rigour of our approach.
Moreover, these results highlight this article's proposals as innovative and
practical system contributions to the research field.
| 2018 | Computation and Language |
Developing a Portable Natural Language Processing Based Phenotyping
System | This paper presents a portable phenotyping system that is capable of
integrating both rule-based and statistical machine learning based approaches.
Our system utilizes UMLS to extract clinically relevant features from the
unstructured text and then facilitates portability across different
institutions and data systems by incorporating OHDSI's OMOP Common Data Model
(CDM) to standardize necessary data elements. Our system can also store the key
components of rule-based systems (e.g., regular expression matches) in the
format of OMOP CDM, thus enabling the reuse, adaptation and extension of many
existing rule-based clinical NLP systems. We experimented with our system on
the corpus from i2b2's Obesity Challenge as a pilot study. Our system
facilitates portable phenotyping of obesity and its 15 comorbidities based on
the unstructured patient discharge summaries, while achieving a performance
that often ranked among the top 10 of the challenge participants. This
standardization enables a consistent application of numerous rule-based and
machine learning based classification techniques downstream.
| 2018 | Computation and Language |
Improving Named Entity Recognition by Jointly Learning to Disambiguate
Morphological Tags | Previous studies have shown that linguistic features of a word such as
possession, genitive or other grammatical cases can be employed in word
representations of a named entity recognition (NER) tagger to improve the
performance for morphologically rich languages. However, these taggers require
external morphological disambiguation (MD) tools to function, which are hard to
obtain or non-existent for many languages. In this work, we propose a model
which alleviates the need for such disambiguators by jointly learning NER and
MD taggers in languages for which one can provide a list of candidate
morphological analyses. We show that this can be done independent of the
morphological annotation schemes, which differ among languages. Our experiments
employing three different model architectures that join these two tasks show
that joint learning improves NER performance. Furthermore, the morphological
disambiguator's performance is shown to be competitive.
| 2018 | Computation and Language |
Automatic Severity Classification of Coronary Artery Disease via
Recurrent Capsule Network | Coronary artery disease (CAD) is one of the leading causes of cardiovascular
disease deaths. The CAD condition progresses rapidly and, if not diagnosed and
treated at an early stage, may eventually lead to irreversible death of the
heart muscle. Invasive coronary arteriography is the gold standard technique
for CAD diagnosis. Coronary arteriography texts describe in detail which part
has stenosis and how much stenosis there is. It is thus crucial to conduct the
severity classification of CAD. In this paper, we employ a recurrent capsule
network (RCN) to extract semantic relations between clinical named entities in
Chinese coronary arteriography texts, through which we can automatically find
out the maximal stenosis for each lumen and infer how severe the CAD is
according to the improved method of Gensini. Experimental results on the corpus
collected from Shanghai Shuguang Hospital show that our proposed method
achieves an accuracy of 97.0\% in the severity classification of CAD.
| 2018 | Computation and Language |
Forward Attention in Sequence-to-sequence Acoustic Modelling for Speech
Synthesis | This paper proposes a forward attention method for the sequence-to-sequence
acoustic modeling of speech synthesis. This method is motivated by the nature
of the monotonic alignment from phone sequences to acoustic sequences. Only the
alignment paths that satisfy the monotonic condition are taken into
consideration at each decoder timestep. The modified attention probabilities at
each timestep are computed recursively using a forward algorithm. A transition
agent for forward attention is further proposed, which helps the attention
mechanism decide whether to move forward or to stay at each decoder
timestep. Experimental results show that the proposed forward attention method
achieves faster convergence speed and higher stability than the baseline
attention method. Besides, the method of forward attention with transition
agent can also help improve the naturalness of synthetic speech and control the
speed of synthetic speech effectively.
| 2018 | Computation and Language |
Unsupervised Online Multitask Learning of Behavioral Sentence Embeddings | Unsupervised learning has been an attractive method for easily deriving
meaningful data representations from vast amounts of unlabeled data. These
representations, or embeddings, often yield superior results in many tasks,
whether used directly or as features in subsequent training stages. However,
the quality of the embeddings is highly dependent on the assumed knowledge in
the unlabeled data and how the system extracts information without supervision.
Domain portability is also very limited in unsupervised learning, often
requiring re-training on other in-domain corpora to achieve robustness. In this
work we present a multitask paradigm for unsupervised contextual learning of
behavioral interactions which addresses unsupervised domain adaptation. We
introduce an online multitask objective into unsupervised learning and show
that sentence embeddings generated through this process increase the
performance of affective tasks.
| 2018 | Computation and Language |
Distinct patterns of syntactic agreement errors in recurrent networks
and humans | Determining the correct form of a verb in context requires an understanding
of the syntactic structure of the sentence. Recurrent neural networks have been
shown to perform this task with an error rate comparable to humans, despite the
fact that they are not designed with explicit syntactic representations. To
examine the extent to which the syntactic representations of these networks are
similar to those used by humans when processing sentences, we compare the
detailed pattern of errors that RNNs and humans make on this task. Despite
significant similarities (attraction errors, asymmetry between singular and
plural subjects), the error patterns differed in important ways. In particular,
in complex sentences with relative clauses, error rates increased in RNNs but
decreased in humans. Furthermore, RNNs showed a cumulative effect of attractors
but humans did not. We conclude that at least in some respects the syntactic
representations acquired by RNNs are fundamentally different from those used by
humans.
| 2018 | Computation and Language |
Fake news as we feel it: perception and conceptualization of the term
"fake news" in the media | In this article, we quantitatively analyze how the term "fake news" is being
shaped in news media in recent years. We study the perception and the
conceptualization of this term in the traditional media using eight years of
data collected from news outlets based in 20 countries. Our results not only
corroborate previous indications of a high increase in the usage of the
expression "fake news", but also show contextual changes around this expression
after the United States presidential election of 2016. Among other results, we
found changes in the related vocabulary, in the mentioned entities, in the
surrounding topics and in the contextual polarity around the term "fake news",
suggesting that this expression underwent a change in perception and
conceptualization after 2016. These outcomes expand the understanding of the
usage of the term "fake news", helping to comprehend and more accurately
characterize this relevant social phenomenon linked to misinformation and
manipulation.
| 2018 | Computation and Language |
Is it worth it? Budget-related evaluation metrics for model selection | Creating a linguistic resource is often done by using a machine learning
model that filters the content that goes through to a human annotator, before
going into the final resource. However, budgets are often limited, and the
amount of available data exceeds the amount of affordable annotation. In order
to optimize the benefit from the invested human work, we argue that deciding on
which model one should employ depends not only on generalized evaluation
metrics such as F-score, but also on the gain metric. Because the model with
the highest F-score may not necessarily have the best sequencing of predicted
classes, this may lead to wasting funds on annotating false positives, yielding
zero improvement of the linguistic resource. We exemplify our point with a case
study, using real data from a task of building a verb-noun idiom dictionary. We
show that, given the choice of three systems with varying F-scores, the system
with the highest F-score does not yield the highest profits. In other words, in
our case the cost-benefit trade-off is more favorable for a system with a lower
F-score.
| 2018 | Computation and Language |
Hierarchical Multi Task Learning With CTC | In Automatic Speech Recognition it is still challenging to learn useful
intermediate representations when using high-level (or abstract) target units
such as words. For that reason, character or phoneme based systems tend to
outperform word-based systems when just a few hundred hours of training data
are being used. In this paper, we first show how hierarchical multi-task
training can encourage the formation of useful intermediate representations. We
achieve this by performing Connectionist Temporal Classification at different
levels of the network with targets of different granularity. Our model thus
performs predictions in multiple scales for the same input. On the standard
300h Switchboard training setup, our hierarchical multi-task architecture
exhibits improvements over single-task architectures with the same number of
parameters. Our model obtains 14.0% Word Error Rate on the Eval2000 Switchboard
subset without any decoder or language model, outperforming the current
state-of-the-art on acoustic-to-word models.
| 2019 | Computation and Language |
Semantic Parsing: Syntactic assurance to target sentence using LSTM
Encoder CFG-Decoder | Semantic parsing can be defined as the process of mapping natural language
sentences into a machine-interpretable, formal representation of their meaning.
Semantic parsing using LSTM encoder-decoder neural networks has become a
promising approach. However, automated translation of natural language does not
provide grammaticality guarantees for the generated sentences; such a guarantee
is particularly important for practical cases, where an ungrammatical sentence
in a database query can cause critical errors. In this work, we propose a
neural architecture called Encoder CFG-Decoder, whose output conforms to a
given context-free grammar. Results for an implementation of the architecture
demonstrate its correctness and provide benchmark accuracy levels better than
those in the literature.
| 2018 | Computation and Language |
Guess who? Multilingual approach for the automated generation of
author-stylized poetry | This paper addresses the problem of stylized text generation in a
multilingual setup. A version of a language model based on a long short-term
memory (LSTM) artificial neural network with extended phonetic and semantic
embeddings is used for stylized poetry generation. The quality of the resulting
poems generated by the network is estimated through bilingual evaluation
understudy (BLEU), a survey and a new cross-entropy based metric that is
suggested for problems of this type. The experiments show that the proposed
model consistently outperforms random-sample and vanilla-LSTM baselines; humans
also tend to associate the machine-generated texts with the target author.
| 2022 | Computation and Language |
A Hand-Held Multimedia Translation and Interpretation System with
Application to Diet Management | We propose a network independent, hand-held system to translate and
disambiguate foreign restaurant menu items in real-time. The system is based on
the use of a portable multimedia device, such as a smartphone or a PDA. An
accurate and fast translation is obtained using a Machine Translation engine
and a context-specific corpus to which we apply two pre-processing steps,
called translation standardization and $n$-gram consolidation. The phrase-table
generated is orders of magnitude lighter than the ones commonly used in market
applications, thus making translations computationally less expensive, and
decreasing the battery usage. Translation ambiguities are mitigated using
multimedia information including images of dishes and ingredients, along with
ingredient lists. We implemented a prototype of our system on an iPod Touch
Second Generation for English speakers traveling in Spain. Our tests indicate
that our translation method yields higher accuracy than translation engines
such as Google Translate, and does so almost instantaneously. The memory
requirements of the application, including the database of images, are also
well within the limits of the device. By combining it with a database of
nutritional information, our proposed system can be used to help individuals
who follow a medical diet maintain this diet while traveling.
| 2018 | Computation and Language |
Evaluating Word Embeddings in Multi-label Classification Using
Fine-grained Name Typing | Embedding models typically associate each word with a single real-valued
vector, representing its different properties. Evaluation methods, therefore,
need to analyze the accuracy and completeness of these properties in
embeddings. This requires fine-grained analysis of embedding subspaces.
Multi-label classification is an appropriate way to do so. We propose a new
evaluation method for word embeddings based on multi-label classification.
The task we use is fine-grained name typing: given a large
corpus, find all types that a name can refer to based on the name embedding.
Given the scale of entities in knowledge bases, we can build datasets for this
task that are complementary to the current embedding evaluation datasets in
that they are very large, contain fine-grained classes, and allow the direct
evaluation of embeddings without confounding factors like sentence context.
| 2,018 | Computation and Language |
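As a concrete illustration of this evaluation idea, the sketch below trains a one-vs-rest classifier on name embeddings against multi-label type annotations and reports micro-F1. The embeddings here are randomly generated, and the names, types, and split are invented stand-ins for a knowledge-base-derived dataset.

```python
# Sketch of multi-label fine-grained name typing as an embedding evaluation
# (toy data; a real run would use trained name embeddings and KB types).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 50))                 # embedding vectors for 6 names
y = [["person"], ["person", "politician"], ["city"],
     ["city", "capital"], ["person"], ["city"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(y)                     # binary indicator matrix

# One binary classifier per type, probing one "property" of the embedding.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X[:4], Y[:4])
pred = clf.predict(X[4:])
print("micro-F1:", f1_score(Y[4:], pred, average="micro", zero_division=0))
```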
Towards Explainable and Controllable Open Domain Dialogue Generation
with Dialogue Acts | We study open domain dialogue generation with dialogue acts designed to
explain how people engage in social chat. To imitate human behavior, we propose
managing the flow of human-machine interactions with the dialogue acts as
policies. The policies and response generation are jointly learned from
human-human conversations, and the former is further optimized with a
reinforcement learning approach. With the dialogue acts, we achieve significant
improvement over state-of-the-art methods on response quality for given
contexts and dialogue length in both machine-machine simulation and
human-machine conversation.
| 2,018 | Computation and Language |
Imparting Interpretability to Word Embeddings while Preserving Semantic
Structure | As a ubiquitous method in natural language processing, word embeddings are
extensively employed to map semantic properties of words into a dense vector
representation. They capture semantic and syntactic relations among words but
the vectors corresponding to the words are only meaningful relative to each
other. Neither the vector nor its dimensions have any absolute, interpretable
meaning. We introduce an additive modification to the objective function of the
embedding learning algorithm that encourages the embedding vectors of words
that are semantically related to a predefined concept to take larger values
along a specified dimension, while leaving the original semantic learning
mechanism mostly unaffected. In other words, we align words that are already
determined to be related, along predefined concepts. Therefore, we impart
interpretability to the word embedding by assigning meaning to its vector
dimensions. The predefined concepts are derived from an external lexical
resource, which in this paper is chosen as Roget's Thesaurus. We observe that
alignment along the chosen concepts is not limited to words in the Thesaurus
and extends to other related words as well. We quantify the extent of
interpretability and assignment of meaning from our experimental results.
Manual human evaluation results have also been presented to further verify that
the proposed method increases interpretability. We also demonstrate the
preservation of semantic coherence of the resulting vector space by using
word-analogy and word-similarity tests. These tests show that the
interpretability-imparted word embeddings that are obtained by the proposed
framework do not sacrifice performances in common benchmark tests.
| 2,020 | Computation and Language |
ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech | In this work, we propose a new solution for parallel wave generation by
WaveNet. In contrast to parallel WaveNet (van den Oord et al., 2018), we
distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet
by minimizing a regularized KL divergence between their highly-peaked output
distributions. Our method computes the KL divergence in closed-form, which
simplifies the training algorithm and provides very efficient distillation. In
addition, we introduce the first text-to-wave neural architecture for speech
synthesis, which is fully convolutional and enables fast end-to-end training
from scratch. It significantly outperforms the previous pipeline that connects
a text-to-spectrogram model to a separately trained WaveNet (Ping et al.,
2018). We also successfully distill a parallel waveform synthesizer conditioned
on the hidden representation in this end-to-end model.
| 2,019 | Computation and Language |
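The closed-form KL divergence mentioned above is, for univariate Gaussians, a standard identity; a minimal sketch follows. The regularization the paper applies on top of this term is only noted in a comment, not reproduced.

```python
# Closed-form KL between two univariate Gaussians, the per-timestep quantity
# minimized in ClariNet-style distillation (sketch only; the paper further
# regularizes the KL to cope with highly peaked output distributions).
import numpy as np

def gaussian_kl(mu_q, log_sig_q, mu_p, log_sig_p):
    """KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) ), computed in closed form:
    log(sig_p/sig_q) + (sig_q^2 + (mu_q - mu_p)^2) / (2 sig_p^2) - 1/2."""
    return (log_sig_p - log_sig_q
            + (np.exp(2.0 * log_sig_q) + (mu_q - mu_p) ** 2)
            / (2.0 * np.exp(2.0 * log_sig_p))
            - 0.5)

# Sanity checks: zero when the distributions coincide, positive otherwise.
print(gaussian_kl(0.3, -1.0, 0.3, -1.0))   # 0.0
print(gaussian_kl(0.3, -1.0, 0.0, -0.5))   # > 0
```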
Clinical Text Classification with Rule-based Features and
Knowledge-guided Convolutional Neural Networks | Clinical text classification is an important problem in medical natural
language processing. Existing studies have conventionally focused on rules or
knowledge sources-based feature engineering, but only a few have exploited
effective feature learning capability of deep learning methods. In this study,
we propose a novel approach which combines rule-based features and
knowledge-guided deep learning techniques for effective disease classification.
Critical steps of our method include identifying trigger phrases, predicting
classes with very few examples using trigger phrases and training a
convolutional neural network with word embeddings and Unified Medical Language
System (UMLS) entity embeddings. We evaluated our method on the 2008
Integrating Informatics with Biology and the Bedside (i2b2) obesity challenge.
The results show that our method outperforms state-of-the-art methods.
| 2,018 | Computation and Language |
Using Deep Neural Networks to Translate Multi-lingual Threat
Intelligence | The multilingual nature of the Internet increases complications in the
cybersecurity community's ongoing efforts to strategically mine threat
intelligence from OSINT data on the web. OSINT sources such as social media,
blogs, and dark web vulnerability markets exist in diverse languages and hinder
security analysts, who are unable to draw conclusions from intelligence in
languages they don't understand. Although third party translation engines are
growing stronger, they are unsuited for private security environments. First,
sensitive intelligence is not a permitted input to third party engines due to
privacy and confidentiality policies. In addition, third party engines produce
generalized translations that tend to lack exclusive cybersecurity terminology.
In this paper, we address these issues and describe our system that enables
threat intelligence understanding across unfamiliar languages. We create a
neural network based system that takes in cybersecurity data in a different
language and outputs the respective English translation. The English
translation can then be understood by an analyst, and can also serve as input
to an AI based cyber-defense system that can take mitigative action. As a proof
of concept, we have created a pipeline which takes Russian threats and
generates its corresponding English, RDF, and vectorized representations. Our
network optimizes translations specifically on cybersecurity data.
| 2,018 | Computation and Language |
Statistical Model Compression for Small-Footprint Natural Language
Understanding | In this paper we investigate statistical model compression applied to natural
language understanding (NLU) models. Small-footprint NLU models are important
for enabling offline systems on hardware restricted devices, and for decreasing
on-demand model loading latency in cloud-based systems. To compress NLU models,
we present two main techniques, parameter quantization and perfect feature
hashing. These techniques are complementary to existing model pruning
strategies such as L1 regularization. We performed experiments on a large scale
NLU system. The results show that our approach achieves 14-fold reduction in
memory usage compared to the original models with minimal predictive
performance impact.
| 2,018 | Computation and Language |
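A rough sketch of the two named techniques, under simplifying assumptions: uniform linear quantization of a weight vector, and hashing string features to bucket indices so no feature dictionary needs to be stored. The paper's perfect hashing is collision-free for the features seen in training; the generic hash below is a simplified stand-in.

```python
# Sketch of parameter quantization and feature hashing for model compression
# (simplified; the paper uses perfect hashing, which guarantees no collisions).
import hashlib
import numpy as np

def quantize(weights, num_bits=8):
    """Map float weights onto a uniform integer grid, and reconstruct."""
    lo, hi = float(weights.min()), float(weights.max())
    levels = 2 ** num_bits - 1
    codes = np.round((weights - lo) / (hi - lo) * levels).astype(np.uint8)
    dequantized = lo + codes.astype(np.float32) * (hi - lo) / levels
    return codes, dequantized

def hash_feature(feature, num_buckets=2 ** 18):
    """Deterministically map a string feature to a weight index."""
    digest = hashlib.md5(feature.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "little") % num_buckets

w = np.random.randn(1000).astype(np.float32)
codes, w_hat = quantize(w)
print("max quantization error:", np.abs(w - w_hat).max())
print("bucket for a feature:", hash_feature("intent=play_music"))
```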
Rearranging the Familiar: Testing Compositional Generalization in
Recurrent Networks | Systematic compositionality is the ability to recombine meaningful units with
regular and predictable outcomes, and it is seen as key to humans' capacity for
generalization in language. Recent work has studied systematic compositionality
in modern seq2seq models using generalization to novel navigation instructions
in a grounded environment as a probing tool, requiring models to quickly
bootstrap the meaning of new words. We extend this framework here to settings
where the model needs only to recombine well-trained functional words (such as
"around" and "right") in novel contexts. Our findings confirm and strengthen
the earlier ones: seq2seq models can be impressively good at generalizing to
novel combinations of previously-seen input, but only when they receive
extensive training on the specific pattern to be generalized (e.g.,
generalizing from many examples of "X around right" to "jump around right"),
while failing when generalization requires novel application of compositional
rules (e.g., inferring the meaning of "around right" from those of "right" and
"around").
| 2,018 | Computation and Language |
Learning Representations for Soft Skill Matching | Employers actively look for talents having not only specific hard skills but
also various soft skills. To analyze the soft skill demands on the job market,
it is important to be able to detect soft skill phrases from job advertisements
automatically. However, a naive matching of soft skill phrases can lead to
false positive matches when a soft skill phrase, such as friendly, is used to
describe a company, a team, or another entity, rather than a desired candidate.
In this paper, we propose a phrase-matching-based approach which
differentiates between soft skill phrases referring to a candidate vs.
something else. The disambiguation is formulated as a binary text
classification problem where the prediction is made for the potential soft
skill based on the context where it occurs. To inform the model about the soft
skill for which the prediction is made, we develop several approaches,
including soft skill masking and soft skill tagging.
We compare several neural network based approaches, including CNN, LSTM and
Hierarchical Attention Model. The proposed tagging-based input representation
using LSTM achieved the highest recall of 83.92% on the job dataset when fixing
a precision to 95%.
| 2,018 | Computation and Language |
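A minimal sketch of the two input representations named above, soft skill masking and soft skill tagging, assuming token-level spans and invented marker tokens; the paper's exact scheme may differ.

```python
# Sketch of masking vs. tagging a candidate soft-skill phrase before feeding
# the sentence to a binary classifier (marker tokens are illustrative).
def mask_skill(tokens, span):
    """Replace the candidate soft-skill phrase with a placeholder token."""
    i, j = span
    return tokens[:i] + ["<SKILL>"] + tokens[j:]

def tag_skill(tokens, span):
    """Surround the candidate phrase with boundary tags, keeping its words."""
    i, j = span
    return tokens[:i] + ["<B>"] + tokens[i:j] + ["<E>"] + tokens[j:]

sent = "we are a friendly team looking for a friendly engineer".split()
print(mask_skill(sent, (3, 4)))   # this "friendly" describes the team
print(tag_skill(sent, (8, 9)))    # this "friendly" describes a candidate
```

Masking tells the model where the candidate is without letting it memorize the phrase itself, while tagging keeps the phrase visible and only marks its boundaries; the classifier then predicts whether the marked occurrence refers to the desired candidate.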
Twitter Sentiment Analysis System | Social media is increasingly used by humans to express their feelings and
opinions in the form of short text messages. Detecting sentiments in the text
has a wide range of applications including identifying anxiety or depression of
individuals and measuring well-being or mood of a community. Sentiments can be
expressed in many ways, such as facial expressions, gestures, speech, and
written text. Sentiment Analysis in text documents is essentially
a content-based classification problem involving concepts from the domains of
Natural Language Processing as well as Machine Learning. In this paper,
sentiment recognition based on textual data and the techniques used in
sentiment analysis are discussed.
| 2,018 | Computation and Language |
Twitter Sentiment Analysis via Bi-sense Emoji Embedding and
Attention-based LSTM | Sentiment analysis on large-scale social media data is important to bridge
the gaps between social media contents and real world activities including
political election prediction, individual and public emotional status
monitoring and analysis, and so on. Although textual sentiment analysis has
been well studied based on platforms such as Twitter and Instagram, analysis of
the role of extensive emoji use in sentiment analysis remains limited. In this
paper, we propose a novel scheme for Twitter sentiment analysis with extra
attention on emojis. We first learn bi-sense emoji embeddings under positive
and negative sentimental tweets individually, and then train a sentiment
classifier by attending on these bi-sense emoji embeddings with an
attention-based long short-term memory network (LSTM). Our experiments show
that the bi-sense embedding is effective for extracting sentiment-aware
embeddings of emojis and outperforms the state-of-the-art models. We also
visualize the attentions to show that the bi-sense emoji embedding provides
better guidance on the attention mechanism to obtain a more robust
understanding of the semantics and sentiments.
| 2,018 | Computation and Language |
Question-Aware Sentence Gating Networks for Question and Answering | Machine comprehension question answering, which finds an answer to the
question given a passage, involves high-level reasoning processes of
understanding and tracking the relevant contents across various semantic units
such as words, phrases, and sentences in a document. This paper proposes the
novel question-aware sentence gating networks that directly incorporate the
sentence-level information into word-level encoding processes. To this end, our
model first learns question-aware sentence representations and then dynamically
combines them with word-level representations, resulting in semantically
meaningful word representations for QA tasks. Experimental results demonstrate
that our approach consistently improves the accuracy over existing baseline
approaches on various QA datasets and bears the wide applicability to other
neural network-based QA models.
| 2,018 | Computation and Language |
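The gating idea can be sketched as a learned convex combination of word-level and sentence-level vectors. The module below assumes the form w' = g ⊙ w + (1 − g) ⊙ s with a sigmoid gate over the concatenation; the paper's gate is additionally conditioned on the question, which is omitted here.

```python
# Minimal sketch of a sentence gate mixing word- and sentence-level vectors
# (assumed form; the paper's gate is question-aware on top of this).
import torch
import torch.nn as nn

class SentenceGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, word_vecs, sent_vec):
        # word_vecs: (seq_len, dim); sent_vec: (dim,)
        s = sent_vec.expand_as(word_vecs)            # broadcast over words
        g = torch.sigmoid(self.gate(torch.cat([word_vecs, s], dim=-1)))
        return g * word_vecs + (1 - g) * s           # gated combination

words = torch.randn(7, 64)
sentence = torch.randn(64)
print(SentenceGate(64)(words, sentence).shape)       # torch.Size([7, 64])
```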
An Efficient End-to-End Neural Model for Handwritten Text Recognition | Offline handwritten text recognition from images is an important problem for
enterprises attempting to digitize large volumes of handmarked scanned
documents/reports. Deep recurrent models such as Multi-dimensional LSTMs have
been shown to yield superior performance over traditional Hidden Markov Model
based approaches that suffer from the Markov assumption and therefore lack the
representational power of RNNs. In this paper we introduce a novel approach
that combines a deep convolutional network with a recurrent Encoder-Decoder
network to map an image to a sequence of characters corresponding to the text
present in the image. The entire model is trained end-to-end using Focal Loss,
an improvement over the standard Cross-Entropy loss that addresses the class
imbalance problem, inherent to text recognition. To enhance the decoding
capacity of the model, Beam Search algorithm is employed which searches for the
best sequence out of a set of hypotheses based on a joint distribution of
individual characters. Our model takes as input a downsampled version of the
original image thereby making it both computationally and memory efficient. The
experimental results were benchmarked against two publicly available datasets,
IAM and RIMES. We surpass the state-of-the-art word level accuracy on the
evaluation set of both datasets by 3.5% & 1.1%, respectively.
| 2,018 | Computation and Language |
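Focal Loss itself has a standard form, FL(p_t) = −(1 − p_t)^γ log p_t, which down-weights well-classified examples relative to plain cross-entropy; a short sketch with illustrative shapes follows.

```python
# Standard focal loss (Lin et al. formulation); gamma and shapes illustrative.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Cross-entropy scaled by (1 - p_t)^gamma to focus on hard examples."""
    log_probs = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_probs, targets, reduction="none")
    p_t = torch.exp(-ce)                      # probability of the true class
    return ((1 - p_t) ** gamma * ce).mean()

logits = torch.randn(32, 80)    # e.g. 80 character classes
targets = torch.randint(0, 80, (32,))
print(focal_loss(logits, targets).item())
```

With gamma = 0 this reduces to ordinary cross-entropy; larger gamma shifts training effort toward rare, hard-to-classify characters.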
Abstractive and Extractive Text Summarization using Document Context
Vector and Recurrent Neural Networks | Sequence to sequence (Seq2Seq) learning has recently been used for
abstractive and extractive summarization. In the current study, Seq2Seq models
have been used for eBay product description summarization. We propose novel
Document-Context based Seq2Seq models using RNNs for abstractive and extractive
summarizations. Intuitively, this is similar to humans reading the title,
abstract or any other contextual information before reading the document. This
gives humans a high-level idea of what the document is about. We use this idea
and propose that Seq2Seq models should be started with contextual information
at the first time-step of the input to obtain better summaries. In this manner,
the output summaries are more document centric, than being generic, overcoming
one of the major hurdles of using generative models. We generate
document-context from user-behavior and seller provided information. We train
and evaluate our models on human-extracted golden summaries. The
document-contextual Seq2Seq models outperform standard Seq2Seq models.
Moreover, since generating human-extracted summaries is prohibitively expensive
to scale, we propose a semi-supervised technique for extracting approximate
summaries and using them to train Seq2Seq models at scale.
Semi-supervised models are evaluated against human extracted summaries and are
found to be of similar efficacy. We provide side by side comparison for
abstractive and extractive summarizers (contextual and non-contextual) on same
evaluation dataset. Overall, we provide methodologies to use and evaluate the
proposed techniques for large document summarization. Furthermore, we found
these techniques to be highly effective, which is not the case with existing
techniques.
| 2,018 | Computation and Language |
ScoutBot: A Dialogue System for Collaborative Navigation | ScoutBot is a dialogue interface to physical and simulated robots that
supports collaborative exploration of environments. The demonstration will
allow users to issue unconstrained spoken language commands to ScoutBot.
ScoutBot will prompt for clarification if the user's instruction needs
additional input. It is trained on human-robot dialogue collected from
Wizard-of-Oz experiments, where robot responses were initiated by a human
wizard in previous interactions. The demonstration will show a simulated ground
robot (Clearpath Jackal) in a simulated environment supported by ROS (Robot
Operating System).
| 2,018 | Computation and Language |
Consequences and Factors of Stylistic Differences in Human-Robot
Dialogue | This paper identifies stylistic differences in instruction-giving observed in
a corpus of human-robot dialogue. Differences in verbosity and structure (i.e.,
single-intent vs. multi-intent instructions) arose naturally without
restrictions or prior guidance on how users should speak with the robot.
Different styles were found to produce different rates of miscommunication, and
correlations were found between style differences and individual user
variation, trust, and interaction experience with the robot. Understanding
potential consequences and factors that influence style can inform design of
dialogue systems that are robust to natural variation from human users.
| 2,018 | Computation and Language |
A Pipeline for Creative Visual Storytelling | Computational visual storytelling produces a textual description of events
and interpretations depicted in a sequence of images. These texts are made
possible by advances and cross-disciplinary approaches in natural language
processing, generation, and computer vision. We define computational creative
visual storytelling as having the ability to alter the telling of a story
along three aspects: to speak about different environments, to produce
variations based on narrative goals, and to adapt the narrative to the
audience. These aspects of creative storytelling and their effect on the
narrative have yet to be explored in visual storytelling. This paper presents a
pipeline of task-modules, Object Identification, Single-Image Inferencing, and
Multi-Image Narration, that serve as a preliminary design for building a
creative visual storyteller. We have piloted this design for a sequence of
images in an annotation task. We present and analyze the collected corpus and
describe plans towards automation.
| 2,018 | Computation and Language |
Phonetic-and-Semantic Embedding of Spoken Words with Applications in
Spoken Content Retrieval | Word embedding or Word2Vec has been successful in offering semantics for text
words learned from the context of words. Audio Word2Vec was shown to offer
phonetic structures for spoken words (signal segments for words) learned from
signals within spoken words. This paper proposes a two-stage framework to
perform phonetic-and-semantic embedding on spoken words considering the context
of the spoken words. Stage 1 performs phonetic embedding with speaker
characteristics disentangled. Stage 2 then performs semantic embedding in
addition. We further propose to evaluate the phonetic-and-semantic nature of
the audio embeddings obtained in Stage 2 by parallelizing with text embeddings.
In general, phonetic structure and semantics inevitably disturb each other. For
example the words "brother" and "sister" are close in semantics but very
different in phonetic structure, while the words "brother" and "bother" are
the other way around. But phonetic-and-semantic embedding is attractive, as
shown in the initial experiments on spoken document retrieval. Not only spoken
documents including the spoken query can be retrieved based on the phonetic
structures, but spoken documents semantically related to the query but not
including the query can also be retrieved based on the semantics.
| 2,019 | Computation and Language |
Tree-structured multi-stage principal component analysis (TMPCA): theory
and applications | A PCA based sequence-to-vector (seq2vec) dimension reduction method for the
text classification problem, called the tree-structured multi-stage principal
component analysis (TMPCA) is presented in this paper. Theoretical analysis and
applicability of TMPCA are demonstrated as an extension to our previous work
(Su, Huang & Kuo). Unlike conventional word-to-vector embedding methods, the
TMPCA method conducts dimension reduction at the sequence level without labeled
training data. Furthermore, it can preserve the sequential structure of input
sequences. We show that TMPCA is computationally efficient and able to
facilitate sequence-based text classification tasks by preserving strong mutual
information between its input and output mathematically. It is also
demonstrated by experimental results that a dense (fully connected) network
trained on the TMPCA preprocessed data achieves better performance than
state-of-the-art fastText and other neural-network-based solutions.
| 2,018 | Computation and Language |
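One way to read the stage-wise reduction is: concatenate adjacent pairs of vectors and project each pair back to the original dimension with PCA, halving the sequence length per stage until one vector remains. The sketch below follows that reading with toy data; the cited paper should be consulted for the exact procedure.

```python
# Toy sketch of tree-structured multi-stage PCA (one possible reading of the
# abstract; per-stage PCA is fit on the batch itself here for illustration).
import numpy as np
from sklearn.decomposition import PCA

def tmpca(batch, dim):
    """batch: (num_seqs, seq_len, dim), seq_len a power of two."""
    while batch.shape[1] > 1:
        n, t, d = batch.shape
        pairs = batch.reshape(n * t // 2, 2 * d)      # adjacent pairs
        reduced = PCA(n_components=dim).fit_transform(pairs)
        batch = reduced.reshape(n, t // 2, dim)       # halve sequence length
    return batch[:, 0, :]                             # one vector per sequence

seqs = np.random.randn(64, 8, 16)   # 64 sequences, length 8, dim 16
print(tmpca(seqs, 16).shape)        # (64, 16)
```

No labels are used anywhere in the loop, matching the abstract's claim that the reduction is performed at the sequence level without labeled training data.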
German Dialect Identification Using Classifier Ensembles | In this paper we present the GDI_classification entry to the second German
Dialect Identification (GDI) shared task organized within the scope of the
VarDial Evaluation Campaign 2018. We present a system based on SVM classifier
ensembles trained on characters and words. The system was trained on a
collection of speech transcripts of four Swiss-German dialects provided by the
organizers. The transcripts included in the dataset contained speakers from
Basel, Bern, Lucerne, and Zurich. Our entry in the challenge reached 62.03%
F1-score and was ranked third out of eight teams.
| 2,018 | Computation and Language |
Multi-scale Alignment and Contextual History for Attention Mechanism in
Sequence-to-sequence Model | A sequence-to-sequence model is a neural network module for mapping two
sequences of different lengths. The sequence-to-sequence model has three core
modules: encoder, decoder, and attention. Attention is the bridge that connects
the encoder and decoder modules and improves model performance in many tasks.
In this paper, we propose two ideas to improve sequence-to-sequence model
performance by enhancing the attention module. First, we maintain the history
of the location and the expected context from several previous time-steps.
Second, we apply multiscale convolution from several previous attention vectors
to the current decoder state. We utilized our proposed framework for
sequence-to-sequence speech recognition and text-to-speech systems. The results
reveal that our proposed extension could improve performance significantly
compared to a standard attention baseline.
| 2,018 | Computation and Language |
Examining Scientific Writing Styles from the Perspective of Linguistic
Complexity | Publishing articles in high-impact English journals is difficult for scholars
around the world, especially for non-native English-speaking scholars (NNESs),
most of whom struggle with proficiency in English. In order to uncover the
differences in English scientific writing between native English-speaking
scholars (NESs) and NNESs, we collected a large-scale data set containing more
than 150,000 full-text articles published in PLoS between 2006 and 2015. We
divided these articles into three groups according to the ethnic backgrounds of
the first and corresponding authors, obtained by Ethnea, and examined the
scientific writing styles in English from a two-fold perspective of linguistic
complexity: (1) syntactic complexity, including measurements of sentence length
and sentence complexity; and (2) lexical complexity, including measurements of
lexical diversity, lexical density, and lexical sophistication. The
observations suggest marginal differences between groups in syntactic and
lexical complexity.
| 2,018 | Computation and Language |
Deep Dialog Act Recognition using Multiple Token, Segment, and Context
Information Representations | Dialog act (DA) recognition is a task that has been widely explored over the
years. Recently, most approaches to the task explored different DNN
architectures to combine the representations of the words in a segment and
generate a segment representation that provides cues for intention. In this
study, we explore means to generate more informative segment representations,
not only by exploring different network architectures, but also by considering
different token representations, not only at the word level, but also at the
character and functional levels. At the word level, in addition to the commonly
used uncontextualized embeddings, we explore the use of contextualized
representations, which provide information concerning word sense and segment
structure. Character-level tokenization is important to capture
intention-related morphological aspects that cannot be captured at the word
level. Finally, the functional level provides an abstraction from words, which
shifts the focus to the structure of the segment. We also explore approaches to
enrich the segment representation with context information from the history of
the dialog, both in terms of the classifications of the surrounding segments
and the turn-taking history. This kind of information has already been proved
important for the disambiguation of DAs in previous studies. Nevertheless, we
are able to capture additional information by considering a summary of the
dialog history and a wider turn-taking context. By combining the best
approaches at each step, we achieve results that surpass the previous
state-of-the-art on generic DA recognition on both SwDA and MRDA, two of the
most widely explored corpora for the task. Furthermore, by considering both
past and future context, simulating the annotation scenario, our approach achieves
a performance similar to that of a human annotator on SwDA and surpasses it on
MRDA.
| 2,019 | Computation and Language |
ASR-free CNN-DTW keyword spotting using multilingual bottleneck features
for almost zero-resource languages | We consider multilingual bottleneck features (BNFs) for nearly zero-resource
keyword spotting. This forms part of a United Nations effort using keyword
spotting to support humanitarian relief programmes in parts of Africa where
languages are severely under-resourced. We use 1920 isolated keywords (40
types, 34 minutes) as exemplars for dynamic time warping (DTW) template
matching, which is performed on a much larger body of untranscribed speech.
These DTW costs are used as targets for a convolutional neural network (CNN)
keyword spotter, giving a much faster system than direct DTW. Here we consider
how available data from well-resourced languages can improve this CNN-DTW
approach. We show that multilingual BNFs trained on ten languages improve the
area under the ROC curve of a CNN-DTW system by 10.9% absolute relative to the
MFCC baseline. By combining low-resource DTW-based supervision with information
from well-resourced languages, CNN-DTW is a competitive option for low-resource
keyword spotting.
| 2,018 | Computation and Language |
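The DTW costs used as CNN targets above can be computed with the textbook dynamic program; a small sketch with a cosine local cost and invented frame dimensions follows.

```python
# Plain dynamic time warping between two feature sequences (toy version of
# the DTW template matching described above; frames and cost are illustrative).
import numpy as np

def dtw_cost(a, b):
    """a: (m, d), b: (n, d) feature sequences; returns normalized DTW cost."""
    m, n = len(a), len(b)
    # Cosine distance between every pair of frames.
    dist = 1 - (a @ b.T) / (np.linalg.norm(a, axis=1)[:, None]
                            * np.linalg.norm(b, axis=1)[None, :])
    acc = np.full((m + 1, n + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    return acc[m, n] / (m + n)   # length-normalized alignment cost

template = np.random.randn(40, 39)   # e.g. 39-dim BNF or MFCC frames
search = np.random.randn(60, 39)
print(dtw_cost(template, search))
```

Replacing MFCC frames with multilingual bottleneck features changes only the input to this function, which is why the BNF comparison in the abstract is a drop-in experiment.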
Automatic Speech Recognition for Humanitarian Applications in Somali | We present our first efforts in building an automatic speech recognition
system for Somali, an under-resourced language, using 1.57 hrs of annotated
speech for acoustic model training. The system is part of an ongoing effort by
the United Nations (UN) to implement keyword spotting systems supporting
humanitarian relief programmes in parts of Africa where languages are severely
under-resourced. We evaluate several types of acoustic model, including recent
neural architectures. Language model data augmentation using a combination of
recurrent neural networks (RNN) and long short-term memory neural networks
(LSTMs) as well as the perturbation of acoustic data are also considered. We
find that both types of data augmentation are beneficial to performance, with
our best system using a combination of convolutional neural networks (CNNs),
time-delay neural networks (TDNNs) and bi-directional long short term memory
(BLSTMs) to achieve a word error rate of 53.75%.
| 2,018 | Computation and Language |
Otem&Utem: Over- and Under-Translation Evaluation Metric for NMT | Although neural machine translation (NMT) yields promising translation
performance, it unfortunately suffers from over- and under-translation issues
[Tu et al., 2016], which have become research hotspots in NMT. At present,
these studies mainly apply the dominant automatic evaluation metrics, such as
BLEU, to evaluate the overall translation quality with respect to both
adequacy and fluency. However, they are unable to accurately measure the
ability of NMT systems in dealing with the above-mentioned issues. In this
paper, we propose two quantitative metrics, Otem and Utem, to automatically
evaluate system performance in terms of over- and under-translation
respectively. Both metrics are based on the proportion of mismatched n-grams
between gold reference and system translation. We evaluate both metrics by
comparing their scores with human evaluations, where the values of the Pearson
Correlation Coefficient reveal their strong correlation. Moreover, in-depth
analyses on various translation systems indicate some inconsistency between
BLEU and our proposed metrics, highlighting the necessity and significance of
our metrics.
| 2,018 | Computation and Language |
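A simplified reading of the mismatched-n-gram idea: count n-grams the hypothesis produces beyond the reference (over-translation) and reference n-grams the hypothesis fails to cover (under-translation), normalized by the respective totals. The sketch below follows this reading for a single n-gram order; the actual Otem/Utem definitions in the paper combine several orders and additional terms.

```python
# Sketch of the mismatched-n-gram idea behind Otem/Utem (simplified reading,
# single n-gram order; not the paper's exact formulas).
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def over_under(hyp, ref, n=1):
    h, r = ngrams(hyp, n), ngrams(ref, n)
    over = sum((h - r).values()) / max(1, sum(h.values()))    # extra n-grams
    under = sum((r - h).values()) / max(1, sum(r.values()))   # missed n-grams
    return over, under

hyp = "the cat the cat sat".split()
ref = "the cat sat on the mat".split()
print(over_under(hyp, ref))  # repeated "the cat" -> over; "on ... mat" -> under
```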
Cross-lingual Argumentation Mining: Machine Translation (and a bit of
Projection) is All You Need! | Argumentation mining (AM) requires the identification of complex discourse
structures and has lately been applied with success monolingually. In this
work, we show that the existing resources are, however, not adequate for
assessing cross-lingual AM, due to their heterogeneity or lack of complexity.
We therefore create suitable parallel corpora by (human and machine)
translating a popular AM dataset consisting of persuasive student essays into
German, French, Spanish, and Chinese. We then compare (i) annotation projection
and (ii) bilingual word embeddings based direct transfer strategies for
cross-lingual AM, finding that the former performs considerably better and
almost eliminates the loss from cross-lingual transfer. Moreover, we find that
annotation projection works equally well when using either costly human or
cheap machine translations. Our code and data are available at
\url{http://github.com/UKPLab/coling2018-xling_argument_mining}.
| 2,018 | Computation and Language |
The division of labor in communication: Speakers help listeners account
for asymmetries in visual perspective | Recent debates over adults' theory of mind use have been fueled by surprising
failures of perspective-taking in communication, suggesting that
perspective-taking can be relatively effortful. How, then, should speakers and
listeners allocate their resources to achieve successful communication? We
begin with the observation that this shared goal induces a natural division of
labor: the resources one agent chooses to allocate toward perspective-taking
should depend on their expectations about the other's allocation. We formalize
this idea in a resource-rational model augmenting recent probabilistic
weighting accounts with a mechanism for (costly) control over the degree of
perspective-taking. In a series of simulations, we first derive an intermediate
degree of perspective weighting as an optimal tradeoff between expected costs
and benefits of perspective-taking. We then present two behavioral experiments
testing novel predictions of our model. In Experiment 1, we manipulated the
presence or absence of occlusions in a director-matcher task and found that
speakers spontaneously produced more informative descriptions to account for
"known unknowns" in their partner's private view. In Experiment 2, we compared
the scripted utterances used by confederates in prior work with those produced
in interactions with unscripted directors. We found that confederates were
systematically less informative than listeners would initially expect given the
presence of occlusions, but listeners used violations to adaptively make fewer
errors over time. Taken together, our work suggests that people are not simply
"mindblind"; they use contextually appropriate expectations to navigate the
division of labor with their partner. We discuss how a resource rational
framework may provide a more deeply explanatory foundation for understanding
flexible perspective-taking under processing constraints.
| 2,020 | Computation and Language |
"Bilingual Expert" Can Find Translation Errors | Recent advances in statistical machine translation via the adoption of neural
sequence-to-sequence models empower the end-to-end system to achieve
state-of-the-art in many WMT benchmarks. The performance of such machine
translation (MT) system is usually evaluated by automatic metric BLEU when the
golden references are provided for validation. However, for model inference or
production deployment, the golden references are often unavailable or
require expensive human annotation with bilingual expertise. In order to
address the issue of quality evaluation (QE) without reference, we propose a
general framework for automatic evaluation of translation output for most WMT
quality evaluation tasks. We first build a conditional target language model
with a novel bidirectional transformer, named neural bilingual expert model,
which is pre-trained on large parallel corpora for feature extraction. For QE
inference, the bilingual expert model can simultaneously produce the joint
latent representation between the source and the translation, and real-valued
measurements of possible erroneous tokens based on the prior knowledge learned
from parallel data. Subsequently, the features will further be fed into a
simple Bi-LSTM predictive model for quality evaluation. The experimental
results show that our approach achieves the state-of-the-art performance in the
quality estimation track of WMT 2017/2018.
| 2,018 | Computation and Language |
Text Classification based on Multiple Block Convolutional Highways | In the Text Classification areas of Sentiment Analysis,
Subjectivity/Objectivity Analysis, and Opinion Polarity, Convolutional Neural
Networks have gained special attention because of their performance and
accuracy. In this work, we applied recent advances in CNNs and propose a novel
architecture, Multiple Block Convolutional Highways (MBCH), which achieves
improved accuracy on multiple popular benchmark datasets, compared to previous
architectures. The MBCH is based on new techniques and architectures including
highway networks, DenseNet, batch normalization and bottleneck layers. In
addition, to cope with the limitations of existing pre-trained word vectors
which are used as inputs for the CNN, we propose a novel method, Improved Word
Vectors (IWV). The IWV improves the accuracy of CNNs which are used for text
classification tasks.
| 2,018 | Computation and Language |
Repartitioning of the ComplexWebQuestions Dataset | Recently, Talmor and Berant (2018) introduced ComplexWebQuestions - a dataset
focused on answering complex questions by decomposing them into a sequence of
simpler questions and extracting the answer from retrieved web snippets. In
their work the authors used a pre-trained reading comprehension (RC) model
(Salant and Berant, 2018) to extract the answer from the web snippets. In this
short note we show that training a RC model directly on the training data of
ComplexWebQuestions reveals a leakage from the training set to the test set
that allows one to obtain unreasonably high performance. As a solution, we
construct a new partitioning of ComplexWebQuestions that does not suffer from
this leakage and publicly release it. We also perform an empirical evaluation
on these two datasets and show that training a RC model on the training data
substantially improves state-of-the-art performance.
| 2,018 | Computation and Language |
Finding Better Subword Segmentation for Neural Machine Translation | For different language pairs, word-level neural machine translation (NMT)
models with a fixed-size vocabulary suffer from the same problem of
representing out-of-vocabulary (OOV) words. The common practice usually
replaces all these rare or unknown words with a <UNK> token, which limits the
translation performance to some extent. Most recent work has handled such a
problem by splitting words into characters or other specially extracted subword
units to enable open-vocabulary translation. Byte pair encoding (BPE) is one of
the successful attempts that has been shown extremely competitive by providing
effective subword segmentation for NMT systems. In this paper, we extend the
BPE style segmentation to a general unsupervised framework with three
statistical measures: frequency (FRQ), accessor variety (AV) and description
length gain (DLG). We test our approach on two translation tasks: German to
English and Chinese to English. The experimental results show that AV and DLG
enhanced systems outperform the FRQ baseline in the frequency weighted schemes
at different significance levels.
| 2,018 | Computation and Language |
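The FRQ variant reduces to classic BPE: repeatedly merge the adjacent symbol pair with the highest frequency-weighted count. A compact sketch follows; swapping `pair_scores` for an accessor variety (AV) or description length gain (DLG) scorer would give the paper's other two variants (the function and data here are illustrative).

```python
# Minimal BPE-style merge loop with the frequency (FRQ) goodness measure.
from collections import Counter

def pair_scores(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    scores = Counter()
    for word, freq in vocab.items():
        for a, b in zip(word, word[1:]):
            scores[(a, b)] += freq
    return scores

def merge(vocab, pair):
    """Replace every occurrence of the pair with a single merged symbol."""
    a, b = pair
    out = {}
    for word, freq in vocab.items():
        new, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == (a, b):
                new.append(a + b); i += 2
            else:
                new.append(word[i]); i += 1
        out[tuple(new)] = freq
    return out

vocab = {tuple("lower"): 5, tuple("lowest"): 2, tuple("newer"): 6}
for _ in range(3):
    best = pair_scores(vocab).most_common(1)[0][0]   # FRQ: most frequent pair
    vocab = merge(vocab, best)
print(vocab)
```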
A Novel ILP Framework for Summarizing Content with High Lexical Variety | Summarizing content contributed by individuals can be challenging, because
people make different lexical choices even when describing the same events.
However, there remains a significant need to summarize such content. Examples
include the student responses to post-class reflective questions, product
reviews, and news articles published by different news agencies related to the
same events. High lexical diversity of these documents hinders the system's
ability to effectively identify salient content and reduce summary redundancy.
In this paper, we overcome this issue by introducing an integer linear
programming-based summarization framework. It incorporates a low-rank
approximation to the sentence-word co-occurrence matrix to intrinsically group
semantically-similar lexical items. We conduct extensive experiments on
datasets of student responses, product reviews, and news documents. Our
approach compares favorably to a number of extractive baselines as well as a
neural abstractive summarization system. The paper finally sheds light on when
and why the proposed framework is effective at summarizing content with high
lexical variety.
| 2,018 | Computation and Language |
Understanding and representing the semantics of large structured
documents | Understanding large, structured documents like scholarly articles, requests
for proposals or business reports is a complex and difficult task. It involves
discovering a document's overall purpose and subject(s), understanding the
function and meaning of its sections and subsections, and extracting low level
entities and facts about them. In this research, we present a deep learning
based document ontology to capture the general purpose semantic structure and
domain specific semantic concepts from a large number of academic articles and
business documents. The ontology is able to describe different functional parts
of a document, which can be used to enhance semantic indexing for a better
understanding by human beings and machines. We evaluate our models through
extensive experiments on datasets of scholarly articles from arXiv and Request
for Proposal documents.
| 2,018 | Computation and Language |
Modular Mechanistic Networks: On Bridging Mechanistic and
Phenomenological Models with Deep Neural Networks in Natural Language
Processing | Natural language processing (NLP) can be done using either top-down
(theory-driven) or bottom-up (data-driven) approaches, which we call mechanistic and
phenomenological respectively. The approaches are frequently considered to
stand in opposition to each other. Examining some recent approaches in deep
learning we argue that deep neural networks incorporate both perspectives and,
furthermore, that leveraging this aspect of deep learning may help in solving
complex problems within language technology, such as modelling language and
perception in the domain of spatial cognition.
| 2,002 | Computation and Language |
Differentiable Perturb-and-Parse: Semi-Supervised Parsing with a
Structured Variational Autoencoder | Human annotation for syntactic parsing is expensive, and large resources are
available only for a fraction of languages. A question we ask is whether one
can leverage abundant unlabeled texts to improve syntactic parsers, beyond just
using the texts to obtain more generalisable lexical features (i.e. beyond word
embeddings). To this end, we propose a novel latent-variable generative model
for semi-supervised syntactic dependency parsing. As exact inference is
intractable, we introduce a differentiable relaxation to obtain approximate
samples and compute gradients with respect to the parser parameters. Our method
(Differentiable Perturb-and-Parse) relies on differentiable dynamic programming
over stochastically perturbed edge scores. We demonstrate effectiveness of our
approach with experiments on English, French and Swedish.
| 2,019 | Computation and Language |
Variational Memory Encoder-Decoder | Introducing variability while maintaining coherence is a core task in
learning to generate utterances in conversation. Standard neural
encoder-decoder models and their extensions using conditional variational
autoencoder often result in either trivial or digressive responses. To overcome
this, we explore a novel approach that injects variability into neural
encoder-decoder via the use of external memory as a mixture model, namely
Variational Memory Encoder-Decoder (VMED). By associating each memory read with
a mode in the latent mixture distribution at each timestep, our model can
capture the variability observed in sequential data such as natural
conversations. We empirically compare the proposed model against other recent
approaches on various conversational datasets. The results show that VMED
consistently achieves significant improvement over others in both metric-based
and qualitative evaluations.
| 2,018 | Computation and Language |
Concurrent Learning of Semantic Relations | Discovering whether words are semantically related and identifying the
specific semantic relation that holds between them is of crucial importance for
NLP as it is essential for tasks like query expansion in IR. Within this
context, different methodologies have been proposed that either exclusively
focus on a single lexical relation (e.g. hypernymy vs. random) or learn
specific classifiers capable of identifying multiple semantic relations (e.g.
hypernymy vs. synonymy vs. random). In this paper, we propose another way to
look at the problem that relies on the multi-task learning paradigm. In
particular, we want to study whether the learning process of a given semantic
relation (e.g. hypernymy) can be improved by the concurrent learning of another
semantic relation (e.g. co-hyponymy). Within this context, we particularly
examine the benefits of semi-supervised learning where the training of a
prediction function is performed over few labeled data jointly with many
unlabeled ones. Preliminary results based on simple learning strategies and
state-of-the-art distributional feature representations show that concurrent
learning can lead to improvements in a vast majority of tested situations.
| 2,018 | Computation and Language |
Open Source Automatic Speech Recognition for German | High quality Automatic Speech Recognition (ASR) is a prerequisite for
speech-based applications and research. While state-of-the-art ASR software is
freely available, the language dependent acoustic models are lacking for
languages other than English, due to the limited amount of freely available
training data. We train acoustic models for German with Kaldi on two datasets,
which are both distributed under a Creative Commons license. The resulting
model is freely redistributable, lowering the cost of entry for German ASR. The
models are trained on a total of 412 hours of German read speech data and we
achieve a relative word error reduction of 26% by adding data from the Spoken
Wikipedia Corpus to the previously best freely available German acoustic model
recipe and dataset. Our best model achieves a word error rate of 14.38 on the
Tuda-De test set. Due to the large amount of speakers and the diversity of
topics included in the training data, our model is robust against speaker
variation and topic shift.
| 2,018 | Computation and Language |
Automatic Short Answer Grading and Feedback Using Text Mining Methods | Automatic grading is not a new approach but the need to adapt the latest
technology to automatic grading has become very important. As the technology
has rapidly become more powerful at scoring exams and essays, especially from
the 1990s onwards, partially or wholly automated grading systems using
computational methods have evolved and have become a major area of research. In
particular, the demand of scoring of natural language responses has created a
need for tools that can be applied to automatically grade these responses. In
this paper, we focus on the concept of automatic grading of short answer
questions such as are typical in the UK GCSE system, and providing useful
feedback on their answers to students. We present experimental results on a
dataset provided from the introductory computer science class in the University
of North Texas. We first apply standard data mining techniques to the corpus of
student answers for the purpose of measuring similarity between the student
answers and the model answer. This is based on the number of common words. We
then evaluate the relation between these similarities and marks awarded by
scorers. We then consider an approach that groups student answers into
clusters. Each cluster would be awarded the same mark, and the same feedback
given to each answer in a cluster. In this manner, we demonstrate that clusters
indicate the groups of students who are awarded the same or the similar scores.
Words in each cluster are compared to show that clusters are constructed based
on how many and which words of the model answer have been used. The main
novelty in this paper is that we design a model to predict marks based on the
similarities between the student answers and the model answer.
| 2,020 | Computation and Language |
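A toy sketch of the described pipeline: represent answers by binary word vectors, score common-word similarity against the model answer, and cluster answers so each cluster can share a mark and feedback. The data and cluster count are invented.

```python
# Sketch of common-word similarity scoring plus answer clustering
# (toy data; a real system would use a full corpus of graded answers).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

model_answer = "a stack is a last in first out data structure"
answers = [
    "a stack is last in first out",
    "stack stores data first in first out",
    "it is a LIFO structure",
    "no idea",
]

vec = CountVectorizer(binary=True).fit([model_answer] + answers)
M = vec.transform([model_answer])
A = vec.transform(answers)

overlap = cosine_similarity(A, M).ravel()   # common-word similarity scores
clusters = KMeans(n_clusters=2, n_init=10,
                  random_state=0).fit_predict(A.toarray())
for ans, sim, c in zip(answers, overlap, clusters):
    print(f"{sim:.2f}  cluster {c}  {ans}")
```

The similarity column would be regressed against human marks, while each cluster receives one mark and one piece of feedback shared by all of its members.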
Auto-Encoding Variational Neural Machine Translation | We present a deep generative model of bilingual sentence pairs for machine
translation. The model generates source and target sentences jointly from a
shared latent representation and is parameterised by neural networks. We
perform efficient training using amortised variational inference and
reparameterised gradients. Additionally, we discuss the statistical
implications of joint modelling and propose an efficient approximation to
maximum a posteriori decoding for fast test-time predictions. We demonstrate
the effectiveness of our model in three machine translation scenarios:
in-domain training, mixed-domain training, and learning from a mix of
gold-standard and synthetic data. Our experiments show consistently that our
joint formulation outperforms conditional modelling (i.e. standard neural
machine translation) in all such scenarios.
| 2,019 | Computation and Language |
Judging a Book by its Description : Analyzing Gender Stereotypes in the
Man Bookers Prize Winning Fiction | The presence of gender stereotypes in many aspects of society is a well-known
phenomenon. In this paper, we focus on studying and quantifying such
stereotypes and bias in the Man Bookers Prize winning fiction. We consider 275
books shortlisted for Man Bookers Prize between 1969 and 2017. The gender bias
is analyzed by semantic modeling of book descriptions on Goodreads. This
reveals the pervasiveness of gender bias and stereotype in the books on
different features like occupation, introductions, and actions associated with the
characters in the book.
| 2,018 | Computation and Language |
Concept Tagging for Natural Language Understanding: Two Decadelong
Algorithm Development | Concept tagging is a type of structured learning needed for natural language
understanding (NLU) systems. In this task, meaning labels from a domain
ontology are assigned to word sequences. In this paper, we review the
algorithms developed over the last twenty-five years. We perform a comparative
evaluation of generative, discriminative and deep learning methods on two
public datasets. We report on the statistical variability of performance
measurements. The third contribution is the release of a repository of the
algorithms, datasets and recipes for NLU evaluation.
| 2,018 | Computation and Language |
Resource-Size matters: Improving Neural Named Entity Recognition with
Optimized Large Corpora | This study improves the performance of neural named entity recognition by a
margin of up to 11% in F-score on the example of a low-resource language like
German, thereby outperforming existing baselines and establishing a new
state-of-the-art on each single open-source dataset. Rather than designing
deeper and wider hybrid neural architectures, we gather all available resources
and perform a detailed optimization and grammar-dependent morphological
processing consisting of lemmatization and part-of-speech tagging prior to
exposing the raw data to any training process. We test our approach in a
threefold monolingual experimental setup of a) single, b) joint, and c)
optimized training and shed light on the dependency of downstream-tasks on the
size of corpora used to compute word embeddings.
| 2,018 | Computation and Language |
A small Griko-Italian speech translation corpus | This paper presents an extension to a very low-resource parallel corpus
collected in an endangered language, Griko, making it useful for computational
research. The corpus consists of 330 utterances (about 20 minutes of speech)
which have been transcribed and translated in Italian, with annotations for
word-level speech-to-transcription and speech-to-translation alignments. The
corpus also includes morphosyntactic tags and word-level glosses. Applying an
automatic unit discovery method, pseudo-phones were also generated. We detail
how the corpus was collected, cleaned and processed, and we illustrate its use
on zero-resource tasks by presenting some baseline results for the task of
speech-to-translation alignment and unsupervised word discovery. The dataset is
available online, aiming to encourage replicability and diversity in
computational language documentation experiments.
| 2,018 | Computation and Language |
Clustering Prominent People and Organizations in Topic-Specific Text
Corpora | Named entities in text documents are the names of people, organizations,
locations, or other types of objects in the documents that exist in the real
world. A persisting research challenge is to use computational techniques to
identify such entities in text documents. Once identified, several text mining
tools and algorithms can be utilized to leverage these discovered named
entities and improve NLP applications. In this paper, a method that clusters
prominent names of people and organizations based on their semantic similarity
in a text corpus is proposed. The method relies on common named entity
recognition techniques and on recent word embeddings models. The semantic
similarity scores generated using the word embeddings models for the named
entities are used to cluster similar entities of the people and organizations
types. Two human judges evaluated ten variations of the method after it was run
on a corpus that consists of 4,821 articles on a specific topic. The
performance of the method was measured using three quantitative measures. The
results of these three metrics demonstrate that the method is effective in
clustering semantically similar named entities.
| 2,019 | Computation and Language |