Titles | Abstracts | Years | Categories |
---|---|---|---|
Toward Computation and Memory Efficient Neural Network Acoustic Models
with Binary Weights and Activations | Neural network acoustic models have significantly advanced state of the art
speech recognition over the past few years. However, they are usually
computationally expensive due to the large number of matrix-vector
multiplications and nonlinearity operations. Neural network models also require
significant amounts of memory for inference because of the large model size.
For these two reasons, it is challenging to deploy neural network based speech
recognizers on resource-constrained platforms such as embedded devices. This
paper investigates the use of binary weights and activations for computation
and memory efficient neural network acoustic models. Compared to real-valued
weight matrices, binary weights require much fewer bits for storage, thereby
cutting down the memory footprint. Furthermore, with binary weights or
activations, the matrix-vector multiplications are turned into addition and
subtraction operations, which are computationally much faster and more energy
efficient for hardware platforms. In this paper, we study the applications of
binary weights and activations for neural network acoustic modeling, reporting
encouraging results on the WSJ and AMI corpora.
| 2017 | Computation and Language |
Frame-Semantic Parsing with Softmax-Margin Segmental RNNs and a
Syntactic Scaffold | We present a new, efficient frame-semantic parser that labels semantic
arguments to FrameNet predicates. Built using an extension to the segmental RNN
that emphasizes recall, our basic system achieves competitive performance
without any calls to a syntactic parser. We then introduce a method that uses
phrase-syntactic annotations from the Penn Treebank during training only,
through a multitask objective; no parsing is required at training or test time.
This "syntactic scaffold" offers a cheaper alternative to traditional syntactic
pipelining, and achieves state-of-the-art performance.
| 2017 | Computation and Language |
Frame-Based Continuous Lexical Semantics through Exponential Family
Tensor Factorization and Semantic Proto-Roles | We study how different frame annotations complement one another when learning
continuous lexical semantics. We learn the representations from a tensorized
skip-gram model that consistently encodes syntactic-semantic content better,
with multiple 10% gains over baselines.
| 2017 | Computation and Language |
Recurrent neural networks with specialized word embeddings for
health-domain named-entity recognition | Background. Previous state-of-the-art systems on Drug Name Recognition (DNR)
and Clinical Concept Extraction (CCE) have focused on a combination of text
"feature engineering" and conventional machine learning algorithms such as
conditional random fields and support vector machines. However, developing good
features is inherently time-consuming. Conversely, more modern machine
learning approaches such as recurrent neural networks (RNNs) have proved
capable of automatically learning effective features from either random
assignments or automated word "embeddings". Objectives. (i) To create a highly
accurate DNR and CCE system that avoids conventional, time-consuming feature
engineering. (ii) To create richer, more specialized word embeddings by using
health domain datasets such as MIMIC-III. (iii) To evaluate our systems over
three contemporary datasets. Methods. Two deep learning methods, namely the
Bidirectional LSTM and the Bidirectional LSTM-CRF, are evaluated. A CRF model
is set as the baseline to compare the deep learning systems to a traditional
machine learning approach. The same features are used for all the models.
Results. We have obtained the best results with the Bidirectional LSTM-CRF
model, which has outperformed all previously proposed systems. The specialized
embeddings have helped to cover unusual words in DDI-DrugBank and DDI-MedLine,
but not in the 2010 i2b2/VA IRB Revision dataset. Conclusion. We present a
state-of-the-art system for DNR and CCE. Automated word embeddings have allowed
us to avoid costly feature engineering and achieve higher accuracy.
Nevertheless, the embeddings need to be retrained over datasets that are
adequate for the domain, in order to adequately cover the domain-specific
vocabulary.
| 2018 | Computation and Language |
Improving Distributed Representations of Tweets - Present and Future | Unsupervised representation learning for tweets is an important research
field which helps in solving several business applications such as sentiment
analysis, hashtag prediction, paraphrase detection and microblog ranking. A
good tweet representation learning model must handle the idiosyncratic nature
of tweets which poses several challenges such as short length, informal words,
unusual grammar and misspellings. However, there is a lack of prior work which
surveys the representation learning models with a focus on tweets. In this
work, we organize the models based on their objective functions, which aids the
understanding of the literature. We also provide interesting future directions,
which we believe are fruitful in advancing this field by building high-quality
tweet representation learning models.
| 2017 | Computation and Language |
Stronger Baselines for Trustable Results in Neural Machine Translation | Interest in neural machine translation has grown rapidly as its effectiveness
has been demonstrated across language and data scenarios. New research
regularly introduces architectural and algorithmic improvements that lead to
significant gains over "vanilla" NMT implementations. However, these new
techniques are rarely evaluated in the context of previously published
techniques, specifically those that are widely used in state-of-the-art
production and shared-task systems. As a result, it is often difficult to
determine whether improvements from research will carry over to systems
deployed for real-world use. In this work, we recommend three specific methods
that are relatively easy to implement and result in much stronger experimental
systems. Beyond reporting significantly higher BLEU scores, we conduct an
in-depth analysis of where improvements originate and what inherent weaknesses
of basic NMT models are being addressed. We then compare the relative gains
afforded by several other techniques proposed in the literature when starting
with vanilla systems versus our stronger baselines, showing that experimental
conclusions may change depending on the baseline chosen. This indicates that
choosing a strong baseline is crucial for reporting reliable experimental
results.
| 2017 | Computation and Language |
AP17-OLR Challenge: Data, Plan, and Baseline | We present the data profile and the evaluation plan of the second oriental
language recognition (OLR) challenge AP17-OLR. Compared to the event last year
(AP16-OLR), the new challenge involves more languages and focuses more on short
utterances. The data is offered by SpeechOcean and the NSFC M2ASR project. Two
types of baselines are constructed to assist the participants, one is based on
the i-vector model and the other is based on various neural networks. We report
the baseline results evaluated with various metrics defined by the AP17-OLR
evaluation plan and demonstrate that the combined database is a reasonable data
resource for multilingual research. All the data is free for participants, and
the Kaldi recipes for the baselines have been published online.
| 2017 | Computation and Language |
Two-Stage Synthesis Networks for Transfer Learning in Machine
Comprehension | We develop a technique for transfer learning in machine comprehension (MC)
using a novel two-stage synthesis network (SynNet). Given a high-performing MC
model in one domain, our technique aims to answer questions about documents in
another domain, where we use no labeled data of question-answer pairs. Using
the proposed SynNet with a pretrained model from the SQuAD dataset on the
challenging NewsQA dataset, we achieve an F1 measure of 44.3% with a single
model and 46.6% with an ensemble, approaching performance of in-domain models
(F1 measure of 50.0%) and outperforming the out-of-domain baseline of 7.6%,
without use of provided annotations.
| 2017 | Computation and Language |
Relevance of Unsupervised Metrics in Task-Oriented Dialogue for
Evaluating Natural Language Generation | Automated metrics such as BLEU are widely used in the machine translation
literature. They have also been used recently in the dialogue community for
evaluating dialogue response generation. However, previous work in dialogue
response generation has shown that these metrics do not correlate strongly with
human judgment in the non task-oriented dialogue setting. Task-oriented
dialogue responses are expressed on narrower domains and exhibit lower
diversity. It is thus reasonable to think that these automated metrics would
correlate well with human judgment in the task-oriented setting where the
generation task consists of translating dialogue acts into a sentence. We
conduct an empirical study to confirm whether this is the case. Our findings
indicate that these automated metrics have stronger correlation with human
judgments in the task-oriented setting compared to what has been observed in
the non task-oriented setting. We also observe that these metrics correlate
even better for datasets which provide multiple ground truth reference
sentences. In addition, we show that some of the currently available corpora
for task-oriented language generation can be solved with simple models and
advocate for more challenging datasets.
| 2017 | Computation and Language |
Automatic Mapping of French Discourse Connectives to PDTB Discourse
Relations | In this paper, we present an approach to exploit phrase tables generated by
statistical machine translation in order to map French discourse connectives to
discourse relations. Using this approach, we created ConcoLeDisCo, a lexicon of
French discourse connectives and their PDTB relations. When evaluated against
LEXCONN, ConcoLeDisCo achieves a recall of 0.81 and an Average Precision of
0.68 for the Concession and Condition relations.
| 2017 | Computation and Language |
Synthetic Data for Neural Machine Translation of Spoken-Dialects | In this paper, we introduce a novel approach to generate synthetic data for
training Neural Machine Translation systems. The proposed approach transforms a
given parallel corpus between a written language and a target language to a
parallel corpus between a spoken dialect variant and the target language. Our
approach is language independent and can be used to generate data for any
variant of the source language such as slang or spoken dialect or even for a
different language that is closely related to the source language.
The proposed approach is based on local embedding projection of distributed
representations which utilizes monolingual embeddings to transform parallel
data across language variants. We report experimental results on Levantine to
English translation using Neural Machine Translation. We show that the
generated data can improve a very large-scale system by more than 2.8 BLEU
points using synthetic spoken data, which shows that it can be used to provide a
reliable translation system for a spoken dialect that does not have sufficient
parallel data.
| 2017 | Computation and Language |
Efficient Attention using a Fixed-Size Memory Representation | The standard content-based attention mechanism typically used in
sequence-to-sequence models is computationally expensive as it requires the
comparison of large encoder and decoder states at each time step. In this work,
we propose an alternative attention mechanism based on a fixed size memory
representation that is more efficient. Our technique predicts a compact set of
K attention contexts during encoding and lets the decoder compute an efficient
lookup that does not need to consult the memory. We show that our approach
performs on-par with the standard attention mechanism while yielding inference
speedups of 20% for real-world translation tasks and more for tasks with longer
sequences. By visualizing attention scores we demonstrate that our models learn
distinct, meaningful alignments.
| 2017 | Computation and Language |
SAM: Semantic Attribute Modulation for Language Modeling and Style
Variation | This paper presents a Semantic Attribute Modulation (SAM) for language
modeling and style variation. The semantic attribute modulation includes
various document attributes, such as titles, authors, and document categories.
We consider two types of attributes (title attributes and category
attributes) and a flexible attribute selection scheme by automatically scoring
them via an attribute attention mechanism. The semantic attributes are embedded
into the hidden semantic space as the generation inputs. With the attributes
properly harnessed, our proposed SAM can generate interpretable texts with
regard to the input attributes. Qualitative analysis, including word semantic
analysis and attention values, shows the interpretability of SAM. On several
typical text datasets, we empirically demonstrate the superiority of the
Semantic Attribute Modulated language model with different combinations of
document attributes. Moreover, we present a style variation for the lyric
generation using SAM, which shows a strong connection between the style
variation and the semantic attributes.
| 2017 | Computation and Language |
Sample-efficient Actor-Critic Reinforcement Learning with Supervised
Data for Dialogue Management | Deep reinforcement learning (RL) methods have significant potential for
dialogue policy optimisation. However, they suffer from a poor performance in
the early stages of learning. This is especially problematic for on-line
learning with real users. Two approaches are introduced to tackle this problem.
Firstly, to speed up the learning process, two sample-efficient neural network
algorithms: trust region actor-critic with experience replay (TRACER) and
episodic natural actor-critic with experience replay (eNACER) are presented.
For TRACER, the trust region helps to control the learning step size and avoid
catastrophic model changes. For eNACER, the natural gradient identifies the
steepest ascent direction in policy space to speed up the convergence. Both
models employ off-policy learning with experience replay to improve
sample-efficiency. Secondly, to mitigate the cold start issue, a corpus of
demonstration data is utilised to pre-train the models prior to on-line
reinforcement learning. Combining these two approaches, we demonstrate a
practical approach to learn deep RL-based dialogue policies and demonstrate
their effectiveness in a task-oriented information seeking domain.
| 2017 | Computation and Language |
Heterogeneous Supervision for Relation Extraction: A Representation
Learning Approach | Relation extraction is a fundamental task in information extraction. Most
existing methods have heavy reliance on annotations labeled by human experts,
which are costly and time-consuming. To overcome this drawback, we propose a
novel framework, REHession, to conduct relation extractor learning using
annotations from heterogeneous information sources, e.g., knowledge bases and
domain heuristics. These annotations, referred to as heterogeneous supervision,
often conflict with each other, which brings a new challenge to the original
relation extraction task: how to infer the true label from noisy labels for a
given instance. Identifying context information as the backbone of both
relation extraction and true label discovery, we adopt embedding techniques to
learn the distributed representations of context, which bridges all components
with mutual enhancement in an iterative fashion. Extensive experimental results
demonstrate the superiority of REHession over the state-of-the-art.
| 2017 | Computation and Language |
DAG-based Long Short-Term Memory for Neural Word Segmentation | Neural word segmentation has attracted growing research interest for
its ability to alleviate the effort of feature engineering and to utilize
external resources via pre-trained character or word embeddings. In this
paper, we propose a new neural model to incorporate the word-level information
for Chinese word segmentation. Unlike the previous word-based models, our model
still adopts the framework of character-based sequence labeling, which has
advantages on both effectiveness and efficiency at the inference stage. To
utilize the word-level information, we also propose a new long short-term
memory (LSTM) architecture over directed acyclic graph (DAG). Experimental
results demonstrate that our model leads to better performances than the
baseline models.
| 2017 | Computation and Language |
Grammatical Error Correction with Neural Reinforcement Learning | We propose a neural encoder-decoder model with reinforcement learning (NRL)
for grammatical error correction (GEC). Unlike conventional maximum likelihood
estimation (MLE), the model directly optimizes towards an objective that
considers a sentence-level, task-specific evaluation metric, avoiding the
exposure bias issue in MLE. We demonstrate that NRL outperforms MLE both in
human and automated evaluation metrics, achieving the state-of-the-art on a
fluency-oriented GEC corpus.
| 2017 | Computation and Language |
Including Dialects and Language Varieties in Author Profiling | This paper presents a computational approach to author profiling taking
gender and language variety into account. We apply an ensemble system with the
output of multiple linear SVM classifiers trained on character and word
$n$-grams. We evaluate the system using the dataset provided by the organizers
of the 2017 PAN lab on author profiling. Our approach achieved 75% average
accuracy on gender identification on tweets written in four languages and 97%
accuracy on language variety identification for Portuguese.
| 2017 | Computation and Language |
Improving LSTM-CTC based ASR performance in domains with limited
training data | This paper addresses the observed performance gap between automatic speech
recognition (ASR) systems based on Long Short Term Memory (LSTM) neural
networks trained with the connectionist temporal classification (CTC) loss
function and systems based on hybrid Deep Neural Networks (DNNs) trained with
the cross entropy (CE) loss function on domains with limited data. We step
through a number of experiments that show incremental improvements on a
baseline EESEN toolkit based LSTM-CTC ASR system trained on the Librispeech
100hr (train-clean-100) corpus. Our results show that with effective
combination of data augmentation and regularization, an LSTM-CTC based system
can exceed the performance of a strong Kaldi based baseline trained on the same
data.
| 2018 | Computation and Language |
Mapping the Americanization of English in Space and Time | As global political preeminence gradually shifted from the United Kingdom to
the United States, so did the capacity to culturally influence the rest of the
world. In this work, we analyze how the world-wide varieties of written English
are evolving. We study both the spatial and temporal variations of vocabulary
and spelling of English using a large corpus of geolocated tweets and the
Google Books datasets corresponding to books published in the US and the UK.
The advantage of our approach is that we can address both standard written
language (Google Books) and the more colloquial forms of microblogging messages
(Twitter). We find that American English is the dominant form of English
outside the UK and that its influence is felt even within the UK borders.
Finally, we analyze how this trend has evolved over time and the impact that
some cultural events have had in shaping it.
| 2018 | Computation and Language |
Multilingual Hierarchical Attention Networks for Document Classification | Hierarchical attention networks have recently achieved remarkable performance
for document classification in a given language. However, when multilingual
document collections are considered, training such models separately for each
language entails linear parameter growth and lack of cross-language transfer.
Learning a single multilingual model with fewer parameters is therefore a
challenging but potentially beneficial objective. To this end, we propose
multilingual hierarchical attention networks for learning document structures,
with shared encoders and/or shared attention mechanisms across languages, using
multi-task learning and an aligned semantic space as input. We evaluate the
proposed models on multilingual document classification with disjoint label
sets, on a large dataset which we provide, with 600k news documents in 8
languages, and 5k labels. The multilingual models outperform monolingual ones
in low-resource as well as full-resource settings, and use fewer parameters,
thus confirming their computational efficiency and the utility of
cross-language transfer.
| 2017 | Computation and Language |
An empirical study on the effectiveness of images in Multimodal Neural
Machine Translation | In state-of-the-art Neural Machine Translation (NMT), an attention mechanism
is used during decoding to enhance the translation. At every step, the decoder
uses this mechanism to focus on different parts of the source sentence to
gather the most useful information before outputting its target word. Recently,
the effectiveness of the attention mechanism has also been explored for
multimodal tasks, where it becomes possible to focus both on sentence parts and
image regions that they describe. In this paper, we compare several attention
mechanisms on the multimodal translation task (English, image to German) and
evaluate the ability of the model to make use of images to improve translation.
Although we surpass state-of-the-art scores on the Multi30k data set, we
nevertheless identify and report different misbehaviors of the machine while translating.
| 2017 | Computation and Language |
Visually Grounded Word Embeddings and Richer Visual Features for
Improving Multimodal Neural Machine Translation | In Multimodal Neural Machine Translation (MNMT), a neural model generates a
translated sentence that describes an image, given the image itself and one
source description in English. This is considered the multimodal image
caption translation task. The images are processed with a Convolutional Neural
Network (CNN) to extract visual features exploitable by the translation model.
So far, the CNNs used are pre-trained on object detection and localization
tasks. We hypothesize that richer architectures, such as dense captioning models,
may be more suitable for MNMT and could lead to improved translations. We
extend this intuition to the word-embeddings, where we compute both linguistic
and visual representations for our corpus vocabulary. We combine and compare
different confi
| 2017 | Computation and Language |
Zero-Shot Transfer Learning for Event Extraction | Most previous event extraction studies have relied heavily on features
derived from annotated event mentions, thus cannot be applied to new event
types without annotation effort. In this work, we take a fresh look at event
extraction and model it as a grounding problem. We design a transferable neural
architecture, mapping event mentions and types jointly into a shared semantic
space using structural and compositional neural networks, where the type of
each event mention can be determined by the closest of all candidate types. By
leveraging (1) available manual annotations for a small set of existing event
types and (2) existing event ontologies, our framework applies to new event
types without requiring additional annotation. Experiments on both existing
event types (e.g., ACE, ERE) and new event types (e.g., FrameNet) demonstrate
the effectiveness of our approach. Without any manual annotations for
23 new event types, our zero-shot framework achieved performance comparable to
a state-of-the-art supervised model which is trained from the annotations of
500 event mentions.
| 2017 | Computation and Language |
Improving Slot Filling Performance with Attentive Neural Networks on
Dependency Structures | Slot Filling (SF) aims to extract the values of certain types of attributes
(or slots, such as person:cities_of_residence) for a given entity from a
large collection of source documents. In this paper we propose an effective DNN
architecture for SF with the following new strategies: (1). Take a regularized
dependency graph instead of a raw sentence as input to DNN, to compress the
wide contexts between query and candidate filler; (2). Incorporate two
attention mechanisms: local attention learned from query and candidate filler,
and global attention learned from external knowledge bases, to guide the model
to better select indicative contexts to determine slot type. Experiments show
that this framework outperforms state-of-the-art on both relation extraction
(16% absolute F-score gain) and slot filling validation for each individual
system (up to 8.5% absolute F-score gain).
| 2017 | Computation and Language |
Shakespearizing Modern Language Using Copy-Enriched Sequence-to-Sequence
Models | Variations in writing styles are commonly used to adapt the content to a
specific context, audience, or purpose. However, applying stylistic variations
is still by and large a manual process, and there have been few efforts
toward automating it. In this paper we explore automated methods to transform
text from modern English to Shakespearean English using an end-to-end trainable
neural model with pointers to enable copy action. To tackle the limited amount of
parallel data, we pre-train embeddings of words by leveraging external
dictionaries mapping Shakespearean words to modern English words as well as
additional text. Our methods are able to get a BLEU score of 31+, an
improvement of ~6 points above the strongest baseline. We publicly release our
code to foster further research in this area.
| 2017 | Computation and Language |
CharManteau: Character Embedding Models For Portmanteau Creation | Portmanteaus are a word formation phenomenon where two words are combined to
form a new word. We propose character-level neural sequence-to-sequence (S2S)
methods for the task of portmanteau generation that are end-to-end-trainable,
language independent, and do not explicitly use additional phonetic
information. We propose a noisy-channel-style model, which allows for the
incorporation of unsupervised word lists, improving performance over a standard
source-to-target model. This model is made possible by an exhaustive candidate
generation strategy specifically enabled by the features of the portmanteau
task. Experiments find our approach superior to a state-of-the-art FST-based
baseline with respect to ground truth accuracy and human evaluation.
| 2017 | Computation and Language |
Complexity Metric for Code-Mixed Social Media Text | An evaluation metric is an absolute necessity for measuring the performance
of any system and complexity of any data. In this paper, we have discussed how
to determine the level of complexity of code-mixed social media texts that are
growing rapidly due to multilingual interference. In general, texts written in
multiple languages are often hard to comprehend and analyze. At the same time,
in order to meet the demands of analysis, it is also necessary to determine the
complexity of a particular document or a text segment. Thus, in the present
paper, we have discussed the existing metrics for determining the code-mixing
complexity of a corpus, their advantages and shortcomings, as well as proposed
several improvements on the existing metrics. The new index better reflects the
variety and complexity of a multilingual document. Also, the index can be
applied to a sentence and seamlessly extended to a paragraph or an entire
document. We have employed two existing code-mixed corpora to suit the
requirements of our study.
| 2017 | Computation and Language |
Sentiment Identification in Code-Mixed Social Media Text | Sentiment analysis is the Natural Language Processing (NLP) task dealing with
the detection and classification of sentiments in texts. While some tasks deal
with identifying the presence of sentiment in the text (Subjectivity analysis),
other tasks aim at determining the polarity of the text categorizing them as
positive, negative and neutral. Whenever there is a presence of sentiment in
the text, it has a source (people, group of people or any entity) and the
sentiment is directed towards some entity, object, event or person. Sentiment
analysis tasks aim to determine the subject, the target and the polarity or
valence of the sentiment. In our work, we try to automatically extract
sentiment (positive or negative) from Facebook posts using a machine learning
approach. While some work has been done on code-mixed social media data and on
sentiment analysis separately, our work is the first attempt (as of now) that
aims at performing sentiment analysis of code-mixed social media text. We have
used extensive pre-processing to remove noise from the raw text. A Multilayer
Perceptron model has been used to determine the polarity of the sentiment. We
have also developed the corpus for this task by manually labeling Facebook
posts with their associated sentiments.
| 2017 | Computation and Language |
Multiple Range-Restricted Bidirectional Gated Recurrent Units with
Attention for Relation Classification | Most neural approaches to relation classification have focused on finding
short patterns that represent the semantic relation using Convolutional Neural
Networks (CNNs) and those approaches have generally achieved better
performances than using Recurrent Neural Networks (RNNs). In a similar
intuition to the CNN models, we propose a novel RNN-based model that strongly
focuses on only important parts of a sentence using multiple range-restricted
bidirectional layers and attention for relation classification. Experimental
results on the SemEval-2010 relation classification task show that our model is
comparable to the state-of-the-art CNN-based and RNN-based models that use
additional linguistic information.
| 2017 | Computation and Language |
The Influence of Feature Representation of Text on the Performance of
Document Classification | In this paper we perform a comparative analysis of three models for feature
representation of text documents in the context of document classification. In
particular, we consider the most often used family of models, bag-of-words, the
recently proposed continuous-space models word2vec and doc2vec, and the model
based on the representation of text documents as language networks. While the
bag-of-word models have been extensively used for the document classification
task, the performance of the other two models for the same task has not been
well understood. This is especially true for the network-based model, which has
rarely been considered for representing text documents for classification.
In this study, we measure the performance of the document classifiers trained
using the method of random forests on features generated by the three models and
their variants. The results of the empirical comparison show that the commonly
used bag-of-words model has performance comparable to the one obtained by the
emerging continuous-space model of doc2vec. In particular, the low-dimensional
variants of doc2vec generating up to 75 features are among the top-performing
document representation models. The results finally point out that doc2vec
shows a superior performance in the tasks of classifying large documents.
| 2017 | Computation and Language |
Align and Copy: UZH at SIGMORPHON 2017 Shared Task for Morphological
Reinflection | This paper presents the submissions by the University of Zurich to the
SIGMORPHON 2017 shared task on morphological reinflection. The task is to
predict the inflected form given a lemma and a set of morpho-syntactic
features. We focus on neural network approaches that can tackle the task in a
limited-resource setting. As the transduction of the lemma into the inflected
form is dominated by copying over lemma characters, we propose two recurrent
neural network architectures with hard monotonic attention that are strong at
copying and, yet, substantially different in how they achieve this. The first
approach is an encoder-decoder model with a copy mechanism. The second approach
is a neural state-transition system over a set of explicit edit actions,
including a designated COPY action. We experiment with character alignment and
find that naive, greedy alignment consistently produces strong results for some
languages. Our best system combination is the overall winner of the SIGMORPHON
2017 Shared Task 1 without external resources. At a setting with 100 training
samples, both our approaches, as ensembles of models, outperform the next best
competitor.
| 2017 | Computation and Language |
An Attention Mechanism for Answer Selection Using a Combined Global and
Local View | We propose a new attention mechanism for neural based question answering,
which depends on varying granularities of the input. Previous work focused on
augmenting recurrent neural networks with simple attention mechanisms which are
a function of the similarity between a question embedding and answer
embeddings across time. We extend this by making the attention mechanism
dependent on a global embedding of the answer attained using a separate
network.
We evaluate our system on InsuranceQA, a large question answering dataset.
Our model outperforms current state-of-the-art results on InsuranceQA. Further,
we visualize which sections of text our attention mechanism focuses on, and
explore its performance across different parameter settings.
| 2017 | Computation and Language |
Context Aware Document Embedding | Recently, doc2vec has achieved excellent results in different tasks. In this
paper, we present a context aware variant of doc2vec. We introduce a novel
weight estimating mechanism that generates weights for each word occurrence
according to its contribution in the context, using deep neural networks. Our
context aware model can achieve similar results compared to doc2vec initialized
by Wikipedia-trained vectors, while being much more efficient and free from
a heavy external corpus. Analysis of context aware weights shows they are a kind
of enhanced IDF weights that capture sub-topic level keywords in documents.
They might result from deep neural networks that learn hidden representations
with the least entropy.
| 2017 | Computation and Language |
A Deep Network with Visual Text Composition Behavior | While natural languages are compositional, how state-of-the-art neural models
achieve compositionality is still unclear. We propose a deep network, which not
only achieves competitive accuracy for text classification, but also exhibits
compositional behavior. That is, while creating hierarchical representations of
a piece of text, such as a sentence, the lower layers of the network distribute
their layer-specific attention weights to individual words. In contrast, the
higher layers compose meaningful phrases and clauses, whose lengths increase as
the networks get deeper until fully composing the sentence.
| 2017 | Computation and Language |
Automatic Generation of Natural Language Explanations | An important task for recommender systems is to generate explanations
according to a user's preferences. Most of the current methods for explainable
recommendations use structured sentences to provide descriptions along with the
recommendations they produce. However, those methods have neglected the
review-oriented way of writing a text, even though it is known that these
reviews have a strong influence over users' decisions.
In this paper, we propose a method for the automatic generation of natural
language explanations, for predicting how a user would write about an item,
based on user ratings from different items' features. We design a
character-level recurrent neural network (RNN) model, which generates an item's
review explanations using long short-term memory (LSTM) units. The model generates
text reviews given a combination of the review and ratings score that express
opinions about different factors or aspects of an item. Our network is trained
on a sub-sample from the large real-world dataset BeerAdvocate. Our empirical
evaluation using natural language processing metrics shows the generated text's
quality is close to a real user written review, identifying negation,
misspellings, and domain specific vocabulary.
| 2017 | Computation and Language |
Cross-Lingual Sentiment Analysis Without (Good) Translation | Current approaches to cross-lingual sentiment analysis try to leverage the
wealth of labeled English data using bilingual lexicons, bilingual vector space
embeddings, or machine translation systems. Here we show that it is possible to
use a single linear transformation, with as few as 2000 word pairs, to capture
fine-grained sentiment relationships between words in a cross-lingual setting.
We apply these cross-lingual sentiment models to a diverse set of tasks to
demonstrate their functionality in a non-English context. By effectively
leveraging English sentiment knowledge without the need for accurate
translation, we can analyze and extract features from other languages with
scarce data at a very low cost, thus making sentiment and related analyses for
many languages inexpensive.
| 2017 | Computation and Language |
An Embedded Deep Learning based Word Prediction | Recent developments in deep learning with application to language modeling
have led to success in tasks of text processing, summarizing and machine
translation. However, deploying huge language models on mobile devices such as
on-device keyboards makes computation a bottleneck due to their limited
computation capacities. In this work we propose an embedded deep learning based
word prediction method that optimizes run-time memory and also provides a real
time prediction environment. Our model size is 7.40MB and has average
prediction time of 6.47 ms. We improve over the existing methods for word
prediction in terms of key stroke savings and word prediction rate.
| 2017 | Computation and Language |
Cross-linguistic differences and similarities in image descriptions | Automatic image description systems are commonly trained and evaluated on
large image description datasets. Recently, researchers have started to collect
such datasets for languages other than English. An unexplored question is how
different these datasets are from English and, if there are any differences,
what causes them to differ. This paper provides a cross-linguistic comparison
of Dutch, English, and German image descriptions. We find that these
descriptions are similar in many respects, but the familiarity of crowd workers
with the subjects of the images has a noticeable influence on description
specificity.
| 2017 | Computation and Language |
On the Role of Text Preprocessing in Neural Network Architectures: An
Evaluation Study on Text Categorization and Sentiment Analysis | Text preprocessing is often the first step in the pipeline of a Natural
Language Processing (NLP) system, with potential impact on its final
performance. Despite its importance, text preprocessing has not received much
attention in the deep learning literature. In this paper we investigate the
impact of simple text preprocessing decisions (particularly tokenizing,
lemmatizing, lowercasing and multiword grouping) on the performance of a
standard neural text classifier. We perform an extensive evaluation on standard
benchmarks from text categorization and sentiment analysis. While our
experiments show that a simple tokenization of input text is generally
adequate, they also highlight significant degrees of variability across
preprocessing techniques. This reveals the importance of paying attention to
this usually-overlooked step in the pipeline, particularly when comparing
different models. Finally, our evaluation provides insights into the best
preprocessing practices for training word embeddings.
| 2018 | Computation and Language |
A Simple Approach to Learn Polysemous Word Embeddings | Many NLP applications require disambiguating polysemous words. Existing
methods that learn polysemous word vector representations involve first
detecting various senses and optimizing the sense-specific embeddings
separately, which are invariably more involved than single sense learning
methods such as word2vec. Evaluating these methods is also problematic, as
rigorous quantitative evaluations in this space are limited, especially when
compared with single-sense embeddings. In this paper, we propose a simple
method to learn a word representation, given any context. Our method only
requires learning the usual single sense representation, and coefficients that
can be learnt via a single pass over the data. We propose several new test sets
for evaluating word sense induction, relevance detection, and contextual word
similarity, significantly supplementing the currently available tests. Results
on these and other tests show that while our method is embarrassingly simple,
it achieves excellent results when compared to the state of the art models for
unsupervised polysemous word representation learning.
| 2017 | Computation and Language |
Single-Queue Decoding for Neural Machine Translation | Neural machine translation models rely on the beam search algorithm for
decoding. In practice, we found that the quality of hypotheses in the search
space is negatively affected owing to the fixed beam size. To mitigate this
problem, we store all hypotheses in a single priority queue and use a universal
score function for hypothesis selection. The proposed algorithm is more
flexible as the discarded hypotheses can be revisited in a later step. We
further design a penalty function to punish the hypotheses that tend to produce
a final translation that is much longer or shorter than expected. Despite its
simplicity, we show that the proposed decoding algorithm is able to select
hypotheses with better qualities and improve the translation performance.
| 2017 | Computation and Language |
Higher-order Relation Schema Induction using Tensor Factorization with
Back-off and Aggregation | Relation Schema Induction (RSI) is the problem of identifying type signatures
of arguments of relations from unlabeled text. Most of the previous work in
this area has focused only on binary RSI, i.e., inducing only the subject and
object type signatures per relation. However, in practice, many relations are
high-order, i.e., they have more than two arguments and inducing type
signatures of all arguments is necessary. For example, in the sports domain,
inducing a schema win(WinningPlayer, OpponentPlayer, Tournament, Location) is
more informative than inducing just win(WinningPlayer, OpponentPlayer). We
refer to this problem as Higher-order Relation Schema Induction (HRSI). In this
paper, we propose Tensor Factorization with Back-off and Aggregation (TFBA), a
novel framework for the HRSI problem. To the best of our knowledge, this is the
first attempt at inducing higher-order relation schemata from unlabeled text.
Using the experimental analysis on three real world datasets, we show how TFBA
helps in dealing with sparsity and induces higher-order schemata.
| 2018 | Computation and Language |
Long-Term Memory Networks for Question Answering | Question answering is an important and difficult task in the natural language
processing domain, because many basic natural language processing tasks can be
cast into a question answering task. Several deep neural network architectures
have been developed recently, which employ memory and inference components to
memorize and reason over text information, and generate answers to questions.
However, a major drawback of many such models is that they are capable of only
generating single-word answers. In addition, they require a large amount of
training data to generate accurate answers. In this paper, we introduce the
Long-Term Memory Network (LTMN), which incorporates both an external memory
module and a Long Short-Term Memory (LSTM) module to comprehend the input data
and generate multi-word answers. The LTMN model can be trained end-to-end using
back-propagation and requires minimal supervision. We test our model on two
synthetic data sets (based on Facebook's bAbI data set) and the real-world
Stanford question answering data set, and show that it can achieve
state-of-the-art performance.
| 2017 | Computation and Language |
A Nested Attention Neural Hybrid Model for Grammatical Error Correction | Grammatical error correction (GEC) systems strive to correct both global
errors in word order and usage, and local errors in spelling and inflection.
Further developing upon recent work on neural machine translation, we propose a
new hybrid neural model with nested attention layers for GEC. Experiments show
that the new model can effectively correct errors of both types by
incorporating word- and character-level information, and that the model
significantly outperforms previous neural models for GEC as measured on the
standard CoNLL-14 benchmark dataset. Further analysis also shows that the
superiority of the proposed model can be largely attributed to the use of the
nested attention mechanism, which has proven particularly effective in
correcting local errors that involve small edits in orthography.
| 2017 | Computation and Language |
External Evaluation of Event Extraction Classifiers for Automatic
Pathway Curation: An extended study of the mTOR pathway | This paper evaluates the impact of various event extraction systems on
automatic pathway curation using the popular mTOR pathway. We quantify the
impact of training data sets as well as different machine learning classifiers
and show that some improve the quality of automatically extracted pathways.
| 2017 | Computation and Language |
Computational Models of Tutor Feedback in Language Acquisition | This paper investigates the role of tutor feedback in language learning using
computational models. We compare two dominant paradigms in language learning:
interactive learning and cross-situational learning - which differ primarily in
the role of social feedback such as gaze or pointing. We analyze the
relationship between these two paradigms and propose a new mixed paradigm that
combines the two paradigms and allows testing algorithms in experiments that
combine no feedback and social feedback. To deal with mixed feedback
experiments, we develop new algorithms and show how they perform with respect
to traditional knn and prototype approaches.
| 2018 | Computation and Language |
Text Summarization Techniques: A Brief Survey | In recent years, there has been an explosion in the amount of text data from a
variety of sources. This volume of text is an invaluable source of information
and knowledge which needs to be effectively summarized to be useful. In this
review, the main approaches to automatic text summarization are described. We
review the different processes for summarization and describe the effectiveness
and shortcomings of the different methods.
| 2017 | Computation and Language |
A parallel corpus of Python functions and documentation strings for
automated code documentation and code generation | Automated documentation of programming source code and automated code
generation from natural language are challenging tasks of both practical and
scientific interest. Progress in these areas has been limited by the low
availability of parallel corpora of code and natural language descriptions,
which tend to be small and constrained to specific domains.
In this work we introduce a large and diverse parallel corpus of a hundred
thousand Python functions with their documentation strings ("docstrings")
generated by scraping open source repositories on GitHub. We describe baseline
results for the code documentation and code generation tasks obtained by neural
machine translation. We also experiment with data augmentation techniques to
further increase the amount of training data.
We release our datasets and processing scripts in order to stimulate research
in these areas.
| 2017 | Computation and Language |
Efficient Vector Representation for Documents through Corruption | We present an efficient document representation learning framework, Document
Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a
simple average of word embeddings. It ensures a representation generated as
such captures the semantic meanings of the document during learning. A
corruption model is included, which introduces a data-dependent regularization
that favors informative or rare words while forcing the embeddings of common
and non-discriminative ones to be close to zero. Doc2VecC produces
significantly better word embeddings than Word2Vec. We compare Doc2VecC with
several state-of-the-art document representation learning algorithms. The
simple model architecture introduced by Doc2VecC matches or out-performs the
state-of-the-art in generating high-quality document representations for
sentiment analysis, document classification as well as semantic relatedness
tasks. The simplicity of the model enables training on billions of words per
hour on a single machine. At the same time, the model is very efficient in
generating representations of unseen documents at test time.
| 2017 | Computation and Language |
Improving Multilingual Named Entity Recognition with Wikipedia Entity
Type Mapping | The state-of-the-art named entity recognition (NER) systems are statistical
machine learning models that have strong generalization capability (i.e., can
recognize unseen entities that do not appear in training data) based on lexical
and contextual information. However, such a model could still make mistakes if
its features favor a wrong entity type. In this paper, we utilize Wikipedia as
an open knowledge base to improve multilingual NER systems. Central to our
approach is the construction of high-accuracy, high-coverage multilingual
Wikipedia entity type mappings. These mappings are built from weakly annotated
data and can be extended to new languages with no human annotation or
language-dependent knowledge involved. Based on these mappings, we develop
several approaches to improve an NER system. We evaluate the performance of the
approaches via experiments on NER systems trained for 6 languages. Experimental
results show that the proposed approaches are effective in improving the
accuracy of such systems on unseen entities, especially when a system is
applied to a new domain or it is trained with little training data (up to 18.3
F1 score improvement).
| 2019 | Computation and Language |
Weakly Supervised Cross-Lingual Named Entity Recognition via Effective
Annotation and Representation Projection | The state-of-the-art named entity recognition (NER) systems are supervised
machine learning models that require large amounts of manually annotated data
to achieve high accuracy. However, annotating NER data by humans is expensive
and time-consuming, and can be quite difficult for a new language. In this
paper, we present two weakly supervised approaches for cross-lingual NER with
no human annotation in a target language. The first approach is to create
automatically labeled NER data for a target language via annotation projection
on comparable corpora, where we develop a heuristic scheme that effectively
selects good-quality projection-labeled data from noisy data. The second
approach is to project distributed representations of words (word embeddings)
from a target language to a source language, so that the source-language NER
system can be applied to the target language without re-training. We also
design two co-decoding schemes that effectively combine the outputs of the two
projection-based approaches. We evaluate the performance of the proposed
approaches on both in-house and open NER data for several target languages. The
results show that the combined systems outperform three other weakly supervised
approaches on the CoNLL data.
| 2019 | Computation and Language |
Predicting the Quality of Short Narratives from Social Media | An important and difficult challenge in building computational models for
narratives is the automatic evaluation of narrative quality. Quality evaluation
connects narrative understanding and generation as generation systems need to
evaluate their own products. To circumvent difficulties in acquiring
annotations, we employ upvotes in social media as an approximate measure for
story quality. We collected 54,484 answers from a crowd-powered
question-and-answer website, Quora, and then used active learning to build a
classifier that labeled 28,320 answers as stories. To predict the number of
upvotes without the use of social network features, we create neural networks
that model textual regions and the interdependence among regions, which serve
as strong benchmarks for future research. To our best knowledge, this is the
first large-scale study for automatic evaluation of narrative quality.
| 2017 | Computation and Language |
Neural Machine Translation between Herbal Prescriptions and Diseases | The current study applies deep learning to herbalism. Toward the goal, we
acquired the de-identified health insurance reimbursements that were claimed in
a 10-year period from 2004 to 2013 in the National Health Insurance Database of
Taiwan, with the total number of reimbursement records equaling 340 million. Two
artificial intelligence techniques were applied to the dataset: residual
convolutional neural network multitask classifier and attention-based recurrent
neural network. The former works to translate from herbal prescriptions to
diseases; and the latter from diseases to herbal prescriptions. Analysis of the
classification results indicates that herbal prescriptions are specific to:
anatomy, pathophysiology, sex and age of the patient, and season and year of
the prescription. Further analysis identifies temperature and gross domestic
product as the meteorological and socioeconomic factors that are associated
with herbal prescriptions. Analysis of the neural machine translation result
indicates that the recurrent neural network learnt not only syntax but also
semantics of diseases and herbal prescriptions.
| 2017 | Computation and Language |
Controlling Linguistic Style Aspects in Neural Language Generation | Most work on neural natural language generation (NNLG) focuses on controlling
the content of the generated text. We experiment with controlling several
stylistic aspects of the generated text, in addition to its content. The method
is based on a conditioned RNN language model, where the desired content as well
as the stylistic parameters serve as conditioning contexts. We demonstrate the
approach on the movie reviews domain and show that it is successful in
generating coherent sentences corresponding to the required linguistic style
and content.
| 2017 | Computation and Language |
PELESent: Cross-domain polarity classification using distant supervision | The enormous amount of texts published daily by Internet users has fostered
the development of methods to analyze this content in several natural language
processing areas, such as sentiment analysis. The main goal of this task is to
classify the polarity of a message. Even though many approaches have been
proposed for sentiment analysis, some of the most successful ones rely on the
availability of a large annotated corpus, whose construction is an expensive and
time-consuming process. In recent years, distant supervision has been used to
obtain larger datasets. So, inspired by these techniques, in this paper we
extend such approaches to incorporate popular graphic symbols used in
electronic messages, the emojis, in order to create a large sentiment corpus
for Portuguese. Trained on almost one million tweets, several models were
tested in both same domain and cross-domain corpora. Our methods obtained very
competitive results in five annotated corpora from mixed domains (Twitter and
product reviews), which demonstrates the domain-independent property of such an
approach. In addition, our results suggest that the combination of emoticons
and emojis is able to properly capture the sentiment of a message.
| 2017 | Computation and Language |
Understanding State Preferences With Text As Data: Introducing the UN
General Debate Corpus | Every year at the United Nations, member states deliver statements during the
General Debate discussing major issues in world politics. These speeches
provide invaluable information on governments' perspectives and preferences on
a wide range of issues, but have largely been overlooked in the study of
international politics. This paper introduces a new dataset consisting of over
7,701 English-language country statements from 1970-2016. We demonstrate how
the UN General Debate Corpus (UNGDC) can be used to derive country positions on
different policy dimensions using text analytic methods. The paper provides
applications of these estimates, demonstrating the contribution the UNGDC can
make to the study of international politics.
| 2,017 | Computation and Language |
Learning to Compose Task-Specific Tree Structures | For years, recursive neural networks (RvNNs) have been shown to be suitable
for representing text as fixed-length vectors and have achieved good performance
on several natural language processing tasks. However, the main drawback of
RvNNs is that they require structured input, which makes data preparation and
model implementation hard. In this paper, we propose Gumbel Tree-LSTM, a novel
tree-structured long short-term memory architecture that learns how to compose
task-specific tree structures only from plain text data efficiently. Our model
uses the Straight-Through Gumbel-Softmax estimator to decide the parent node among
candidates dynamically and to calculate gradients of the discrete decision. We
evaluate the proposed model on natural language inference and sentiment
analysis, and show that our model outperforms or is at least comparable to
previous models. We also find that our model converges significantly faster
than other models.
| 2,017 | Computation and Language |
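The Gumbel Tree-LSTM abstract above hinges on the Straight-Through Gumbel-Softmax estimator for choosing which adjacent pair of nodes to merge next. The following is a minimal, illustrative sketch using PyTorch's built-in `gumbel_softmax`; the candidate scoring and toy dimensions are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def select_parent(candidate_scores, tau=1.0):
    """Hard (one-hot) sample over merge candidates in the forward pass,
    with gradients flowing through the soft Gumbel-Softmax weights."""
    return F.gumbel_softmax(candidate_scores, tau=tau, hard=True)

# Toy usage: three candidate adjacent pairs compete to become the next parent node
scores = torch.randn(3, requires_grad=True)   # unnormalized merge scores (assumed)
choice = select_parent(scores)                # e.g. tensor([0., 1., 0.], grad_fn=...)
print(choice)
```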
A Generalized Recurrent Neural Architecture for Text Classification with
Multi-Task Learning | Multi-task learning leverages potential correlations among related tasks to
extract common features and yield performance gains. However, most previous
works only consider simple or weak interactions, thereby failing to model
complex correlations among three or more tasks. In this paper, we propose a
multi-task learning architecture with four types of recurrent neural layers to
fuse information across multiple related tasks. The architecture is
structurally flexible and considers various interactions among tasks, which can
be regarded as a generalized case of many previous works. Extensive experiments
on five benchmark datasets for text classification show that our model can
significantly improve the performance of related tasks with additional information
from others.
| 2,017 | Computation and Language |
A Brief Survey of Text Mining: Classification, Clustering and Extraction
Techniques | The amount of text that is generated every day is increasing dramatically.
This tremendous volume of mostly unstructured text cannot be simply processed
and perceived by computers. Therefore, efficient and effective techniques and
algorithms are required to discover useful patterns. Text mining is the task of
extracting meaningful information from text, which has gained significant
attention in recent years. In this paper, we describe several of the most
fundamental text mining tasks and techniques including text pre-processing,
classification and clustering. Additionally, we briefly explain text mining in
biomedical and health care domains.
| 2,017 | Computation and Language |
Improving Neural Parsing by Disentangling Model Combination and
Reranking Effects | Recent work has proposed several generative neural models for constituency
parsing that achieve state-of-the-art results. Since direct search in these
generative models is difficult, they have primarily been used to rescore
candidate outputs from base parsers in which decoding is more straightforward.
We first present an algorithm for direct search in these generative models. We
then demonstrate that the rescoring results are at least partly due to implicit
model combination rather than reranking effects. Finally, we show that explicit
model combination can improve performance even further, resulting in new
state-of-the-art numbers on the PTB of 94.25 F1 when training only on gold data
and 94.66 F1 when using external data.
| 2,017 | Computation and Language |
Look Who's Talking: Bipartite Networks as Representations of a Topic
Model of New Zealand Parliamentary Speeches | Quantitative methods to measure the participation to parliamentary debate and
discourse of elected Members of Parliament (MPs) and the parties they belong to
are lacking. This is an exploratory study in which we propose the development
of a new approach for a quantitative analysis of such participation. We utilize
the New Zealand government's digital Hansard database to construct a topic
model of parliamentary speeches consisting of nearly 40 million words in the
period 2003-2016. A Latent Dirichlet Allocation topic model is implemented in
order to reveal the thematic structure of our set of documents. This generative
statistical model enables the detection of major themes or topics that are
publicly discussed in the New Zealand parliament, as well as permitting their
classification by MP. Information on topic proportions is subsequently analyzed
using a combination of statistical methods. We observe patterns arising from
time-series analysis of topic frequencies which can be related to specific
social, economic and legislative events. We then construct a bipartite network
representation, linking MPs to topics, for each of four parliamentary terms in
this time frame. We build projected networks (onto the set of nodes represented
by MPs) and proceed to the study of the dynamical changes of their topology,
including community structure. By performing this longitudinal network
analysis, we can observe the evolution of the New Zealand parliamentary topic
network and its main parties in the period studied.
| 2,018 | Computation and Language |
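As a rough illustration of the topic-modelling step described in the abstract above, the sketch below fits an LDA model with gensim; the toy speeches and the number of topics are placeholders, not the study's actual Hansard data or settings.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Hypothetical stand-ins for tokenized parliamentary speeches
speeches = [["budget", "tax", "spending", "deficit"],
            ["health", "hospital", "nurses", "funding"],
            ["climate", "emissions", "energy", "policy"]]

dictionary = Dictionary(speeches)
corpus = [dictionary.doc2bow(doc) for doc in speeches]

# Fit LDA; per-document topic proportions could then be aggregated per MP and per term
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3, passes=10, random_state=0)
for bow in corpus:
    print(lda.get_document_topics(bow))
```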
Refining Raw Sentence Representations for Textual Entailment Recognition
via Attention | In this paper we present the model used by the team Rivercorners for the 2017
RepEval shared task. First, our model separately encodes a pair of sentences
into variable-length representations by using a bidirectional LSTM. Later, it
creates fixed-length raw representations by means of simple aggregation
functions, which are then refined using an attention mechanism. Finally it
combines the refined representations of both sentences into a single vector to
be used for classification. With this model we obtained test accuracies of
72.057% and 72.055% in the matched and mismatched evaluation tracks
respectively, outperforming the LSTM baseline, and obtaining performances
similar to a model that relies on shared information between sentences (ESIM).
When using an ensemble both accuracies increased to 72.247% and 72.827%
respectively.
| 2,017 | Computation and Language |
Dataset for a Neural Natural Language Interface for Databases (NNLIDB) | Progress in natural language interfaces to databases (NLIDB) has been slow
mainly due to linguistic issues (such as language ambiguity) and domain
portability. Moreover, the lack of a large corpus to be used as a standard
benchmark has made data-driven approaches difficult to develop and compare. In
this paper, we revisit the problem of NLIDBs and recast it as a sequence
translation problem. To this end, we introduce a large dataset extracted from
the Stack Exchange Data Explorer website, which can be used for training neural
natural language interfaces for databases. We also report encouraging baseline
results on a smaller manually annotated test corpus, obtained using an
attention-based sequence-to-sequence neural network.
| 2,017 | Computation and Language |
A non-projective greedy dependency parser with bidirectional LSTMs | The LyS-FASTPARSE team presents BIST-COVINGTON, a neural implementation of
the Covington (2001) algorithm for non-projective dependency parsing. The
bidirectional LSTM approach by Kiperwasser and Goldberg (2016) is used to
train a greedy parser with a dynamic oracle to mitigate error propagation. The
model participated in the CoNLL 2017 UD Shared Task. In spite of not using any
ensemble methods and using the baseline segmentation and PoS tagging, the
parser obtained good results on both macro-average LAS and UAS in the big
treebanks category (55 languages), ranking 7th out of 33 teams. In the all
treebanks category (LAS and UAS) we ranked 16th and 12th. The gap between the
all and big categories is mainly due to the poor performance on four parallel
PUD treebanks, suggesting that some `suffixed' treebanks (e.g. Spanish-AnCora)
perform poorly on cross-treebank settings, which does not occur with the
corresponding `unsuffixed' treebank (e.g. Spanish). By changing that, we obtain
the 11th best LAS among all runs (official and unofficial). The code is made
available at https://github.com/CoNLL-UD-2017/LyS-FASTPARSE
| 2,017 | Computation and Language |
Leipzig Corpus Miner - A Text Mining Infrastructure for Qualitative Data
Analysis | This paper presents the "Leipzig Corpus Miner", a technical infrastructure
for supporting qualitative and quantitative content analysis. The
infrastructure aims at the integration of 'close reading' procedures on
individual documents with procedures of 'distant reading', e.g. lexical
characteristics of large document collections. Therefore information retrieval
systems, lexicometric statistics and machine learning procedures are combined
in a coherent framework which enables qualitative data analysts to make use of
state-of-the-art Natural Language Processing techniques on very large document
collections. Applicability of the framework ranges from social sciences to
media studies and market research. As an example we introduce the usage of the
framework in a political science study on post-democracy and neoliberalism.
| 2,017 | Computation and Language |
Modeling the dynamics of domain specific terminology in diachronic
corpora | In terminology work, natural language processing, and digital humanities,
several studies address the analysis of variations in context and meaning of
terms in order to detect semantic change and the evolution of terms. We
distinguish three different approaches to describe contextual variations:
methods based on the analysis of patterns and linguistic clues, methods
exploring the latent semantic space of single words, and methods for the
analysis of topic membership. The paper presents the notion of context
volatility as a new measure for detecting semantic change and applies it to key
term extraction in a political science case study. The measure quantifies the
dynamics of a term's contextual variation within a diachronic corpus to
identify periods of time that are characterised by intense controversial
debates or substantial semantic transformations.
| 2,017 | Computation and Language |
A simple but tough-to-beat baseline for the Fake News Challenge stance
detection task | Identifying public misinformation is a complicated and challenging task. An
important part of checking the veracity of a specific claim is to evaluate the
stance different news sources take towards the assertion. Automatic stance
evaluation, i.e. stance detection, would arguably facilitate the process of
fact checking. In this paper, we present our stance detection system which
claimed third place in Stage 1 of the Fake News Challenge. Despite our
straightforward approach, our system performs at a competitive level with the
complex ensembles of the top two winning teams. We therefore propose our system
as the 'simple but tough-to-beat baseline' for the Fake News Challenge stance
detection task.
| 2,018 | Computation and Language |
Detecting Policy Preferences and Dynamics in the UN General Debate with
Neural Word Embeddings | Foreign policy analysis has been struggling to find ways to measure policy
preferences and paradigm shifts in international political systems. This paper
presents a novel, potential solution to this challenge, through the application
of a neural word embedding (Word2vec) model on a dataset featuring speeches by
heads of state or government in the United Nations General Debate. The paper
provides three key contributions based on the output of the Word2vec model.
First, it presents a set of policy attention indices, synthesizing the semantic
proximity of political speeches to specific policy themes. Second, it
introduces country-specific semantic centrality indices, based on topological
analyses of countries' semantic positions with respect to each other. Third, it
tests the hypothesis that there exists a statistical relation between the
semantic content of political speeches and UN voting behavior, falsifying it
and suggesting that political speeches contain information of a different nature
than that behind voting outcomes. The paper concludes with a discussion of
the practical use of its results and consequences for foreign policy analysis,
public accountability, and transparency.
| 2,017 | Computation and Language |
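One way to read the "policy attention indices" described above is as cosine proximity between a speech and a set of theme words in Word2vec space. The sketch below, with made-up speeches and seed words, is an assumed reconstruction of that idea rather than the paper's exact index.

```python
import numpy as np
from gensim.models import Word2Vec

# Hypothetical stand-ins for tokenized General Debate statements
speeches = [["climate", "emissions", "sustainable", "development"],
            ["security", "terrorism", "peacekeeping", "council"]]

model = Word2Vec(sentences=speeches, vector_size=50, min_count=1, epochs=20, seed=0)

def attention_index(tokens, theme_words, wv):
    """Cosine proximity of a speech (mean vector) to a policy theme (mean of seed words)."""
    doc = np.mean([wv[t] for t in tokens if t in wv], axis=0)
    theme = np.mean([wv[w] for w in theme_words if w in wv], axis=0)
    return float(np.dot(doc, theme) / (np.linalg.norm(doc) * np.linalg.norm(theme)))

print(attention_index(speeches[0], ["climate", "emissions"], model.wv))
```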
Geospatial Semantics | Geospatial semantics is a broad field that involves a variety of research
areas. The term semantics refers to the meaning of things, and is in contrast
with the term syntactics. Accordingly, studies on geospatial semantics usually
focus on understanding the meaning of geographic entities as well as their
counterparts in the cognitive and digital world, such as cognitive geographic
concepts and digital gazetteers. Geospatial semantics can also facilitate the
design of geographic information systems (GIS) by enhancing the
interoperability of distributed systems and developing more intelligent
interfaces for user interactions. In recent years, a large body of research has
been conducted, approaching geospatial semantics from different perspectives,
using a variety of methods, and targeting different problems. Meanwhile, the
arrival of big geo data, especially the large amount of unstructured text data
on the Web, and the fast development of natural language processing methods
enable new research directions in geospatial semantics. This chapter,
therefore, provides a systematic review on the existing geospatial semantic
research. Six major research areas are identified and discussed, including
semantic interoperability, digital gazetteers, geographic information
retrieval, geospatial Semantic Web, place semantics, and cognitive geographic
concepts.
| 2,017 | Computation and Language |
The Case for Being Average: A Mediocrity Approach to Style Masking and
Author Obfuscation | Users posting online expect to remain anonymous unless they have logged in,
which is often needed for them to be able to discuss freely on various topics.
Preserving the anonymity of a text's writer can be also important in some other
contexts, e.g., in the case of witness protection or anonymity programs.
However, each person has his/her own style of writing, which can be analyzed
using stylometry, and as a result, the true identity of the author of a piece
of text can be revealed even if s/he has tried to hide it. Thus, it could be
helpful to design automatic tools that can help a person obfuscate his/her
identity when writing text. In particular, here we propose an approach that
changes the text, so that it is pushed towards average values for some general
stylometric characteristics, thus making the use of these characteristics less
discriminative. The approach consists of three main steps: first, we calculate
the values for some popular stylometric metrics that can indicate authorship;
then we apply various transformations to the text, so that these metrics are
adjusted towards the average level, while preserving the semantics and the
soundness of the text; and finally, we add random noise. This approach turned
out to be very effective, and yielded the best performance on the Author
Obfuscation task at the PAN-2016 competition.
| 2,017 | Computation and Language |
N-GrAM: New Groningen Author-profiling Model | We describe our participation in the PAN 2017 shared task on Author
Profiling, identifying authors' gender and language variety for English,
Spanish, Arabic and Portuguese. We describe both the final, submitted system,
and a series of negative results. Our aim was to create a single model for both
gender and language, and for all language varieties. Our best-performing system
(on cross-validated results) is a linear support vector machine (SVM) with word
unigrams and character 3- to 5-grams as features. A set of additional features,
including POS tags, additional datasets, geographic entities, and Twitter
handles, hurt, rather than improve, performance. Results from cross-validation
indicated high performance overall and results on the test set confirmed them,
at 0.86 averaged accuracy, with performance on sub-tasks ranging from 0.68 to
0.98.
| 2,017 | Computation and Language |
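The feature combination named in the abstract above (word unigrams plus character 3- to 5-grams feeding a linear SVM) maps naturally onto a scikit-learn pipeline. The sketch below uses toy texts and default vectorizer settings, so it is an approximation of the described system rather than the submitted one.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.svm import LinearSVC

# Hypothetical toy data: tweets labeled with author gender
texts = ["loving this new yarn haul", "match day, come on you reds"]
labels = ["female", "male"]

features = FeatureUnion([
    ("word_uni", TfidfVectorizer(analyzer="word", ngram_range=(1, 1))),
    ("char_345", TfidfVectorizer(analyzer="char", ngram_range=(3, 5))),
])

clf = Pipeline([("features", features), ("svm", LinearSVC())])
clf.fit(texts, labels)
print(clf.predict(["another lovely knitting afternoon"]))
```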
Source-Target Inference Models for Spatial Instruction Understanding | Models that can execute natural language instructions for situated robotic
tasks such as assembly and navigation have several useful applications in
homes, offices, and remote scenarios. We study the semantics of
spatially-referred configuration and arrangement instructions, based on the
challenging Bisk-2016 blank-labeled block dataset. This task involves finding a
source block and moving it to the target position (mentioned via a reference
block and offset), where the blocks have no names or colors and are just
referred to via spatial location features. We present novel models for the
subtasks of source block classification and target position regression, based
on joint-loss language and spatial-world representation learning, as well as
CNN-based and dual attention models to compute the alignment between the world
blocks and the instruction phrases. For target position prediction, we compare
two inference approaches: annealed sampling via policy gradient versus
expectation inference via supervised regression. Our models achieve the new
state-of-the-art on this task, with an improvement of 47% on source block
accuracy and 22% on target position distance.
| 2,017 | Computation and Language |
A Critique of a Critique of Word Similarity Datasets: Sanity Check or
Unnecessary Confusion? | Critical evaluation of word similarity datasets is very important for
computational lexical semantics. This short report concerns the sanity check
proposed in Batchkarov et al. (2016) to evaluate several popular datasets such
as MC, RG and MEN -- the first two reportedly failed. I argue that this test is
unstable, offers no added insight, and needs major revision in order to fulfill
its purported goal.
| 2,017 | Computation and Language |
Negative Sampling Improves Hypernymy Extraction Based on Projection
Learning | We present a new approach to extraction of hypernyms based on projection
learning and word embeddings. In contrast to classification-based approaches,
projection-based methods require no candidate hyponym-hypernym pairs. While it
is natural to use both positive and negative training examples in supervised
relation extraction, the impact of negative examples on hypernym prediction has
not been studied so far. In this paper, we show that explicit negative examples used
for regularization of the model significantly improve performance compared to
the state-of-the-art approach of Fu et al. (2014) on three datasets from
different languages.
| 2,018 | Computation and Language |
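Projection learning, as referenced above, fits a matrix that maps a hyponym embedding close to its hypernym embedding; adding negative pairs regularizes the fit. The sketch below uses random vectors and a deliberately simplified penalty for negatives, so it should be read as a conceptual illustration, not the formulation of Fu et al. (2014) or of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
# Hypothetical embeddings for positive (hyponym, hypernym) and negative pairs
X_pos, Y_pos = rng.normal(size=(200, d)), rng.normal(size=(200, d))
X_neg, Y_neg = rng.normal(size=(200, d)), rng.normal(size=(200, d))

Phi = rng.normal(scale=0.01, size=(d, d))
lr, lam = 0.01, 0.1  # learning rate and weight of the negative-example term

for _ in range(100):
    # Pull projected hyponyms towards their hypernyms ...
    grad = (X_pos @ Phi.T - Y_pos).T @ X_pos / len(X_pos)
    # ... and push projections of negative pairs apart (a crude, unbounded simplification)
    grad -= lam * (X_neg @ Phi.T - Y_neg).T @ X_neg / len(X_neg)
    Phi -= lr * grad

# Positive-pair residual should shrink relative to the negative-pair residual
print(np.linalg.norm(X_pos @ Phi.T - Y_pos) / np.linalg.norm(X_neg @ Phi.T - Y_neg))
```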
Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar .
| 2,017 | Computation and Language |
Representation Learning for Grounded Spatial Reasoning | The interpretation of spatial references is highly contextual, requiring
joint inference over both language and the environment. We consider the task of
spatial reasoning in a simulated environment, where an agent can act and
receive rewards. The proposed model learns a representation of the world
steered by instruction text. This design allows for precise alignment of local
neighborhoods with corresponding verbalizations, while also handling global
references in the instructions. We train our model with reinforcement learning
using a variant of generalized value iteration. The model outperforms
state-of-the-art approaches on several metrics, yielding a 45% reduction in
goal localization error.
| 2,017 | Computation and Language |
Predicting Causes of Reformulation in Intelligent Assistants | Intelligent assistants (IAs) such as Siri and Cortana conversationally
interact with users and execute a wide range of actions (e.g., searching the
Web, setting alarms, and chatting). IAs can support these actions through the
combination of various components such as automatic speech recognition, natural
language understanding, and language generation. However, the complexity of
these components hinders developers from determining which component causes an
error. To remove this hindrance, we focus on reformulation, which is a useful
signal of user dissatisfaction, and propose a method to predict the
reformulation causes. We evaluate the method using the user logs of a
commercial IA. The experimental results have demonstrated that features
designed to detect the error of a specific component improve the performance of
reformulation cause detection.
| 2,017 | Computation and Language |
A Web-Based Tool for Analysing Normative Documents in English | Our goal is to use formal methods to analyse normative documents written in
English, such as privacy policies and service-level agreements. This requires
the combination of a number of different elements, including information
extraction from natural language, formal languages for model representation,
and an interface for property specification and verification. We have worked on
a collection of components for this task: a natural language extraction tool, a
suitable formalism for representing such documents, an interface for building
models in this formalism, and methods for answering queries asked of a given
model. In this work, each of these concerns is brought together in a web-based
tool, providing a single interface for analysing normative texts in English.
Through the use of a running example, we describe each component and
demonstrate the workflow established by our tool.
| 2,017 | Computation and Language |
Is writing style predictive of scientific fraud? | The problem of detecting scientific fraud using machine learning was recently
introduced, with initial, positive results from a model taking into account
various general indicators. The results seem to suggest that writing style is
predictive of scientific fraud. We revisit these initial experiments, and show
that the leave-one-out testing procedure they used likely leads to a slight
over-estimate of the predictability, but also that simple models can outperform
their proposed model by some margin. We go on to explore more abstract
linguistic features, such as linguistic complexity and discourse structure,
only to obtain negative results. Upon analyzing our models, we do see some
interesting patterns, though: scientific fraud, for example, contains less
comparison, as well as different types of hedging and ways of presenting
logical reasoning.
| 2,017 | Computation and Language |
Do Convolutional Networks need to be Deep for Text Classification? | We study in this work the importance of depth in convolutional models for
text classification, either when character or word inputs are considered. We
show on 5 standard text classification and sentiment analysis tasks that deep
models indeed give better performance than shallow networks when the text
input is represented as a sequence of characters. However, a simple
shallow-and-wide network outperforms deep models such as DenseNet with word
inputs. Our shallow word model further establishes new state-of-the-art
performance on two datasets: Yelp Binary (95.9%) and Yelp Full (64.9%).
| 2,017 | Computation and Language |
Learning Features from Co-occurrences: A Theoretical Analysis | Representing a word by its co-occurrences with other words in context is an
effective way to capture the meaning of the word. However, the theory behind it
remains a challenge. In this work, taking the example of a word classification
task, we give a theoretical analysis of the approaches that represent a word X
by a function f(P(C|X)), where C is a context feature, P(C|X) is the
conditional probability estimated from a text corpus, and the function f maps
the co-occurrence measure to a prediction score. We investigate the impact of
context feature C and the function f. We also explain the reasons why using the
co-occurrences with multiple context features may be better than just using a
single one. In addition, some of the results shed light on the theory of
feature learning and machine learning in general.
| 2,017 | Computation and Language |
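To make the representation f(P(C|X)) discussed above concrete, the sketch below estimates conditional context probabilities from a toy count matrix and applies a smoothed log as one possible choice of f; the words, contexts, and counts are invented for illustration.

```python
import numpy as np

# Toy co-occurrence counts: rows = words X, columns = context features C
words = ["cat", "dog", "car"]
contexts = ["pet", "drive", "fur"]
counts = np.array([[8, 0, 5],
                   [7, 1, 4],
                   [0, 9, 0]], dtype=float)

# Conditional probabilities P(C | X), one row per word
p_c_given_x = counts / counts.sum(axis=1, keepdims=True)

# One choice of f: a smoothed log, mapping co-occurrence probabilities to features
f = lambda p: np.log(p + 1e-6)
representations = f(p_c_given_x)

for w, vec in zip(words, representations):
    print(w, np.round(vec, 2))
```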
Parsing with Traces: An $O(n^4)$ Algorithm and a Structural
Representation | General treebank analyses are graph structured, but parsers are typically
restricted to tree structures for efficiency and modeling reasons. We propose a
new representation and algorithm for a class of graph structures that is
flexible enough to cover almost all treebank structures, while still admitting
efficient learning and inference. In particular, we consider directed, acyclic,
one-endpoint-crossing graph structures, which cover most long-distance
dislocation, shared argumentation, and similar tree-violating linguistic
phenomena. We describe how to convert phrase structure parses, including
traces, to our new representation in a reversible manner. Our dynamic program
uniquely decomposes structures, is sound and complete, and covers 97.3% of the
Penn English Treebank. We also implement a proof-of-concept parser that
recovers a range of null elements and trace types.
| 2,017 | Computation and Language |
Automatic Speech Recognition with Very Large Conversational Finnish and
Estonian Vocabularies | Today, the vocabulary size for language models in large vocabulary speech
recognition is typically several hundreds of thousands of words. While this is
already sufficient in some applications, the out-of-vocabulary words are still
limiting the usability in others. In agglutinative languages the vocabulary for
conversational speech should include millions of word forms to cover the
spelling variations due to colloquial pronunciations, in addition to the word
compounding and inflections. Very large vocabularies are also needed, for
example, when the recognition of rare proper names is important.
| 2,017 | Computation and Language |
Developing a concept-level knowledge base for sentiment analysis in
Singlish | In this paper, we present the Singlish sentiment lexicon, a concept-level
knowledge base for sentiment analysis that associates multiword expressions to
a set of emotion labels and a polarity value. Unlike many other sentiment
analysis resources, this lexicon is not built by manually labeling pieces of
knowledge coming from general NLP resources such as WordNet or DBPedia.
Instead, it is automatically constructed by applying graph-mining and
multi-dimensional scaling techniques on the affective common-sense knowledge
collected from three different sources. This knowledge is represented
redundantly at three levels: semantic network, matrix, and vector space.
Subsequently, the concepts are labeled by emotions and polarity through the
ensemble application of spreading activation, neural networks and an emotion
categorization model.
| 2,017 | Computation and Language |
Evaluating Semantic Parsing against a Simple Web-based Question
Answering Model | Semantic parsing shines at analyzing complex natural language that involves
composition and computation over multiple pieces of evidence. However, datasets
for semantic parsing contain many factoid questions that can be answered from a
single web document. In this paper, we propose to evaluate semantic
parsing-based question answering models by comparing them to a question
answering baseline that queries the web and extracts the answer only from web
snippets, without access to the target knowledge-base. We investigate this
approach on COMPLEXQUESTIONS, a dataset designed to focus on compositional
language, and find that our model obtains reasonable performance (35 F1
compared to 41 F1 of state-of-the-art). We find in our analysis that our model
performs well on complex questions involving conjunctions, but struggles on
questions that involve relation composition and superlatives.
| 2,017 | Computation and Language |
LIUM-CVC Submissions for WMT17 Multimodal Translation Task | This paper describes the monomodal and multimodal Neural Machine Translation
systems developed by LIUM and CVC for WMT17 Shared Task on Multimodal
Translation. We mainly explored two multimodal architectures where either
global visual features or convolutional feature maps are integrated in order to
benefit from visual context. Our final systems ranked first for both En-De and
En-Fr language pairs according to the automatic evaluation metrics METEOR and
BLEU.
| 2,017 | Computation and Language |
LIUM Machine Translation Systems for WMT17 News Translation Task | This paper describes LIUM submissions to WMT17 News Translation Task for
English-German, English-Turkish, English-Czech and English-Latvian language
pairs. We train BPE-based attentive Neural Machine Translation systems with and
without factored outputs using the open source nmtpy framework. Competitive
scores were obtained by ensembling various systems and exploiting the
availability of target monolingual corpora for back-translation. The impact of
back-translation quantity and quality is also analyzed for English-Turkish
where our post-deadline submission surpassed the best entry by +1.6 BLEU.
| 2,017 | Computation and Language |
Cross-genre Document Retrieval: Matching between Conversational and
Formal Writings | This paper challenges a cross-genre document retrieval task, where the
queries are in formal writing and the target documents are in conversational
writing. In this task, a query is a sentence extracted from either a summary
or a plot of an episode in a TV show, and the target document consists of
transcripts from the corresponding episode. To establish a strong baseline, we
employ the current state-of-the-art search engine to perform document retrieval
on the dataset collected for this work. We then introduce a structure reranking
approach to improve the initial ranking by utilizing syntactic and semantic
structures generated by NLP tools. Our evaluation shows an improvement of more
than 4% when the structure reranking is applied, which is very promising.
| 2,017 | Computation and Language |
Linguistic Markers of Influence in Informal Interactions | There has been a long-standing interest in understanding `Social Influence'
both in Social Sciences and in Computational Linguistics. In this paper, we
present a novel approach to study and measure interpersonal influence in daily
interactions. Motivated by the basic principles of influence, we attempt to
identify indicative linguistic features of the posts in an online knitting
community. We present the scheme used to operationalize and label the posts
with indicator features. Experiments with the identified features show an
improvement in the classification accuracy of influence by 3.15%. Our results
illustrate the important correlation between the characteristics of the
language and its potential to influence others.
| 2,017 | Computation and Language |
CUNI System for the WMT17 Multimodal Translation Task | In this paper, we describe our submissions to the WMT17 Multimodal
Translation Task. For Task 1 (multimodal translation), our best scoring system
is a purely textual neural translation of the source image caption to the
target language. The main feature of the system is the use of additional data
that was acquired by selecting similar sentences from parallel corpora and by
data synthesis with back-translation. For Task 2 (cross-lingual image
captioning), our best submitted system generates an English caption which is
then translated by the best system used in Task 1. We also present negative
results, which are based on ideas that we believe have the potential to make
improvements, but did not prove to be useful in our particular setup.
| 2,017 | Computation and Language |
DocTag2Vec: An Embedding Based Multi-label Learning Approach for
Document Tagging | Tagging news articles or blog posts with relevant tags from a collection of
predefined ones is coined as document tagging in this work. Accurate tagging of
articles can benefit several downstream applications such as recommendation and
search. In this work, we propose a novel yet simple approach called DocTag2Vec
to accomplish this task. We substantially extend Word2Vec and Doc2Vec---two
popular models for learning distributed representation of words and documents.
In DocTag2Vec, we simultaneously learn the representation of words, documents,
and tags in a joint vector space during training, and employ the simple
$k$-nearest neighbor search to predict tags for unseen documents. In contrast
to previous multi-label learning methods, DocTag2Vec directly deals with raw
text instead of provided feature vectors, and in addition, enjoys advantages
like the learning of tag representation, and the ability of handling newly
created tags. To demonstrate the effectiveness of our approach, we conduct
experiments on several datasets and show promising results against
state-of-the-art methods.
| 2,017 | Computation and Language |
EmojiNet: An Open Service and API for Emoji Sense Discovery | This paper presents the release of EmojiNet, the largest machine-readable
emoji sense inventory that links Unicode emoji representations to their English
meanings extracted from the Web. EmojiNet is a dataset consisting of: (i)
12,904 sense labels over 2,389 emoji, which were extracted from the web and
linked to machine-readable sense definitions seen in BabelNet, (ii) context
words associated with each emoji sense, which are inferred through word
embedding models trained over Google News corpus and a Twitter message corpus
for each emoji sense definition, and (iii) recognizing discrepancies in the
presentation of emoji on different platforms, specification of the most likely
platform-based emoji sense for a selected set of emoji. The dataset is hosted
as an open service with a REST API and is available at
http://emojinet.knoesis.org/. The development of this dataset, evaluation of
its quality, and its applications including emoji sense disambiguation and
emoji sense similarity are discussed.
| 2,017 | Computation and Language |
A Semantics-Based Measure of Emoji Similarity | Emoji have grown to become one of the most important forms of communication
on the web. With their widespread use, measuring the similarity of emoji has
become an important problem for contemporary text processing since it lies at
the heart of sentiment analysis, search, and interface design tasks. This paper
presents a comprehensive analysis of the semantic similarity of emoji through
embedding models that are learned over machine-readable emoji meanings in the
EmojiNet knowledge base. Using emoji descriptions, emoji sense labels and emoji
sense definitions, and with different training corpora obtained from Twitter
and Google News, we develop and test multiple embedding models to measure emoji
similarity. To evaluate our work, we create a new dataset called EmoSim508,
which assigns human-annotated semantic similarity scores to a set of 508
carefully selected emoji pairs. After validation with EmoSim508, we present a
real-world use-case of our emoji embedding models using a sentiment analysis
task and show that our models outperform the previous best-performing emoji
embedding model on this task. The EmoSim508 dataset and our emoji embedding
models are publicly released with this paper and can be downloaded from
http://emojinet.knoesis.org/.
| 2,017 | Computation and Language |
Rotations and Interpretability of Word Embeddings: the Case of the
Russian Language | Consider a continuous word embedding model. Usually, the cosines between word
vectors are used as a measure of similarity of words. These cosines do not
change under orthogonal transformations of the embedding space. We demonstrate
that, using some canonical orthogonal transformations from SVD, it is possible
both to increase the interpretability of some components and to make them more
stable under re-learning. We study the interpretability of components for
publicly available models for the Russian language (RusVectores, fastText,
RDT).
| 2,019 | Computation and Language |
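The abstract above relies on the fact that cosines between word vectors are invariant under orthogonal transformations, and that the SVD supplies a canonical rotation. The sketch below demonstrates both points on a random matrix standing in for a real embedding table.

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 100))  # hypothetical word embedding matrix (words x dims)

# SVD gives an orthogonal basis V over the embedding dimensions
U, S, Vt = np.linalg.svd(E, full_matrices=False)
E_rot = E @ Vt.T  # rotate embeddings onto the right singular vectors

# Pairwise cosines are unchanged by the orthogonal rotation
def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(np.isclose(cos(E[0], E[1]), cos(E_rot[0], E_rot[1])))  # True
```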
Open-Set Language Identification | We present the first open-set language identification experiments using
one-class classification. We first highlight the shortcomings of traditional
feature extraction methods and propose a hashing-based feature vectorization
approach as a solution. Using a dataset of 10 languages from different writing
systems, we train a One-Class Support Vector Machine using only a monolingual
corpus for each language. Each model is evaluated against a test set of data
from all 10 languages and we achieve an average F-score of 0.99, highlighting
the effectiveness of this approach for open-set language identification.
| 2,017 | Computation and Language |
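A minimal version of the setup described above, one hashing-based character n-gram vectorizer shared across languages and one One-Class SVM per language, can be put together with scikit-learn as below; the training sentences, n-gram range, and nu value are illustrative assumptions.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.svm import OneClassSVM

# Hashing avoids building a language-specific vocabulary up front
vectorizer = HashingVectorizer(analyzer="char", ngram_range=(1, 3), n_features=2**16)

# One model per language, each trained only on monolingual text (toy sentences)
train = {
    "en": ["the quick brown fox jumps over the lazy dog"] * 20,
    "fr": ["le renard brun rapide saute par dessus le chien paresseux"] * 20,
}
models = {lang: OneClassSVM(nu=0.1, kernel="linear").fit(vectorizer.transform(texts))
          for lang, texts in train.items()}

query = vectorizer.transform(["the dog sleeps near the brown fox"])
scores = {lang: m.decision_function(query)[0] for lang, m in models.items()}
print(max(scores, key=scores.get))  # highest score = predicted language; all low = unknown
```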
Do Neural Nets Learn Statistical Laws behind Natural Language? | The performance of deep learning in natural language processing has been
spectacular, but the reasons for this success remain unclear because of the
inherent complexity of deep learning. This paper provides empirical evidence of
its effectiveness and of a limitation of neural networks for language
engineering. Precisely, we demonstrate that a neural language model based on
long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law,
two representative statistical properties underlying natural language. We
discuss the quality of reproducibility and the emergence of Zipf's law and
Heaps' law as training progresses. We also point out that the neural language
model has a limitation in reproducing long-range correlation, another
statistical property of natural language. This understanding could provide a
direction for improving the architectures of neural networks.
| 2,018 | Computation and Language |
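Checking Zipf's law on model output, as the study above does, amounts to verifying that log-frequency falls roughly linearly in log-rank with slope near -1. The sketch below runs that check on a stand-in token sequence; with text actually sampled from a trained language model, the same code applies.

```python
import math
from collections import Counter

# Stand-in for tokens sampled from the trained language model
tokens = ("the cat sat on the mat and the dog sat on the rug near a door " * 200).split()

ranked = sorted(Counter(tokens).values(), reverse=True)
log_r = [math.log(r) for r in range(1, len(ranked) + 1)]
log_f = [math.log(f) for f in ranked]

# Least-squares slope of log-frequency against log-rank; Zipf's law predicts roughly -1
n = len(log_r)
mean_r, mean_f = sum(log_r) / n, sum(log_f) / n
slope = (sum((r - mean_r) * (f - mean_f) for r, f in zip(log_r, log_f))
         / sum((r - mean_r) ** 2 for r in log_r))
print(round(slope, 2))
```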
Automated Detection of Non-Relevant Posts on the Russian Imageboard
"2ch": Importance of the Choice of Word Representations | This study considers the problem of automated detection of non-relevant posts
on Web forums and discusses the approach of resolving this problem by
approximating it as the task of detecting semantic relatedness between the
given post and the opening post of the forum discussion thread. The
approximated task can be resolved by training a supervised classifier on
composed word embeddings of the two posts. Considering that the success in
this task could be quite sensitive to the choice of word representations, we
propose a comparison of the performance of different word embedding models. We
train 7 models (Word2Vec, Glove, Word2Vec-f, Wang2Vec, AdaGram, FastText,
Swivel), evaluate the embeddings they produce on a dataset of human judgements
and compare their performance on the task of non-relevant posts detection. To
make the comparison, we propose a dataset of semantic relatedness with posts
from one of the most popular Russian Web forums, imageboard "2ch", which has
challenging lexical and grammatical features.
| 2,018 | Computation and Language |
Listening while Speaking: Speech Chain by Deep Learning | Despite the close relationship between speech perception and production,
research in automatic speech recognition (ASR) and text-to-speech synthesis
(TTS) has progressed more or less independently without exerting much mutual
influence on each other. In human communication, on the other hand, a
closed-loop speech chain mechanism with auditory feedback from the speaker's
mouth to her ear is crucial. In this paper, we take a step further and develop
a closed-loop speech chain model based on deep learning. The
sequence-to-sequence model in a closed-loop architecture allows us to train our
model on the concatenation of both labeled and unlabeled data. While ASR
transcribes the unlabeled speech features, TTS attempts to reconstruct the
original speech waveform based on the text from ASR. In the opposite direction,
ASR also attempts to reconstruct the original text transcription given the
synthesized speech. To the best of our knowledge, this is the first deep
learning model that integrates human speech perception and production
behaviors. Our experimental results show that the proposed approach
significantly improved performance over separate systems that were trained
only with labeled data.
| 2,017 | Computation and Language |
End-to-End Information Extraction without Token-Level Supervision | Most state-of-the-art information extraction approaches rely on token-level
labels to find the areas of interest in text. Unfortunately, these labels are
time-consuming and costly to create, and consequently, not available for many
real-life IE tasks. To make matters worse, token-level labels are usually not
the desired output, but just an intermediary step. End-to-end (E2E) models,
which take raw text as input and produce the desired output directly, need not
depend on token-level labels. We propose an E2E model based on pointer
networks, which can be trained directly on pairs of raw input and output text.
We evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT
movie corpus and compare to neural baselines that do use token-level labels. We
achieve competitive results, within a few percentage points of the baselines,
showing the feasibility of E2E information extraction without the need for
token-level labels. This opens up new possibilities, as for many tasks
currently addressed by human extractors, raw input and output data are
available, but not token-level labels.
| 2,017 | Computation and Language |