Titles | Abstracts | Years | Categories |
---|---|---|---|
Entity Type Recognition using an Ensemble of Distributional Semantic
Models to Enhance Query Understanding | We present an ensemble approach for categorizing search query entities in the
recruitment domain. Understanding the types of entities expressed in a search
query (Company, Skill, Job Title, etc.) enables more intelligent information
retrieval based upon those entities compared to a traditional keyword-based
search. Because search queries are typically very short, leveraging a
traditional bag-of-words model to identify entity types would be inappropriate
due to the lack of contextual information. Our approach instead combines clues
from different sources of varying complexity in order to collect real-world
knowledge about query entities. We employ distributional semantic
representations of query entities through two models: 1) contextual vectors
generated from encyclopedic corpora like Wikipedia, and 2) high-dimensional
word embedding vectors generated from millions of job postings using word2vec.
Additionally, our approach utilizes both entity linguistic properties obtained
from WordNet and ontological properties extracted from DBpedia. We evaluate our
approach on a data set created at CareerBuilder, the largest job board in the
US. The data set contains entities extracted from millions of job
seekers/recruiters search queries, job postings, and resume documents. After
constructing the distributional vectors of search entities, we use supervised
machine learning to infer search entity types. Empirical results show that our
approach outperforms the state-of-the-art word2vec distributional semantics
model trained on Wikipedia. Moreover, we achieve a micro-averaged F1 score of
97% using the proposed distributional representations ensemble.
| 2016 | Computation and Language |
Multi-Field Structural Decomposition for Question Answering | This paper presents a precursory yet novel approach to the question answering
task using structural decomposition. Our system first generates linguistic
structures such as syntactic and semantic trees from text, decomposes them into
multiple fields, then indexes the terms in each field. For each question, it
decomposes the question into multiple fields, measures the relevance score of
each field to the indexed ones, then ranks all documents by their relevance
scores and weights associated with the fields, where the weights are learned
through statistical modeling. Our final model gives an absolute improvement of
over 40% over the baseline approach using simple search for detecting documents
containing answers.
| 2016 | Computation and Language |
Modeling Relational Information in Question-Answer Pairs with
Convolutional Neural Networks | In this paper, we propose convolutional neural networks for learning an
optimal representation of question and answer sentences. Their main aspect is
the use of relational information given by the matches between words from the
two members of the pair. The matches are encoded as embeddings with additional
parameters (dimensions), which are tuned by the network. This allows for
better capturing of interactions between questions and answers, resulting in a
significant boost in accuracy. We test our models on two widely used answer
sentence selection benchmarks. The results clearly show the effectiveness of
our relational information, which allows our relatively simple network to
approach the state of the art.
| 2016 | Computation and Language |
Character-Level Neural Translation for Multilingual Media Monitoring in
the SUMMA Project | The paper steps outside the comfort zone of traditional NLP tasks like
automatic speech recognition (ASR) and machine translation (MT) to address
two novel problems arising in automated multilingual news monitoring:
segmentation of the TV and radio program ASR transcripts into individual
stories, and clustering of the individual stories coming from various sources
and languages into storylines. Storyline clustering of stories covering the
same events is an essential task for inquisitorial media monitoring. We address
these two problems jointly by engaging the low-dimensional semantic
representation capabilities of the sequence-to-sequence neural translation
models. To enable joint multi-task learning for multilingual neural translation
of morphologically rich languages we replace the attention mechanism with the
sliding-window mechanism and operate the sequence-to-sequence neural
translation model at the character level rather than at the word level. The
story segmentation and storyline clustering problem is tackled by examining the
low-dimensional vectors produced as a side-product of the neural translation
process. The results of this paper describe a novel approach to the automatic
story segmentation and storyline clustering problem.
| 2016 | Computation and Language |
A new TAG Formalism for Tamil and Parser Analytics | Tree adjoining grammar (TAG) is specifically suited for morphologically rich and
agglutinated languages like Tamil due to its psycholinguistic features and
parse-time dependency and morph resolution. Though TAG and LTAG formalisms have
been known for about 3 decades, efforts on designing TAG Syntax for Tamil have
not been entirely successful due to the complexity of its specification and the
rich morphology of the Tamil language. In this paper we present a minimalistic TAG
for Tamil without much morphological consideration and also introduce a parser
implementation with some obvious variations from the XTAG system.
| 2016 | Computation and Language |
Feature extraction using Latent Dirichlet Allocation and Neural
Networks: A case study on movie synopses | Feature extraction has gained increasing attention in the field of machine
learning, as in order to detect patterns, extract information, or predict
future observations from big data, the need for informative features is crucial.
The process of extracting features is highly linked to dimensionality reduction
as it implies the transformation of the data from a sparse high-dimensional
space to higher-level meaningful abstractions. This dissertation employs
Neural Networks for distributed paragraph representations, and Latent Dirichlet
Allocation to capture higher level features of paragraph vectors. Although
Neural Networks for distributed paragraph representations are considered the
state of the art for extracting paragraph vectors, we show that a quick topic
analysis model such as Latent Dirichlet Allocation can provide meaningful
features too. We evaluate the two methods on the CMU Movie Summary Corpus, a
collection of 25,203 movie plot summaries extracted from Wikipedia. Finally,
for both approaches, we use K-Nearest Neighbors to discover similar movies, and
plot the projected representations using T-Distributed Stochastic Neighbor
Embedding to depict the context similarities. These similarities, expressed as
movie distances, can be used for movie recommendation. The recommended movies
of this approach are compared with the recommended movies from IMDB, which uses
a collaborative filtering recommendation approach, to show that our two models
could constitute either an alternative or a supplementary recommendation
approach.
| 2016 | Computation and Language |
RIGA at SemEval-2016 Task 8: Impact of Smatch Extensions and
Character-Level Neural Translation on AMR Parsing Accuracy | Two extensions to the AMR smatch scoring script are presented. The first
extension combines the smatch scoring script with the C6.0 rule-based
classifier to produce a human-readable report on the error pattern frequencies
observed in the scored AMR graphs. This first extension results in a 4% gain over
the state-of-the-art CAMR baseline parser by adding to it a manually crafted
wrapper fixing the identified CAMR parser errors. The second extension combines
a per-sentence smatch with an ensemble method for selecting the best AMR graph
among the set of AMR graphs for the same sentence. This second modification
automatically yields a further 0.4% gain when applied to the outputs of two
nondeterministic AMR parsers: a CAMR+wrapper parser and a novel character-level
neural translation AMR parser. For the AMR parsing task the character-level neural
translation attains a surprising 7% gain over the carefully optimized word-level
neural translation. Overall, we achieve smatch F1=62% on the SemEval-2016
official scoring set and F1=67% on the LDC2015E86 test set.
| 2016 | Computation and Language |
Generating Chinese Classical Poems with RNN Encoder-Decoder | We take the generation of Chinese classical poem lines as a
sequence-to-sequence learning problem, and build a novel system based on the
RNN Encoder-Decoder structure to generate quatrains (Jueju in Chinese), with a
topic word as input. Our system can jointly learn semantic meaning within a
single line, semantic relevance among lines in a poem, and the use of
structural, rhythmical and tonal patterns, without utilizing any constraint
templates. Experimental results show that our system outperforms other
competitive systems. We also find that the attention mechanism can capture the
word associations in Chinese classical poetry and inverting target lines in
training can improve performance.
| 2016 | Computation and Language |
An Ensemble Method to Produce High-Quality Word Embeddings (2016) | A currently successful approach to computational semantics is to represent
words as embeddings in a machine-learned vector space. We present an ensemble
method that combines embeddings produced by GloVe (Pennington et al., 2014) and
word2vec (Mikolov et al., 2013) with structured knowledge from the semantic
networks ConceptNet (Speer and Havasi, 2012) and PPDB (Ganitkevitch et al.,
2013), merging their information into a common representation with a large,
multilingual vocabulary. The embeddings it produces achieve state-of-the-art
performance on many word-similarity evaluations. Its score of $\rho = .596$ on
an evaluation of rare words (Luong et al., 2013) is 16% higher than that of the
previous best-known system.
| 2019 | Computation and Language |
A Corpus and Evaluation Framework for Deeper Understanding of
Commonsense Stories | Representation and learning of commonsense knowledge is one of the
foundational problems in the quest to enable deep language understanding. This
issue is particularly challenging for understanding causal and correlational
relationships between events. While this topic has received a lot of interest
in the NLP community, research has been hindered by the lack of a proper
evaluation framework. This paper attempts to address this problem with a new
framework for evaluating story understanding and script learning: the 'Story
Cloze Test'. This test requires a system to choose the correct ending to a
four-sentence story. We created a new corpus of ~50k five-sentence commonsense
stories, ROCStories, to enable this evaluation. This corpus is unique in two
ways: (1) it captures a rich set of causal and temporal commonsense relations
between daily events, and (2) it is a high quality collection of everyday life
stories that can also be used for story generation. Experimental evaluation
shows that a host of baselines and state-of-the-art models based on shallow
language understanding struggle to achieve a high score on the Story Cloze
Test. We discuss these implications for script and story learning, and offer
suggestions for deeper language understanding.
| 2016 | Computation and Language |
Improving LSTM-based Video Description with Linguistic Knowledge Mined
from Text | This paper investigates how linguistic knowledge mined from large text
corpora can aid the generation of natural language descriptions of videos.
Specifically, we integrate both a neural language model and distributional
semantics trained on large text corpora into a recent LSTM-based architecture
for video description. We evaluate our approach on a collection of YouTube
videos as well as two large movie description datasets, showing significant
improvements in grammaticality while modestly improving descriptive quality.
| 2016 | Computation and Language |
Advances in Very Deep Convolutional Neural Networks for LVCSR | Very deep CNNs with small 3x3 kernels have recently been shown to achieve
very strong performance as acoustic models in hybrid NN-HMM speech recognition
systems. In this paper we investigate how to efficiently scale these models to
larger datasets. Specifically, we address the design choice of pooling and
padding along the time dimension which renders convolutional evaluation of
sequences highly inefficient. We propose a new CNN design without time-padding
and without time-pooling, which is slightly suboptimal for accuracy, but has two
significant advantages: it enables sequence training and deployment by allowing
efficient convolutional evaluation of full utterances, and, it allows for batch
normalization to be straightforwardly adopted to CNNs on sequence data. Through
batch normalization, we recover the lost performance from removing the
time-pooling, while keeping the benefit of efficient convolutional evaluation.
We demonstrate the performance of our models both on larger scale data than
before, and after sequence training. Our very deep CNN model sequence trained
on the 2000h Switchboard dataset obtains a 9.4% word error rate on the Hub5
test-set, matching with a single model the performance of the 2015 IBM system
combination, which was the previous best published result.
| 2016 | Computation and Language |
Neural Headline Generation with Sentence-wise Optimization | Recently, neural models have been proposed for headline generation by
learning to map documents to headlines with recurrent neural networks.
Nevertheless, as traditional neural networks utilize maximum likelihood
estimation for parameter optimization, they essentially constrain the expected
training objective to the word level rather than the sentence level. Moreover, the
performance of model prediction significantly relies on the training data
distribution. To overcome these drawbacks, we employ a minimum risk training
strategy in this paper, which directly optimizes model parameters at the sentence
level with respect to evaluation metrics and leads to significant improvements
for headline generation. Experimental results show that our model outperforms
state-of-the-art systems on both English and Chinese headline generation tasks.
| 2016 | Computation and Language |
Transfer Learning for Low-Resource Neural Machine Translation | The encoder-decoder framework for neural machine translation (NMT) has been
shown effective in large data scenarios, but is much less effective for
low-resource languages. We present a transfer learning method that
significantly improves BLEU scores across a range of low-resource languages.
Our key idea is to first train a high-resource language pair (the parent
model), then transfer some of the learned parameters to the low-resource pair
(the child model) to initialize and constrain training. Using our transfer
learning method we improve baseline NMT models by an average of 5.6 BLEU on
four low-resource language pairs. Ensembling and unknown word replacement add
another 2 BLEU, which brings the NMT performance on low-resource machine
translation close to a strong syntax-based machine translation (SBMT) system,
exceeding its performance on one language pair. Additionally, using the
transfer learning model for re-scoring, we can improve the SBMT system by an
average of 1.3 BLEU, improving the state of the art on low-resource machine
translation.
| 2016 | Computation and Language |
Word embeddings and recurrent neural networks based on Long-Short Term
Memory nodes in supervised biomedical word sense disambiguation | Word sense disambiguation helps identify the proper sense of ambiguous
words in text. With large terminologies such as the UMLS Metathesaurus,
ambiguities appear and highly effective disambiguation methods are required.
Supervised learning methods are one of the approaches used to
perform disambiguation. Features extracted from the context of an ambiguous
word are used to identify its proper sense. The type of features
has an impact on machine learning methods and thus affects disambiguation
performance. In this work, we have evaluated several types of features derived
from the context of the ambiguous word, and we have also explored more global
features derived from MEDLINE using word embeddings. Results show that word
embeddings improve the performance of more traditional features and also enable
the use of recurrent neural network classifiers based on Long Short-Term Memory
(LSTM) nodes. The combination of unigrams and word embeddings with an SVM sets
a new state-of-the-art performance, with a macro accuracy of 95.97 on the MSH
WSD data set.
| 2016 | Computation and Language |
Fusing Audio, Textual and Visual Features for Sentiment Analysis of News
Videos | This paper presents a novel approach to perform sentiment analysis of news
videos, based on the fusion of audio, textual and visual clues extracted from
their contents. The proposed approach aims at contributing to the
semiodiscoursive study regarding the construction of the ethos (identity) of
this media universe, which has become a central part of the modern-day lives of
millions of people. To achieve this goal, we apply state-of-the-art
computational methods for (1) automatic emotion recognition from facial
expressions, (2) extraction of modulations in the participants' speeches and
(3) sentiment analysis from the closed captions associated with the videos of
interest. More specifically, we compute features such as visual intensities
of recognized emotions, field sizes of participants, voicing probability, sound
loudness, speech fundamental frequencies and the sentiment scores (polarities)
from text sentences in the closed caption. Experimental results with a dataset
containing 520 annotated news videos from three Brazilian and one American
popular TV newscasts show that our approach achieves an accuracy of up to 84%
in the sentiment (tension level) classification task, thus demonstrating its
high potential to be used by media analysts in several applications,
especially in the journalistic domain.
| 2016 | Computation and Language |
Method of Tibetan Person Knowledge Extraction | Person knowledge extraction is the foundation of the Tibetan knowledge graph
construction, which provides support for Tibetan question answering system,
information retrieval, information extraction and other researches, and
promotes national unity and social stability. This paper proposes a SVM and
template based approach to Tibetan person knowledge extraction. Through
constructing the training corpus, we build the templates based on shallow
parsing analysis of Tibetan syntactic and semantic features and verbs. Using the
training corpus, we design a hierarchical SVM classifier to realize the entity
knowledge extraction. Finally, experimental results show that the method
yields a clear improvement in Tibetan person knowledge extraction.
| 2016 | Computation and Language |
Using Sentence-Level LSTM Language Models for Script Inference | There is a small but growing body of research on statistical scripts, models
of event sequences that allow probabilistic inference of implicit events from
documents. These systems operate on structured verb-argument events produced by
an NLP pipeline. We compare these systems with recent Recurrent Neural Net
models that directly operate on raw tokens to predict sentences, finding the
latter to be roughly comparable to the former in terms of predicting missing
events in documents.
| 2016 | Computation and Language |
Mapping Out Narrative Structures and Dynamics Using Networks and Textual
Information | Human communication is often executed in the form of a narrative, an account
of connected events composed of characters, actions, and settings. A coherent
narrative structure is therefore a requisite for a well-formulated narrative --
be it fictional or nonfictional -- for informative and effective communication,
opening up the possibility of a deeper understanding of a narrative by studying
its structural properties. In this paper we present a network-based framework
for modeling and analyzing the structure of a narrative, which is further
expanded by incorporating methods from computational linguistics to utilize the
narrative text. Modeling a narrative as a dynamically unfolding system, we
characterize its progression via the growth patterns of the character network,
and use sentiment analysis and topic modeling to represent the actual content
of the narrative in the form of interaction maps between characters with
associated sentiment values and keywords. This is a network framework advanced
beyond the simple occurrence-based one most often used until now, allowing one
to utilize the unique characteristics of a given narrative to a high degree.
Given the ubiquity and importance of narratives, such advanced network-based
representation and analysis framework may lead to a more systematic modeling
and understanding of narratives for social interactions, expression of human
sentiments, and communication.
| 2020 | Computation and Language |
Learning Global Features for Coreference Resolution | There is compelling evidence that coreference prediction would benefit from
modeling global information about entity-clusters. Yet, state-of-the-art
performance can be achieved with systems treating each mention prediction
independently, which we attribute to the inherent difficulty of crafting
informative cluster-level features. We instead propose to use recurrent neural
networks (RNNs) to learn latent, global representations of entity clusters
directly from their mentions. We show that such representations are especially
useful for the prediction of pronominal mentions, and can be incorporated into
an end-to-end coreference system that outperforms the state of the art without
requiring any additional search.
| 2016 | Computation and Language |
Conversational flow in Oxford-style debates | Public debates are a common platform for presenting and juxtaposing diverging
views on important issues. In this work we propose a methodology for tracking
how ideas flow between participants throughout a debate. We use this approach
in a case study of Oxford-style debates---a competitive format where the winner
is determined by audience votes---and show how the outcome of a debate depends
on aspects of conversational flow. In particular, we find that winners tend to
make better use of a debate's interactive component than losers, by actively
pursuing their opponents' points rather than promoting their own ideas over the
course of the conversation.
| 2016 | Computation and Language |
Shallow Parsing Pipeline for Hindi-English Code-Mixed Social Media Text | In this study, the problem of shallow parsing of Hindi-English code-mixed
social media text (CSMT) has been addressed. We have annotated the data,
developed a language identifier, a normalizer, a part-of-speech tagger and a
shallow parser. To the best of our knowledge, we are the first to attempt
shallow parsing on CSMT. The pipeline developed has been made available to the
research community with the goal of enabling better text analysis of
Hindi-English CSMT. The pipeline is accessible at http://bit.ly/csmt-parser-api .
| 2016 | Computation and Language |
Disfluency Detection using a Bidirectional LSTM | We introduce a new approach for disfluency detection using a Bidirectional
Long Short-Term Memory neural network (BLSTM). In addition to the word
sequence, the model takes as input pattern match features that were developed
to reduce sensitivity to vocabulary size in training, which lead to improved
performance over the word sequence alone. The BLSTM takes advantage of explicit
repair states in addition to the standard reparandum states. The final output
leverages integer linear programming to incorporate constraints of disfluency
structure. In experiments on the Switchboard corpus, the model achieves
state-of-the-art performance for both the standard disfluency detection task
and the correction detection task. Analysis shows that the model has better
detection of non-repetition disfluencies, which tend to be much harder to
detect.
| 2016 | Computation and Language |
Improving sentence compression by learning to predict gaze | We show how eye-tracking corpora can be used to improve sentence compression
models, presenting a novel multi-task learning algorithm based on multi-layer
LSTMs. We obtain performance competitive with or better than state-of-the-art
approaches.
| 2016 | Computation and Language |
Visual Storytelling | We introduce the first dataset for sequential vision-to-language, and explore
how this data may be used for the task of visual storytelling. The first
release of this dataset, SIND v.1, includes 81,743 unique photos in 20,211
sequences, aligned to both descriptive (caption) and story language. We
establish several strong baselines for the storytelling task, and motivate an
automatic metric to benchmark progress. Modelling concrete description as well
as figurative and social language, as provided in this dataset and the
storytelling task, has the potential to move artificial intelligence from basic
understandings of typical visual scenes towards more and more human-like
understanding of grounded event structure and subjective expression.
| 2016 | Computation and Language |
StalemateBreaker: A Proactive Content-Introducing Approach to Automatic
Human-Computer Conversation | Existing open-domain human-computer conversation systems are typically
passive: they either synthesize or retrieve a reply given a human-issued
utterance. It is generally presumed that humans should take the role to lead
the conversation and introduce new content when a stalemate occurs, and that
the computer only needs to "respond." In this paper, we propose
StalemateBreaker, a conversation system that can proactively introduce new
content when appropriate. We design a pipeline to determine when, what, and how
to introduce new content during human-computer conversation. We further propose
a novel reranking algorithm Bi-PageRank-HITS to enable rich interaction between
conversation context and candidate replies. Experiments show that both the
content-introducing approach and the reranking algorithm are effective. Our
full StalemateBreaker model outperforms a state-of-the-practice conversation
system by +14.4% p@1 when a stalemate occurs.
| 2016 | Computation and Language |
Match-SRNN: Modeling the Recursive Matching Structure with Spatial RNN | Semantic matching, which aims to determine the matching degree between two
texts, is a fundamental problem for many NLP applications. Recently, deep
learning approach has been applied to this problem and significant improvements
have been achieved. In this paper, we propose to view the generation of the
global interaction between two texts as a recursive process: i.e. the
interaction of two texts at each position is a composition of the interactions
between their prefixes as well as the word level interaction at the current
position. Based on this idea, we propose a novel deep architecture, namely
Match-SRNN, to model the recursive matching structure. Firstly, a tensor is
constructed to capture the word level interactions. Then a spatial RNN is
applied to integrate the local interactions recursively, with importance
determined by four types of gates. Finally, the matching score is calculated
based on the global interaction. We show that, when degenerated to the exact
matching scenario, Match-SRNN can approximate the dynamic programming process
of the longest common subsequence. Thus, there exists a clear interpretation of
Match-SRNN. Our experiments on two semantic matching tasks showed the
effectiveness of Match-SRNN, and its ability to visualize the learned
matching structure.
| 2016 | Computation and Language |
A Network-based End-to-End Trainable Task-oriented Dialogue System | Teaching machines to accomplish tasks by conversing naturally with humans is
challenging. Currently, developing task-oriented dialogue systems requires
creating multiple components and typically this involves either a large amount
of handcrafting, or acquiring costly labelled datasets to solve a statistical
learning problem for each component. In this work we introduce a neural
network-based text-in, text-out end-to-end trainable goal-oriented dialogue
system along with a new way of collecting dialogue data based on a novel
pipelined Wizard-of-Oz framework. This approach allows us to develop dialogue
systems easily and without making too many assumptions about the task at hand.
The results show that the model can converse with human subjects naturally
whilst helping them to accomplish tasks in a restaurant search domain.
| 2017 | Computation and Language |
Sentence-Level Grammatical Error Identification as Sequence-to-Sequence
Correction | We demonstrate that an attention-based encoder-decoder model can be used for
sentence-level grammatical error identification for the Automated Evaluation of
Scientific Writing (AESW) Shared Task 2016. The attention-based encoder-decoder
models can be used for the generation of corrections, in addition to error
identification, which is of interest for certain end-user applications. We show
that a character-based encoder-decoder model is particularly effective,
outperforming other results on the AESW Shared Task on its own, and showing
gains over a word-based counterpart. Our final model--a combination of three
character-based encoder-decoder models, one word-based encoder-decoder model,
and a sentence-level CNN--is the highest performing system on the AESW 2016
binary prediction Shared Task.
| 2016 | Computation and Language |
Supervised and Unsupervised Ensembling for Knowledge Base Population | We present results on combining supervised and unsupervised methods to
ensemble multiple systems for two popular Knowledge Base Population (KBP)
tasks, Cold Start Slot Filling (CSSF) and Tri-lingual Entity Discovery and
Linking (TEDL). We demonstrate that our combined system along with auxiliary
features outperforms the best performing system for both tasks in the 2015
competition, several ensembling baselines, as well as the state-of-the-art
stacking approach to ensembling KBP systems. The success of our technique on
two different and challenging problems demonstrates the power and generality of
our combined approach to ensembling.
| 2016 | Computation and Language |
SSP: Semantic Space Projection for Knowledge Graph Embedding with Text
Descriptions | Knowledge representation is an important, long-standing topic in AI, and there
has been a large amount of work on knowledge graph embedding, which projects
symbolic entities and relations into a low-dimensional, real-valued vector space.
However, most embedding methods merely concentrate on data fitting and ignore
the explicit semantic expression, leading to uninterpretable representations.
Thus, traditional embedding methods have limited potential for many
applications such as question answering and entity classification. To this
end, this paper proposes a semantic representation method for knowledge graph
(KSR), which imposes a two-level hierarchical generative process that
globally extracts many aspects and then locally assigns a specific category in
each aspect for every triple. Since both aspects and categories are
semantics-relevant, the collection of categories in each aspect is treated as
the semantic representation of this triple. Extensive experiments show that our
model substantially outperforms other state-of-the-art baselines.
| 2017 | Computation and Language |
From Incremental Meaning to Semantic Unit (phrase by phrase) | This paper describes an experimental approach to Detection of Minimal
Semantic Units and their Meaning (DiMSUM), explored within the framework of
SemEval 2016 Task 10. The approach is primarily based on a combination of word
embeddings and parser-based features, and employs unidirectional incremental
computation of compositional embeddings for multiword expressions.
| 2016 | Computation and Language |
Speed-Constrained Tuning for Statistical Machine Translation Using
Bayesian Optimization | We address the problem of automatically finding the parameters of a
statistical machine translation system that maximize BLEU scores while ensuring
that decoding speed exceeds a minimum value. We propose the use of Bayesian
Optimization to efficiently tune the speed-related decoding parameters by
easily incorporating speed as a noisy constraint function. The obtained
parameter values are guaranteed to satisfy the speed constraint with an
associated confidence margin. Across three language pairs and two speed
constraint values, we report overall optimization time reduction compared to
grid and random search. We also show that Bayesian Optimization can decouple
speed and BLEU measurements, resulting in a further reduction of overall
optimization time as speed is measured over a small subset of sentences.
| 2016 | Computation and Language |
Clustering Comparable Corpora of Russian and Ukrainian Academic Texts:
Word Embeddings and Semantic Fingerprints | We present our experience in applying distributional semantics (neural word
embeddings) to the problem of representing and clustering documents in a
bilingual comparable corpus. Our data is a collection of Russian and Ukrainian
academic texts, for which topics are their academic fields. In order to build
language-independent semantic representations of these documents, we train
neural distributional models on monolingual corpora and learn the optimal
linear transformation of vectors from one language to another. The resulting
vectors are then used to produce 'semantic fingerprints' of documents, serving
as input to a clustering algorithm.
The presented method is compared to several baselines including 'orthographic
translation' with Levenshtein edit distance and outperforms them by a large
margin. We also show that language-independent 'semantic fingerprints' are
superior to multilingual clustering algorithms proposed in previous work,
while requiring fewer linguistic resources.
| 2016 | Computation and Language |
Exploring Segment Representations for Neural Segmentation Models | Many natural language processing (NLP) tasks can be generalized into a
segmentation problem. In this paper, we combine a semi-CRF with neural networks to
solve NLP segmentation tasks. Our model represents a segment both by composing
the input units and embedding the entire segment. We thoroughly study different
composition functions and different segment embeddings. We conduct extensive
experiments on two typical segmentation tasks: named entity recognition (NER)
and Chinese word segmentation (CWS). Experimental results show that our neural
semi-CRF model benefits from representing the entire segment and achieves the
state-of-the-art performance on CWS benchmark dataset and competitive results
on the CoNLL03 dataset.
| 2016 | Computation and Language |
M$^2$S-Net: Multi-Modal Similarity Metric Learning based Deep
Convolutional Network for Answer Selection | Recent works using artificial neural networks based on distributed word
representation greatly boost performance on various natural language processing
tasks, especially the answer selection problem. Nevertheless, most of the
previous works used deep learning methods (like LSTM-RNN, CNN, etc.) only to
capture the semantic representation of each sentence separately, without
considering the interdependence between them. In this paper, we propose a
novel end-to-end learning framework which constitutes a deep convolutional neural
network based on multi-modal similarity metric learning (M$^2$S-Net) on
pairwise tokens. The proposed model demonstrates its performance by surpassing
previous state-of-the-art systems on the answer selection benchmark, i.e.,
the TREC-QA dataset, in both MAP and MRR metrics.
| 2018 | Computation and Language |
An Attentive Neural Architecture for Fine-grained Entity Type
Classification | In this work we propose a novel attention-based neural network model for the
task of fine-grained entity type classification that, unlike previously proposed
models, recursively composes representations of entity mention contexts. Our
model achieves state-of-the-art performance with 74.94% loose micro F1-score on
the well-established FIGER dataset, a relative improvement of 2.59%. We also
investigate the behavior of the attention mechanism of our model and observe
that it can learn contextual linguistic expressions that indicate the
fine-grained category memberships of an entity.
| 2016 | Computation and Language |
Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term
Memory Models and Auxiliary Loss | Bidirectional long short-term memory (bi-LSTM) networks have recently proven
successful for various NLP sequence modeling tasks, but little is known about
their reliance to input representations, target languages, data set size, and
label noise. We address these issues and evaluate bi-LSTMs with word,
character, and unicode byte embeddings for POS tagging. We compare bi-LSTMs to
traditional POS taggers across languages and data sizes. We also present a
novel bi-LSTM model, which combines the POS tagging loss function with an
auxiliary loss function that accounts for rare words. The model obtains
state-of-the-art performance across 22 languages, and works especially well for
morphologically complex languages. Our analysis suggests that bi-LSTMs are less
sensitive to training data size and label corruptions (at small noise levels)
than previously assumed.
| 2016 | Computation and Language |
Efficient Calculation of Bigram Frequencies in a Corpus of Short Texts | We show that an efficient and popular method for calculating bigram
frequencies is unsuitable for bodies of short texts and offer a simple
alternative. Our method has the same computational complexity as the old method
and offers an exact count instead of an approximation.
| 2016 | Computation and Language |
Syntactic and semantic classification of verb arguments using
dependency-based and rich semantic features | Corpus Pattern Analysis (CPA) has been the topic of SemEval 2015 Task 15,
aimed at producing a system that can aid lexicographers in their efforts to
build a dictionary of meanings for English verbs using the CPA annotation
process. CPA parsing is one of the subtasks that make up this annotation
process, and it is the focus of this report. A supervised machine-learning
approach has been implemented, in which syntactic features derived from parse
trees and semantic features derived from WordNet and word embeddings are used.
It is shown that this approach performs well, even with the data sparsity
issues that characterize the dataset, and can obtain better results than other
systems by a margin of about 4% F-score.
| 2016 | Computation and Language |
A Deep Neural Network for Chinese Zero Pronoun Resolution | Existing approaches for Chinese zero pronoun resolution overlook semantic
information. This is because zero pronouns have no descriptive information,
which results in difficulty in explicitly capturing their semantic similarities
with antecedents. Moreover, when dealing with candidate antecedents,
traditional systems simply take advantage of the local information of a single
candidate antecedent while failing to consider the underlying information
provided by the other candidates from a global perspective. To address these
weaknesses, we propose a novel zero pronoun-specific neural network, which is
capable of representing zero pronouns by utilizing the contextual information
at the semantic level. In addition, when dealing with candidate antecedents, a
two-level candidate encoder is employed to explicitly capture both the local
and global information of candidate antecedents. We conduct experiments on the
Chinese portion of the OntoNotes 5.0 corpus. Experimental results show that our
approach substantially outperforms the state-of-the-art method in various
experimental settings.
| 2017 | Computation and Language |
Distributed Entity Disambiguation with Per-Mention Learning | Entity disambiguation, or mapping a phrase to its canonical representation in
a knowledge base, is a fundamental step in many natural language processing
applications. Existing techniques based on global ranking models fail to
capture the individual peculiarities of the words and hence, either struggle to
meet the accuracy requirements of many real-world applications or they are too
complex to satisfy real-time constraints of applications.
In this paper, we propose a new disambiguation system that learns specialized
features and models for disambiguating each ambiguous phrase in the English
language. To train and validate the hundreds of thousands of learning models
for this purpose, we use a Wikipedia hyperlink dataset with more than 170
million labelled annotations. We provide an extensive experimental evaluation
to show that the accuracy of our approach compares favourably with respect to
many state-of-the-art disambiguation systems. The training required for our
approach can be easily distributed over a cluster. Furthermore, updating our
system for new entities or calibrating it for special ones is a computationally
fast process, that does not affect the disambiguation of the other entities.
| 2016 | Computation and Language |
A Factorization Machine Framework for Testing Bigram Embeddings in
Knowledgebase Completion | Embedding-based Knowledge Base Completion models have so far mostly combined
distributed representations of individual entities or relations to compute
truth scores of missing links. Facts can however also be represented using
pairwise embeddings, i.e. embeddings for pairs of entities and relations. In
this paper we explore such bigram embeddings with a flexible Factorization
Machine model and several ablations from it. We investigate the relevance of
various bigram types on the fb15k237 dataset and find relative improvements
compared to a compositional model.
| 2016 | Computation and Language |
Dialog-based Language Learning | A long-term goal of machine learning research is to build an intelligent
dialog agent. Most research in natural language understanding has focused on
learning from fixed training sets of labeled data, with supervision either at
the word level (tagging, parsing tasks) or sentence level (question answering,
machine translation). This kind of supervision does not reflect how humans
learn, where language is both learned by, and used for, communication. In this
work, we study dialog-based language learning, where supervision is given
naturally and implicitly in the response of the dialog partner during the
conversation. We study this setup in two domains: the bAbI dataset of (Weston
et al., 2015) and large-scale question answering from (Dodge et al., 2015). We
evaluate a set of baseline learning strategies on these tasks, and show that a
novel model incorporating predictive lookahead is a promising approach for
learning from a teacher's response. In particular, a surprising result is that
it can learn to answer questions correctly without any reward-based supervision
at all.
| 2016 | Computation and Language |
Speaker Cluster-Based Speaker Adaptive Training for Deep Neural Network
Acoustic Modeling | A speaker cluster-based speaker adaptive training (SAT) method under the deep
neural network-hidden Markov model (DNN-HMM) framework is presented in this
paper. During training, speakers that are acoustically adjacent to each other
are hierarchically clustered using an i-vector based distance metric. DNNs with
speaker dependent layers are then adaptively trained for each cluster of
speakers. Before decoding starts, an unseen speaker in the test set is matched to
the closest speaker cluster through comparing i-vector based distances. The
previously trained DNN of the matched speaker cluster is used for decoding
utterances of the test speaker. The performance of the proposed method on a
large vocabulary spontaneous speech recognition task is evaluated on a training
set with 1500 hours of speech and a test set of 24 speakers with 1774
utterances. Compared to a speaker-independent DNN with a baseline word error
rate of 11.6%, a relative 6.8% reduction in word error rate is observed from
the proposed method.
| 2016 | Computation and Language |
Chinese Song Iambics Generation with Neural Attention-based Model | Learning and generating Chinese poems is a charming yet challenging task.
Traditional approaches involve various language modeling and machine
translation techniques; however, they do not perform as well when generating poems
with complex pattern constraints, for example Song iambics, a famous type of
poems that involve variable-length sentences and strict rhythmic patterns. This
paper applies the attention-based sequence-to-sequence model to generate
Chinese Song iambics. Specifically, we encode the cue sentences by a
bi-directional Long Short-Term Memory (LSTM) model and then predict the entire
iambic with the information provided by the encoder, in the form of an
attention-based LSTM that can regularize the generation process by the fine
structure of the input cues. Several techniques are investigated to improve the
model, including global context integration, hybrid style training, character
vector initialization and adaptation. Both the automatic and subjective
evaluation results show that our model indeed can learn the complex structural
and rhythmic patterns of Song iambics, and the generation is rather successful.
| 2016 | Computation and Language |
A Novel Approach to Dropped Pronoun Translation | Dropped Pronouns (DPs), in which pronouns are frequently dropped in the source
language but should be retained in the target language, are a challenge in machine
translation. In response to this problem, we propose a semi-supervised approach
to recall possibly missing pronouns in the translation. Firstly, we build
training data for DP generation in which the DPs are automatically labelled
according to the alignment information from a parallel corpus. Secondly, we
build a deep learning-based DP generator for input sentences in decoding when
no corresponding references exist. More specifically, the generation is
two-phase: (1) DP position detection, which is modeled as a sequential
labelling task with recurrent neural networks; and (2) DP prediction, which
employs a multilayer perceptron with rich features. Finally, we integrate the
above outputs into our translation system to recall missing pronouns by both
extracting rules from the DP-labelled training data and translating the
DP-generated input sentences. Experimental results show that our approach
achieves a significant improvement of 1.58 BLEU points in translation
performance with 66% F-score for DP generation accuracy.
| 2016 | Computation and Language |
Row-less Universal Schema | Universal schema jointly embeds knowledge bases and textual patterns to
reason about entities and relations for automatic knowledge base construction
and information extraction. In the past, entity pairs and relations were
represented as learned vectors with compatibility determined by a scoring
function, limiting generalization to unseen text patterns and entities.
Recently, 'column-less' versions of Universal Schema have used compositional
pattern encoders to generalize to all text patterns. In this work we take the
next step and propose a 'row-less' model of universal schema, removing explicit
entity pair representations. Instead of learning vector representations for
each entity pair in our training set, we treat an entity pair as a function of
its relation types. In experimental results on the FB15k-237 benchmark we
demonstrate that we can match the performance of a comparable model with
explicit entity pair representations using a model of attention over relation
types. We further demonstrate that the model performs with nearly the same
accuracy on entity pairs never seen during training.
| 2016 | Computation and Language |
Dependency Parsing with LSTMs: An Empirical Evaluation | We propose a transition-based dependency parser using Recurrent Neural
Networks with Long Short-Term Memory (LSTM) units. This extends the feedforward
neural network parser of Chen and Manning (2014) and enables modelling of
entire sequences of shift/reduce transition decisions. On the Google Web
Treebank, our LSTM parser is competitive with the best feedforward parser on
overall accuracy and notably achieves more than 3% improvement for long-range
dependencies, which has proved difficult for previous transition-based parsers
due to error propagation and limited context information. Our findings
additionally suggest that dropout regularisation on the embedding layer is
crucial to improve the LSTM's generalisation.
| 2016 | Computation and Language |
SweLL on the rise: Swedish Learner Language corpus for European
Reference Level studies | We present a new resource for Swedish, SweLL, a corpus of Swedish Learner
essays linked to learners' performance according to the Common European
Framework of Reference (CEFR). SweLL consists of three subcorpora - SpIn,
SW1203 and Tisus, collected from three different educational establishments.
The common metadata for all subcorpora includes age, gender, native languages,
time of residence in Sweden, and type of written task. Depending on the subcorpus,
learner texts may contain additional information, such as text genres, topics, and
grades. Five of the six CEFR levels are represented in the corpus: A1, A2, B1,
B2 and C1, comprising 339 essays in total. The C2 level is not included since
courses at the C2 level are not offered. The workflow consists of collection of
essays and permits, essay digitization and registration, metadata annotation,
and automatic linguistic annotation. Inter-rater agreement is presented on the
basis of SW1203 subcorpus. The work on SweLL is still ongoing with more than
100 essays waiting in the pipeline. This article both describes the resource
and the "how-to" behind the compilation of SweLL.
| 2016 | Computation and Language |
Bridging LSTM Architecture and the Neural Dynamics during Reading | Recently, the long short-term memory neural network (LSTM) has attracted wide
interest due to its success in many tasks. The LSTM architecture consists of a
memory cell and three gates, which looks similar to the neuronal networks in
the brain. However, evidence for the cognitive plausibility of the LSTM
architecture, as well as for its working mechanism, is still lacking. In this
paper, we study the cognitive plausibility of LSTM by aligning its internal
architecture with the brain activity observed via fMRI when the subjects read a
story. Experiment results show that the artificial memory vector in LSTM can
accurately predict the observed sequential brain activities, indicating the
correlation between LSTM architecture and the cognitive process of story
reading.
| 2016 | Computation and Language |
Automatic verbal aggression detection for Russian and American
imageboards | The problem of aggression in Internet communities is rampant. Anonymous
forums, usually called imageboards, are notorious for their aggressive and
deviant behaviour even in comparison with other Internet communities. This
study is aimed at the automatic detection of verbal expressions of
aggression on the most popular American (4chan.org) and Russian (2ch.hk)
imageboards. A set of 1,802,789 messages was used for this study. The machine
learning algorithm word2vec was applied to detect the state of aggression. A
decent result is obtained for English (88%); the results for Russian are yet to
be improved.
| 2016 | Computation and Language |
Detecting state of aggression in sentences using CNN | In this article we study verbal expression of aggression and its detection
using machine learning and neural network methods. We test our results using
our corpora of messages from anonymous imageboards. We also compare a Random
Forest classifier with a convolutional neural network on the "Movie reviews with one
sentence per review" corpus.
| 2016 | Computation and Language |
Why and How to Pay Different Attention to Phrase Alignments of Different
Intensities | This work studies comparatively two typical sentence pair classification
tasks: textual entailment (TE) and answer selection (AS), observing that phrase
alignments of different intensities contribute differently in these tasks. We
address the problems of identifying phrase alignments of flexible granularity
and pooling alignments of different intensities for these tasks. Examples for
flexible granularity are alignments between two single words, between a single
word and a phrase, and between a short phrase and a long phrase. By intensity we
roughly mean the degree of match; it ranges from identity over surface-form
co-occurrence, rephrasing and other semantic relatedness to unrelated words, as
in lots of parenthetical text. Prior work (i) has limitations in phrase
generation and representation, or (ii) conducts alignment at word and phrase
levels by handcrafted features or (iii) utilizes a single attention mechanism
over alignment intensities without considering the characteristics of specific
tasks, which limits the system's effectiveness across tasks. We propose an
architecture based on Gated Recurrent Units that supports (i) representation
learning of phrases of arbitrary granularity and (ii) task-specific focusing of
phrase alignments between two sentences by attention pooling. Experimental
results on TE and AS match our observation and are state-of-the-art.
| 2016 | Computation and Language |
Visualization of Jacques Lacan's Registers of the Psychoanalytic Field,
and Discovery of Metaphor and of Metonymy. Analytical Case Study of Edgar
Allan Poe's "The Purloined Letter" | We start with a description of Lacan's work that we then take into our
analytics methodology. In a first investigation, a Lacan-motivated template of
the Poe story is fitted to the data. A segmentation of the storyline is used in
order to map out the diachrony. Based on this, it will be shown how synchronous
aspects, potentially related to Lacanian registers, can be sought. This
demonstrates the effectiveness of an approach based on a model template of the
storyline narrative. In a second and more comprehensive investigation, we
develop an approach for revealing, that is, uncovering, Lacanian register
relationships. Objectives of this work include the wide and general application
of our methodology. This methodology is strongly based on the "letting the data
speak" Correspondence Analysis analytics platform of Jean-Paul Benz\'ecri, that
is also the geometric data analysis, both qualitative and quantitative
analytics, developed by Pierre Bourdieu.
| 2017 | Computation and Language |
Parsing Argumentation Structures in Persuasive Essays | In this article, we present a novel approach for parsing argumentation
structures. We identify argument components using sequence labeling at the
token level and apply a new joint model for detecting argumentation structures.
The proposed model globally optimizes argument component types and
argumentative relations using integer linear programming. We show that our
model considerably improves the performance of base classifiers and
significantly outperforms challenging heuristic baselines. Moreover, we
introduce a novel corpus of persuasive essays annotated with argumentation
structures. We show that our annotation scheme and annotation guidelines
successfully guide human annotators to substantial agreement. This corpus and
the annotation guidelines are freely available for ensuring reproducibility and
to encourage future research in computational argumentation.
| 2016 | Computation and Language |
Conversational Markers of Constructive Discussions | Group discussions are essential for organizing every aspect of modern life,
from faculty meetings to senate debates, from grant review panels to papal
conclaves. While costly in terms of time and organization effort, group
discussions are commonly seen as a way of reaching better decisions compared to
solutions that do not require coordination between the individuals (e.g.
voting)---through discussion, the sum becomes greater than the parts. However,
this assumption is not irrefutable: anecdotal evidence of wasteful discussions
abounds, and in our own experiments we find that over 30% of discussions are
unproductive.
We propose a framework for analyzing conversational dynamics in order to
determine whether a given task-oriented discussion is worth having or not. We
exploit conversational patterns reflecting the flow of ideas and the balance
between the participants, as well as their linguistic choices. We apply this
framework to conversations naturally occurring in an online collaborative world
exploration game developed and deployed to support this research. Using this
setting, we show that linguistic cues and conversational patterns extracted
from the first 20 seconds of a team discussion are predictive of whether it
will be a wasteful or a productive one.
| 2,016 | Computation and Language |
Entities as topic labels: Improving topic interpretability and
evaluability combining Entity Linking and Labeled LDA | In order to create a corpus exploration method providing topics that are
easier to interpret than standard LDA topic models, here we propose combining
two techniques called Entity Linking and Labeled LDA. Our method identifies in
an ontology a series of descriptive labels for each document in a corpus. Then
it generates a specific topic for each label. Having a direct relation between
topics and labels makes interpretation easier; using an ontology as background
knowledge limits label ambiguity. As our topics are described with a limited
number of clear-cut labels, they promote interpretability, and this may help
quantitative evaluation. We illustrate the potential of the approach by
applying it in order to define the most relevant topics addressed by each party
in the European Parliament's fifth mandate (1999-2004).
| 2,016 | Computation and Language |
Extracting Temporal and Causal Relations between Events | Structured information resulting from temporal information processing is
crucial for a variety of natural language processing tasks, for instance to
generate timeline summarization of events from news documents, or to answer
temporal/causal-related questions about some events. In this thesis we present
a framework for an integrated temporal and causal relation extraction system.
We first develop a robust extraction component for each type of relations, i.e.
temporal order and causality. We then combine the two extraction components
into an integrated relation extraction system, CATENA---CAusal and Temporal
relation Extraction from NAtural language texts---, by utilizing the
presumption about event precedence in causality: causing events must
happen BEFORE resulting events. Several resources and techniques to improve
our relation extraction systems are also discussed, including word embeddings
and training data expansion. Finally, we report our adaptation efforts of
temporal information processing for languages other than English, namely
Italian and Indonesian.
| 2,016 | Computation and Language |
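
The precedence presumption in the CATENA abstract above lends itself to a simple post-processing rule: when a causal link and a temporal link disagree on the same event pair, the causal direction wins. A minimal sketch of such a consistency filter, in which the relation labels and the (source, target, label) tuple format are illustrative assumptions rather than the paper's actual data structures:

```python
# Hedged sketch: enforce "causes happen BEFORE effects" when merging the
# outputs of separate temporal and causal classifiers. The labels and the
# (source, target, label) tuple format are illustrative assumptions.

def merge_relations(temporal, causal):
    """temporal, causal: lists of (event_a, event_b, label) tuples."""
    causal_pairs = {(a, b): lab for a, b, lab in causal}
    merged = []
    for a, b, lab in temporal:
        if causal_pairs.get((a, b)) == "CAUSE":
            lab = "BEFORE"          # causality presumes temporal precedence
        elif causal_pairs.get((b, a)) == "CAUSE":
            lab = "AFTER"
        merged.append((a, b, lab))
    return merged

print(merge_relations([("e1", "e2", "AFTER")], [("e1", "e2", "CAUSE")]))
# [('e1', 'e2', 'BEFORE')] -- the causal link overrides the temporal label
```
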
The IBM 2016 English Conversational Telephone Speech Recognition System | We describe a collection of acoustic and language modeling techniques that
lowered the word error rate of our English conversational telephone LVCSR
system to a record 6.6% on the Switchboard subset of the Hub5 2000 evaluation
test set. On the acoustic side, we use a score fusion of three strong models:
recurrent nets with maxout activations, very deep convolutional nets with 3x3
kernels, and bidirectional long short-term memory nets which operate on FMLLR
and i-vector features. On the language modeling side, we use an updated model
"M" and hierarchical neural network LMs.
| 2,016 | Computation and Language |
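
Score fusion of several acoustic models, as used in the system above, typically amounts to a log-linear combination of per-frame scores. The paper does not spell out its fusion code; the sketch below shows only the generic form, with random stand-in posteriors and illustrative weights:

```python
import numpy as np

# Hedged sketch of log-linear score fusion over three acoustic models.
# The random posteriors stand in for RNN, CNN and BLSTM outputs, and the
# weights are illustrative; in practice they are tuned on held-out data.
np.random.seed(0)
frames, senones = 5, 10
log_probs = [np.log(np.random.dirichlet(np.ones(senones), size=frames))
             for _ in range(3)]

weights = [0.4, 0.3, 0.3]
fused = sum(w * lp for w, lp in zip(weights, log_probs))
print(fused.argmax(axis=1))  # fused senone decision for each frame
```
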
Detecting "Smart" Spammers On Social Network: A Topic Model Approach | Spammer detection on social network is a challenging problem. The rigid
anti-spam rules have resulted in emergence of "smart" spammers. They resemble
legitimate users who are difficult to identify. In this paper, we present a
novel spammer classification approach based on Latent Dirichlet
Allocation(LDA), a topic model. Our approach extracts both the local and the
global information of topic distribution patterns, which capture the essence of
spamming. Tested on one benchmark dataset and one self-collected dataset, our
proposed method outperforms other state-of-the-art methods in terms of averaged
F1-score.
| 2,016 | Computation and Language |
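
A minimal pipeline in the spirit of the spammer-detection abstract above: represent each user by an LDA topic distribution and feed it to a supervised classifier. The corpus, labels, and hyperparameters are toy assumptions, and the paper's specific local/global feature construction is not reproduced:

```python
from gensim import corpora, models
from sklearn.linear_model import LogisticRegression

# Toy data: each "user" is the concatenation of their posts (tokenized).
users = [["win", "free", "prize", "click", "link"],
         ["free", "money", "click", "offer", "prize"],
         ["paper", "deadline", "review", "experiments", "dataset"],
         ["lunch", "friends", "weekend", "movie", "fun"]]
labels = [1, 1, 0, 0]  # 1 = spammer, 0 = legitimate (illustrative)

dictionary = corpora.Dictionary(users)
bows = [dictionary.doc2bow(u) for u in users]
lda = models.LdaModel(bows, num_topics=2, id2word=dictionary, random_state=0)

def topic_features(bow, k=2):
    """Per-user topic distribution, used as the feature vector."""
    dist = dict(lda.get_document_topics(bow, minimum_probability=0.0))
    return [dist.get(t, 0.0) for t in range(k)]

X = [topic_features(b) for b in bows]
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```
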
Comparing Fifty Natural Languages and Twelve Genetic Languages Using
Word Embedding Language Divergence (WELD) as a Quantitative Measure of
Language Distance | We introduce a new measure of distance between languages based on word
embedding, called word embedding language divergence (WELD). WELD is defined as
divergence between unified similarity distribution of words between languages.
Using such a measure, we perform language comparison for fifty natural
languages and twelve genetic languages. Our natural language dataset is a
collection of sentence-aligned parallel corpora from bible translations for
fifty languages spanning a variety of language families. Although we use
parallel corpora, which guarantees having the same content in all languages,
interestingly in many cases languages within the same family cluster together.
In addition to natural languages, we perform language comparison for the coding
regions in the genomes of 12 different organisms (4 plants, 6 animals, and 2
human subjects). Our results confirm a significant high-level difference in the
genetic language model of humans/animals versus plants. The proposed method is
a step toward defining a quantitative measure of similarity between languages,
with applications in language classification, genre identification, dialect
identification, and evaluation of translations.
| 2,016 | Computation and Language |
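
WELD compares languages through divergence between word similarity distributions. A much-simplified sketch of that idea: build a histogram of pairwise cosine similarities per language and take the Jensen-Shannon distance between the histograms. The random matrices stand in for real per-language embeddings, and this is not the paper's exact unified-distribution formulation:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def similarity_histogram(vectors, bins=20):
    """Histogram of pairwise cosine similarities for one language."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = (v @ v.T)[np.triu_indices(len(v), k=1)]
    hist, _ = np.histogram(sims, bins=bins, range=(-1, 1), density=True)
    return hist / hist.sum()

rng = np.random.default_rng(0)
lang_a = rng.normal(size=(200, 50))           # placeholder embeddings
lang_b = rng.normal(size=(200, 50)) + 0.3     # slightly shifted "language"

print(jensenshannon(similarity_histogram(lang_a),
                    similarity_histogram(lang_b)))
```
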
Word Ordering Without Syntax | Recent work on word ordering has argued that syntactic structure is
important, or even required, for effectively recovering the order of a
sentence. We find that, in fact, an n-gram language model with a simple
heuristic gives strong results on this task. Furthermore, we show that a long
short-term memory (LSTM) language model is even more effective at recovering
order, with our basic model outperforming a state-of-the-art syntactic model by
11.5 BLEU points. Additional data and larger beams yield further gains, at the
expense of training and search time.
| 2,016 | Computation and Language |
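
The n-gram baseline described above reduces to beam search over partial orderings scored by a language model. A compact sketch with a stub bigram table in place of a trained LM (scores and beam size are illustrative):

```python
# Stub bigram log-scores; a real system queries a trained n-gram LM.
BIGRAM = {("the", "dog"): -1.0, ("dog", "barks"): -1.2,
          ("barks", "dog"): -5.0, ("dog", "the"): -4.0,
          ("the", "barks"): -6.0, ("barks", "the"): -6.0}

def score(seq):
    return sum(BIGRAM.get(pair, -10.0) for pair in zip(seq, seq[1:]))

def order_words(bag, beam_size=4):
    """Beam search over partial orderings, scored by the bigram LM."""
    beams = [((w,), frozenset(bag) - {w}) for w in bag]
    while any(rest for _, rest in beams):
        expanded = [(seq + (w,), rest - {w})
                    for seq, rest in beams for w in rest]
        beams = sorted(expanded, key=lambda b: score(b[0]),
                       reverse=True)[:beam_size]
    return max(beams, key=lambda b: score(b[0]))[0]

print(order_words(["barks", "the", "dog"]))  # ('the', 'dog', 'barks')
```
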
Distance Metric Learning for Aspect Phrase Grouping | Aspect phrase grouping is an important task in aspect-level sentiment
analysis. It is a challenging problem due to polysemy and context dependency.
We propose an Attention-based Deep Distance Metric Learning (ADDML) method, by
considering aspect phrase representation as well as context representation.
First, leveraging the characteristics of the review text, we automatically
generate aspect phrase sample pairs for distant supervision. Second, we feed
word embeddings of aspect phrases and their contexts into an attention-based
neural network to learn feature representations of contexts. Both aspect phrase
embedding and context embedding are used to learn a deep feature subspace for
measuring the distances between aspect phrases for K-means clustering.
Experiments on four review datasets show that the proposed method outperforms
state-of-the-art strong baseline methods.
| 2,016 | Computation and Language |
Teaching natural language to computers | "Natural Language," whether spoken and attended to by humans, or processed
and generated by computers, requires networked structures that reflect creative
processes in semantic, syntactic, phonetic, linguistic, social, emotional, and
cultural modules. Being able to produce novel and useful behavior following
repeated practice gets to the root of both artificial intelligence and human
language. This paper investigates the modalities involved in language-like
applications that computers -- and programmers -- engage with, and aims to
fine-tune the questions we ask to better account for context, self-awareness, and
embodiment.
| 2,016 | Computation and Language |
Response Selection with Topic Clues for Retrieval-based Chatbots | We consider incorporating topic information into message-response matching to
boost responses with rich content in retrieval-based chatbots. To this end, we
propose a topic-aware convolutional neural tensor network (TACNTN). In TACNTN,
matching between a message and a response is not only conducted between a
message vector and a response vector generated by convolutional neural
networks, but also leverages extra topic information encoded in two topic
vectors. The two topic vectors are linear combinations of topic words of the
message and the response respectively, where the topic words are obtained from
a pre-trained LDA model and their weights are determined by themselves as well
as the message vector and the response vector. The message vector, the response
vector, and the two topic vectors are fed to neural tensors to calculate a
matching score. Empirical study on a public data set and a human-annotated data
set shows that TACNTN can significantly outperform state-of-the-art methods for
message-response matching.
| 2,016 | Computation and Language |
Multi30K: Multilingual English-German Image Descriptions | We introduce the Multi30K dataset to stimulate multilingual multimodal
research. Recent advances in image description have been demonstrated on
English-language datasets almost exclusively, but image description should not
be limited to English. This dataset extends the Flickr30K dataset with i)
German translations created by professional translators over a subset of the
English descriptions, and ii) descriptions crowdsourced independently of the
original English descriptions. We outline how the data can be used for
multilingual image description and multimodal machine translation, but we
anticipate the data will be useful for a broader range of tasks.
| 2,016 | Computation and Language |
Compositional Sentence Representation from Character within Large
Context Text | This paper describes a Hierarchical Composition Recurrent Network (HCRN)
consisting of a 3-level hierarchy of compositional models: character, word and
sentence. This model is designed to overcome two problems of representing a
sentence on the basis of a constituent word sequence. The first is the
data-sparsity problem in word embedding, and the other is the lack of any use of
inter-sentence dependency. In the HCRN, word representations are built from
characters, thus resolving the data-sparsity problem, and inter-sentence
dependency is embedded into sentence representation at the level of sentence
composition. We adopt a hierarchy-wise learning scheme in order to alleviate
the optimization difficulties of learning deep hierarchical recurrent network
in end-to-end fashion. The HCRN was quantitatively and qualitatively evaluated
on a dialogue act classification task. In particular, sentence representations
that incorporate inter-sentence dependency are able to capture both the implicit
and explicit semantics of a sentence, significantly improving performance. In the
end, the HCRN achieved state-of-the-art performance with a test error rate of
22.7% for dialogue act classification on the SWBD-DAMSL database.
| 2,016 | Computation and Language |
TheanoLM - An Extensible Toolkit for Neural Network Language Modeling | We present a new tool for training neural network language models (NNLMs),
scoring sentences, and generating text. The tool has been written using the Python
library Theano, which allows researchers to easily extend it and tune any aspect
of the training process. Despite this flexibility, Theano is able to
generate extremely fast native code that can utilize a GPU or multiple CPU
cores in order to parallelize the heavy numerical computations. The tool has
been evaluated in difficult Finnish and English conversational speech
recognition tasks, and significant improvement was obtained over our best
back-off n-gram models. The results that we obtained in the Finnish task were
compared to those from existing RNNLM and RWTHLM toolkits, and found to be as
good or better, while training times were an order of magnitude shorter.
| 2,016 | Computation and Language |
IISCNLP at SemEval-2016 Task 2: Interpretable STS with ILP based
Multiple Chunk Aligner | Interpretable semantic textual similarity (iSTS) task adds a crucial
explanatory layer to pairwise sentence similarity. We address various
components of this task: chunk level semantic alignment along with assignment
of similarity type and score for aligned chunks with a novel system presented
in this paper. We propose an algorithm, iMATCH, for the alignment of multiple
non-contiguous chunks based on Integer Linear Programming (ILP). Similarity
type and score assignment for pairs of chunks is done using a supervised
multiclass classification technique based on a Random Forest classifier. Results
show that our algorithm iMATCH has low execution time and outperforms most
other participating systems in terms of alignment score. Of the three datasets,
we are top ranked for the answers-students dataset in terms of overall score and
have top alignment score for headlines dataset in the gold chunks track.
| 2,016 | Computation and Language |
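
At its core, ILP-based chunk alignment maximizes total similarity subject to each chunk aligning at most once. A scaled-down sketch with PuLP and a made-up similarity matrix; iMATCH's handling of multiple non-contiguous chunks is omitted:

```python
import pulp

sim = [[0.9, 0.1, 0.0],   # toy similarity between chunks of sentence 1 (rows)
       [0.2, 0.8, 0.3],   # and chunks of sentence 2 (columns)
       [0.0, 0.4, 0.7]]

prob = pulp.LpProblem("chunk_alignment", pulp.LpMaximize)
x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
     for i in range(3) for j in range(3)}
prob += pulp.lpSum(sim[i][j] * x[i, j] for i, j in x)
for i in range(3):                       # each source chunk aligns <= once
    prob += pulp.lpSum(x[i, j] for j in range(3)) <= 1
for j in range(3):                       # each target chunk aligns <= once
    prob += pulp.lpSum(x[i, j] for i in range(3)) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([(i, j) for (i, j), var in x.items() if var.value() == 1])
```
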
Compression and the origins of Zipf's law for word frequencies | Here we sketch a new derivation of Zipf's law for word frequencies based on
optimal coding. The structure of the derivation is reminiscent of Mandelbrot's
random typing model but it has multiple advantages over random typing: (1) it
starts from realistic cognitive pressures, (2) it does not require fine tuning
of parameters, and (3) it sheds light on the origins of other statistical laws
of language and thus can lead to a compact theory of linguistic laws. Our
findings suggest that the recurrence of Zipf's law in human languages could
originate from pressure for easy and fast communication.
| 2,016 | Computation and Language |
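
The gist of a compression-based route to Zipf's law can be stated in a few lines. The following is a schematic reconstruction of this style of argument, not the paper's actual derivation: assume optimal coding assigns the word of rank $i$ a code of length $l_i \approx c \log i$, and minimize the expected code length at fixed entropy.

```latex
\min_{p}\ \sum_i p_i\, l_i
\quad \text{s.t.} \quad -\sum_i p_i \log p_i = H
\;\Longrightarrow\;
p_i \propto e^{-\lambda l_i} = e^{-\lambda c \log i} = i^{-\lambda c}
```

That is, the cost-minimizing distribution is a Zipf-like power law $p_i \propto i^{-\alpha}$ with exponent $\alpha = \lambda c$.
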
Modeling Rich Contexts for Sentiment Classification with LSTM | Sentiment analysis on social media data such as tweets and weibo has become a
very important and challenging task. Due to the intrinsic properties of such
data -- tweets are short, noisy, and of divergent topics -- sentiment
classification on these data requires modeling various contexts, such as the
retweet/reply history of a tweet and the social context of authors and their
relationships. While few prior studies have approached the issue of modeling
contexts in tweets, this paper proposes to use a hierarchical LSTM to model rich
contexts in tweets, particularly long-range contexts. Experimental results show
that contexts can help us to perform sentiment classification remarkably
better.
| 2,016 | Computation and Language |
The IBM Speaker Recognition System: Recent Advances and Error Analysis | We present the recent advances along with an error analysis of the IBM
speaker recognition system for conversational speech. Some of the key
advancements that contribute to our system include: a nearest-neighbor
discriminant analysis (NDA) approach (as opposed to LDA) for intersession
variability compensation in the i-vector space, the application of speaker and
channel-adapted features derived from an automatic speech recognition (ASR)
system for speaker recognition, and the use of a DNN acoustic model with a very
large number of output units (~10k senones) to compute the frame-level soft
alignments required in the i-vector estimation process. We evaluate these
techniques on the NIST 2010 SRE extended core conditions (C1-C9), as well as
the 10sec-10sec condition. To our knowledge, results achieved by our system
represent the best performances published to date on these conditions. For
example, on the extended tel-tel condition (C5) the system achieves an EER of
0.59%. To garner further understanding of the remaining errors (on C5), we
examine the recordings associated with the low scoring target trials, where
various issues are identified for the problematic recordings/trials.
Interestingly, it is observed that correcting the pathological recordings not
only improves the scores for the target trials but also for the nontarget
trials.
| 2,016 | Computation and Language |
Stance and Sentiment in Tweets | We can often detect from a person's utterances whether he/she is in favor of
or against a given target entity -- their stance towards the target. However, a
person may express the same stance towards a target by using negative or
positive language. Here for the first time we present a dataset of
tweet--target pairs annotated for both stance and sentiment. The targets may or
may not be referred to in the tweets, and they may or may not be the target of
opinion in the tweets. Partitions of this dataset were used as training and
test sets in a SemEval-2016 shared task competition. We propose a simple stance
detection system that outperforms submissions from all 19 teams that
participated in the shared task. Additionally, access to both stance and
sentiment annotations allows us to explore several research questions. We show
that while knowing the sentiment expressed by a tweet is beneficial for stance
classification, it alone is not sufficient. Finally, we use additional
unlabeled data through distant supervision techniques and word embeddings to
further improve stance classification.
| 2,016 | Computation and Language |
Improving Automated Patent Claim Parsing: Dataset, System, and
Experiments | Off-the-shelf natural language processing software performs poorly when
parsing patent claims owing to their use of irregular language relative to the
corpora built from news articles and the web typically utilized to train this
software. Stopping short of the extensive and expensive process of accumulating
a large enough dataset to completely retrain parsers for patent claims, a
method of adapting existing natural language processing software towards patent
claims via forced part-of-speech tag correction is proposed. An Amazon
Mechanical Turk collection campaign organized to generate a public corpus to
train such an improved claim parsing system is discussed, identifying lessons
learned during the campaign that can be of use in future NLP dataset collection
campaigns with AMT. Experiments utilizing this corpus and other patent claim
sets measure the parsing performance improvement garnered via the claim parsing
system. Finally, the utility of the improved claim parsing system within other
patent processing applications is demonstrated via experiments showing improved
automated patent subject classification when the new claim parsing system is
utilized to generate the features.
| 2,016 | Computation and Language |
Detecting Context Dependence in Exercise Item Candidates Selected from
Corpora | We explore the factors influencing the dependence of single sentences on
their larger textual context in order to automatically identify, from corpora,
candidate sentences for language learning exercises that are presentable in
isolation. An in-depth investigation of this question has not previously been
carried out. Understanding this aspect can contribute to a more efficient
selection of candidate sentences which, besides reducing the time required for
item writing, can also ensure a higher degree of variability and authenticity.
We present a set of relevant aspects collected based on the qualitative
analysis of a smaller set of context-dependent corpus example sentences.
Furthermore, we implemented a rule-based algorithm using these criteria which
achieved an average precision of 0.76 for the identification of different
issues related to context dependence. The method has also been evaluated
empirically where 80% of the sentences in which our system did not detect
context-dependent elements were also considered context-independent by human
raters.
| 2,016 | Computation and Language |
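
Rule-based detection of context dependence, as in the system above, can start from surface cues such as sentence-initial pronouns and discourse connectives. The cue lists below are illustrative; the paper's actual criteria are richer:

```python
import re

# Illustrative cue lists; the actual rule set in the paper is richer.
PRONOUNS = r"\b(he|she|it|they|this|that|these|those)\b"
CONNECTIVES = ("however", "therefore", "moreover", "nevertheless",
               "on the other hand", "in addition")

def context_dependent(sentence):
    """Return the list of context-dependence cues found in the sentence."""
    s = sentence.strip().lower()
    reasons = []
    if any(s.startswith(c) for c in CONNECTIVES):
        reasons.append("sentence-initial connective")
    if re.match(PRONOUNS, s):
        reasons.append("sentence-initial pronoun")
    return reasons

print(context_dependent("However, this was not the case."))
print(context_dependent("The cat sat on the mat."))
```
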
Mixing Dirichlet Topic Models and Word Embeddings to Make lda2vec | Distributed dense word vectors have been shown to be effective at capturing
token-level semantic and syntactic regularities in language, while topic models
can form interpretable representations over documents. In this work, we
describe lda2vec, a model that learns dense word vectors jointly with
Dirichlet-distributed latent document-level mixtures of topic vectors. In
contrast to continuous dense document representations, this formulation
produces sparse, interpretable document mixtures through a non-negative simplex
constraint. Our method is simple to incorporate into existing automatic
differentiation frameworks and allows for unsupervised document representations
geared for use by scientists while simultaneously learning word vectors and the
linear relationships between them.
| 2,016 | Computation and Language |
Adobe-MIT submission to the DSTC 4 Spoken Language Understanding pilot
task | The Dialog State Tracking Challenge 4 (DSTC 4) proposes several pilot tasks.
In this paper, we focus on the spoken language understanding pilot task, which
consists of tagging a given utterance with speech acts and semantic slots. We
compare different classifiers: the best system obtains 0.52 and 0.67 F1-scores
on the test set for speech act recognition for the tourist and the guide
respectively, and 0.52 F1-score for semantic tagging for both the guide and the
tourist.
| 2,016 | Computation and Language |
Robust Dialog State Tracking for Large Ontologies | The Dialog State Tracking Challenge 4 (DSTC 4) differentiates itself from the
previous three editions as follows: the number of slot-value pairs present in
the ontology is much larger, no spoken language understanding output is given,
and utterances are labeled at the subdialog level. This paper describes a novel
dialog state tracking method designed to work robustly under these conditions,
using elaborate string matching, coreference resolution tailored for dialogs
and a few other improvements. The method can correctly identify many values
that are not explicitly present in the utterance. On the final evaluation, our
method came in first among 7 competing teams and 24 entries. The F1-score
achieved by our method was 9 and 7 percentage points higher than that of the
runner-up for the utterance-level evaluation and for the subdialog-level
evaluation, respectively.
| 2,016 | Computation and Language |
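
The "elaborate string matching" mentioned above can be approximated by fuzzy-matching ontology values against utterance n-grams. A bare-bones sketch using difflib, with a toy ontology and threshold:

```python
import difflib

ONTOLOGY = {"FOOD": ["chicken rice", "chilli crab", "laksa"],
            "PLACE": ["marina bay", "chinatown", "sentosa"]}

def track_slots(utterance, cutoff=0.8):
    """Fuzzy-match every ontology value against utterance n-grams."""
    tokens = utterance.lower().split()
    ngrams = [" ".join(tokens[i:j]) for i in range(len(tokens))
              for j in range(i + 1, min(i + 4, len(tokens)) + 1)]
    state = {}
    for slot, values in ONTOLOGY.items():
        for value in values:
            if difflib.get_close_matches(value, ngrams, n=1, cutoff=cutoff):
                state[slot] = value
    return state

print(track_slots("we had chili crab near the marina bay area"))
# {'FOOD': 'chilli crab', 'PLACE': 'marina bay'} despite the spelling variant
```
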
Neural Recovery Machine for Chinese Dropped Pronoun | Dropped pronouns (DPs) are ubiquitous in pro-drop languages such as Chinese
and Japanese. Previous work mainly focused on painstakingly engineering
empirical features for DP recovery. In this paper, we propose a neural
recovery machine (NRM) to model and recover DPs in Chinese, so as to avoid
the non-trivial feature engineering process. The experimental results show that
the proposed NRM significantly outperforms state-of-the-art approaches on
two heterogeneous datasets. Further experimental results on Chinese zero
pronoun (ZP) resolution show that the performance of ZP resolution can also be
improved by recovering the ZPs to DPs.
| 2,019 | Computation and Language |
On Improving Informativity and Grammaticality for Multi-Sentence
Compression | Multi-Sentence Compression (MSC) is of great value to many real-world
applications, such as guided microblog summarization, opinion summarization and
newswire summarization. Recently, word graph-based approaches have been
proposed and become popular in MSC. Their key assumption is that redundancy
among a set of related sentences provides a reliable way to generate
informative and grammatical sentences. In this paper, we propose an effective
approach to enhance the word graph-based MSC and tackle the issue that most of
the state-of-the-art MSC approaches are confronted with: i.e., improving both
informativity and grammaticality at the same time. Our approach consists of
three main components: (1) a merging method based on Multiword Expressions
(MWE); (2) a mapping strategy based on synonymy between words; (3) a re-ranking
step to identify the best compression candidates generated using a POS-based
language model (POS-LM). We demonstrate the effectiveness of this novel
approach using a dataset made of clusters of English newswire sentences. The
observed improvements on informativity and grammaticality of the generated
compressions show that our approach is superior to state-of-the-art MSC
methods.
| 2,016 | Computation and Language |
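
Word-graph MSC, the base approach this paper enhances, builds a graph over the words of related sentences and reads compressions off cheap start-to-end paths. A toy sketch with networkx; the edge weighting is illustrative, and the paper's MWE merging, synonym mapping, and POS-LM re-ranking are not shown:

```python
import networkx as nx

sentences = [["the", "president", "visited", "paris"],
             ["the", "president", "arrived", "in", "paris"]]

G = nx.DiGraph()
for sent in sentences:
    path = ["<s>"] + sent + ["</s>"]
    for a, b in zip(path, path[1:]):
        # Edges shared by more sentences get cheaper, as in word-graph MSC.
        w = G[a][b]["weight"] - 1 if G.has_edge(a, b) else 2
        G.add_edge(a, b, weight=max(w, 1))

compression = nx.shortest_path(G, "<s>", "</s>", weight="weight")
print(" ".join(compression[1:-1]))  # "the president visited paris"
```
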
A corpus of preposition supersenses in English web reviews | We present the first corpus annotated with preposition supersenses,
unlexicalized categories for semantic functions that can be marked by English
prepositions (Schneider et al., 2015). That scheme improves upon its
predecessors to better facilitate comprehensive manual annotation. Moreover,
unlike the previous schemes, the preposition supersenses are organized
hierarchically. Our data will be publicly released on the web upon publication.
| 2,016 | Computation and Language |
Problems With Evaluation of Word Embeddings Using Word Similarity Tasks | Lacking standardized extrinsic evaluation methods for vector representations
of words, the NLP community has relied heavily on word similarity tasks as a
proxy for intrinsic evaluation of word vectors. Word similarity evaluation,
which correlates the distance between vectors with human judgments of semantic
similarity, is attractive because it is computationally inexpensive and fast.
In this paper we present several problems associated with the evaluation of
word vectors on word similarity datasets, and summarize existing solutions. Our
study suggests that the use of word similarity tasks for evaluation of word
vectors is not sustainable and calls for further research on evaluation
methods.
| 2,016 | Computation and Language |
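
The evaluation protocol being critiqued above is easy to state concretely: score word pairs by cosine similarity and by human judgment, then report Spearman's rho. A minimal sketch with toy vectors and judgments:

```python
import numpy as np
from scipy.stats import spearmanr

vectors = {"cat": np.array([0.9, 0.1]), "dog": np.array([0.8, 0.2]),
           "car": np.array([0.1, 0.9]), "truck": np.array([0.2, 0.8])}
# Toy human similarity judgments on a 0-10 scale.
pairs = [("cat", "dog", 8.5), ("car", "truck", 8.0),
         ("cat", "car", 1.5), ("dog", "truck", 2.0)]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

model_scores = [cosine(vectors[a], vectors[b]) for a, b, _ in pairs]
human_scores = [h for _, _, h in pairs]
rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman's rho: {rho:.2f}")
```
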
The Controlled Natural Language of Randall Munroe's Thing Explainer | It is rare that texts or entire books written in a Controlled Natural
Language (CNL) become very popular, but exactly this has happened with a book
published last year. Randall Munroe's Thing Explainer uses only
the 1,000 most frequently used words of the English language together with drawn
pictures to explain complicated things such as nuclear reactors, jet engines,
the solar system, and dishwashers. This restricted language is a very
interesting new case for the CNL community. I describe here its place in the
context of existing approaches to Controlled Natural Languages, and I provide a
first analysis from a scientific perspective, covering the word production
rules and word distributions.
| 2,016 | Computation and Language |
GLEU Without Tuning | The GLEU metric was proposed for evaluating grammatical error corrections
using n-gram overlap with a set of reference sentences, as opposed to
precision/recall of specific annotated errors (Napoles et al., 2015). This
paper describes improvements made to the GLEU metric that address problems that
arise when using an increasing number of reference sets. Unlike the originally
presented metric, the modified metric does not require tuning. We recommend
that this version be used instead of the original version.
| 2,016 | Computation and Language |
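
GLEU's core quantity is n-gram overlap between a candidate correction and reference sentences. The sketch below computes a simplified single-reference overlap score to illustrate the idea; it is not the official GLEU implementation and omits the source-penalty term:

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_gleu(candidate, reference, max_n=4):
    """Simplified n-gram precision against one reference (not official GLEU)."""
    score = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())
        score += overlap / max(sum(cand.values()), 1)
    return score / max_n

cand = "the cat sat on the mat".split()
ref = "the cat sits on the mat".split()
print(f"{simple_gleu(cand, ref):.3f}")
```
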
Grammatical Case Based IS-A Relation Extraction with Boosting for Polish | Pattern-based methods of IS-A relation extraction rely heavily on so-called
Hearst patterns. These are ways of expressing instance enumerations of a class
in natural language. While these lexico-syntactic patterns prove quite useful,
they may not capture all taxonomical relations expressed in text. Therefore, in
this paper we describe a novel method of IS-A relation extraction from
patterns, which uses morpho-syntactic annotations along with the grammatical case
of noun phrases that constitute entities participating in an IS-A relation. We
also describe a method for increasing the number of extracted relations that we
call pseudo-subclass boosting which has potential application in any
pattern-based relation extraction method. Experiments were conducted on a
corpus of about 0.5 billion web documents in the Polish language.
| 2,016 | Computation and Language |
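
Hearst patterns, which the abstract above builds on, are easy to illustrate with a regular expression for the classic "X such as Y" template. This English toy example is only illustrative; the paper itself relies on Polish morpho-syntactic annotations and grammatical case, which plain regexes cannot capture:

```python
import re

# Classic Hearst pattern: "<hypernym> such as <hyponym>(, <hyponym>)* and <hyponym>"
PATTERN = re.compile(r"(\w+) such as ((?:\w+(?:, | and | or )?)+)")

text = "He studied languages such as Polish, Czech and Slovak."
for hypernym, hyponyms in PATTERN.findall(text):
    for hyponym in re.split(r", | and | or ", hyponyms):
        print(f"IS-A({hyponym}, {hypernym})")
# IS-A(Polish, languages), IS-A(Czech, languages), IS-A(Slovak, languages)
```
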
The Yahoo Query Treebank, V. 1.0 | A description and annotation guidelines for the Yahoo Webscope release of
Query Treebank, Version 1.0, May 2016.
| 2,016 | Computation and Language |
Different approaches for identifying important concepts in probabilistic
biomedical text summarization | Automatic text summarization tools help users in the biomedical domain acquire
their intended information from various textual resources more efficiently.
Some biomedical text summarization systems base their
sentence selection approach on the frequency of concepts extracted from the
input text. However, it seems that exploring other measures rather than the
frequency for identifying the valuable content of the input document, and
considering the correlations existing between concepts may be more useful for
this type of summarization. In this paper, we describe a Bayesian summarizer
for biomedical text documents. The Bayesian summarizer initially maps the input
text to the Unified Medical Language System (UMLS) concepts, then it selects
the important ones to be used as classification features. We introduce
different feature selection approaches to identify the most important concepts
of the text and to select the most informative content according to the
distribution of these concepts. We show that with the use of an appropriate
feature selection approach, the Bayesian biomedical summarizer can improve the
performance of summarization. We perform extensive evaluations on a corpus of
scientific papers in the biomedical domain. The results show that the Bayesian
summarizer outperforms the biomedical summarizers that rely on the frequency of
concepts, the domain-independent and baseline methods based on the
Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics. Moreover,
the results suggest that using the meaningfulness measure and considering the
correlations of concepts in the feature selection step lead to a significant
increase in the performance of summarization.
| 2,017 | Computation and Language |
Coverage Embedding Models for Neural Machine Translation | In this paper, we enhance the attention-based neural machine translation
(NMT) by adding explicit coverage embedding models to alleviate issues of
repeating and dropping translations in NMT. For each source word, our model
starts with a full coverage embedding vector to track the coverage status, and
then keeps updating it with neural networks as the translation proceeds.
Experiments on the large-scale Chinese-to-English task show that our enhanced
model improves the translation quality significantly on various test sets over
the strong large vocabulary NMT system.
| 2,016 | Computation and Language |
Vocabulary Manipulation for Neural Machine Translation | In order to capture rich language phenomena, neural machine translation
models have to use a large vocabulary size, which requires high computing time
and large memory usage. In this paper, we alleviate this issue by introducing a
sentence-level or batch-level vocabulary, which is only a very small subset of
the full output vocabulary. For each sentence or batch, we only predict the
target words in its sentence-level or batch-level vocabulary. Thus, we reduce
both the computing time and the memory usage. Our method simply takes into
account the translation options of each word or phrase in the source sentence,
and picks a very small target vocabulary for each sentence based on a
word-to-word translation model or a bilingual phrase library learned from a
traditional machine translation model. Experimental results on the large-scale
English-to-French task show that our method achieves better translation
performance by 1 BLEU point over the large vocabulary neural machine
translation system of Jean et al. (2015).
| 2,016 | Computation and Language |
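
The sentence-level vocabulary idea above can be sketched with a word-to-word translation table: the decoder softmax is restricted to the union of candidate translations of the source words plus a short list of frequent target words. The table and sizes below are toy assumptions:

```python
# Toy word-to-word translation options (source -> candidate target words).
T_TABLE = {"das": ["the", "that"], "haus": ["house", "home"],
           "ist": ["is"], "klein": ["small", "little"]}
FREQUENT_TARGETS = ["the", "a", "is", ".", ","]  # always-included short list

def sentence_vocabulary(source_tokens, top_k=2):
    """Small per-sentence target vocabulary for the decoder softmax."""
    vocab = set(FREQUENT_TARGETS)
    for tok in source_tokens:
        vocab.update(T_TABLE.get(tok, [])[:top_k])
    return sorted(vocab)

src = "das haus ist klein".split()
print(sentence_vocabulary(src))
# The decoder softmax is then computed only over this small set,
# instead of the full target vocabulary.
```
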
Machine Comprehension Based on Learning to Rank | Machine comprehension plays an essential role in NLP and has been widely
explored with datasets like MCTest. However, this dataset is too simple and too
small for learning true reasoning abilities. Hermann et al. (2015)
therefore released a large-scale news article dataset and proposed a deep LSTM
reader system for machine comprehension. However, the training process is
expensive. We therefore try a feature-engineered approach with semantics on the
new dataset to see how traditional machine learning techniques and semantics can
help with machine comprehension. Meanwhile, our proposed L2R reader system
achieves good performance efficiently and with less training data.
| 2,016 | Computation and Language |
Real-Time Web Scale Event Summarization Using Sequential Decision Making | We present a system based on sequential decision making for the online
summarization of massive document streams, such as those found on the web.
Given an event of interest (e.g. "Boston marathon bombing"), our system is able
to filter the stream for relevance and produce a series of short text updates
describing the event as it unfolds over time. Unlike previous work, our
approach is able to jointly model the relevance, comprehensiveness, novelty,
and timeliness required by time-sensitive queries. We demonstrate a 28.3%
improvement in summary F1 and a 43.8% improvement in time-sensitive F1 metrics.
| 2,016 | Computation and Language |
Polyglot Neural Language Models: A Case Study in Cross-Lingual Phonetic
Representation Learning | We introduce polyglot language models, recurrent neural network models
trained to predict symbol sequences in many different languages using shared
representations of symbols and conditioning on typological information about
the language to be predicted. We apply these to the problem of modeling phone
sequences---a domain in which universal symbol inventories and
cross-linguistically shared feature representations are a natural fit.
Intrinsic evaluation on held-out perplexity, qualitative analysis of the
learned representations, and extrinsic evaluation in two downstream
applications that make use of phonetic features show (i) that polyglot models
better generalize to held-out data than comparable monolingual models and (ii)
that polyglot phonetic feature representations are of higher quality than those
learned monolingually.
| 2,016 | Computation and Language |
Noisy Parallel Approximate Decoding for Conditional Recurrent Language
Model | Recent advances in conditional recurrent language modelling have mainly
focused on network architectures (e.g., attention mechanism), learning
algorithms (e.g., scheduled sampling and sequence-level training) and novel
applications (e.g., image/video description generation, speech recognition,
etc.). On the other hand, we notice that decoding algorithms/strategies have not
been investigated as much, and it has become standard to use greedy or beam
search. In this paper, we propose a novel decoding strategy motivated by an
earlier observation that nonlinear hidden layers of a deep neural network
stretch the data manifold. The proposed strategy is embarrassingly
parallelizable without any communication overhead, while improving an existing
decoding algorithm. We extensively evaluate it with attention-based neural
machine translation on the task of En->Cz translation.
| 2,016 | Computation and Language |
Learning the Curriculum with Bayesian Optimization for Task-Specific
Word Representation Learning | We use Bayesian optimization to learn curricula for word representation
learning, optimizing performance on downstream tasks that depend on the learned
representations as features. The curricula are modeled by a linear ranking
function which is the scalar product of a learned weight vector and an
engineered feature vector that characterizes the different aspects of the
complexity of each instance in the training corpus. We show that learning the
curriculum improves performance on a variety of downstream tasks over random
orders and in comparison to the natural corpus order.
| 2,016 | Computation and Language |
Joint Embeddings of Hierarchical Categories and Entities | Due to the lack of structured knowledge applied in learning distributed
representation of categories, existing work cannot incorporate category
hierarchies into entity information. We propose a framework that embeds
entities and categories into a semantic space by integrating structured
knowledge and taxonomy hierarchy from large knowledge bases. The framework
allows to compute meaningful semantic relatedness between entities and
categories. Compared with the previous state of the art, our framework can
handle both single-word concepts and multiple-word concepts with superior
performance in concept categorization and semantic relatedness.
| 2,016 | Computation and Language |
On the Convergent Properties of Word Embedding Methods | Do word embeddings converge to learn similar things over different
initializations? How repeatable are experiments with word embeddings? Are all
word embedding techniques equally reliable? In this paper we propose evaluating
methods for learning word representations by their consistency across
initializations. We propose a measure to quantify the similarity of the learned
word representations under this setting (where they are subject to different
random initializations). Our preliminary results illustrate that our metric not
only measures an intrinsic property of word embedding methods but also
correlates well with other evaluation metrics on downstream tasks. We believe
our method is useful for characterizing robustness -- an important property
to consider when developing new word embedding methods.
| 2,016 | Computation and Language |
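
Consistency across initializations, as proposed above, can be probed by comparing nearest-neighbor sets of each word across two embedding runs. A rough sketch in which random matrices stand in for two independently trained embedding tables; the paper's actual measure may differ:

```python
import numpy as np

def nearest_neighbors(emb, idx, k=5):
    """Indices of the k nearest neighbors of word idx (cosine similarity)."""
    v = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    order = np.argsort(-(v @ v[idx]))
    return set(order[order != idx][:k])       # exclude the word itself

def neighbor_overlap(emb_a, emb_b, k=5):
    """Mean Jaccard overlap of k-NN sets across two embedding runs."""
    scores = []
    for idx in range(len(emb_a)):
        na = nearest_neighbors(emb_a, idx, k)
        nb = nearest_neighbors(emb_b, idx, k)
        scores.append(len(na & nb) / len(na | nb))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
run1 = rng.normal(size=(100, 20))                 # stand-in for run 1
run2 = run1 + 0.05 * rng.normal(size=(100, 20))   # a slightly perturbed run 2
print(f"mean k-NN overlap: {neighbor_overlap(run1, run2):.2f}")
```
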
Which Learning Algorithms Can Generalize Identity-Based Rules to Novel
Inputs? | We propose a novel framework for the analysis of learning algorithms that
allows us to say when such algorithms can and cannot generalize certain
patterns from training data to test data. In particular we focus on situations
where the rule that must be learned concerns two components of a stimulus being
identical. We call such a basis for discrimination an identity-based rule.
Identity-based rules have proven to be difficult or impossible for certain
types of learning algorithms to acquire from limited datasets. This is in
contrast to human behaviour on similar tasks. Here we provide a framework for
rigorously establishing which learning algorithms will fail at generalizing
identity-based rules to novel stimuli. We use this framework to show that such
algorithms are unable to generalize identity-based rules to novel inputs unless
trained on virtually all possible inputs. We demonstrate these results
computationally with a multilayer feedforward neural network.
| 2,016 | Computation and Language |
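
The failure mode described above is easy to reproduce: train a feedforward network to decide whether two one-hot symbols are identical, hold some symbols out of training entirely, and test on pairs built from them. The setup below is a toy reconstruction, not the authors' exact experiment:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_symbols, train_symbols = 20, 15  # symbols 15..19 never appear in training

def make_pairs(symbols, n):
    """Random (a, b) pairs as concatenated one-hots; label 1 iff a == b."""
    X, y = [], []
    for _ in range(n):
        a = rng.choice(symbols)
        b = a if rng.random() < 0.5 else rng.choice(symbols)
        one_hot = np.zeros(2 * n_symbols)
        one_hot[a], one_hot[n_symbols + b] = 1, 1
        X.append(one_hot)
        y.append(int(a == b))
    return np.array(X), np.array(y)

X_tr, y_tr = make_pairs(np.arange(train_symbols), 2000)
X_te, y_te = make_pairs(np.arange(train_symbols, n_symbols), 400)

clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("seen symbols:  ", clf.score(*make_pairs(np.arange(train_symbols), 400)))
print("novel symbols: ", clf.score(X_te, y_te))  # typically near chance
```
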
A Corpus-based Toy Model for DisCoCat | The categorical compositional distributional (DisCoCat) model of meaning
rigorously connects distributional semantics and pregroup grammars, and has
found a variety of applications in computational linguistics. From a more
abstract standpoint, the DisCoCat paradigm predicates the construction of a
mapping from syntax to categorical semantics. In this work we present a
concrete construction of one such mapping, from a toy model of syntax for
corpora annotated with constituent structure trees, to categorical semantics
taking place in a category of free R-semimodules over an involutive commutative
semiring R.
| 2,016 | Computation and Language |
Towards Empathetic Human-Robot Interactions | Since the late 1990s when speech companies began providing their
customer-service software in the market, people have gotten used to speaking to
machines. As people interact more often with voice and gesture controlled
machines, they expect the machines to recognize different emotions, and
understand other high level communication features such as humor, sarcasm and
intention. In order to make such communication possible, the machines need an
empathy module in them which can extract emotions from human speech and
behavior and can decide the correct response of the robot. Although research on
empathetic robots is still in the early stage, we describe our approach using
signal processing techniques, sentiment analysis and machine learning
algorithms to make robots that can "understand" human emotion. We propose Zara
the Supergirl as a prototype system of empathetic robots. It is a software-based
virtual android, with an animated cartoon character to present itself on
the screen. She will get "smarter" and more empathetic through her deep
learning algorithms, and by gathering more data and learning from them. In this
paper, we present our work so far in the areas of deep learning of emotion and
sentiment recognition, as well as humor recognition. We hope to explore the
future direction of android development and how it can help improve people's
lives.
| 2,016 | Computation and Language |