| Titles (string, length 6–220) | Abstracts (string, length 37–3.26k) | Years (int64, 1.99k–2.02k) | Categories (stringclasses, 1 value) |
---|---|---|---|
Speech recognition for medical conversations | In this work we explored building automatic speech recognition models for
transcribing doctor-patient conversations. We collected a large-scale dataset
of clinical conversations ($14,000$ hr), designed the task to represent the
real-world scenario, and explored several alignment approaches to iteratively
improve data quality. We explored both CTC and LAS systems for building speech
recognition models. The LAS system was more resilient to noisy data, while CTC
required more data cleanup. A detailed analysis is provided for understanding
performance on clinical tasks. Our analysis showed that the speech recognition
models performed well on important medical utterances, while errors occurred in
casual conversation. Overall, we believe the resulting models can provide
reasonable quality in practice.
| 2018 | Computation and Language |
FusionNet: Fusing via Fully-Aware Attention with Application to Machine
Comprehension | This paper introduces a new neural structure called FusionNet, which extends
existing attention approaches from three perspectives. First, it puts forward a
novel concept of "history of word" to characterize attention information from
the lowest word-level embedding up to the highest semantic-level
representation. Second, it introduces an improved attention scoring function
that better utilizes the "history of word" concept. Third, it proposes a
fully-aware multi-level attention mechanism to capture the complete information
in one text (such as a question) and exploit it in its counterpart (such as
context or passage) layer by layer. We apply FusionNet to the Stanford Question
Answering Dataset (SQuAD), where it ranked first for both the single and
ensemble models on the official SQuAD leaderboard at the time of writing (Oct.
4th, 2017). Meanwhile, we verify the generalization of FusionNet on two
adversarial SQuAD datasets, where it sets a new state of the art on both: on
AddSent, FusionNet increases the best F1 score from 46.6% to 51.4%; on
AddOneSent, it boosts the best F1 score from 56.0% to 60.7%.
| 2018 | Computation and Language |
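As an illustration of the mechanism described above, here is a minimal PyTorch sketch of fully-aware attention: similarity is scored over the concatenated "history of word" vectors, while only a single target layer is fused. The symmetric ReLU-and-diagonal scoring form and all dimensions are illustrative assumptions, not the paper's exact specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullyAwareAttention(nn.Module):
    """Sketch: attention scores use the full 'history of word' (all layers
    concatenated), but only one chosen layer of B is mixed into A."""
    def __init__(self, history_dim, attn_dim):
        super().__init__()
        self.proj = nn.Linear(history_dim, attn_dim)
        self.diag = nn.Parameter(torch.ones(attn_dim))  # diagonal weight matrix

    def forward(self, how_a, how_b, values_b):
        # how_a: (B, La, H) history-of-word vectors for text A
        # how_b: (B, Lb, H) history-of-word vectors for text B
        # values_b: (B, Lb, D) the single layer of B to fuse into A
        ha = F.relu(self.proj(how_a))                       # (B, La, A)
        hb = F.relu(self.proj(how_b))                       # (B, Lb, A)
        scores = torch.einsum('bia,a,bja->bij', ha, self.diag, hb)
        weights = F.softmax(scores, dim=-1)                 # attend over B
        return weights @ values_b                           # (B, La, D)
```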
Non-Contextual Modeling of Sarcasm using a Neural Network Benchmark | One of the most crucial components of natural human-robot interaction is
artificial intuition and its influence on dialog systems. The intuitive
capability that humans have is undeniably extraordinary, and so remains one of
the greatest challenges for natural communicative dialogue between humans and
robots. In this paper, we introduce a novel probabilistic modeling framework of
identifying, classifying and learning features of sarcastic text via training a
neural network with human-informed sarcastic benchmarks. This is necessary for
establishing a comprehensive sentiment analysis schema that is sensitive to the
nuances of sarcasm-ridden text by being trained on linguistic cues. We show
that our model provides a good fit for this type of real-world informed data,
with the potential to be as accurate as, if not more accurate than,
alternatives. Though implementation and benchmarking are extensive tasks, the
same method we present can be extended to capture other forms of nuance in
communication, making for much more natural and engaging dialogue systems.
| 2017 | Computation and Language |
Event Representations with Tensor-based Compositions | Robust and flexible event representations are important to many core areas in
language understanding. Scripts were proposed early on as a way of representing
sequences of events for such understanding, and have recently attracted renewed
attention. However, obtaining effective representations for modeling
script-like event sequences is challenging. It requires representations that
can capture event-level and scenario-level semantics. We propose a new
tensor-based composition method for creating event representations. The method
captures more subtle semantic interactions between an event and its entities
and yields representations that are effective at multiple event-related tasks.
With the continuous representations, we also devise a simple schema generation
method which produces better schemas than a prior method based on discrete
representations. Our analysis shows that the tensors capture distinct usages of
a predicate even when there are only subtle differences in their surface
realizations.
| 2017 | Computation and Language |
Generating Thematic Chinese Poetry using Conditional Variational
Autoencoders with Hybrid Decoders | Computer poetry generation is our first step towards computer writing.
Writing must have a theme. Current approaches using sequence-to-sequence
models with attention often produce non-thematic poems. We present a novel
conditional variational autoencoder with a hybrid decoder that adds
deconvolutional neural networks to the standard recurrent neural networks to
fully learn topic information via latent variables. This approach significantly
improves the relevance of the generated poems by representing each line of the
poem not only in a context-sensitive manner but also in a holistic way that is
highly related to the given keyword and the learned topic. A proposed augmented
word2vec model further improves the rhythm and symmetry. Tests show that the
poems generated by our approach largely satisfy the regulated rules and
maintain consistent themes, and 73.42% of them receive an Overall score of no
less than 3 (the highest score is 5).
| 2020 | Computation and Language |
Evaluating Machine Translation Performance on Chinese Idioms with a
Blacklist Method | Idiom translation is a challenging problem in machine translation because the
meaning of idioms is non-compositional, and a literal (word-by-word)
translation is likely to be wrong. In this paper, we focus on evaluating the
quality of idiom translation of MT systems. We introduce a new evaluation
method based on an idiom-specific blacklist of literal translations, based on
the insight that the occurrence of any blacklisted words in the translation
output indicates a likely translation error. We introduce a dataset, CIBB
(Chinese Idioms Blacklists Bank), and perform an evaluation of a
state-of-the-art Chinese-English neural MT system. Our evaluation confirms that
a sizable number of idioms in our test set are mistranslated (46.1%), that
literal translation error is a common error type, and that our blacklist method
is effective at identifying literal translation errors.
| 2018 | Computation and Language |
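The blacklist method above reduces to a membership test: flag a hypothesis translation if it contains any blacklisted literal rendering of the source idiom. A minimal sketch; the idiom and blacklist entries below are hypothetical, not drawn from CIBB.

```python
def blacklist_errors(translations, blacklists):
    """Flag hypotheses containing blacklisted literal renderings.
    `translations` is a list of (source_idiom, hypothesis) pairs;
    `blacklists` maps each idiom to its blacklisted English words."""
    flagged = []
    for idiom, hypothesis in translations:
        tokens = set(hypothesis.lower().split())
        if tokens & {w.lower() for w in blacklists.get(idiom, [])}:
            flagged.append((idiom, hypothesis))
    return flagged

# Hypothetical example: an idiom that should not be rendered word-by-word.
hits = blacklist_errors(
    [("拉关系", "he pulled the relationship with his boss")],
    {"拉关系": ["pull", "pulled", "drag"]},
)
print(len(hits))  # 1 -> likely literal translation error
```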
Cross Temporal Recurrent Networks for Ranking Question Answer Pairs | Temporal gates play a significant role in modern recurrent-based neural
encoders, enabling fine-grained control over recursive compositional operations
over time. In recurrent models such as the long short-term memory (LSTM),
temporal gates control the amount of information retained or discarded over
time, not only playing an important role in influencing the learned
representations but also serving as a protection against vanishing gradients.
This paper explores the idea of learning temporal gates for sequence pairs
(question and answer), jointly influencing the learned representations in a
pairwise manner. In our approach, temporal gates are learned via 1D
convolutional layers and then cross-applied across question and answer for
joint learning. Empirically, we show that this conceptually simple
sharing of temporal gates can lead to competitive performance across multiple
benchmarks. Intuitively, what our network achieves can be interpreted as
learning representations of question and answer pairs that are aware of what
each other is remembering or forgetting, i.e., pairwise temporal gating. Via
extensive experiments, we show that our proposed model achieves
state-of-the-art performance on two community-based QA datasets and competitive
performance on one factoid-based QA dataset.
| 2017 | Computation and Language |
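A rough sketch of the pairwise temporal gating idea: each sequence's gates are produced by 1D convolutions and then applied to the other sequence. The time-pooling step and the exact gating form are assumptions made to keep the sketch short; the paper's architecture may differ.

```python
import torch
import torch.nn as nn

class CrossTemporalGates(nn.Module):
    """Sketch: 1D-convolutional gates, cross-applied between question and
    answer so each representation is modulated by its counterpart."""
    def __init__(self, dim, kernel=3):
        super().__init__()
        self.conv_q = nn.Conv1d(dim, dim, kernel, padding=kernel // 2)
        self.conv_a = nn.Conv1d(dim, dim, kernel, padding=kernel // 2)

    def forward(self, q, a):
        # q: (B, Lq, D) question states, a: (B, La, D) answer states
        gate_q = torch.sigmoid(self.conv_q(q.transpose(1, 2)))  # (B, D, Lq)
        gate_a = torch.sigmoid(self.conv_a(a.transpose(1, 2)))  # (B, D, La)
        # Pool each gate over time so it can modulate the other sequence.
        q_out = q * gate_a.mean(dim=2, keepdim=True).transpose(1, 2)
        a_out = a * gate_q.mean(dim=2, keepdim=True).transpose(1, 2)
        return q_out, a_out
```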
Visual and Textual Sentiment Analysis Using Deep Fusion Convolutional
Neural Networks | Sentiment analysis is attracting more and more attention and has become a
very hot research topic due to its potential applications in personalized
recommendation, opinion mining, etc. Most existing methods are based on either
textual or visual data alone and cannot achieve satisfactory results, as it is
very hard to extract sufficient information from a single modality. Inspired by
the observation that there is a strong semantic correlation between visual and
textual data on social media, we propose an end-to-end deep fusion
convolutional neural network to jointly learn textual and visual sentiment
representations from training examples. The two modalities are fused together
in a pooling layer and fed into fully-connected layers to predict the sentiment
polarity. We evaluate the proposed approach on two widely used datasets.
Results show that our method achieves promising results compared with the
state-of-the-art methods, clearly demonstrating its competitiveness.
| 2017 | Computation and Language |
Effective Strategies in Zero-Shot Neural Machine Translation | In this paper, we propose two strategies which can be applied to a
multilingual neural machine translation system in order to better tackle
zero-shot scenarios despite not having any parallel corpus. The experiments
show that they are effective in terms of both performance and computing
resources, especially in multilingual translation of unbalanced data under
real zero-resource conditions, where they alleviate the language bias problem.
| 2017 | Computation and Language |
Effective Use of Bidirectional Language Modeling for Transfer Learning
in Biomedical Named Entity Recognition | Biomedical named entity recognition (NER) is a fundamental task in text
mining of medical documents and has many applications. Deep learning based
approaches to this task have been gaining increasing attention in recent years
as their parameters can be learned end-to-end without the need for
hand-engineered features. However, these approaches rely on high-quality
labeled data, which is expensive to obtain. To address this issue, we
investigate how to use unlabeled text data to improve the performance of NER
models. Specifically, we train a bidirectional language model (BiLM) on
unlabeled data and transfer its weights to "pretrain" an NER model with the
same architecture as the BiLM, which results in a better parameter
initialization of the NER model. We evaluate our approach on four benchmark
datasets for biomedical NER and show that it leads to a substantial improvement
in the F1 scores compared with the state-of-the-art approaches. We also show
that BiLM weight transfer leads to faster model training and that the
pretrained model requires fewer training examples to reach a given F1 score.
| 2018 | Computation and Language |
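The transfer step described above amounts to copying matching parameters from the trained BiLM into an NER model that shares its architecture, before fine-tuning on labeled data. A minimal sketch, assuming both are PyTorch modules with overlapping parameter names; task-specific layers keep their random initialization.

```python
import torch

def transfer_bilm_weights(bilm, ner_model):
    """Copy every BiLM parameter whose name and shape match a parameter in
    the NER model; layers unique to the NER model (e.g. the tag projection)
    are left randomly initialized."""
    bilm_state = bilm.state_dict()
    ner_state = ner_model.state_dict()
    transferred = {
        name: tensor for name, tensor in bilm_state.items()
        if name in ner_state and ner_state[name].shape == tensor.shape
    }
    ner_state.update(transferred)
    ner_model.load_state_dict(ner_state)
    return sorted(transferred)  # names of the pretrained tensors
```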
10Sent: A Stable Sentiment Analysis Method Based on the Combination of
Off-The-Shelf Approaches | Sentiment analysis has become a very important tool for analysis of social
media data. There are several methods developed for this research field, many
of them working very differently from each other, covering distinct aspects of
the problem and disparate strategies. Despite the large number of existent
techniques, there is no single one which fits well in all cases or for all data
sources. Supervised approaches may be able to adapt to specific situations, but
they require manually labeled training data, which is cumbersome and expensive
to acquire, especially for a new application. In this context, we propose
to combine several very popular and effective state-of-the-practice sentiment
analysis methods, by means of an unsupervised bootstrapped strategy for
polarity classification. One of our main goals is to reduce the large
variability (lack of stability) of the unsupervised methods across different
domains (datasets). Our solution was thoroughly tested considering thirteen
different datasets in several domains such as opinions, comments, and social
media. The experimental results demonstrate that our combined method (aka,
10SENT) improves the effectiveness of the classification task, but more
importantly, it solves a key problem in the field. It is consistently among the
best methods in many data types, meaning that it can produce the best (or close
to best) results in almost all considered contexts, without any additional
costs (e.g., manual labeling). Our self-learning approach is also very
independent of the base methods, which means that it is highly extensible to
incorporate any new additional method that can be envisioned in the future.
Finally, we also investigate a transfer learning approach for sentiment
analysis as a means to gather additional (unsupervised) information for the
proposed approach and we show the potential of this technique to improve our
results.
| 2017 | Computation and Language |
Mastering the Dungeon: Grounded Language Learning by Mechanical Turker
Descent | Contrary to most natural language processing research, which makes use of
static datasets, humans learn language interactively, grounded in an
environment. In this work we propose an interactive learning procedure called
Mechanical Turker Descent (MTD) and use it to train agents to execute natural
language commands grounded in a fantasy text adventure game. In MTD, Turkers
compete to train better agents in the short term, and collaborate by sharing
their agents' skills in the long term. This results in a gamified, engaging
experience for the Turkers and a better quality teaching signal for the agents
compared to static datasets, as the Turkers naturally adapt the training data
to the agent's abilities.
| 2018 | Computation and Language |
Unsupervised Adaptation with Domain Separation Networks for Robust
Speech Recognition | Unsupervised domain adaptation of speech signals aims to adapt a
well-trained source-domain acoustic model to unlabeled data from a target
domain. This can be achieved by adversarial training of deep neural network
(DNN) acoustic models to learn an intermediate deep representation that is both
senone-discriminative and domain-invariant. Specifically, the DNN is trained to
jointly optimize the primary task of senone classification and the secondary
task of domain classification with adversarial objective functions. In this
work, instead of only focusing on learning a domain-invariant feature (i.e. the
shared component between domains), we also characterize the difference between
the source and target domain distributions by explicitly modeling the private
component of each domain through a private component extractor DNN. The private
component is trained to be orthogonal to the shared component and thus
implicitly increases the degree of domain-invariance of the shared component. A
reconstructor DNN is used to reconstruct the original speech feature from the
private and shared components as a regularization. This domain separation
framework is applied to the unsupervised environment adaptation task and
achieves an 11.08% relative WER reduction over gradient reversal layer
training, a representative adversarial training method, for automatic speech
recognition on the CHiME-3 dataset.
| 2017 | Computation and Language |
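The two auxiliary objectives described above can be written down compactly: an orthogonality (difference) penalty between the shared and private representations, plus a reconstruction term. The batch-level formulation below is a sketch consistent with domain separation networks, not the paper's exact code.

```python
import torch

def difference_loss(shared, private):
    """Orthogonality penalty ||H_s^T H_p||_F^2 between the shared and
    private components (rows are examples in a batch)."""
    shared = shared - shared.mean(dim=0)
    private = private - private.mean(dim=0)
    correlation = shared.t() @ private          # (D_shared, D_private)
    return (correlation ** 2).sum()

def reconstruction_loss(features, reconstructed):
    """Mean-squared reconstruction of the original speech feature from the
    concatenated shared + private components."""
    return ((features - reconstructed) ** 2).mean()
```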
Application of Natural Language Processing to Determine User
Satisfaction in Public Services | Research on customer satisfaction has increased substantially in recent
years. However, the relative importance of and relationships between different
determinants of satisfaction remain uncertain. Moreover, quantitative studies
to date tend to test for the significance of pre-determined factors thought to
have an influence, with no scalable means of identifying other causes of user
satisfaction. These gaps in knowledge make it difficult to use available
knowledge of user preferences for public service improvement. Meanwhile, digital
technology development has enabled new methods to collect user feedback, for
example through online forums where users can comment freely on their
experience. New tools are needed to analyze large volumes of such feedback. Use
of topic models is proposed as a feasible solution to aggregate open-ended user
opinions that can be easily deployed in the public sector. Generated insights
can contribute to a more inclusive decision-making process in public service
provision. This novel methodological approach is applied to a case of service
reviews of publicly-funded primary care practices in England. Findings from the
analysis of 145,000 reviews covering almost 7,700 primary care centers indicate
that the quality of interactions with staff and bureaucratic exigencies are the
key issues driving user satisfaction across England.
| 2017 | Computation and Language |
On the Automatic Generation of Medical Imaging Reports | Medical imaging is widely used in clinical practice for diagnosis and
treatment. Report-writing can be error-prone for inexperienced physicians, and
time-consuming and tedious for experienced physicians. To address these
issues, we study the automatic generation of medical imaging reports. This task
presents several challenges. First, a complete report contains multiple
heterogeneous forms of information, including findings and tags. Second,
abnormal regions in medical images are difficult to identify. Third, the
reports are typically long, containing multiple sentences. To cope with these
challenges, we (1) build a multi-task learning framework which jointly performs
the prediction of tags and the generation of paragraphs, (2) propose a
co-attention mechanism to localize regions containing abnormalities and
generate narrations for them, and (3) develop a hierarchical LSTM model to
generate long paragraphs. We demonstrate the effectiveness of the proposed
methods on two publicly available datasets.
| 2019 | Computation and Language |
Does Higher Order LSTM Have Better Accuracy for Segmenting and Labeling
Sequence Data? | Existing neural models usually predict the tag of the current token
independent of the neighboring tags. The popular LSTM-CRF model considers the
tag dependencies between every two consecutive tags. However, it is hard for
existing neural models to take longer distance dependencies of tags into
consideration. The scalability is mainly limited by the complex model
structures and the cost of dynamic programming during training. In our work, we
first design a new model called "high order LSTM" to predict multiple tags for
the current token which contains not only the current tag but also the previous
several tags. We call the number of tags in one prediction the "order". Then we
propose a new method called Multi-Order BiLSTM (MO-BiLSTM) which combines
low-order and high-order LSTMs together. MO-BiLSTM remains scalable to
high-order models thanks to a pruning technique. We evaluate MO-BiLSTM on
all-phrase chunking and NER datasets. Experimental results show that MO-BiLSTM
achieves the state-of-the-art result in chunking and highly competitive results
on two NER datasets.
| 2018 | Computation and Language |
Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes | Word embeddings use vectors to represent words such that the geometry between
vectors captures semantic relationships between the words. In this paper, we
develop a framework to demonstrate how the temporal dynamics of the embedding
can be leveraged to quantify changes in stereotypes and attitudes toward women
and ethnic minorities in the 20th and 21st centuries in the United States. We
integrate word embeddings trained on 100 years of text data with the U.S.
Census to show that changes in the embedding track closely with demographic and
occupation shifts over time. The embedding captures global social shifts --
e.g., the women's movement in the 1960s and Asian immigration into the U.S. --
and also illuminates how specific adjectives and occupations became more
closely associated with certain populations over time. Our framework for
temporal analysis of word embedding opens up a powerful new intersection
between machine learning and quantitative social science.
| 2018 | Computation and Language |
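One way such a framework quantifies bias, sketched below: compare each neutral word's distance to the average vector of two demographic word groups, embedding by embedding (e.g., decade by decade). The relative-norm formulation and the word lists are illustrative assumptions.

```python
import numpy as np

def relative_norm_distance(emb, neutral_words, group1, group2):
    """For each neutral word (e.g. an occupation), the difference between
    its distances to the two group centroids; negative values mean the
    word sits closer to group 1. `emb` maps word -> unit-length vector."""
    v1 = np.mean([emb[w] for w in group1], axis=0)
    v2 = np.mean([emb[w] for w in group2], axis=0)
    return {
        w: np.linalg.norm(emb[w] - v1) - np.linalg.norm(emb[w] - v2)
        for w in neutral_words
    }

# Hypothetical usage: track how an occupation's score moves across
# embeddings trained on successive decades of text.
# scores = relative_norm_distance(emb_1960, ["nurse", "engineer"],
#                                 ["she", "her"], ["he", "him"])
```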
Customized Nonlinear Bandits for Online Response Selection in Neural
Conversation Models | Dialog response selection is an important step towards natural response
generation in conversational agents. Existing work on neural conversational
models mainly focuses on offline supervised learning using a large set of
context-response pairs. In this paper, we focus on online learning of response
selection in retrieval-based dialog systems. We propose a contextual
multi-armed bandit model with a nonlinear reward function that uses distributed
representation of text for online response selection. A bidirectional LSTM is
used to produce the distributed representations of dialog context and
responses, which serve as the input to a contextual bandit. In learning the
bandit, we propose a customized Thompson sampling method that is applied to a
polynomial feature space in approximating the reward. Experimental results on
the Ubuntu Dialogue Corpus demonstrate significant performance gains of the
proposed method over conventional linear contextual bandits. Moreover, we
report encouraging response selection performance of the proposed neural bandit
model using the Recall@k metric for a small set of online training samples.
| 2017 | Computation and Language |
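A minimal sketch of the bandit idea: Thompson sampling with a Bayesian linear-regression posterior over a polynomial feature map, standing in for the paper's customized nonlinear reward model. The degree-2 lift, the prior, and the noise scale are all assumptions.

```python
import numpy as np

def lift(x):
    """Degree-2 polynomial features: the raw vector plus pairwise products."""
    return np.concatenate([x, np.outer(x, x)[np.triu_indices(len(x))]])

class PolyThompsonBandit:
    """Thompson sampling on lifted features; `dim` is the lifted size,
    i.e. len(lift(x)) for your context vectors."""
    def __init__(self, dim, noise=0.5):
        self.B = np.eye(dim)      # posterior precision
        self.f = np.zeros(dim)    # running sum of reward-weighted features
        self.noise = noise

    def choose(self, candidates):
        mu = np.linalg.solve(self.B, self.f)
        theta = np.random.multivariate_normal(
            mu, self.noise ** 2 * np.linalg.inv(self.B))
        return int(np.argmax([lift(c) @ theta for c in candidates]))

    def update(self, x, reward):
        z = lift(x)
        self.B += np.outer(z, z)
        self.f += reward * z
```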
Improving the Accuracy of Pre-trained Word Embeddings for Sentiment
Analysis | Sentiment analysis is one of the well-known tasks and fast-growing research
areas in natural language processing (NLP) and text classification. This
technique has become an essential part of a wide range of applications
including politics, business, advertising and marketing. There are various
techniques for sentiment analysis, but recently word embeddings methods have
been widely used in sentiment classification tasks. Word2Vec and GloVe are
currently among the most accurate and usable word embedding methods which can
convert words into meaningful vectors. However, these methods ignore sentiment
information of texts and need a huge corpus of texts for training and
generating exact vectors which are used as inputs to deep learning models. As a
result, because of the small size of some corpora, researchers often have to
use pre-trained word embeddings that were trained on another large text corpus,
such as Google News with about 100 billion words. The increasing accuracy of
pre-trained word embeddings has a great impact on sentiment analysis research.
In this paper we propose a novel method, Improved Word Vectors (IWV), which
increases the accuracy of pre-trained word embeddings in sentiment analysis.
Our method is based on Part-of-Speech (POS) tagging techniques, lexicon-based
approaches and Word2Vec/GloVe methods. We tested the accuracy of our method via
different deep learning models and sentiment datasets. Our experiment results
show that Improved Word Vectors (IWV) are very effective for sentiment
analysis.
| 2017 | Computation and Language |
Modelling Domain Relationships for Transfer Learning on Retrieval-based
Question Answering Systems in E-commerce | In this paper, we study transfer learning for the paraphrase identification
(PI) and natural language inference (NLI) problems, aiming to propose a general
framework which can effectively and efficiently adapt the shared knowledge
learned from a resource-rich source domain to a resource-poor target domain.
Specifically, since most existing transfer learning methods only focus on
learning a shared feature space across domains while ignoring the relationship
between the source and target domains, we propose to simultaneously learn
shared representations and domain relationships in a unified framework.
Furthermore, we propose an efficient and effective hybrid model by combining a
sentence encoding-based method and a sentence
interaction-based method as our base model. Extensive experiments on both
paraphrase identification and natural language inference demonstrate that our
base model is efficient and has promising performance compared to the competing
models, and our transfer learning method can help to significantly boost the
performance. Further analysis shows that the inter-domain and intra-domain
relationships captured by our model are insightful. Last but not least, we
deploy our transfer learning model for PI into our online chatbot system, which
can bring in significant improvements over our existing system. Finally, we
launch our new system on the chatbot platform Eva in our E-commerce site
AliExpress.
| 2017 | Computation and Language |
SPINE: SParse Interpretable Neural Embeddings | Prediction without justification has limited utility. Much of the success of
neural models can be attributed to their ability to learn rich, dense and
expressive representations. While these representations capture the underlying
complexity and latent trends in the data, they are far from being
interpretable. We propose a novel variant of denoising k-sparse autoencoders
that generates highly efficient and interpretable distributed word
representations (word embeddings), beginning with existing word representations
from state-of-the-art methods like GloVe and word2vec. Through large scale
human evaluation, we report that our resulting word embeddings are much more
interpretable than the original GloVe and word2vec embeddings. Moreover, our
embeddings outperform existing popular word embeddings on a diverse suite of
benchmark downstream tasks.
| 2017 | Computation and Language |
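The core of the denoising k-sparse autoencoder can be sketched in a few lines: corrupt the input vector, encode, keep only the top-k hidden activations, and decode. SPINE's additional sparsity and partial-activation penalties are omitted here, and the sizes and noise level are assumptions.

```python
import torch
import torch.nn as nn

class KSparseAutoencoder(nn.Module):
    """Sketch: denoising k-sparse autoencoder over pretrained word vectors;
    the masked hidden codes are the interpretable embeddings."""
    def __init__(self, in_dim=300, hidden=1000, k=150, noise=0.2):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.dec = nn.Linear(hidden, in_dim)
        self.k, self.noise = k, noise

    def forward(self, x):
        x_noisy = x + self.noise * torch.randn_like(x)   # denoising input
        h = torch.relu(self.enc(x_noisy))
        topk = torch.topk(h, self.k, dim=-1)
        mask = torch.zeros_like(h).scatter_(-1, topk.indices, 1.0)
        codes = h * mask                                 # k-sparse codes
        return self.dec(codes), codes
```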
Ethical Challenges in Data-Driven Dialogue Systems | The use of dialogue systems as a medium for human-machine interaction is an
increasingly prevalent paradigm. A growing number of dialogue systems use
conversation strategies that are learned from large datasets. There are
well-documented instances where interactions with these systems have resulted
in biased or even offensive conversations due to the data-driven training
process.
Here, we highlight potential ethical issues that arise in dialogue systems
research, including: implicit biases in data-driven systems, the rise of
adversarial examples, potential sources of privacy violations, safety concerns,
special considerations for reinforcement learning systems, and reproducibility
concerns. We also suggest areas stemming from these issues that deserve further
investigation. Through this initial survey, we hope to spur research leading to
robust, safe, and ethically sound dialogue systems.
| 2017 | Computation and Language |
An Exploration of Word Embedding Initialization in Deep-Learning Tasks | Word embeddings are the interface between the world of discrete units of text
processing and the continuous, differentiable world of neural networks. In this
work, we examine various random and pretrained initialization methods for
embeddings used in deep networks and their effect on the performance on four
NLP tasks with both recurrent and convolutional architectures. We confirm that
pretrained embeddings are a little better than random initialization,
especially considering the speed of learning. On the other hand, we do not see
any significant difference between various methods of random initialization, as
long as the variance is kept reasonably low. High-variance initialization
prevents the network from using the embedding space and forces it to use other
free parameters to accomplish the task. We support this hypothesis by observing
the performance in learning lexical relations and by the fact that the network
can learn to perform reasonably in its task even with fixed random embeddings.
| 2017 | Computation and Language |
Towards Accurate Deceptive Opinion Spam Detection based on Word
Order-preserving CNN | Nowadays, deep learning is widely used. In natural language processing, the
analysis of complex semantics is achievable thanks to its high degree of
flexibility. Deceptive opinion detection is an important application of deep
learning models, and the related mechanisms have received attention and study.
Online opinions are short and varied in type and content. In order to
effectively identify deceptive opinions, we need to comprehensively study the
characteristics of deceptive opinions and explore novel features beyond the
textual semantics and emotional polarity that have been widely used in text
analysis. A detection mechanism based on deep learning has better
self-adaptability and can effectively identify many kinds of deceptive
opinions. In this paper, we optimize a convolutional neural network model by
embedding word order characteristics in its convolution and pooling layers,
which makes the convolutional neural network more suitable for text
classification and deceptive opinion detection. TensorFlow-based experiments
demonstrate that the detection mechanism proposed in this paper achieves more
accurate deceptive opinion detection results.
| 2018 | Computation and Language |
Acronym Disambiguation: A Domain Independent Approach | Acronyms are omnipresent. They usually express information that is repetitive
and well known. But acronyms can also be ambiguous because there can be
multiple expansions for the same acronym. In this paper, we propose a general
system for acronym disambiguation that can work on any acronym given some
context information. We present methods for retrieving all the possible
expansions of an acronym from Wikipedia and AcronymsFinder.com. We propose to
use these expansions to collect all possible contexts in which these acronyms
are used and then score them using a paragraph embedding technique called
Doc2Vec. This method collectively led to an accuracy of 90.9% in selecting the
correct expansion for a given acronym, on a dataset we scraped from Wikipedia
with 707 distinct acronyms and 14,876 disambiguations.
| 2017 | Computation and Language |
Experiential, Distributional and Dependency-based Word Embeddings have
Complementary Roles in Decoding Brain Activity | We evaluate 8 different word embedding models on their usefulness for
predicting the neural activation patterns associated with concrete nouns. The
models we consider include an experiential model, based on crowd-sourced
association data, several popular neural and distributional models, and a model
that reflects the syntactic context of words (based on dependency parses). Our
goal is to assess the cognitive plausibility of these various embedding models,
and understand how we can further improve our methods for interpreting brain
imaging data.
We show that neural word embedding models exhibit superior performance on the
tasks we consider, beating the experiential word representation model. The
syntactically informed model gives the overall best performance when predicting
brain activation patterns from word embeddings, whereas the GloVe
distributional method gives the overall best performance when predicting in the
reverse direction (word vectors from brain images). Interestingly, however,
the error patterns of these different models are markedly different. This may
support the idea that the brain uses different systems for processing different
kinds of words. Moreover, we suggest that taking the relative strengths of
different embedding models into account will lead to better models of the brain
activity associated with words.
| 2017 | Computation and Language |
Generative Adversarial Network for Abstractive Text Summarization | In this paper, we propose an adversarial process for abstractive text
summarization, in which we simultaneously train a generative model G and a
discriminative model D. In particular, we build the generator G as an agent of
reinforcement learning, which takes the raw text as input and predicts the
abstractive summarization. We also build a discriminator which attempts to
distinguish the generated summary from the ground truth summary. Extensive
experiments demonstrate that our model achieves competitive ROUGE scores with
the state-of-the-art methods on CNN/Daily Mail dataset. Qualitatively, we show
that our model is able to generate more abstractive, readable and diverse
summaries.
| 2017 | Computation and Language |
Learning to Remember Translation History with a Continuous Cache | Existing neural machine translation (NMT) models generally translate
sentences in isolation, missing the opportunity to take advantage of
document-level information. In this work, we propose to augment NMT models with
a very light-weight cache-like memory network, which stores recent hidden
representations as translation history. The probability distribution over
generated words is updated online depending on the translation history
retrieved from the memory, endowing NMT models with the capability to
dynamically adapt over time. Experiments on multiple domains with different
topics and styles show the effectiveness of the proposed approach with
negligible impact on the computational cost.
| 2017 | Computation and Language |
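A sketch of the cache-like memory: store recent decoder states together with the words they produced, retrieve by attention over the stored keys, and interpolate the resulting cache distribution with the NMT softmax. The fixed interpolation weight here is a simplification; the paper derives it dynamically from the current state.

```python
import torch
import torch.nn.functional as F

class TranslationCache:
    """Sketch: bounded store of (hidden state, word id) pairs queried by
    dot-product attention at each decoding step."""
    def __init__(self, size=200):
        self.size, self.keys, self.words = size, [], []

    def write(self, hidden, word_id):
        self.keys.append(hidden.detach())
        self.words.append(word_id)
        self.keys, self.words = self.keys[-self.size:], self.words[-self.size:]

    def read(self, query, vocab_size, model_probs, lam=0.2):
        if not self.keys:
            return model_probs
        keys = torch.stack(self.keys)                  # (N, D)
        attn = F.softmax(keys @ query, dim=0)          # (N,)
        cache_probs = torch.zeros(vocab_size)
        cache_probs.index_add_(0, torch.tensor(self.words), attn)
        return (1 - lam) * model_probs + lam * cache_probs
```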
Improved Neural Text Attribute Transfer with Non-parallel Data | Text attribute transfer using non-parallel data requires methods that can
perform disentanglement of content and linguistic attributes. In this work, we
propose multiple improvements over the existing approaches that enable the
encoder-decoder framework to cope with the text attribute transfer from
non-parallel data. We perform experiments on the sentiment transfer task using
two datasets. For both datasets, our proposed method outperforms a strong
baseline in two of the three employed evaluation metrics.
| 2017 | Computation and Language |
Machine Translation using Semantic Web Technologies: A Survey | A large number of machine translation approaches have recently been developed
to facilitate the fluid migration of content across languages. However, the
literature suggests that many obstacles must still be dealt with to achieve
better automatic translations. One of these obstacles is lexical and syntactic
ambiguity. A promising way of overcoming this problem is using Semantic Web
technologies. This article presents the results of a systematic review of
machine translation approaches that rely on Semantic Web technologies for
translating texts. Overall, our survey suggests that while Semantic Web
technologies can enhance the quality of machine translation outputs for various
problems, the combination of both is still in its infancy.
| 2018 | Computation and Language |
Modeling Past and Future for Neural Machine Translation | Existing neural machine translation systems do not explicitly model what has
been translated and what has not during the decoding phase. To address this
problem, we propose a novel mechanism that separates the source information
into two parts: translated Past contents and untranslated Future contents,
which are modeled by two additional recurrent layers. The Past and Future
contents are fed to both the attention model and the decoder states, which
offers NMT systems the knowledge of translated and untranslated contents.
Experimental results show that the proposed approach significantly improves
translation performance in Chinese-English, German-English and English-German
translation tasks. Specifically, the proposed model outperforms the
conventional coverage model in both translation quality and alignment error
rate.
| 2017 | Computation and Language |
Neural Text Generation: A Practical Guide | Deep learning methods have recently achieved great empirical success on
machine translation, dialogue response generation, summarization, and other
text generation tasks. At a high level, the technique has been to train
end-to-end neural network models consisting of an encoder model to produce a
hidden representation of the source text, followed by a decoder model to
generate the target. While such models have significantly fewer pieces than
earlier systems, significant tuning is still required to achieve good
performance. For text generation models in particular, the decoder can behave
in undesired ways, such as by generating truncated or repetitive outputs,
outputting bland and generic responses, or in some cases producing
ungrammatical gibberish. This paper is intended as a practical guide for
resolving such undesired behavior in text generation models, with the aim of
helping enable real-world applications.
| 2017 | Computation and Language |
Code Completion with Neural Attention and Pointer Networks | Intelligent code completion has become an essential research task to
accelerate modern software development. To facilitate effective code completion
for dynamically-typed programming languages, we apply neural language models by
learning from large codebases, and develop a tailored attention mechanism for
code completion. However, standard neural language models, even with an
attention mechanism, cannot correctly predict out-of-vocabulary (OoV) words,
which restricts code completion performance. In this paper, inspired by the
prevalence of locally repeated terms in program source code, and the recently
proposed pointer copy mechanism, we propose a pointer mixture network for
better predicting OoV words in code completion. Based on the context, the
pointer mixture network learns to either generate a within-vocabulary word
through an RNN component, or regenerate an OoV word from local context through
a pointer component. Experiments on two benchmarked datasets demonstrate the
effectiveness of our attention mechanism and pointer mixture network on the
code completion task.
| 2019 | Computation and Language |
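The output layer of a pointer mixture network can be sketched as a learned blend of a generation distribution over the vocabulary and a copy distribution over the local context. One simplification to note: this sketch folds copied context tokens back into the vocabulary distribution, whereas the paper predicts an OoV token by its position in the context.

```python
import torch
import torch.nn.functional as F

def pointer_mixture(rnn_logits, attn_weights, context_ids, switch_logit):
    """Blend generate and copy distributions with a learned switch.
    rnn_logits: (V,) scores over the vocabulary
    attn_weights: (L,) attention over the last L context tokens
    context_ids: (L,) vocabulary ids of those tokens
    switch_logit: scalar controlling generate-vs-copy."""
    p_gen = torch.sigmoid(switch_logit)            # probability of generating
    vocab_probs = F.softmax(rnn_logits, dim=-1)
    copy_probs = torch.zeros_like(vocab_probs)
    copy_probs.index_add_(0, context_ids, attn_weights)
    return p_gen * vocab_probs + (1 - p_gen) * copy_probs
```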
Multiple Instance Learning Networks for Fine-Grained Sentiment Analysis | We consider the task of fine-grained sentiment analysis from the perspective
of multiple instance learning (MIL). Our neural model is trained on document
sentiment labels, and learns to predict the sentiment of text segments, i.e.
sentences or elementary discourse units (EDUs), without segment-level
supervision. We introduce an attention-based polarity scoring method for
identifying positive and negative text snippets and a new dataset which we call
SPOT (as shorthand for Segment-level POlariTy annotations) for evaluating
MIL-style sentiment models like ours. Experimental results demonstrate superior
performance against multiple baselines, whereas a judgement elicitation study
shows that EDU-level opinion extraction produces more informative summaries
than sentence-based alternatives.
| 2018 | Computation and Language |
Production Ready Chatbots: Generate if not Retrieve | In this paper, we present a hybrid model that combines a neural
conversational model and a rule-based graph dialogue system that assists users
in scheduling reminders through a chat conversation. The graph based system has
high precision and provides a grammatically accurate response but has a low
recall. The neural conversation model can cater to a variety of requests, as it
generates the responses word by word as opposed to using canned responses. The
hybrid system shows significant improvements over the existing rule-based
baseline system and caters to complex queries with a domain-restricted neural
model. Restricting the conversation topic and combining the graph-based
retrieval system with a neural generative model make the final system robust
enough for a real-world application.
| 2018 | Computation and Language |
Table-to-text Generation by Structure-aware Seq2seq Learning | Table-to-text generation aims to generate a description for a factual table
which can be viewed as a set of field-value records. To encode both the content
and the structure of a table, we propose a novel structure-aware seq2seq
architecture which consists of a field-gating encoder and a description
generator with dual attention. In the encoding phase, we update the cell memory
of the LSTM unit by a field gate and its corresponding field value in order to
incorporate field information into the table representation. In the decoding
phase, a dual attention mechanism, which contains word-level and field-level
attention, is proposed to model the semantic relevance between the generated
description and the table. We conduct experiments on the \texttt{WIKIBIO}
dataset which contains over 700k biographies and corresponding infoboxes from
Wikipedia. The attention visualizations and case studies show that our model is
capable of generating coherent and informative descriptions based on the
comprehensive understanding of both the content and the structure of a table.
Automatic evaluations also show that our model outperforms the baselines by a
large margin. Code for this work is available at
https://github.com/tyliupku/wiki2bio.
| 2017 | Computation and Language |
Lexical-semantic resources: yet powerful resources for automatic
personality classification | In this paper, we aim to reveal the impact of lexical-semantic resources,
used in particular for word sense disambiguation and sense-level semantic
categorization, on the automatic personality classification task. While
stylistic features (e.g., part-of-speech counts) have shown their power in this
task, the impact of semantics beyond targeted word lists is relatively
unexplored. We propose and extract three types of lexical-semantic features,
which capture high-level concepts and emotions, overcoming the lexical gap of
word n-grams. Our experimental results are comparable to state-of-the-art
methods, while no personality-specific resources are required.
| 2018 | Computation and Language |
Slim Embedding Layers for Recurrent Neural Language Models | Recurrent neural language models are the state-of-the-art models for language
modeling. When the vocabulary size is large, the space taken to store the model
parameters becomes the bottleneck for the use of recurrent neural language
models. In this paper, we introduce a simple space compression method that
randomly shares the structured parameters at both the input and output
embedding layers of the recurrent neural language models to significantly
reduce the size of model parameters, but still compactly represent the original
input and output embedding layers. The method is easy to implement and tune.
Experiments on several data sets show that the new method can get similar
perplexity and BLEU score results while only using a very tiny fraction of
parameters.
| 2017 | Computation and Language |
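The random sharing idea can be sketched concisely: each word's embedding is assembled from k sub-vectors chosen by a fixed random assignment from a small shared pool, so the parameter count no longer grows with the vocabulary. Pool size and k below are assumptions.

```python
import torch
import torch.nn as nn

class SlimEmbedding(nn.Module):
    """Sketch: structured parameter sharing for the embedding layer."""
    def __init__(self, vocab, dim, pool=5000, k=8):
        super().__init__()
        assert dim % k == 0
        self.pool = nn.Embedding(pool, dim // k)   # shared sub-vectors
        # Fixed random assignment of k pool slots per word (not trained).
        self.register_buffer('assign', torch.randint(0, pool, (vocab, k)))

    def forward(self, ids):
        parts = self.pool(self.assign[ids])        # (..., k, dim // k)
        return parts.flatten(-2)                   # (..., dim)
```

With a pool of 5,000 sub-vectors, this stores 5,000 x dim/k weights instead of vocab x dim, regardless of vocabulary size.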
Surfacing contextual hate speech words within social media | Social media platforms have recently seen an increase in the occurrence of
hate speech discourse which has led to calls for improved detection methods.
Most of these rely on annotated data, keywords, and a classification technique.
While this approach provides good coverage, it can fall short when dealing with
new terms produced by online extremist communities which act as original
sources of words which have alternate hate speech meanings. These code words
(which can be both created and adopted words) are designed to evade automatic
detection and often have benign meanings in regular discourse. As an example,
"skypes", "googles", and "yahoos" are all instances of words which have an
alternate meaning that can be used for hate speech. This overlap introduces
additional challenges when relying on keywords for both the collection of data
that is specific to hate speech, and downstream classification. In this work,
we develop a community detection approach for finding extremist hate speech
communities and collecting data from their members. We also develop a word
embedding model that learns the alternate hate speech meaning of words and
demonstrate the candidacy of our code words with several annotation
experiments, designed to determine if it is possible to recognize a word as
being used for hate speech without knowing its alternate meaning. We report an
inter-annotator agreement rate of K=0.871, and K=0.676 for data drawn from our
extremist community and the keyword approach respectively, supporting our claim
that hate speech detection is a contextual task and does not depend on a fixed
list of keywords. Our goal is to advance the domain by providing a high quality
hate speech dataset in addition to learned code words that can be fed into
existing classification approaches, thus improving the accuracy of automated
detection.
| 2017 | Computation and Language |
End-to-end Adversarial Learning for Generative Conversational Agents | This paper presents a new adversarial learning method for generative
conversational agents (GCA) besides a new model of GCA. Similar to previous
works on adversarial learning for dialogue generation, our method assumes the
GCA as a generator that aims at fooling a discriminator that labels dialogues
as human-generated or machine-generated; however, in our approach, the
discriminator performs token-level classification, i.e. it indicates whether
the current token was generated by humans or machines. To do so, the
discriminator also receives the context utterances (the dialogue history) and
the incomplete answer up to the current token as input. This new approach makes
end-to-end training by backpropagation possible. A self-conversation process
enables producing a set of generated data with more diversity for the
adversarial training. This approach improves performance on questions not
related to the training data. Experimental results with human and adversarial
evaluations show that the adversarial method yields significant performance
gains over the usual teacher forcing training.
| 2018 | Computation and Language |
Vietnamese Semantic Role Labelling | In this paper, we study semantic role labelling (SRL), a subtask of semantic
parsing of natural language sentences and its application for the Vietnamese
language. We present our effort in building Vietnamese PropBank, the first
Vietnamese SRL corpus and a software system for labelling semantic roles of
Vietnamese texts. In particular, we present a novel constituent extraction
algorithm in the argument candidate identification step which is more suitable
and more accurate than the common node-mapping method. In the machine learning
part, our system integrates distributed word features produced by two recent
unsupervised learning models into two learned statistical classifiers and makes
use of an integer linear programming inference procedure to improve accuracy.
The system is evaluated in a series of experiments and achieves a good result,
an $F_1$ score of 74.77%. Our system, including the corpus and software, is
available as an open-source project for research, and we believe that it is a
good baseline for the development of future Vietnamese SRL systems.
| 2017 | Computation and Language |
Unsupervised Discovery of Structured Acoustic Tokens with Applications
to Spoken Term Detection | In this paper, we compare two paradigms for unsupervised discovery of
structured acoustic tokens directly from speech corpora without any human
annotation. The Multigranular Paradigm seeks to capture all available
information in the corpora with multiple sets of tokens for different model
granularities. The Hierarchical Paradigm attempts to jointly learn several
levels of signal representations in a hierarchical structure. The two paradigms
are unified within a theoretical framework in this paper. Query-by-Example
Spoken Term Detection (QbE-STD) experiments on the QUESST dataset of MediaEval
2015 verify the competitiveness of the acoustic tokens. The Enhanced
Relevance Score (ERS) proposed in this work improves both paradigms for the
task of QbE-STD. We also list results on the ABX evaluation task of the Zero
Resource Challenge 2015 for comparison of the two paradigms.
| 2017 | Computation and Language |
Acoustic-To-Word Model Without OOV | Recently, the acoustic-to-word model based on the Connectionist Temporal
Classification (CTC) criterion was shown as a natural end-to-end model directly
targeting words as output units. However, this type of word-based CTC model
suffers from the out-of-vocabulary (OOV) issue, as it can only model a limited
number of words in the output layer and maps all remaining words to an OOV
output node. Therefore, such a word-based CTC model can only recognize the
frequent words modeled by the network output nodes. It also cannot easily
handle hot words which emerge after the model is trained. In this study, we
improve the acoustic-to-word model with a hybrid CTC model which can predict
both words and characters at the same time. With a shared-hidden-layer
structure and modular design, the alignments of words generated from the
word-based CTC and the character-based CTC are synchronized. Whenever the
acoustic-to-word model emits an OOV token, we back off that OOV segment to the
word output generated from the character-based CTC, hence solving the OOV or
hot-words issue. Evaluated on a Microsoft Cortana voice assistant task, the
proposed model can reduce the errors introduced by the OOV output token in the
acoustic-to-word model by 30%.
| 2017 | Computation and Language |
Hybrid Oracle: Making Use of Ambiguity in Transition-based Chinese
Dependency Parsing | In the training of transition-based dependency parsers, an oracle is used to
predict a transition sequence for a sentence and its gold tree. However, the
transition system may exhibit ambiguity, that is, there can be multiple correct
transition sequences that form the gold tree. We propose to make use of this
property in the training of neural dependency parsers, and present the Hybrid
Oracle. The new oracle gives all the correct transitions for a parsing state,
which are used in the cross-entropy loss function to provide a better
supervisory signal. It is also used to generate different transition sequences for a
sentence to better explore the training data and improve the generalization
ability of the parser. Evaluations show that the parsers trained using the
hybrid oracle outperform the parsers using the traditional oracle in Chinese
dependency parsing. We provide analysis from a linguistic view. The code is
available at https://github.com/lancopku/nndep .
| 2018 | Computation and Language |
Visualisation and 'diagnostic classifiers' reveal how recurrent and
recursive neural networks process hierarchical structure | We investigate how neural networks can learn and process languages with
hierarchical, compositional semantics. To this end, we define the artificial
task of processing nested arithmetic expressions, and study whether different
types of neural networks can learn to compute their meaning. We find that
recursive neural networks can find a generalising solution to this problem, and
we visualise this solution by breaking it up into three steps: project, sum and
squash. As a next step, we investigate recurrent neural networks, and show that
a gated recurrent unit, which processes its input incrementally, also performs
very well on this task. To develop an understanding of what the recurrent
network encodes, visualisation techniques alone do not suffice. Therefore, we
develop an approach where we formulate and test multiple hypotheses on the
information encoded and processed by the network. For each hypothesis, we
derive predictions about features of the hidden state representations at each
time step, and train 'diagnostic classifiers' to test those predictions. Our
results indicate that the networks follow a strategy similar to our
hypothesised 'cumulative strategy', which explains the high accuracy of the
network on novel expressions, the generalisation to longer expressions than
seen in training, and the mild deterioration with increasing length. This in
turn shows that diagnostic classifiers can be a useful technique for opening up
the black box of neural networks. We argue that diagnostic classification,
unlike most visualisation techniques, does scale up from small networks in a
toy domain, to larger and deeper recurrent networks dealing with real-life
data, and may therefore contribute to a better understanding of the internal
dynamics of current state-of-the-art models in natural language processing.
| 2018 | Computation and Language |
Speaker-Sensitive Dual Memory Networks for Multi-Turn Slot Tagging | In multi-turn dialogs, natural language understanding models can introduce
obvious errors by being blind to contextual information. To incorporate dialog
history, we present a neural architecture with Speaker-Sensitive Dual Memory
Networks which encode utterances differently depending on the speaker. This
addresses the different extents of information available to the system - the
system knows only the surface form of user utterances while it has the exact
semantics of system output. We performed experiments on real user data from
Microsoft Cortana, a commercial personal assistant. The results showed a
significant performance improvement over the state-of-the-art slot tagging
models using contextual information.
| 2017 | Computation and Language |
End-to-End Optimization of Task-Oriented Dialogue Model with Deep
Reinforcement Learning | In this paper, we present a neural network based task-oriented dialogue
system that can be optimized end-to-end with deep reinforcement learning (RL).
The system is able to track dialogue state, interface with knowledge bases, and
incorporate query results into the agent's responses to successfully complete
task-oriented dialogues. Dialogue policy learning is conducted with a hybrid of
supervised and deep RL methods. We first train the dialogue agent in a
supervised manner by learning directly from task-oriented dialogue corpora, and
further optimize it with deep RL during its interaction with users. In
experiments on two different dialogue task domains, our model demonstrates
robust performance in tracking dialogue state and producing reasonable system
responses. We show that deep RL based optimization leads to a significant
improvement in task success rate and a reduction in dialogue length compared to
the supervised training model. We further show the benefits of training the
task-oriented dialogue model end-to-end compared to component-wise
optimization, with experimental results on dialogue simulations and human
evaluations.
| 2017 | Computation and Language |
Curriculum Q-Learning for Visual Vocabulary Acquisition | The structure of curriculum plays a vital role in our learning process, both
as children and adults. Presenting material in ascending order of difficulty
that also exploits prior knowledge can have a significant impact on the rate of
learning. However, the notion of difficulty and prior knowledge differs from
person to person. Motivated by the need for a personalised curriculum, we
present a novel method of curriculum learning for vocabulary words in the form
of visual prompts. We employ a reinforcement learning model grounded in
pedagogical theories that emulates the actions of a tutor. We simulate three
students with different levels of vocabulary knowledge in order to evaluate
how well our model adapts to the environment. The results of the simulation
reveal that through interaction, the model is able to identify areas of
weakness, as well as push students to the edge of their zone of proximal
development (ZPD). We hypothesise
that these methods can also be effective in training agents to learn language
representations in a simulated environment where it has previously been shown
that order of words and prior knowledge play an important role in the efficacy
of language learning.
| 2017 | Computation and Language |
Identifying Patterns of Associated-Conditions through Topic Models of
Electronic Medical Records | Multiple adverse health conditions co-occurring in a patient are typically
associated with poor prognosis and increased office or hospital visits.
Developing methods to identify patterns of co-occurring conditions can assist
in diagnosis. Thus identifying patterns of associations among co-occurring
conditions is of growing interest. In this paper, we report preliminary results
from a data-driven study, in which we apply a machine learning method, namely,
topic modeling, to electronic medical records, aiming to identify patterns of
associated conditions. Specifically, we use the well-established latent
Dirichlet allocation (LDA), a method based on the idea that documents can be
modeled as a mixture of latent topics, where each topic is a distribution over
words. In our study, we adapt the LDA model to identify latent topics in
patients' EMRs. We evaluate the performance of our method qualitatively, and
show that the obtained topics indeed align well with distinct medical phenomena
characterized by co-occurring conditions.
| 2017 | Computation and Language |
Embedding Words as Distributions with a Bayesian Skip-gram Model | We introduce a method for embedding words as probability densities in a
low-dimensional space. Rather than assuming that a word embedding is fixed
across the entire text collection, as in standard word embedding methods, in
our Bayesian model we generate it from a word-specific prior density for each
occurrence of a given word. Intuitively, for each word, the prior density
encodes the distribution of its potential 'meanings'. These prior densities are
conceptually similar to Gaussian embeddings. Interestingly, unlike the Gaussian
embeddings, we can also obtain context-specific densities: they encode
uncertainty about the sense of a word given its context and correspond to
posterior distributions within our model. The context-dependent densities have
many potential applications: for example, we show that they can be directly
used in the lexical substitution task. We describe an effective estimation
method based on the variational autoencoding framework. We also demonstrate
that our embeddings achieve competitive results on standard benchmarks.
| 2,018 | Computation and Language |
Improved Twitter Sentiment Analysis Using Naive Bayes and Custom
Language Model | In the last couple of decades, social network services like Twitter have
generated large volumes of data about users and their interests, providing
meaningful business intelligence so organizations can better understand and
engage their customers. All businesses want to know who is promoting their
products, who is complaining about them, and how these opinions bring or
diminish value to a company. Companies want to be able to identify their
high-value customers and quantify the value each user brings. Many businesses
use social media metrics to calculate the user contribution score, which
enables them to quantify the value that influential users bring on social
media, so the businesses can offer them more differentiated services. However,
the score calculation can be refined to provide a better illustration of a
user's contribution. Using Microsoft Azure as a case study, we conducted
Twitter sentiment analysis to develop a machine learning classification model
that identifies tweet contents and sentiments most illustrative of
positive-value user contribution. Using data mining and AI-powered cognitive
tools, we analyzed factors of social influence and specifically, promotional
language in the developer community. Our predictive model was a combination of
a traditional supervised machine learning algorithm and a custom-developed
natural language model for identifying promotional tweets; it identifies
product-specific promotions on Twitter with a 90% accuracy rate.
| 2,017 | Computation and Language |
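A minimal sketch, not the paper's pipeline: a Naive Bayes tweet-sentiment classifier with scikit-learn. The toy tweets and labels are illustrative stand-ins.

```python
# Hedged sketch: bag-of-ngrams features + multinomial Naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "Loving the new Azure ML tools, deployment was painless",
    "Azure portal is down again, this is so frustrating",
    "Great docs, got my app running in minutes",
    "Billing surprises ruined my weekend",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),  # word + bigram features
    MultinomialNB(),
)
model.fit(tweets, labels)
print(model.predict(["the new SDK is great"]))  # -> ['positive']
```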
Multimodal Attribute Extraction | The broad goal of information extraction is to derive structured information
from unstructured data. However, most existing methods focus solely on text,
ignoring other types of unstructured data such as images, video and audio which
comprise an increasing portion of the information on the web. To address this
shortcoming, we propose the task of multimodal attribute extraction. Given a
collection of unstructured and semi-structured contextual information about an
entity (such as a textual description or visual depictions), the task is to
extract the entity's underlying attributes. In this paper, we provide a dataset
containing mixed-media data for over 2 million product items along with 7
million attribute-value pairs describing the items which can be used to train
attribute extractors in a weakly supervised manner. We provide a variety of
baselines which demonstrate the relative effectiveness of the individual modes
of information towards solving the task, as well as study human performance.
| 2,017 | Computation and Language |
Predicting and Explaining Human Semantic Search in a Cognitive Model | Recent work has attempted to characterize the structure of semantic memory
and the search algorithms which, together, best approximate human patterns of
search revealed in a semantic fluency task. There are a number of models that
seek to capture semantic search processes over networks, but they vary in the
cognitive plausibility of their implementation. Existing work has also
neglected to consider the constraints that the incremental process of language
acquisition must place on the structure of semantic memory. Here we present a
model that incrementally updates a semantic network, with limited computational
steps, and replicates many patterns found in human semantic fluency using a
simple random walk. We also perform thorough analyses showing that a
combination of both structural and semantic features is correlated with human
performance patterns.
| 2,017 | Computation and Language |
Neural Response Generation with Dynamic Vocabularies | We study response generation for open domain conversation in chatbots.
Existing methods assume that words in responses are generated from an identical
vocabulary regardless of their inputs, which not only makes them vulnerable to
generic patterns and irrelevant noise, but also causes a high cost in decoding.
We propose a dynamic vocabulary sequence-to-sequence (DVS2S) model which allows
each input to possess its own vocabulary in decoding. In training, vocabulary
construction and response generation are jointly learned by maximizing a lower
bound of the true objective with a Monte Carlo sampling method. In inference,
the model dynamically allocates a small vocabulary for an input with the word
prediction model, and conducts decoding only with the small vocabulary. Because
of the dynamic vocabulary mechanism, DVS2S eludes many generic patterns and
irrelevant words in generation, and enjoys efficient decoding at the same time.
Experimental results on both automatic metrics and human annotations show that
DVS2S can significantly outperform state-of-the-art methods in terms of
response quality, while requiring only 60% of the decoding time of the most
efficient baseline.
| 2,017 | Computation and Language |
Modeling Coherence for Neural Machine Translation with Dynamic and Topic
Caches | Sentences in a well-formed text are connected to each other via various links
to form the cohesive structure of the text. Current neural machine translation
(NMT) systems translate a text in a conventional sentence-by-sentence fashion,
ignoring such cross-sentence links and dependencies. This may lead to the generation of
an incoherent target text for a coherent source text. In order to handle this
issue, we propose a cache-based approach to modeling coherence for neural
machine translation by capturing contextual information either from recently
translated sentences or the entire document. Particularly, we explore two types
of caches: a dynamic cache, which stores words from the best translation
hypotheses of preceding sentences, and a topic cache, which maintains a set of
target-side topical words that are semantically related to the document to be
translated. On this basis, we build a new layer to score target words in these
two caches with a cache-based neural model. Here the estimated probabilities
from the cache-based neural model are combined with NMT probabilities into the
final word prediction probabilities via a gating mechanism. Finally, the
proposed cache-based neural model is trained jointly with the NMT system in an
end-to-end manner. Experiments and analysis presented in this paper demonstrate
that the proposed cache-based model achieves substantial improvements over
several state-of-the-art SMT and NMT baselines.
| 2,018 | Computation and Language |
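A schematic NumPy sketch of the gating step described above: the final word probabilities interpolate the NMT distribution with the cache-based distribution. In the paper the gate is predicted from the decoder state; the fixed scalar here is a simplifying assumption.

```python
# Hedged sketch: gated combination of NMT and cache probabilities.
import numpy as np

def gated_combination(p_nmt, p_cache, gate):
    """gate in (0, 1); both inputs are distributions over the target vocab."""
    p = gate * p_nmt + (1.0 - gate) * p_cache
    return p / p.sum()  # guard against numerical drift

p_nmt = np.array([0.5, 0.3, 0.2])    # NMT probabilities for 3 target words
p_cache = np.array([0.1, 0.8, 0.1])  # cache model favours a topical word
print(gated_combination(p_nmt, p_cache, gate=0.6))
```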
Multi-Domain Adversarial Learning for Slot Filling in Spoken Language
Understanding | The goal of this paper is to learn cross-domain representations for slot
filling task in spoken language understanding (SLU). Most of the recently
published SLU models are domain-specific ones that work on individual task
domains. Annotating data for each individual task domain is both financially
costly and non-scalable. In this work, we propose an adversarial training
method in learning common features and representations that can be shared
across multiple domains. A model that produces such shared representations can be
combined with models trained on individual domain SLU data to reduce the amount
of training samples required for developing a new domain. In our experiments
using data sets from multiple domains, we show that adversarial training helps
in learning better domain-general SLU models, leading to improved slot filling
F1 scores. We further show that applying adversarial learning on the
domain-general model also helps in achieving higher slot filling performance
when the model is
jointly optimized with domain-specific models.
| 2,017 | Computation and Language |
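Adversarial learning of domain-shared features is often realized with a gradient reversal layer (Ganin & Lempitsky, 2015). This PyTorch sketch is one plausible realization, not necessarily the authors' exact method.

```python
# Hedged sketch: gradient reversal for adversarial domain training.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient flowing into the shared encoder so that it
        # learns features the domain classifier cannot separate.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

features = torch.randn(8, 64, requires_grad=True)  # shared slot-filling features
domain_logits = torch.nn.Linear(64, 3)(grad_reverse(features))  # 3 task domains
print(domain_logits.shape)  # torch.Size([8, 3])
```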
Calculating Semantic Similarity between Academic Articles using Topic
Event and Ontology | Determining semantic similarity between academic documents is crucial to many
tasks such as plagiarism detection, automatic technical survey and semantic
search. Current studies mostly focus on semantic similarity between concepts,
sentences and short text fragments. However, document-level semantic matching
is still based on surface-level statistical information, neglecting article
structure and global semantic meaning, which may cause deviations in document
understanding. In this paper, we focus on the document-level semantic
similarity issue for academic literature with a novel method. We represent
academic articles with topic events that utilize multiple information profiles,
such as research purposes, methodologies and domains to integrally describe the
research work, and calculate the similarity between topic events based on the
domain ontology to acquire the semantic similarity between articles.
Experiments show that our approach achieves significantly better performance than
state-of-the-art methods.
| 2,017 | Computation and Language |
Lexical and Derivational Meaning in Vector-Based Models of
Relativisation | Sadrzadeh et al. (2013) present a compositional distributional analysis of
relative clauses in English in terms of the Frobenius algebraic structure of
finite dimensional vector spaces. The analysis relies on distinct type
assignments and lexical recipes for subject vs object relativisation. The
situation for Dutch is different: because of the verb final nature of Dutch,
relative clauses are ambiguous between a subject vs object relativisation
reading. Using an extended version of Lambek calculus, we present a
compositional distributional framework that accounts for this derivational
ambiguity, and that allows us to give a single meaning recipe for the relative
pronoun reconciling the Frobenius semantics with the demands of Dutch
derivational syntax.
| 2,017 | Computation and Language |
Graph Centrality Measures for Boosting Popularity-Based Entity Linking | Many Entity Linking systems use collective graph-based methods to
disambiguate the entity mentions within a document. Most of them have focused
on graph construction and initial weighting of the candidate entities, while
less attention has been devoted to comparing the graph ranking algorithms. In
this work, we focus on graph-based ranking algorithms and propose to
apply five centrality measures: Degree, HITS, PageRank, Betweenness and
Closeness. A disambiguation graph of candidate entities is constructed for each
document using the popularity method, then centrality measures are applied to
choose the most relevant candidate to boost the results of entity popularity
method. We investigate the effectiveness of each centrality measure on the
performance across different domains and datasets. Our experiments show that a
simple and fast centrality measure such as Degree centrality can outperform
other more time-consuming measures.
| 2,017 | Computation and Language |
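A hedged sketch of ranking candidate entities with the five centrality measures named above, using networkx on a toy disambiguation graph; the nodes and weights are illustrative.

```python
# Hedged sketch: centrality-based ranking over a candidate-entity graph.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("Paris_(France)", "Eiffel_Tower", 0.9),
    ("Paris_(France)", "Seine", 0.8),
    ("Paris_(Texas)", "Texas", 0.7),
    ("Eiffel_Tower", "Seine", 0.6),
])

rankings = {
    "degree": nx.degree_centrality(G),
    "pagerank": nx.pagerank(G, weight="weight"),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "hits": nx.hits(G)[1],  # authority scores
}
for name, scores in rankings.items():
    best = max(scores, key=scores.get)
    print(f"{name:12s} -> {best}")
```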
On the importance of normative data in speech-based assessment | Data sets for identifying Alzheimer's disease (AD) are often relatively
sparse, which limits their ability to train generalizable models. Here, we
augment such a data set, DementiaBank, with each of two normative data sets,
the Wisconsin Longitudinal Study and Talk2Me, each of which employs a
speech-based picture-description assessment. Through minority class
oversampling with ADASYN, we outperform state-of-the-art results in binary
classification of people with and without AD in DementiaBank. This work
highlights the effectiveness of combining sparse and difficult-to-acquire
patient data with relatively large and easily accessible normative datasets.
| 2,017 | Computation and Language |
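A minimal sketch of the ADASYN minority-class oversampling step via imbalanced-learn; the feature matrix and labels are synthetic stand-ins for the speech-derived features used in the paper.

```python
# Hedged sketch: balance AD (minority) vs. control samples with ADASYN.
import numpy as np
from imblearn.over_sampling import ADASYN

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))         # e.g. lexical/acoustic features
y = np.array([1] * 15 + [0] * 85)     # 1 = AD (minority), 0 = control

X_res, y_res = ADASYN(random_state=0).fit_resample(X, y)
print(np.bincount(y), "->", np.bincount(y_res))  # classes roughly balanced
```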
Text Generation Based on Generative Adversarial Nets with Latent
Variable | In this paper, we propose a model using generative adversarial net (GAN) to
generate realistic text. Instead of using standard GAN, we combine variational
autoencoder (VAE) with generative adversarial net. The use of high-level latent
random variables is helpful to learn the data distribution and solve the
problem that generative adversarial nets always emit similar data. We
propose the VGAN model where the generative model is composed of recurrent
neural network and VAE. The discriminative model is a convolutional neural
network. We train the model via policy gradient. We apply the proposed model to
the task of text generation and compare it to other recent neural network based
models, such as recurrent neural network language model and SeqGAN. We evaluate
the performance of the model by calculating negative log-likelihood and the
BLEU score. We conduct experiments on three benchmark datasets, and results
show that our model outperforms other previous models.
| 2,018 | Computation and Language |
Visual Features for Context-Aware Speech Recognition | Automatic transcriptions of consumer-generated multi-media content such as
"Youtube" videos still exhibit high word error rates. Such data typically
occupies a very broad domain, has been recorded in challenging conditions, with
cheap hardware and a focus on the visual modality, and may have been
post-processed or edited. In this paper, we extend our earlier work on adapting
the acoustic model of a DNN-based speech recognition system to an RNN language
model and show how both can be adapted to the objects and scenes that can be
automatically detected in the video. We are working on a corpus of "how-to"
videos from the web, and the idea is that an object that can be seen ("car"),
or a scene that is being detected ("kitchen") can be used to condition both
models on the "context" of the recording, thereby reducing perplexity and
improving transcription. We achieve good improvements in both cases and compare
and analyze the respective reductions in word error rate. We expect that our
results can be used for any type of speech processing in which "context"
information is available, for example in robotics, man-machine interaction, or
when indexing large audio-visual archives, and should ultimately help to bring
together the "video-to-text" and "speech-to-text" communities.
| 2,017 | Computation and Language |
Improving Visually Grounded Sentence Representations with Self-Attention | Sentence representation models trained only on language could potentially
suffer from the grounding problem. Recent work has shown promising results in
improving the qualities of sentence representations by jointly training them
with associated image features. However, the grounding capability is limited
due to the distant connection between input sentences and image features
imposed by the design of the architecture. In order to further close the gap,
we propose applying a self-attention mechanism to the sentence encoder to
deepen the
grounding effect. Our results on transfer tasks show that self-attentive
encoders are better for visual grounding, as they exploit specific words with
strong visual associations.
| 2,017 | Computation and Language |
Sentiment Classification using Images and Label Embeddings | In this project we analysed how much semantic information images carry, and
how much value image data can add to sentiment analysis of the text associated
with the images. To better understand the contribution from images, we compared
models which only made use of image data, models which only made use of text
data, and models which combined both data types. We also analysed if this
approach could help sentiment classifiers generalize to unknown sentiments.
| 2,017 | Computation and Language |
Mining Supervisor Evaluation and Peer Feedback in Performance Appraisals | Performance appraisal (PA) is an important HR process to periodically measure
and evaluate every employee's performance vis-a-vis the goals established by
the organization. A PA process involves purposeful multi-step multi-modal
communication between employees, their supervisors and their peers, such as
self-appraisal, supervisor assessment and peer feedback. Analysis of the
structured data and text produced in PA is crucial for measuring the quality of
appraisals and tracking actual improvements. In this paper, we apply text
mining techniques to produce insights from PA text. First, we perform sentence
classification to identify strengths, weaknesses and suggestions of
improvements found in the supervisor assessments and then use clustering to
discover broad categories among them. Next we use multi-class multi-label
classification techniques to match supervisor assessments to predefined broad
perspectives on performance. Finally, we propose a short-text summarization
technique to produce a summary of peer feedback comments for a given employee
and compare it with manual summaries. All techniques are illustrated using a
real-life dataset of supervisor assessment and peer feedback text produced
during the PA of 4528 employees in a large multi-national IT company.
| 2,017 | Computation and Language |
Generalized Grounding Graphs: A Probabilistic Framework for
Understanding Grounded Commands | Many task domains require robots to interpret and act upon natural language
commands which are given by people and which refer to the robot's physical
surroundings. Such interpretation is known variously as the symbol grounding
problem, grounded semantics and grounded language acquisition. This problem is
challenging because people employ diverse vocabulary and grammar, and because
robots have substantial uncertainty about the nature and contents of their
surroundings, making it difficult to associate the constitutive language
elements (principally noun phrases and spatial relations) of the command text
to elements of those surroundings. Symbolic models capture linguistic structure
but have not scaled successfully to handle the diverse language produced by
untrained users. Existing statistical approaches can better handle diversity,
but have not to date modeled complex linguistic structure, limiting achievable
accuracy. Recent hybrid approaches have addressed limitations in scaling and
complexity, but have not effectively associated linguistic and perceptual
features. Our framework, called Generalized Grounding Graphs (G^3), addresses
these issues by defining a probabilistic graphical model dynamically according
to the linguistic parse structure of a natural language command. This approach
scales effectively, handles linguistic diversity, and enables the system to
associate parts of a command with the specific objects, places, and events in
the external world to which they refer. We show that robots can learn word
meanings and use those learned meanings to robustly follow natural language
commands produced by untrained users. We demonstrate our approach for both
mobility commands and mobile manipulation commands involving a variety of
semi-autonomous robotic platforms, including a wheelchair, a micro-air vehicle,
a forklift, and the Willow Garage PR2.
| 2,017 | Computation and Language |
An Encoder-Decoder Model for ICD-10 Coding of Death Certificates | Information extraction from textual documents such as hospital records and
health-related user discussions has become a topic of intense interest. The task
of medical concept coding is to map a variable length text to medical concepts
and corresponding classification codes in some external system or ontology. In
this work, we utilize recurrent neural networks to automatically assign ICD-10
codes to fragments of death certificates written in English. We develop
end-to-end neural architectures directly tailored to the task, including basic
encoder-decoder architecture for statistical translation. In order to
incorporate prior knowledge, we concatenate a vector of cosine similarities
between the text and dictionary entries to the encoded state. Applied to a
standard benchmark from the CLEF eHealth 2017 challenge, our model achieved an
F-measure of 85.01% on the full test set, a significant improvement over the
average score of 62.2% across all official participants' approaches.
| 2,017 | Computation and Language |
#anorexia, #anarexia, #anarexyia: Characterizing Online Community
Practices with Orthographic Variation | Distinctive linguistic practices help communities build solidarity and
differentiate themselves from outsiders. In an online community, one such
practice is variation in orthography, which includes spelling, punctuation, and
capitalization. Using a dataset of over two million Instagram posts, we
investigate orthographic variation in a community that shares pro-eating
disorder (pro-ED) content. We find that not only does orthographic variation
grow more frequent over time, it also becomes more profound or deep, with
variants becoming increasingly distant from the original: as, for example,
#anarexyia is more distant than #anarexia from the original spelling #anorexia.
These changes are driven by newcomers, who adopt the most extreme linguistic
practices as they enter the community. Moreover, this behavior correlates with
engagement: the newcomers who adopt deeper orthographic variants tend to remain
active for longer in the community, and the posts that contain deeper variation
receive more positive feedback in the form of "likes." Previous work has linked
community membership change with language change, and our work casts this
connection in a new light, with newcomers driving an evolving practice, rather
than adapting to it. We also demonstrate the utility of orthographic variation
as a new lens to study sociolinguistic change in online communities,
particularly when the change results from an exogenous force such as a content
ban.
| 2,017 | Computation and Language |
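A small, self-contained sketch of the "depth" of an orthographic variant, measured here as Levenshtein distance from the original spelling; this is one plausible proxy for the paper's notion of distance, not its exact metric.

```python
# Hedged sketch: edit-distance depth of hashtag variants.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

for tag in ["#anorexia", "#anarexia", "#anarexyia"]:
    print(tag, levenshtein(tag, "#anorexia"))
# -> 0, 1, 2: deeper variants lie farther from the original spelling
```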
AWE-CM Vectors: Augmenting Word Embeddings with a Clinical Metathesaurus | In recent years, word embeddings have been surprisingly effective at
capturing intuitive characteristics of the words they represent. These vectors
achieve the best results when training corpora are extremely large, sometimes
billions of words. Clinical natural language processing datasets, however, tend
to be much smaller. Even the largest publicly-available dataset of medical
notes is three orders of magnitude smaller than the dataset of the oft-used
"Google News" word vectors. In order to make up for limited training data
sizes, we encode expert domain knowledge into our embeddings. Building on a
previous extension of word2vec, we show that generalizing the notion of a
word's "context" to include arbitrary features creates an avenue for encoding
domain knowledge into word embeddings. We show that the word vectors produced
by this method outperform their text-only counterparts across the board in
correlation with clinical experts.
| 2,017 | Computation and Language |
Sequence Mining and Pattern Analysis in Drilling Reports with Deep
Natural Language Processing | Drilling activities in the oil and gas industry have been reported over
decades for thousands of wells on a daily basis, yet the analysis of this text
at large-scale for information retrieval, sequence mining, and pattern analysis
is very challenging. Drilling reports contain interpretations written by
drillers from noting measurements in downhole sensors and surface equipment,
and can be used for operation optimization and accident mitigation. In this
initial work, a methodology is proposed for automatic classification of
sentences written in drilling reports into three relevant labels (EVENT,
SYMPTOM and ACTION) for hundreds of wells in an actual field. Some of the main
challenges in the text corpus were overcome, which include the high frequency
of technical symbols, mistyping/abbreviation of technical terms, and the
presence of incomplete sentences in the drilling reports. We obtain
state-of-the-art classification accuracy within this technical language and
illustrate advanced queries enabled by the tool.
| 2,017 | Computation and Language |
EmTaggeR: A Word Embedding Based Novel Method for Hashtag Recommendation
on Twitter | The hashtag recommendation problem addresses recommending (suggesting) one or
more hashtags to explicitly tag a post made on a given social network platform,
based upon the content and context of the post. In this work, we propose a
novel methodology for hashtag recommendation for microblog posts, specifically
Twitter. The methodology, EmTaggeR, is built upon a training-testing framework
that builds on top of the concept of word embeddings. The training phase
comprises learning word vectors associated with each hashtag and deriving a
word embedding for each hashtag. We provide two training procedures, one in
which each hashtag is trained with a separate word embedding model applicable
in the context of that hashtag, and another in which each hashtag obtains its
embedding from a global context. The testing phase consists of computing the
average word embedding of the test post, and finding the similarity of this
embedding with the known embeddings of the hashtags. The tweets that contain
the most-similar hashtag are extracted, and all the hashtags that appear in
these tweets are ranked in terms of embedding similarity scores. The top-K
hashtags that appear in this ranked list, are recommended for the given test
post. Our system produces an F1 score of 50.83%, improving over the LDA
baseline by around 6.53 times and outperforming the best-performing system
known in the literature, which provides a lift of 6.42 times. EmTaggeR is a
fast, scalable and
lightweight system, which makes it practical to deploy in real-life
applications.
| 2,017 | Computation and Language |
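A schematic sketch of the testing phase described above: average the word vectors of a test post and rank hashtags by cosine similarity. The tiny embedding table is an illustrative assumption; the paper learns embeddings with word2vec-style training.

```python
# Hedged sketch: average-embedding hashtag ranking.
import numpy as np

word_vecs = {                      # toy 3-d embeddings
    "deep": np.array([1.0, 0.2, 0.0]),
    "learning": np.array([0.9, 0.3, 0.1]),
    "coffee": np.array([0.0, 0.1, 1.0]),
}
hashtag_vecs = {
    "#machinelearning": np.array([1.0, 0.25, 0.05]),
    "#espresso": np.array([0.05, 0.1, 0.95]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

post = ["deep", "learning"]
post_vec = np.mean([word_vecs[w] for w in post if w in word_vecs], axis=0)

ranked = sorted(hashtag_vecs, key=lambda h: cosine(post_vec, hashtag_vecs[h]),
                reverse=True)
print(ranked)  # '#machinelearning' ranks first for this post
```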
Deep Semantic Role Labeling with Self-Attention | Semantic Role Labeling (SRL) is believed to be a crucial step towards natural
language understanding and has been widely studied. In recent years,
end-to-end SRL with recurrent neural networks (RNNs) has gained increasing
attention.
However, it remains a major challenge for RNNs to handle structural information
and long range dependencies. In this paper, we present a simple and effective
architecture for SRL which aims to address these problems. Our model is based
on self-attention which can directly capture the relationships between two
tokens regardless of their distance. Our single model achieves F$_1=83.4$ on
the CoNLL-2005 shared task dataset and F$_1=82.7$ on the CoNLL-2012 shared task
dataset, which outperforms the previous state-of-the-art results by $1.8$ and
$1.0$ F$_1$ score respectively. Besides, our model is computationally
efficient, and the parsing speed is 50K tokens per second on a single Titan X
GPU.
| 2,017 | Computation and Language |
Phylogenetics of Indo-European Language families via an
Algebro-Geometric Analysis of their Syntactic Structures | Using Phylogenetic Algebraic Geometry, we analyze computationally the
phylogenetic tree of subfamilies of the Indo-European language family, using
data of syntactic structures. The two main sources of syntactic data are the
SSWL database and Longobardi's recent data of syntactic parameters. We compute
phylogenetic invariants and likelihood functions for two sets of Germanic
languages, a set of Romance languages, a set of Slavic languages and a set of
early Indo-European languages, and we compare the results with what is known
through historical linguistics.
| 2,019 | Computation and Language |
Capturing Reliable Fine-Grained Sentiment Associations by Crowdsourcing
and Best-Worst Scaling | Access to word-sentiment associations is useful for many applications,
including sentiment analysis, stance detection, and linguistic analysis.
However, manually assigning fine-grained sentiment association scores to words
has many challenges with respect to keeping annotations consistent. We apply
the annotation technique of Best-Worst Scaling to obtain real-valued sentiment
association scores for words and phrases in three different domains: general
English, English Twitter, and Arabic Twitter. We show that on all three domains
the ranking of words by sentiment remains remarkably consistent even when the
annotation process is repeated with a different set of annotators. We also, for
the first time, determine the minimum difference in sentiment association that
is perceptible to native speakers of a language.
| 2,017 | Computation and Language |
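A hedged sketch of the standard Best-Worst Scaling aggregation: each annotation names the most positive ("best") and most negative ("worst") item in a tuple, and an item's score is the fraction of times it was chosen best minus the fraction chosen worst. The toy annotations are illustrative.

```python
# Hedged sketch: BWS score = %best - %worst over the tuples an item appears in.
from collections import Counter

annotations = [  # (tuple_of_items, best, worst) -- toy data
    (("superb", "fine", "bad", "awful"), "superb", "awful"),
    (("fine", "bad", "awful", "superb"), "superb", "awful"),
    (("superb", "fine", "bad", "awful"), "superb", "bad"),
]

appearances, best, worst = Counter(), Counter(), Counter()
for items, b, w in annotations:
    appearances.update(items)
    best[b] += 1
    worst[w] += 1

scores = {item: (best[item] - worst[item]) / appearances[item]
          for item in appearances}
for item, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item:8s} {s:+.2f}")   # real-valued score in [-1, 1]
```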
Best-Worst Scaling More Reliable than Rating Scales: A Case Study on
Sentiment Intensity Annotation | Rating scales are a widely used method for data annotation; however, they
present several challenges, such as difficulty in maintaining inter- and
intra-annotator consistency. Best-worst scaling (BWS) is an alternative method
of annotation that is claimed to produce high-quality annotations while keeping
the required number of annotations similar to that of rating scales. However,
the veracity of this claim has never been systematically established. Here for
the first time, we set up an experiment that directly compares the rating scale
method with BWS. We show that with the same total number of annotations, BWS
produces significantly more reliable results than the rating scale.
| 2,017 | Computation and Language |
State-of-the-art Speech Recognition With Sequence-to-Sequence Models | Attention-based encoder-decoder architectures such as Listen, Attend, and
Spell (LAS) subsume the acoustic, pronunciation, and language model components
of a traditional automatic speech recognition (ASR) system into a single neural
network. In previous work, we have shown that such architectures are comparable
to state-of-the-art ASR systems on dictation tasks, but it was not clear if such
architectures would be practical for more challenging tasks such as voice
search. In this work, we explore a variety of structural and optimization
improvements to our LAS model which significantly improve performance. On the
structural side, we show that word piece models can be used instead of
graphemes. We also introduce a multi-head attention architecture, which offers
improvements over the commonly-used single-head attention. On the optimization
side, we explore synchronous training, scheduled sampling, label smoothing, and
minimum word error rate optimization, which are all shown to improve accuracy.
We present results with a unidirectional LSTM encoder for streaming
recognition. On a 12,500-hour voice search task, we find that the proposed
changes improve the WER from 9.2% to 5.6%, while the best conventional system
achieves 6.7%; on a dictation task our model achieves a WER of 4.1% compared to
5% for the conventional system.
| 2,018 | Computation and Language |
The Effect of Negators, Modals, and Degree Adverbs on Sentiment
Composition | Negators, modals, and degree adverbs can significantly affect the sentiment
of the words they modify. Often, their impact is modeled with simple
heuristics, although recent work has shown that such heuristics do not capture
the true sentiment of multi-word phrases. We created a dataset of phrases that
include various negators, modals, and degree adverbs, as well as their
combinations. Both the phrases and their constituent content words were
annotated with real-valued scores of sentiment association. Using phrasal terms
in the created dataset, we analyze the impact of individual modifiers and the
average effect of the groups of modifiers on overall sentiment. We find that
the effect of modifiers varies substantially among the members of the same
group. Furthermore, each individual modifier can affect sentiment words in
different ways. Therefore, solutions based on statistical learning seem more
promising than fixed hand-crafted rules on the task of automatic sentiment
prediction.
| 2,017 | Computation and Language |
One for All: Towards Language Independent Named Entity Linking | Entity linking (EL) is the task of disambiguating mentions in text by
associating them with entries in a predefined database of mentions (persons,
organizations, etc). Most previous EL research has focused mainly on one
language, English, with less attention being paid to other languages, such as
Spanish or Chinese. In this paper, we introduce LIEL, a Language Independent
Entity Linking system, which provides an EL framework which, once trained on
one language, works remarkably well on a number of different languages without
change. LIEL makes a joint global prediction over the entire document,
employing a discriminative reranking framework with many domain and
language-independent feature functions. Experiments on numerous benchmark
datasets show that the proposed system, once trained on one language, English,
outperforms several state-of-the-art systems in English (by 4 points) and the
trained model also works very well on Spanish (14 points better than a
competitor system), demonstrating the viability of the approach.
| 2,017 | Computation and Language |
Improving the Performance of Online Neural Transducer Models | Having a sequence-to-sequence model which can operate in an online fashion is
important for streaming applications such as Voice Search. Neural transducer is
a streaming sequence-to-sequence model, but has shown a significant degradation
in performance compared to non-streaming models such as Listen, Attend and
Spell (LAS). In this paper, we present various improvements to the neural transducer (NT).
Specifically, we look at increasing the window over which NT computes
attention, mainly by looking backwards in time so the model still remains
online. In addition, we explore initializing an NT model from a LAS-trained
model so that it is guided with a better alignment. Finally, we explore
including stronger language models such as using wordpiece models, and applying
an external LM during the beam search. On a Voice Search task, we find with
these improvements we can get NT to match the performance of LAS.
| 2,017 | Computation and Language |
Neural Cross-Lingual Entity Linking | A major challenge in Entity Linking (EL) is making effective use of
contextual information to disambiguate mentions to Wikipedia that might refer
to different entities in different contexts. The problem exacerbates with
cross-lingual EL which involves linking mentions written in non-English
documents to entries in the English Wikipedia: to compare textual clues across
languages we need to compute similarity between textual fragments across
languages. In this paper, we propose a neural EL model that trains fine-grained
similarities and dissimilarities between the query and candidate document from
multiple perspectives, combined with convolution and tensor networks. Further,
we show that this English-trained system can be applied, in zero-shot learning,
to other languages by making surprisingly effective use of multi-lingual
embeddings. The proposed system yields state-of-the-art results on English as
well as cross-lingual benchmarks: the Spanish and Chinese TAC 2015 datasets.
| 2,017 | Computation and Language |
Minimum Word Error Rate Training for Attention-based
Sequence-to-Sequence Models | Sequence-to-sequence models, such as attention-based models in automatic
speech recognition (ASR), are typically trained to optimize the cross-entropy
criterion which corresponds to improving the log-likelihood of the data.
However, system performance is usually measured in terms of word error rate
(WER), not log-likelihood. Traditional ASR systems benefit from discriminative
sequence training which optimizes criteria such as the state-level minimum
Bayes risk (sMBR) which are more closely related to WER. In the present work,
we explore techniques to train attention-based models to directly minimize
expected word error rate. We consider two loss functions which approximate the
expected number of word errors: either by sampling from the model, or by using
N-best lists of decoded hypotheses, which we find to be more effective than the
sampling-based method. In experimental evaluations, we find that the proposed
training procedure improves performance by up to 8.2% relative to the baseline
system. This allows us to train grapheme-based, uni-directional attention-based
models which match the performance of a traditional, state-of-the-art,
discriminative sequence-trained system on a mobile voice-search task.
| 2,017 | Computation and Language |
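A NumPy sketch of the N-best approximation to the expected-WER objective described above: hypothesis probabilities are renormalized over the N-best list and weighted by word-error counts, with the mean error as a variance-reducing baseline in the gradient. The numbers are illustrative.

```python
# Hedged sketch: N-best expected word-error loss and its gradient signal.
import numpy as np

log_probs = np.array([-1.2, -1.5, -2.0])   # model log P(hyp | audio), N-best
word_errors = np.array([0.0, 2.0, 3.0])    # edit distance of each hyp to reference

p = np.exp(log_probs - log_probs.max())
p /= p.sum()                                # renormalize over the N-best list

expected_errors = float(p @ word_errors)    # the loss being minimized
w_bar = expected_errors                     # mean error, a common baseline
grad_wrt_score = p * (word_errors - w_bar)  # per-hypothesis gradient signal
print(expected_errors, grad_wrt_score)
```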
Neural Machine Translation by Generating Multiple Linguistic Factors | Factored neural machine translation (FNMT) is founded on the idea of using
the morphological and grammatical decomposition of the words (factors) at the
output side of the neural network. This architecture addresses two well-known
problems occurring in MT, namely the size of target language vocabulary and the
number of unknown tokens produced in the translation. The FNMT system is designed
to manage larger vocabulary and reduce the training time (for systems with
equivalent target language vocabulary size). Moreover, we can produce
grammatically correct words that are not part of the vocabulary. The FNMT
model is evaluated on the IWSLT'15 English-to-French task and compared to the
baseline
word-based and BPE-based NMT systems. Promising qualitative and quantitative
results (in terms of BLEU and METEOR) are reported.
| 2,017 | Computation and Language |
No Need for a Lexicon? Evaluating the Value of the Pronunciation Lexica
in End-to-End Models | For decades, context-dependent phonemes have been the dominant sub-word unit
for conventional acoustic modeling systems. This status quo has begun to be
challenged recently by end-to-end models which seek to combine acoustic,
pronunciation, and language model components into a single neural network. Such
systems, which typically predict graphemes or words, simplify the recognition
process since they remove the need for a separate expert-curated pronunciation
lexicon to map from phoneme-based units to words. However, there has been
little previous work comparing phoneme-based versus grapheme-based sub-word
units in the end-to-end modeling framework, to determine whether the gains from
such approaches are primarily due to the new probabilistic model, or from the
joint learning of the various components with grapheme-based units.
In this work, we conduct detailed experiments which are aimed at quantifying
the value of phoneme-based pronunciation lexica in the context of end-to-end
models. We examine phoneme-based end-to-end models, which are contrasted
against grapheme-based ones on a large vocabulary English Voice-search task,
where we find that graphemes do indeed outperform phonemes. We also compare
grapheme and phoneme-based approaches on a multi-dialect English task, which
once again confirms the superiority of graphemes, greatly simplifying the system
for recognizing multiple dialects.
| 2,017 | Computation and Language |
Strong Baselines for Simple Question Answering over Knowledge Graphs
with and without Neural Networks | We examine the problem of question answering over knowledge graphs, focusing
on simple questions that can be answered by the lookup of a single fact.
Adopting a straightforward decomposition of the problem into entity detection,
entity linking, relation prediction, and evidence combination, we explore
simple yet strong baselines. On the popular SimpleQuestions dataset, we find
that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach
the state of the art, and techniques that do not use neural networks also
perform reasonably well. These results show that gains from sophisticated deep
learning techniques proposed in the literature are quite modest and that some
previous models exhibit unnecessary complexity.
| 2,018 | Computation and Language |
Dual Attention Network for Product Compatibility and Function
Satisfiability Analysis | Product compatibility and their functionality are of utmost importance to
customers when they purchase products, and to sellers and manufacturers when
they sell products. Due to the huge number of products available online, it is
infeasible to enumerate and test the compatibility and functionality of every
product. In this paper, we address two closely related problems: product
compatibility analysis and function satisfiability analysis, where the second
problem is a generalization of the first problem (e.g., whether a product works
with another product can be considered as a special function). We first
identify a novel question-answering corpus that is up-to-date regarding
product compatibility and functionality information. To allow automatic
discovery of product compatibility and functionality, we then propose a deep
learning model called Dual Attention Network (DAN). Given a QA pair for a
to-be-purchased product, DAN learns to 1) discover complementary products (or
functions), and 2) accurately predict the actual compatibility (or
satisfiability) of the discovered products (or functions). The challenges
addressed by the model include the briefness of QAs, linguistic patterns
indicating compatibility, and the appropriate fusion of questions and answers.
We conduct experiments to quantitatively and qualitatively show that the
identified products and functions have both high coverage and accuracy,
compared with a wide spectrum of baselines.
| 2,017 | Computation and Language |
Distance-based Self-Attention Network for Natural Language Inference | The attention mechanism has been used as an ancillary means to help RNNs or CNNs.
However, the Transformer (Vaswani et al., 2017) recently recorded the
state-of-the-art performance in machine translation with a dramatic reduction
in training time by solely using attention. Motivated by the Transformer,
Directional Self Attention Network (Shen et al., 2017), a fully attention-based
sentence encoder, was proposed. It showed good performance with various data by
using forward and backward directional information in a sentence. However,
their study did not consider the distance between words, an important feature
when learning local dependencies to help understand the context of input text.
We propose the Distance-based Self-Attention Network, which considers word
distance by using a simple distance mask in order to model local dependencies
without losing attention's inherent ability to model global dependencies. Our
model shows good performance with NLI data, and it
records the new state-of-the-art result with SNLI data. Additionally, we show
that our model has a strength in long sentences or documents.
| 2,017 | Computation and Language |
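A simplified NumPy sketch of distance-masked self-attention: a penalty proportional to token distance is added to the attention logits, biasing each position toward nearby words. The exact mask in the paper may differ; this is an illustrative form.

```python
# Hedged sketch: self-attention with a linear distance penalty on the logits.
import numpy as np

def distance_masked_attention(Q, K, V, alpha=0.5):
    n, d = Q.shape
    logits = Q @ K.T / np.sqrt(d)                   # standard scaled dot-product
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])      # |i - j| for all pairs
    logits = logits - alpha * dist                  # penalize distant tokens
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))                         # 6 tokens, dim 8
print(distance_masked_attention(X, X, X).shape)     # (6, 8)
```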
Multi-channel Encoder for Neural Machine Translation | The attention-based encoder-decoder is an effective architecture for neural
machine translation (NMT); it typically relies on recurrent neural networks
(RNNs) to build the blocks that are later attended to by the decoder during
decoding. This encoder design yields a relatively uniform composition of the
source sentence, despite the gating mechanism employed in the encoding RNN. On
the other hand, we often want the decoder to take pieces of the source
sentence at varying levels suiting its own linguistic structure: for
example, we may want to take the entity name in its raw form while taking an
idiom as a perfectly composed unit. Motivated by this demand, we propose
Multi-channel Encoder (MCE), which enhances encoding components with different
levels of composition. More specifically, in addition to the hidden state of
encoding RNN, MCE takes 1) the original word embedding for raw encoding with no
composition, and 2) a particular design of external memory in Neural Turing
Machine (NTM) for more complex composition, while all three encoding strategies
are properly blended during decoding. Empirical study on Chinese-English
translation shows that our model can improve by 6.52 BLEU points upon a strong
open-source NMT system: DL4MT. On the WMT14 English-French task, our single
shallow system achieves BLEU=38.8, comparable with the state-of-the-art deep
models.
| 2,017 | Computation and Language |
A Novel Embedding Model for Knowledge Base Completion Based on
Convolutional Neural Network | In this paper, we propose a novel embedding model, named ConvKB, for
knowledge base completion. Our model ConvKB advances state-of-the-art models by
employing a convolutional neural network, so that it can capture global
relationships and transitional characteristics between entities and relations
in knowledge bases. In ConvKB, each triple (head entity, relation, tail entity)
is represented as a 3-column matrix where each column vector represents a
triple element. This 3-column matrix is then fed to a convolution layer where
multiple filters are operated on the matrix to generate different feature maps.
These feature maps are then concatenated into a single feature vector
representing the input triple. The feature vector is multiplied with a weight
vector via a dot product to return a score. This score is then used to predict
whether the triple is valid or not. Experiments show that ConvKB achieves
better link prediction performance than previous state-of-the-art embedding
models on two benchmark datasets WN18RR and FB15k-237.
| 2,018 | Computation and Language |
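A PyTorch sketch of ConvKB-style scoring as described above: a triple is a k x 3 matrix of embeddings, 1x3 filters slide over its rows, and the concatenated feature maps are scored with a dot product. The dimensions are toy values, not the paper's settings.

```python
# Hedged sketch: convolutional triple scoring in the style of ConvKB.
import torch
import torch.nn as nn

class ConvKB(nn.Module):
    def __init__(self, k=50, num_filters=8):
        super().__init__()
        self.conv = nn.Conv2d(1, num_filters, kernel_size=(1, 3))
        self.w = nn.Linear(num_filters * k, 1, bias=False)  # scoring vector

    def forward(self, h, r, t):
        # h, r, t: (batch, k) entity/relation embeddings
        x = torch.stack([h, r, t], dim=2).unsqueeze(1)  # (batch, 1, k, 3)
        feats = torch.relu(self.conv(x))                # (batch, F, k, 1)
        return self.w(feats.flatten(1))                 # (batch, 1) triple score

model = ConvKB(k=50)
h, r, t = torch.randn(4, 50), torch.randn(4, 50), torch.randn(4, 50)
print(model(h, r, t).shape)  # torch.Size([4, 1])
```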
Product Function Need Recognition via Semi-supervised Attention Network | Functionality is of utmost importance to customers when they purchase
products. However, it is unclear to customers whether a product can really
satisfy their needs on functions. Further, missing functions may be
intentionally hidden by the manufacturers or the sellers. As a result, a
customer needs to spend a fair amount of time before purchasing or just
purchase the product on his/her own risk. In this paper, we first identify a
novel QA corpus that is dense on product functionality information
\footnote{The annotated corpus can be found at
\url{https://www.cs.uic.edu/~hxu/}.}. We then design a neural network called
Semi-supervised Attention Network (SAN) to discover product functions from
questions. This model leverages unlabeled data as contextual information to
perform semi-supervised sequence labeling. We conduct experiments to show that
the extracted functions have both high coverage and accuracy, compared with a
wide spectrum of baselines.
| 2,017 | Computation and Language |
Discourse-Aware Rumour Stance Classification in Social Media Using
Sequential Classifiers | Rumour stance classification, defined as classifying the stance of specific
social media posts into one of supporting, denying, querying or commenting on
an earlier post, is becoming of increasing interest to researchers. While most
previous work has focused on using individual tweets as classifier inputs, here
we report on the performance of sequential classifiers that exploit the
discourse features inherent in social media interactions or 'conversational
threads'. Testing the effectiveness of four sequential classifiers -- Hawkes
Processes, Linear-Chain Conditional Random Fields (Linear CRF), Tree-Structured
Conditional Random Fields (Tree CRF) and Long Short Term Memory networks (LSTM)
-- on eight datasets associated with breaking news stories, and looking at
different types of local and contextual features, our work sheds new light on
the development of accurate stance classifiers. We show that sequential
classifiers that exploit the use of discourse properties in social media
conversations while using only local features, outperform non-sequential
classifiers. Furthermore, we show that LSTM using a reduced set of features can
outperform the other sequential classifiers; this performance is consistent
across datasets and across types of stances. To conclude, our work also
analyses the different features under study, identifying those that best help
characterise and distinguish between stances, such as supporting tweets being
more likely to be accompanied by evidence than denying tweets. We also set
forth a number of directions for future research.
| 2,018 | Computation and Language |
Why Do Neural Dialog Systems Generate Short and Meaningless Replies? A
Comparison between Dialog and Translation | This paper addresses the question: Why do neural dialog systems generate
short and meaningless replies? We conjecture that, in a dialog system, an
utterance may have multiple equally plausible replies, causing the deficiency
of neural networks in the dialog application. We propose a systematic way to
mimic the dialog scenario in a machine translation system, and manage to
reproduce the phenomenon of generating short and less meaningful sentences in
the translation setting, showing evidence of our conjecture.
| 2,017 | Computation and Language |
A Corpus of Deep Argumentative Structures as an Explanation to
Argumentative Relations | In this paper, we compose a new task for deep argumentative structure
analysis that goes beyond shallow discourse structure analysis. The idea is
that argumentative relations can reasonably be represented with a small set of
predefined patterns. For example, using value judgment and bipolar causality,
we can explain a support relation between two argumentative segments as
follows: Segment 1 states that something is good, and Segment 2 states that it
is good because it promotes something good when it happens. We are motivated by
the following questions: (i) how do we formulate the task?, (ii) can a
reasonable pattern set be created?, and (iii) do the patterns work? To examine
the task feasibility, we conduct a three-stage, detailed annotation study using
357 argumentative relations from the argumentative microtext corpus, a small,
but highly reliable corpus. We report the coverage of explanations captured by
our patterns on a test set composed of 270 relations. Our coverage result of
74.6% indicates that argumentative relations can reasonably be explained by our
small pattern set. Our agreement result of 85.9% shows that a reasonable
inter-annotator agreement can be achieved. To assist with future work in
computational argumentation, the annotated corpus is made publicly available.
| 2,017 | Computation and Language |
Hungarian Layer: Logics Empowered Neural Architecture | A neural architecture is a purely numeric framework, which fits the data as a
continuous function. However, lacking logic flow (e.g. \textit{if, for,
while}), traditional algorithms (e.g. \textit{Hungarian algorithm, A$^*$
search, decision tree algorithm}) cannot be embedded into this paradigm, which
limits its theory and applications. In this paper, we reformulate the
calculus graph as a dynamic process, which is guided by logic flow. Within our
novel methodology, traditional algorithms can empower numerical neural
networks. Specifically, for the task of sentence matching, we recast the
problem as one of task assignment, which is solved by the Hungarian algorithm.
First, our model applies a BiLSTM to parse the sentences. Then the Hungarian
layer aligns the matching positions. Finally, we transform the matching
results for softmax regression with another BiLSTM. Extensive
experiments show that our model outperforms other state-of-the-art baselines
substantially.
| 2,018 | Computation and Language |
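A hedged sketch of the task-assignment view of sentence matching: build a word-similarity matrix between two sentences and align positions with the Hungarian algorithm (SciPy's linear_sum_assignment minimizes cost, so similarity is negated). The random embeddings stand in for BiLSTM states.

```python
# Hedged sketch: Hungarian alignment over a token-similarity matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
sent_a = rng.normal(size=(4, 16))   # 4 tokens, 16-d states
sent_b = rng.normal(size=(4, 16))

sim = sent_a @ sent_b.T                    # pairwise similarity matrix
rows, cols = linear_sum_assignment(-sim)   # maximize total similarity
for i, j in zip(rows, cols):
    print(f"token {i} of A aligned to token {j} of B (sim={sim[i, j]:+.2f})")
```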
Topics and Label Propagation: Best of Both Worlds for Weakly Supervised
Text Classification | We propose a Label Propagation based algorithm for weakly supervised text
classification. We construct a graph where each document is represented by a
node and edge weights represent similarities among the documents. Additionally,
we discover underlying topics using Latent Dirichlet Allocation (LDA) and
enrich the document graph by including the topics in the form of additional
nodes. The edge weights between a topic and a text document represent the level of
"affinity" between them. Our approach does not require document level
labelling, instead it expects manual labels only for topic nodes. This
significantly minimizes the level of supervision needed as only a few topics
are observed to be enough for achieving sufficiently high accuracy. The Label
Propagation Algorithm is employed on this enriched graph to propagate labels
among the nodes. Our approach combines the advantages of Label Propagation
(through document-document similarities) and Topic Modelling (for minimal but
smart supervision). We demonstrate the effectiveness of our approach on various
datasets and compare with state-of-the-art weakly supervised text
classification approaches.
| 2,017 | Computation and Language |
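A compact NumPy sketch of label propagation on the enriched graph described above: rows 0-3 are documents, rows 4-5 are LDA topic nodes, and only the topic nodes carry manual labels. The weights and labels are illustrative.

```python
# Hedged sketch: iterative label propagation with clamped topic-node labels.
import numpy as np

W = np.array([                        # symmetric doc/doc and doc/topic affinities
    [0, 1, 0, 0, 1, 0],
    [1, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
], dtype=float)
P = W / W.sum(axis=1, keepdims=True)  # row-normalized transition matrix

F = np.zeros((6, 2))                  # label scores over 2 classes
F[4] = [1, 0]                         # topic node 4 labeled class 0
F[5] = [0, 1]                         # topic node 5 labeled class 1

for _ in range(50):
    F = P @ F
    F[4] = [1, 0]; F[5] = [0, 1]      # clamp the labeled topic nodes

print(F[:4].argmax(axis=1))           # propagated document labels -> [0 0 1 1]
```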
Convolutional Neural Networks for Medical Diagnosis from Admission Notes | $\textbf{Objective}$ Develop an automatic diagnostic system which only uses
textual admission information from Electronic Health Records (EHRs) and assists
clinicians with a timely and statistically proven decision tool. The hope is
that the tool can be used to reduce misdiagnosis.
$\textbf{Materials and Methods}$ We use the real-world clinical notes from
MIMIC-III, a freely available dataset consisting of clinical data of more than
forty thousand patients who stayed in intensive care units of the Beth Israel
Deaconess Medical Center between 2001 and 2012. We propose a Convolutional
Neural Network model to learn semantic features from unstructured textual input
and automatically predict primary discharge diagnosis.
$\textbf{Results}$ The proposed model achieved an overall 96.11% accuracy and
80.48% weighted F1 score values on 10 most frequent disease classes,
significantly outperforming four strong baseline models by at least 12.7% in
weighted F1 score.
$\textbf{Discussion}$ Experimental results imply that the CNN model is
suitable for supporting diagnosis decision making in the presence of complex,
noisy and unstructured clinical data, while at the same time using fewer
layers and parameters than other traditional deep network models.
$\textbf{Conclusion}$ Our model demonstrated the capability of representing
complex, medically meaningful features from unstructured clinical notes and
predictive power for commonly misdiagnosed frequent diseases. It can be easily
adopted in clinical settings to provide timely and statistically proven
decision support.
$\textbf{Keywords}$ Convolutional neural network, text classification,
discharge diagnosis prediction, admission information from EHRs.
| 2,017 | Computation and Language |
Effective Neural Solution for Multi-Criteria Word Segmentation | We present a simple yet elegant solution to train a single joint model on
multi-criteria corpora for Chinese Word Segmentation (CWS). Our novel design
requires no private layers in model architecture, instead, introduces two
artificial tokens at the beginning and ending of input sentence to specify the
required target criteria. The rest of the model including Long Short-Term
Memory (LSTM) layer and Conditional Random Fields (CRFs) layer remains
unchanged and is shared across all datasets, keeping the size of parameter
collection minimal and constant. On Bakeoff 2005 and Bakeoff 2008 datasets, our
innovative design has surpassed both single-criterion and multi-criteria
state-of-the-art learning results. To the best of our knowledge, our design is the
first one that has achieved the latest high performance on such large scale
datasets. Source codes and corpora of this paper are available on GitHub.
| 2,018 | Computation and Language |
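The multi-criteria trick described above reduces to wrapping each input sentence in two artificial tokens naming the target segmentation criterion; this sketch illustrates the idea, with hypothetical tag names.

```python
# Hedged sketch: artificial boundary tokens signalling the target criterion.
def add_criterion_tokens(chars, criterion):
    return [f"<{criterion}>"] + list(chars) + [f"</{criterion}>"]

print(add_criterion_tokens("商品和服务", "pku"))
# -> ['<pku>', '商', '品', '和', '服', '务', '</pku>']
# The shared BiLSTM-CRF then tags the inner characters according to the
# criterion signalled by the boundary tokens.
```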
Sequence to Sequence Networks for Roman-Urdu to Urdu Transliteration | Neural machine translation models have replaced conventional phrase-based
statistical translation methods since the former takes a generic, scalable,
data-driven approach rather than relying on manual, hand-crafted features. The
neural machine translation system is based on one neural network that is
composed of two parts: one responsible for the input-language sentence and
another that handles the desired output-language sentence. This model, based
on the encoder-decoder architecture, also takes as input the distributed
representations of the source language which enriches the learnt dependencies
and gives a warm start to the network. In this work, we cast Roman-Urdu to
Urdu transliteration as a sequence-to-sequence learning problem. To this end,
we make the following contributions: we create the first parallel corpus of
Roman-Urdu to Urdu, create the first distributed representations of Roman-Urdu,
and present the first neural machine translation model that transliterates text
from Roman-Urdu to Urdu. Our model has achieved
state-of-the-art results using BLEU as the evaluation metric. Specifically, our
model is able to correctly predict sentences of up to length 10 while achieving
a BLEU score of 48.6 on the test set. We are hopeful that our model and
our results shall serve as the baseline for further work in the domain of
neural machine translation for Roman-Urdu to Urdu using distributed
representations.
| 2,017 | Computation and Language |
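A minimal sketch of the encoder-decoder setup the abstract describes, at the character level and without attention; the vocabulary sizes and dimensions are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """GRU encoder-decoder for character-level transliteration (a sketch)."""
    def __init__(self, src_vocab, tgt_vocab, hidden=256, embed=128):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, embed)
        self.tgt_embed = nn.Embedding(tgt_vocab, embed)
        self.encoder = nn.GRU(embed, hidden, batch_first=True)
        self.decoder = nn.GRU(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # The encoder's final hidden state summarizes the Roman-Urdu input
        # and warm-starts the decoder over Urdu characters.
        _, h = self.encoder(self.src_embed(src_ids))
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), h)
        return self.out(dec_out)          # per-step Urdu character logits

model = Seq2Seq(src_vocab=60, tgt_vocab=70)  # assumed character vocabularies
logits = model(torch.randint(0, 60, (2, 12)), torch.randint(0, 70, (2, 14)))
print(logits.shape)                          # torch.Size([2, 14, 70])
```

In training, `tgt_ids` would be the gold Urdu sequence shifted right (teacher forcing); at inference the decoder is unrolled one character at a time.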
Building competitive direct acoustics-to-word models for English
conversational speech recognition | Direct acoustics-to-word (A2W) models in the end-to-end paradigm have
received increasing attention compared to conventional sub-word based automatic
speech recognition models using phones, characters, or context-dependent hidden
Markov model states. This is because A2W models recognize words from speech
without any decoder, pronunciation lexicon, or externally-trained language
model, making training and decoding with such models simple. Prior work has
shown that A2W models require orders of magnitude more training data in order
to perform comparably to conventional models. Our work also showed this
accuracy gap when using the English Switchboard-Fisher data set. This paper
describes a recipe to train an A2W model that closes this gap and is on par
with state-of-the-art sub-word based models (a sketch follows this entry). We
achieve a word error rate of
8.8%/13.9% on the Hub5-2000 Switchboard/CallHome test sets without any decoder
or language model. We find that model initialization, training data order, and
regularization have the most impact on the A2W model performance. Next, we
present a joint word-character A2W model that learns to first spell the word
and then recognize it. This model provides a rich output to the user instead of
simple word hypotheses, making it especially useful in the case of words unseen
or rarely seen during training.
| 2,017 | Computation and Language |
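The abstract stops short of naming the training objective; a common choice in prior A2W work is a CTC-style loss over a word vocabulary, and the sketch below assumes that setup. The feature dimensions, network sizes, and vocabulary size are all illustrative.

```python
import torch
import torch.nn as nn

vocab_size = 10000                      # assumed word vocabulary (+1 CTC blank)
encoder = nn.LSTM(input_size=40, hidden_size=320, num_layers=2,
                  bidirectional=True, batch_first=True)
proj = nn.Linear(2 * 320, vocab_size + 1)
ctc = nn.CTCLoss(blank=vocab_size)      # last index reserved for the blank

feats = torch.randn(2, 200, 40)         # 2 utterances, 200 frames, 40-dim fbank
out, _ = encoder(feats)
log_probs = proj(out).log_softmax(-1).transpose(0, 1)  # (T, batch, classes)
targets = torch.randint(0, vocab_size, (2, 8))         # 8 word labels each
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 200, dtype=torch.long),
           target_lengths=torch.full((2,), 8, dtype=torch.long))
print(loss.item())
```

Decoding such a model is just a per-frame argmax with blank/repeat collapsing, which is why no pronunciation lexicon or external language model is needed.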
Characterizing the hyper-parameter space of LSTM language models for
mixed context applications | Applying state-of-the-art deep learning models to
novel real-world datasets gives a practical evaluation of the generalizability
of these models. Of importance in this process is how sensitive the
hyper-parameters of such models are to novel datasets, as this affects the
reproducibility of a model. We present work to characterize the hyper-parameter
space of an LSTM for language modeling on a code-mixed corpus (a sweep sketch
follows this entry). We observe that the evaluated model shows minimal
sensitivity to our novel dataset apart from a few hyper-parameters.
| 2,017 | Computation and Language |
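A minimal sketch of the kind of grid characterization such a study implies: enumerate an LSTM-LM hyper-parameter grid and record validation perplexity per setting. The grid values and the `train_and_eval` stub are illustrative placeholders, not the paper's protocol.

```python
import itertools

grid = {
    "embedding_dim": [128, 256],
    "hidden_dim": [256, 512],
    "dropout": [0.0, 0.3, 0.5],
    "learning_rate": [1e-3, 1e-4],
}

def train_and_eval(config):
    """Placeholder: train an LSTM LM on the code-mixed corpus and return
    validation perplexity. A real run would replace this stub."""
    return 100.0  # dummy value

results = []
for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    results.append((config, train_and_eval(config)))

# Sensitivity shows up as large perplexity spread along one grid axis.
best_config, best_ppl = min(results, key=lambda r: r[1])
print(best_config, best_ppl)
```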
Word Sense Disambiguation with LSTM: Do We Really Need 100 Billion
Words? | Recently, Yuan et al. (2016) have shown the effectiveness of using Long
Short-Term Memory (LSTM) for performing Word Sense Disambiguation (WSD). Their
proposed technique outperformed the previous state of the art on several
benchmarks, but neither the training data nor the source code was released.
This paper presents the results of a reproduction study of this technique using
only openly available datasets (GigaWord, SemCor, OMSTI) and software
(TensorFlow). From these experiments, it emerged that state-of-the-art results
can be obtained with far less data than suggested by Yuan et al. (a sketch of
the disambiguation step follows this entry). All code and trained
models are made freely available.
| 2,017 | Computation and Language |
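A minimal sketch of the Yuan et al. (2016)-style disambiguation step this reproduction targets, as I understand it: encode the target word's context with the LSTM, average the encodings of each sense's labeled examples (e.g. from SemCor), and pick the nearest centroid by cosine similarity. The random vectors below are placeholders standing in for real LSTM states, and the sense keys are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
context_vec = rng.normal(size=128)       # stand-in for the LSTM context encoding

sense_examples = {                       # stand-ins for labeled example encodings
    "bank%finance": rng.normal(size=(12, 128)),
    "bank%river": rng.normal(size=(7, 128)),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# One centroid per sense; the predicted sense is the nearest centroid.
centroids = {s: v.mean(axis=0) for s, v in sense_examples.items()}
prediction = max(centroids, key=lambda s: cosine(context_vec, centroids[s]))
print(prediction)
```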