Titles | Abstracts | Years | Categories |
---|---|---|---|
Authorship Analysis of Xenophon's Cyropaedia | In the past several decades, many authorship attribution studies have used
computational methods to determine the authors of disputed texts. Disputed
authorship is a common problem in Classics, since little information about
ancient documents has survived the centuries. Many scholars have questioned the
authenticity of the final chapter of Xenophon's Cyropaedia, a 4th century B.C.
historical text. In this study, we use N-gram frequency vectors with a cosine
similarity function and word frequency vectors with Naive Bayes Classifiers
(NBC) and Support Vector Machines (SVM) to analyze the authorship of the
Cyropaedia. Although the N-gram analysis shows that the epilogue of the
Cyropaedia differs slightly from the rest of the work, comparing the analysis
of Xenophon with analyses of Aristotle and Plato suggests that this difference
is not significant. Both NBC and SVM analyses of word frequencies show that the
final chapter of the Cyropaedia is closely related to the other chapters of the
Cyropaedia. Therefore, this analysis suggests that the disputed chapter was
written by Xenophon. This information can help scholars better understand the
Cyropaedia and also demonstrates the usefulness of applying modern authorship
analysis techniques to classical literature.
| 2017 | Computation and Language |
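The core of the N-gram comparison in the authorship study above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration (not the authors' code): it builds character n-gram frequency vectors for two text samples and compares them with cosine similarity.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Return a frequency vector (Counter) of character n-grams."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Toy usage: compare a "disputed" passage against two candidate samples.
disputed = "the king ordered his men to march at dawn"
sample_a = "the king commanded his men to march before dawn"
sample_b = "we compute embeddings with stochastic gradient descent"

for name, sample in [("sample_a", sample_a), ("sample_b", sample_b)]:
    sim = cosine_similarity(char_ngrams(disputed), char_ngrams(sample))
    print(f"similarity to {name}: {sim:.3f}")
```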
Distributed Representation for Traditional Chinese Medicine Herb via
Deep Learning Models | Traditional Chinese Medicine (TCM) has accumulated a large amount of precious resources over its long history of development. TCM prescriptions, which consist of TCM herbs, are an important form of TCM treatment and are similar to natural language documents, but in a weakly ordered fashion. Directly adapting language-modeling-style methods to learn embeddings of the herbs can be problematic because the herbs are not strictly ordered: herbs at the front of a prescription can be connected to the very last ones. In this paper, we propose to represent TCM herbs with distributed representations via Prescription Level Language Modeling (PLLM). In one of our experiments, the correlation between our calculated similarity between medicines and the judgment of professionals achieves a Spearman score of 55.35, indicating a strong correlation, which surpasses human beginners (bachelor students in TCM-related fields) by a large margin (over 10%).
| 2017 | Computation and Language |
A Survey on Dialogue Systems: Recent Advances and New Frontiers | Dialogue systems have attracted increasing attention. Recent advances in dialogue systems are overwhelmingly driven by deep learning techniques,
which have been employed to enhance a wide range of big data applications such
as computer vision, natural language processing, and recommender systems. For
dialogue systems, deep learning can leverage a massive amount of data to learn
meaningful feature representations and response generation strategies, while
requiring a minimal amount of hand-crafting. In this article, we give an overview of these recent advances in dialogue systems from various perspectives
and discuss some possible research directions. In particular, we generally
divide existing dialogue systems into task-oriented and non-task-oriented
models, then detail how deep learning techniques help them with representative
algorithms and finally discuss some appealing research directions that can
bring the dialogue system research into a new frontier.
| 2020 | Computation and Language |
Evaluation of Croatian Word Embeddings | Croatian is a poorly resourced and highly inflected language from the Slavic language family. Nowadays, research focuses mostly on English. We created a new word analogy corpus based on the original English Word2vec word analogy corpus and added some specific linguistic aspects of the Croatian language. Next, we created Croatian WordSim353 and RG65 corpora for a basic evaluation of word similarities. We compared the created corpora on two popular word representation models, based on the Word2Vec and fastText tools. The models were trained on a 1.37B-token training corpus and tested on the new robust Croatian word analogy corpus. Results show that the models are able to create meaningful word representations. This research has shown that the free word order and the higher morphological complexity of the Croatian language influence the quality of the resulting word embeddings.
| 2017 | Computation and Language |
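As a rough illustration of the word analogy evaluation mentioned in the abstract above (a : b :: c : ?), the sketch below answers an analogy query by vector arithmetic over toy embeddings; the vectors and vocabulary are invented for the example and are not from the evaluated models.

```python
import numpy as np

# Toy embedding table; in practice these come from Word2Vec or fastText.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
}

def analogy(a, b, c, embeddings):
    """Solve a : b :: c : ? by returning the nearest word to b - a + c."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    best, best_sim = None, -1.0
    for word, vec in embeddings.items():
        if word in (a, b, c):
            continue  # exclude query words, as in the standard evaluation
        sim = float(np.dot(target, vec) /
                    (np.linalg.norm(target) * np.linalg.norm(vec)))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "king", "woman", emb))  # expected: "queen"
```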
Fine-tuning Tree-LSTM for phrase-level sentiment classification on a
Polish dependency treebank. Submission to PolEval task 2 | We describe a variant of the Child-Sum Tree-LSTM deep neural network (Tai et al., 2015) fine-tuned for working with dependency trees and morphologically rich languages, using the example of Polish. Fine-tuning included applying a custom regularization technique (zoneout, described by Krueger et al. (2016) and further adapted for Tree-LSTMs) as well as using pre-trained word embeddings enhanced with sub-word information (Bojanowski et al., 2016). The system was implemented in PyTorch and evaluated on a phrase-level sentiment labeling task as part of the PolEval competition.
| 2017 | Computation and Language |
Hi, how can I help you?: Automating enterprise IT support help desks | Question answering is one of the primary challenges of natural language
understanding. In realizing such a system, providing complex long answers to
questions is a challenging task as opposed to factoid answering as the former
needs context disambiguation. The different methods explored in the literature
can be broadly classified into three categories namely: 1) classification
based, 2) knowledge graph based, and 3) retrieval based. Individually, none of them addresses the need for an enterprise-wide assistance system for an IT support and maintenance domain. In this domain the variance of answers is large, ranging from factoid to structured operating procedures; the knowledge is spread across heterogeneous data sources like application-specific documentation and ticket management systems, and any single technique for general-purpose assistance is unable to scale to such a landscape. To address this, we have built a cognitive platform with capabilities adapted for this domain. Further, we have built a general-purpose question answering system leveraging the platform that can be instantiated for multiple products and technologies in the
support domain. The system uses a novel hybrid answering model that
orchestrates across a deep learning classifier, a knowledge graph based context
disambiguation module and a sophisticated bag-of-words search system. This
orchestration performs context switching for a provided question and also does
a smooth hand-off of the question to a human expert if none of the automated
techniques can provide a confident answer. This system has been deployed across
675 internal enterprise IT support and maintenance projects.
| 2017 | Computation and Language |
Neural Language Modeling by Jointly Learning Syntax and Lexicon | We propose a neural language model capable of unsupervised syntactic
structure induction. The model leverages the structure information to form
better semantic representations and better language modeling. Standard
recurrent neural networks are limited by their structure and fail to
efficiently use syntactic information. On the other hand, tree-structured
recursive networks usually require additional structural supervision at the
cost of human expert annotation. In this paper, we propose a novel neural
language model, called the Parsing-Reading-Predict Networks (PRPN), that can
simultaneously induce the syntactic structure from unannotated sentences and
leverage the inferred structure to learn a better language model. In our model,
the gradient can be directly back-propagated from the language model loss into
the neural parsing network. Experiments show that the proposed model can
discover the underlying syntactic structure and achieve state-of-the-art
performance on word- and character-level language modeling tasks.
| 2018 | Computation and Language |
Neural Speed Reading via Skim-RNN | Inspired by the principles of speed reading, we introduce Skim-RNN, a
recurrent neural network (RNN) that dynamically decides to update only a small
fraction of the hidden state for relatively unimportant input tokens. Skim-RNN
offers a computational advantage over an RNN that always updates the entire hidden
state. Skim-RNN uses the same input and output interfaces as a standard RNN and
can be easily used instead of RNNs in existing models. In our experiments, we
show that Skim-RNN can achieve significantly reduced computational cost without
losing accuracy compared to standard RNNs across five different natural
language tasks. In addition, we demonstrate that the trade-off between accuracy
and speed of Skim-RNN can be dynamically controlled during inference time in a
stable manner. Our analysis also shows that Skim-RNN running on a single CPU
offers lower latency compared to standard RNNs on GPUs.
| 2018 | Computation and Language |
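A minimal sketch of the skimming idea described in the Skim-RNN abstract above, not the authors' implementation: at each step a small "skim decision" chooses between a full hidden-state update and a cheap update that touches only the first few dimensions. The parameters and the skim decision rule here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_small, d_in = 8, 2, 4          # full size, skimmed size, input size

# Hypothetical parameters for a full RNN cell and a small (skim) cell.
W_full = rng.normal(size=(d, d + d_in)) * 0.1
W_small = rng.normal(size=(d_small, d_small + d_in)) * 0.1

def step(h, x, skim):
    """One recurrent step; if `skim`, update only the first d_small dims."""
    if skim:
        h_new = h.copy()
        h_new[:d_small] = np.tanh(W_small @ np.concatenate([h[:d_small], x]))
        return h_new
    return np.tanh(W_full @ np.concatenate([h, x]))

h = np.zeros(d)
for t in range(6):
    x = rng.normal(size=d_in)
    skim = bool(t % 2)              # stand-in for a learned skim decision
    h = step(h, x, skim)
    print(f"t={t} skim={skim} |h|={np.linalg.norm(h):.3f}")
```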
TAMU at KBP 2017: Event Nugget Detection and Coreference Resolution | In this paper, we describe TAMU's system submitted to the TAC KBP 2017 event
nugget detection and coreference resolution task. Our system builds on the
statistical and empirical observations made on training and development data.
We found that modifiers of event nuggets tend to have a unique syntactic distribution. Their parts-of-speech tags and dependency relations provide essential characteristics that are useful in identifying their span and also in defining their types and realis status. We further found that the joint modeling of event span detection and realis status identification performs better than the individual models for both tasks. Our simple system, designed using minimal features, achieved micro-average F1 scores of 57.72, 44.27, and 42.47 for the event span detection, type identification, and realis status classification tasks, respectively. Also, our system achieved a CoNLL F1 score of 27.20 on the event coreference resolution task.
| 2018 | Computation and Language |
Synthetic and Natural Noise Both Break Neural Machine Translation | Character-based neural machine translation (NMT) models alleviate
out-of-vocabulary issues, learn morphology, and move us closer to completely
end-to-end translation systems. Unfortunately, they are also very brittle and
easily falter when presented with noisy data. In this paper, we confront NMT
models with synthetic and natural sources of noise. We find that
state-of-the-art models fail to translate even moderately noisy texts that
humans have no trouble comprehending. We explore two approaches to increase
model robustness: structure-invariant word representations and robust training
on noisy texts. We find that a model based on a character convolutional neural
network is able to simultaneously learn representations robust to multiple
kinds of noise.
| 2018 | Computation and Language |
Towards Language-Universal End-to-End Speech Recognition | Building speech recognizers in multiple languages typically involves
replicating a monolingual training recipe for each language, or utilizing a
multi-task learning approach where models for different languages have separate
output labels but share some internal parameters. In this work, we exploit
recent progress in end-to-end speech recognition to create a single
multilingual speech recognition system capable of recognizing any of the
languages seen in training. To do so, we propose the use of a universal
character set that is shared among all languages. We also create a
language-specific gating mechanism within the network that can modulate the
network's internal representations in a language-specific way. We evaluate our
proposed approach on the Microsoft Cortana task across three languages and show
that our system outperforms both the individual monolingual systems and systems
built with a multi-task learning approach. We also show that this model can be
used to initialize a monolingual speech recognizer, and can be used to create a
bilingual model for use in code-switching scenarios.
| 2017 | Computation and Language |
Improved training for online end-to-end speech recognition systems | Achieving high accuracy with end-to-end speech recognizers requires careful
parameter initialization prior to training. Otherwise, the networks may fail to
find a good local optimum. This is particularly true for online networks, such
as unidirectional LSTMs. Currently, the best strategy to train such systems is
to bootstrap the training from a tied-triphone system. However, this is time
consuming, and more importantly, is impossible for languages without a
high-quality pronunciation lexicon. In this work, we propose an initialization
strategy that uses teacher-student learning to transfer knowledge from a large,
well-trained, offline end-to-end speech recognition model to an online
end-to-end model, eliminating the need for a lexicon or any other linguistic
resources. We also explore curriculum learning and label smoothing and show how
they can be combined with the proposed teacher-student learning for further
improvements. We evaluate our methods on a Microsoft Cortana personal assistant
task and show that the proposed method results in a 19% relative improvement
in word error rate compared to a randomly-initialized baseline system.
| 2018 | Computation and Language |
Non-Autoregressive Neural Machine Translation | Existing approaches to neural machine translation condition each output word
on previously generated outputs. We introduce a model that avoids this
autoregressive property and produces its outputs in parallel, allowing an order
of magnitude lower latency during inference. Through knowledge distillation,
the use of input token fertilities as a latent variable, and policy gradient
fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative
to the autoregressive Transformer network used as a teacher. We demonstrate
substantial cumulative improvements associated with each of the three aspects
of our training strategy, and validate our approach on IWSLT 2016
English-German and two WMT language pairs. By sampling fertilities in parallel
at inference time, our non-autoregressive model achieves near-state-of-the-art
performance of 29.8 BLEU on WMT 2016 English-Romanian.
| 2018 | Computation and Language |
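To make the fertility idea from the non-autoregressive NMT abstract above concrete, here is a toy, hypothetical sketch: each source token is assigned an integer fertility, the decoder input is built by copying each source token that many times, and all target positions could then be predicted in parallel. The tokens and fertility values are made up for the example.

```python
def build_decoder_input(source_tokens, fertilities):
    """Copy each source token `fertility` times to form the decoder input."""
    assert len(source_tokens) == len(fertilities)
    decoder_input = []
    for token, fertility in zip(source_tokens, fertilities):
        decoder_input.extend([token] * fertility)
    return decoder_input

# Toy example: fertility 0 drops a token, fertility 2 duplicates one.
source = ["wir", "haben", "es", "geschafft"]
fertilities = [1, 1, 0, 2]  # would be predicted by a fertility model
print(build_decoder_input(source, fertilities))
# ['wir', 'haben', 'geschafft', 'geschafft'] -> the target length is 4,
# so all four output words can be generated in parallel.
```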
Structure Regularized Bidirectional Recurrent Convolutional Neural
Network for Relation Classification | Relation classification is an important semantic processing task in the field
of natural language processing (NLP). In this paper, we present a novel model,
Structure Regularized Bidirectional Recurrent Convolutional Neural
Network (SR-BRCNN), to classify the relation between two entities in a sentence, and a new dataset of Chinese Sanwen for named entity recognition and relation classification. Some state-of-the-art systems concentrate on modeling the
shortest dependency path (SDP) between two entities leveraging convolutional or
recurrent neural networks. We further explore how to make full use of the
dependency relations information in the SDP and how to improve the model by the
method of structure regularization. We propose a structure regularized model to
learn relation representations along the SDP extracted from the forest formed
by the structure regularized dependency tree, which helps reduce the complexity of the whole model and improves the $F_{1}$ score by 10.3. Experimental results show that our method outperforms state-of-the-art approaches on the Chinese Sanwen task and performs comparably on the SemEval-2010 Task 8 dataset\footnote{The Chinese Sanwen corpus developed and used in this paper will be released in the future.}.
| 2017 | Computation and Language |
Extractive Multi-document Summarization Using Multilayer Networks | Huge volumes of textual information are produced every single day. In order to organize and understand such large datasets, summarization techniques have become popular in recent years. These techniques aim at finding relevant, concise, and non-redundant content in such big data. While network
methods have been adopted to model texts in some scenarios, a systematic
evaluation of multilayer network models in the multi-document summarization
task has been limited to a few studies. Here, we evaluate the performance of a
multilayer-based method to select the most relevant sentences in the context of
an extractive multi-document summarization (MDS) task. In the adopted model,
nodes represent sentences and edges are created based on the number of shared
words between sentences. Differently from previous studies in multi-document
summarization, we make a distinction between edges linking sentences from
different documents (inter-layer) and those connecting sentences from the same
document (intra-layer). As a proof of principle, our results reveal that such a
discrimination between intra- and inter-layer in a multilayered representation
is able to improve the quality of the generated summaries. This piece of
information could be used to improve current statistical methods and related
textual models.
| 2018 | Computation and Language |
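A small, hypothetical sketch of the graph construction described in the summarization abstract above: sentences become nodes, edges are weighted by the number of shared words, and each edge is labeled intra-layer (same document) or inter-layer (different documents). The documents are toy examples, not from the evaluated corpora.

```python
from itertools import combinations

# Toy corpus: two documents, each a list of sentences.
docs = [
    ["the central bank raised interest rates",
     "markets reacted to the interest rate decision"],
    ["the bank announced new rates today",
     "analysts expected the decision"],
]

# Nodes are (doc_id, sent_id); edge weight = number of shared words.
nodes = [(d, s) for d, doc in enumerate(docs) for s in range(len(doc))]
edges = []
for (d1, s1), (d2, s2) in combinations(nodes, 2):
    shared = set(docs[d1][s1].split()) & set(docs[d2][s2].split())
    if shared:
        layer = "intra" if d1 == d2 else "inter"
        edges.append(((d1, s1), (d2, s2), len(shared), layer))

for u, v, w, layer in edges:
    print(f"{u} -- {v}  weight={w}  ({layer}-layer)")
```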
RubyStar: A Non-Task-Oriented Mixture Model Dialog System | RubyStar is a dialog system designed to create "human-like" conversation by
combining different response generation strategies. RubyStar conducts a
non-task-oriented conversation on general topics by using an ensemble of
rule-based, retrieval-based and generative methods. Topic detection, engagement
monitoring, and context tracking are used for managing interaction. Predictable
elements of conversation, such as the bot's backstory and simple question
answering are handled by separate modules. We describe a rating scheme we
developed for evaluating response generation. We find that a character-level RNN with proper parameter settings is an effective generation model for general responses; however, other kinds of conversation topics might benefit from using other models.
| 2017 | Computation and Language |
Improving Hypernymy Extraction with Distributional Semantic Classes | In this paper, we show how distributionally-induced semantic classes can be
helpful for extracting hypernyms. We present methods for inducing sense-aware
semantic classes using distributional semantics and using these induced
semantic classes for filtering noisy hypernymy relations. Denoising of
hypernyms is performed by labeling each semantic class with its hypernyms. On
the one hand, this allows us to filter out wrong extractions using the global
structure of distributionally similar senses. On the other hand, we infer
missing hypernyms via label propagation to cluster terms. We conduct a
large-scale crowdsourcing study showing that processing of automatically
extracted hypernyms using our approach improves the quality of the hypernymy
extraction in terms of both precision and recall. Furthermore, we show the
utility of our method in the domain taxonomy induction task, achieving the
state-of-the-art results on a SemEval'16 task on taxonomy induction.
| 2018 | Computation and Language |
Large-scale Cloze Test Dataset Created by Teachers | Cloze tests are widely adopted in language exams to evaluate students'
language proficiency. In this paper, we propose the first large-scale
human-created cloze test dataset CLOTH, containing questions used in
middle-school and high-school language exams. With missing blanks carefully
created by teachers and candidate choices purposely designed to be nuanced,
CLOTH requires a deeper language understanding and a wider attention span than
previously automatically-generated cloze datasets. We test the performance of carefully designed baseline models, including a language model trained on the One Billion Word Corpus, and show that humans outperform them by a significant margin. We investigate the source of the performance gap, trace model deficiencies to some distinct properties of CLOTH, and identify the limited ability to comprehend long-range context as the key bottleneck.
| 2018 | Computation and Language |
Weakly-supervised Relation Extraction by Pattern-enhanced Embedding
Learning | Extracting relations from text corpora is an important task in text mining.
It becomes particularly challenging when focusing on weakly-supervised relation
extraction, that is, utilizing a few relation instances (i.e., a pair of
entities and their relation) as seeds to extract more instances from corpora.
Existing distributional approaches leverage the corpus-level co-occurrence
statistics of entities to predict their relations, and require a large number of labeled instances to learn effective relation classifiers. Alternatively, pattern-based approaches perform bootstrapping or apply neural networks to model the local contexts, but still rely on a large number of labeled instances
to build reliable models. In this paper, we study integrating the
distributional and pattern-based methods in a weakly-supervised setting, such
that the two types of methods can provide complementary supervision for each
other to build an effective, unified model. We propose a novel co-training
framework with a distributional module and a pattern module. During training,
the distributional module helps the pattern module discriminate between the
informative patterns and other patterns, and the pattern module generates some
highly-confident instances to improve the distributional module. The whole
framework can be effectively optimized by iterating between improving the
pattern module and updating the distributional module. We conduct experiments
on two tasks: knowledge base completion with text corpora and corpus-level
relation extraction. Experimental results prove the effectiveness of our
framework in the weakly-supervised setting.
| 2017 | Computation and Language |
An Empirical Analysis of Multiple-Turn Reasoning Strategies in Reading
Comprehension Tasks | Reading comprehension (RC) is a challenging task that requires synthesis of
information across sentences and multiple turns of reasoning. Using a
state-of-the-art RC model, we empirically investigate the performance of
single-turn and multiple-turn reasoning on the SQuAD and MS MARCO datasets. The
RC model is an end-to-end neural network with iterative attention, and uses
reinforcement learning to dynamically control the number of turns. We find that
multiple-turn reasoning outperforms single-turn reasoning for all question and
answer types; further, we observe that enabling a flexible number of turns
generally improves upon a fixed multiple-turn strategy. We achieve results competitive with the state of the art on these two
datasets.
| 2017 | Computation and Language |
Tracking of enriched dialog states for flexible conversational
information access | Dialog state tracking (DST) is a crucial component in a task-oriented dialog
system for conversational information access. A common practice in current
dialog systems is to define the dialog state by a set of slot-value pairs. Such
representation of dialog states and the slot-filling based DST have been widely
employed, but suffer from three drawbacks. (1) The dialog state can contain
only a single value for a slot, and (2) can contain only users' affirmative
preference over the values for a slot. (3) Current task-based dialog systems
mainly focus on the searching task, while the enquiring task is also very
common in practice. The above observations motivate us to enrich current
representation of dialog states and collect a brand new dialog dataset about
movies, based upon which we build a new DST, called enriched DST (EDST), for
flexibly accessing movie information. The EDST supports the searching task, the enquiring task, and their mixed task. We show that the new EDST method not only achieves good results on the Iqiyi dataset, but also outperforms other state-of-the-art DST methods on the traditional dialog datasets, WOZ2.0 and
DSTC2.
| 2018 | Computation and Language |
Learning Multi-Modal Word Representation Grounded in Visual Context | Representing the semantics of words is a long-standing problem for the
natural language processing community. Most methods compute word semantics
given their textual context in large corpora. More recently, researchers
attempted to integrate perceptual and visual features. Most of these works
consider the visual appearance of objects to enhance word representations but
they ignore the visual environment and context in which objects appear. We
propose to unify text-based techniques with vision-based techniques by
simultaneously leveraging textual and visual context to learn multimodal word
embeddings. We explore various choices for what can serve as a visual context
and present an end-to-end method to integrate visual context elements in a
multimodal skip-gram model. We provide experiments and extensive analysis of
the obtained results.
| 2017 | Computation and Language |
Language Modeling for Code-Switched Data: Challenges and Approaches | Lately, the problem of code-switching has gained a lot of attention and has
emerged as an active area of research. In bilingual communities, the speakers
commonly embed the words and phrases of a non-native language into the syntax
of a native language in their day-to-day communications. Code-switching is a global phenomenon among multilingual communities, yet very limited acoustic and linguistic resources are available for it. For developing effective speech-based applications, the ability of existing language technologies to deal with code-switched data cannot be overemphasized. Code-switching is broadly classified into two modes: inter-sentential and intra-sentential code-switching. In this work, we study the intra-sentential problem in the context of the code-switching language modeling task. The salient contributions of this paper include: (i) the creation of a Hindi-English code-switching text corpus by crawling a few blogging sites that educate about Internet usage, (ii) the exploration of parts-of-speech features towards more
effective modeling of Hindi-English code-switched data by the monolingual
language model (LM) trained on native (Hindi) language data, and (iii) the
proposal of a novel textual factor referred to as the code-switch factor
(CS-factor), which allows the LM to predict the code-switching instances. In
the context of recognizing code-switched data, a substantial reduction in PPL is achieved with the use of POS factors, and the proposed CS-factor also provides an independent as well as additive gain in PPL.
| 2017 | Computation and Language |
The Lifted Matrix-Space Model for Semantic Composition | Tree-structured neural network architectures for sentence encoding draw
inspiration from the approach to semantic composition generally seen in formal
linguistics, and have shown empirical improvements over comparable sequence
models by doing so. Moreover, adding multiplicative interaction terms to the
composition functions in these models can yield significant further
improvements. However, existing compositional approaches that adopt such a
powerful composition function scale poorly, with parameter counts exploding as
model dimension or vocabulary size grows. We introduce the Lifted Matrix-Space
model, which uses a global transformation to map vector word embeddings to
matrices, which can then be composed via an operation based on matrix-matrix
multiplication. Its composition function effectively transmits a larger number
of activations across layers with relatively few model parameters. We evaluate
our model on the Stanford NLI corpus, the Multi-Genre NLI corpus, and the
Stanford Sentiment Treebank and find that it consistently outperforms TreeLSTM
(Tai et al., 2015), the previous best known composition function for
tree-structured models.
| 2019 | Computation and Language |
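A rough numpy sketch of the composition scheme described in the Lifted Matrix-Space abstract above, with made-up dimensions and a stabilizing identity offset added for this illustration only: a shared "lifting" map turns each word vector into a matrix, and phrases are composed by matrix-matrix multiplication.

```python
import numpy as np

rng = np.random.default_rng(1)
d_word, d_mat = 6, 3                  # embedding size, lifted matrix size

# One global lifting transformation shared by all words (an assumption of
# this sketch): it maps a d_word vector to a d_mat x d_mat matrix.
L = rng.normal(size=(d_mat * d_mat, d_word)) * 0.1

def lift(word_vec):
    """Map a word vector to a matrix via the shared global transformation."""
    return (L @ word_vec).reshape(d_mat, d_mat) + np.eye(d_mat)

def compose(word_vecs):
    """Compose a phrase left-to-right via matrix multiplication."""
    out = np.eye(d_mat)
    for v in word_vecs:
        out = out @ lift(v)
    return out

phrase = [rng.normal(size=d_word) for _ in range(3)]   # toy word vectors
print(compose(phrase).round(3))
```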
Document Context Neural Machine Translation with Memory Networks | We present a document-level neural machine translation model which takes both
source and target document context into account using memory networks. We model
the problem as a structured prediction problem with interdependencies among the
observed and hidden variables, i.e., the source sentences and their unobserved
target translations in the document. The resulting structured prediction
problem is tackled with a neural translation model equipped with two memory
components, one each for the source and target side, to capture the document-level
interdependencies. We train the model end-to-end, and propose an iterative
decoding algorithm based on block coordinate descent. Experimental results of
English translations from French, German, and Estonian documents show that our
model is effective in exploiting both source and target document context, and
statistically significantly outperforms the previous work in terms of BLEU and
METEOR.
| 2018 | Computation and Language |
Reinforcement Learning of Speech Recognition System Based on Policy
Gradient and Hypothesis Selection | Speech recognition systems have achieved high recognition performance for
several tasks. However, the performance of such systems is dependent on the
tremendously costly development work of preparing vast amounts of task-matched
transcribed speech data for supervised training. The key problem here is the
cost of transcribing speech data. The cost is repeatedly required to support
new languages and new tasks. Assuming broad network services for transcribing
speech data for many users, a system would become more self-sufficient and more
useful if it possessed the ability to learn from very light feedback from the
users without annoying them. In this paper, we propose a general reinforcement
learning framework for speech recognition systems based on the policy gradient
method. As a particular instance of the framework, we also propose a hypothesis
selection-based reinforcement learning method. The proposed framework provides
a new view for several existing training and adaptation methods. The
experimental results show that the proposed method improves the recognition
performance compared to unsupervised adaptation.
| 2017 | Computation and Language |
Integrating User and Agent Models: A Deep Task-Oriented Dialogue System | Task-oriented dialogue systems can efficiently serve a large number of
customers and relieve people from tedious work. However, existing
task-oriented dialogue systems depend on handcrafted actions and states or
extra semantic labels, which sometimes degrades user experience despite the
intensive human intervention. Moreover, current user simulators have limited
expressive ability, so deep reinforcement Seq2Seq models have to rely on self-play and only work in some special cases. To address those problems, we propose a uSer and Agent Model IntegrAtion (SAMIA) framework inspired by the observation that the roles of the user and agent models are asymmetric. Firstly, the SAMIA framework models the user as a Seq2Seq learning problem instead of ranking or designing rules. The built user model is then used as leverage to train the agent model by deep reinforcement learning. In
the test phase, the output of the agent model is filtered by the user model to
enhance the stability and robustness. Experiments on a real-world coffee
ordering dataset verify the effectiveness of the proposed SAMIA framework.
| 2017 | Computation and Language |
Joint Sentiment/Topic Modeling on Text Data Using Boosted Restricted
Boltzmann Machine | Recently, with the development of the Internet and the Web, different types of social media such as web blogs have become an immense source of text data. Through the processing of these data, it is possible to discover practical information about different topics and individuals' opinions, and to gain a thorough understanding of society. Therefore, applying models which can automatically extract the subjective information from documents would be efficient and helpful. Topic modeling and sentiment analysis are among the most actively studied topics in the natural language processing and text mining fields. In this paper, a new structure for joint sentiment-topic modeling based on the Restricted Boltzmann Machine (RBM), a type of neural network, is proposed. By modifying the structure of the RBM and appending a layer that is analogous to the sentiment of the text data, we propose a generative structure for joint sentiment-topic modeling based on neural networks. The proposed method is supervised and
trained by the Contrastive Divergence algorithm. The new attached layer in the
proposed model is a layer with the multinomial probability distribution which
can be used in text data sentiment classification or any other supervised
application. The proposed model is compared with existing models in experiments such as generative model evaluation, sentiment classification, and information retrieval, and the corresponding results demonstrate the efficiency of the method.
| 2019 | Computation and Language |
Neural Skill Transfer from Supervised Language Tasks to Reading
Comprehension | Reading comprehension is a challenging task in natural language processing
and requires a set of skills to be solved. While current approaches focus on
solving the task as a whole, in this paper, we propose to use a neural network
`skill' transfer approach. We transfer knowledge from several lower-level
language tasks (skills) including textual entailment, named entity recognition,
paraphrase detection and question type classification into the reading
comprehension model.
We conduct an empirical evaluation and show that transferring language skill
knowledge leads to significant improvements for the task with far fewer training steps
compared to the baseline model. We also show that the skill transfer approach
is effective even with small amounts of training data. Another finding of this
work is that using token-wise deep label supervision for text classification
improves the performance of transfer learning.
| 2017 | Computation and Language |
YEDDA: A Lightweight Collaborative Text Span Annotation Tool | In this paper, we introduce \textsc{Yedda}, a lightweight but efficient and
comprehensive open-source tool for text span annotation. \textsc{Yedda}
provides a systematic solution for text span annotation, ranging from
collaborative user annotation to administrator evaluation and analysis. It
overcomes the low efficiency of traditional text annotation tools by annotating
entities through both command line and shortcut keys, which are configurable
with custom labels. \textsc{Yedda} also gives intelligent recommendations by
learning the up-to-date annotated text. An administrator client is developed to
evaluate the annotation quality of multiple annotators and generate a detailed comparison report for each annotator pair. Experiments show that the proposed system can reduce the annotation time by half compared with existing annotation tools, and the annotation time can be further reduced by 16.47\% through intelligent recommendation.
| 2018 | Computation and Language |
Towards the Use of Deep Reinforcement Learning with Global Policy For
Query-based Extractive Summarisation | Supervised approaches for text summarisation suffer from the problem of
mismatch between the target labels/scores of individual sentences and the
evaluation score of the final summary. Reinforcement learning can solve this
problem by providing a learning mechanism that uses the score of the final
summary as a guide to determine the decisions made at the time of selection of
each sentence. In this paper we present a proof-of-concept approach that
applies a policy-gradient algorithm to learn a stochastic policy using an
undiscounted reward. The method has been applied to a policy consisting of a
simple neural network and simple features. The resulting deep reinforcement
learning system is able to learn a global policy and obtain encouraging
results.
| 2017 | Computation and Language |
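The sketch below is a toy REINFORCE-style illustration of the idea in the summarization abstract above (not the authors' system): a logistic policy decides per sentence whether to include it, the undiscounted reward is the word overlap of the final summary with a reference, and the policy gradient uses that single reward for all decisions. Sentences, features, and the reward definition are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a feature vector per sentence and a reference summary word set.
sentences = ["the cat sat on the mat", "stocks fell sharply today",
             "a cat was seen on a mat", "the weather was sunny"]
features = rng.normal(size=(len(sentences), 5))
reference = set("the cat sat on the mat".split())

def reward(selected):
    """Undiscounted reward: word overlap of the final summary with the reference."""
    words = set(w for i in selected for w in sentences[i].split())
    return len(words & reference) / max(len(reference), 1)

w = np.zeros(5)
lr = 0.1
for _ in range(200):
    probs = 1.0 / (1.0 + np.exp(-(features @ w)))       # P(select sentence i)
    actions = (rng.random(len(probs)) < probs).astype(float)
    r = reward([i for i, a in enumerate(actions) if a])
    # REINFORCE: gradient of the log-prob of each Bernoulli decision, scaled
    # by the single summary-level reward.
    grad = ((actions - probs)[:, None] * features).sum(axis=0)
    w += lr * r * grad

print("learned selection probabilities:",
      np.round(1.0 / (1.0 + np.exp(-(features @ w))), 2))
```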
Bayesian Paragraph Vectors | Word2vec (Mikolov et al., 2013) has proven to be successful in natural
language processing by capturing the semantic relationships between different
words. Built on top of single-word embeddings, paragraph vectors (Le and
Mikolov, 2014) find fixed-length representations for pieces of text with
arbitrary lengths, such as documents, paragraphs, and sentences. In this work,
we propose a novel interpretation for neural-network-based paragraph vectors by
developing an unsupervised generative model whose maximum likelihood solution
corresponds to traditional paragraph vectors. This probabilistic formulation
allows us to go beyond point estimates of parameters and to perform Bayesian
posterior inference. We find that the entropy of paragraph vectors decreases
with the length of documents, and that information about posterior uncertainty
improves performance in supervised learning tasks such as sentiment analysis
and paraphrase detection.
| 2017 | Computation and Language |
Breaking the Softmax Bottleneck: A High-Rank RNN Language Model | We formulate language modeling as a matrix factorization problem, and show
that the expressiveness of Softmax-based models (including the majority of
neural language models) is limited by a Softmax bottleneck. Given that natural
language is highly context-dependent, this further implies that in practice
Softmax with distributed word embeddings does not have enough capacity to model
natural language. We propose a simple and effective method to address this
issue, and improve the state-of-the-art perplexities on Penn Treebank and
WikiText-2 to 47.69 and 40.68 respectively. The proposed method also excels on
the large-scale 1B Word dataset, outperforming the baseline by over 5.6 points
in perplexity.
| 2018 | Computation and Language |
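The bottleneck described in the abstract above can be illustrated numerically. In the sketch below (toy sizes, not from the paper), hidden states of dimension d produce logits through a single output embedding matrix; the resulting matrix of log-probabilities over many contexts has rank at most d + 1, far below the vocabulary size.

```python
import numpy as np

rng = np.random.default_rng(0)
n_contexts, d, vocab = 200, 10, 50          # toy sizes

H = rng.normal(size=(n_contexts, d))        # context (hidden) vectors
W = rng.normal(size=(vocab, d))             # output word embeddings

logits = H @ W.T                            # shape (n_contexts, vocab)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

# The rank of the log-probability matrix is bounded by d + 1, not by vocab.
print("rank of log-prob matrix:", np.linalg.matrix_rank(log_probs))
print("bound d + 1 =", d + 1, " vocabulary size =", vocab)
```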
Kernelized Hashcode Representations for Relation Extraction | Kernel methods have produced state-of-the-art results for a number of NLP
tasks such as relation extraction, but suffer from poor scalability due to the
high cost of computing kernel similarities between natural language structures.
A recently proposed technique, kernelized locality-sensitive hashing (KLSH),
can significantly reduce the computational cost, but is only applicable to
classifiers operating on kNN graphs. Here we propose to use random subspaces of
KLSH codes for efficiently constructing an explicit representation of NLP
structures suitable for general classification methods. Further, we propose an
approach for optimizing the KLSH model for classification problems by
maximizing an approximation of mutual information between the KLSH codes
(feature vectors) and the class labels. We evaluate the proposed approach on
biomedical relation extraction datasets, and observe significant and robust
improvements in accuracy w.r.t. state-of-the-art classifiers, along with
drastic (orders-of-magnitude) speedup compared to conventional kernel methods.
| 2019 | Computation and Language |
KBGAN: Adversarial Learning for Knowledge Graph Embeddings | We introduce KBGAN, an adversarial learning framework to improve the
performances of a wide range of existing knowledge graph embedding models.
Because knowledge graphs typically only contain positive facts, sampling useful
negative training examples is a non-trivial task. Replacing the head or tail
entity of a fact with a uniformly randomly selected entity is a conventional
method for generating negative facts, but the majority of the generated
negative facts can be easily discriminated from positive facts, and will
contribute little towards the training. Inspired by generative adversarial
networks (GANs), we use one knowledge graph embedding model as a negative
sample generator to assist the training of our desired model, which acts as the
discriminator in GANs. This framework is independent of the concrete form of
generator and discriminator, and therefore can utilize a wide variety of
knowledge graph embedding models as its building blocks. In experiments, we
adversarially train two translation-based models, TransE and TransD, each with
assistance from one of the two probability-based models, DistMult and ComplEx.
We evaluate the performances of KBGAN on the link prediction task, using three
knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental
results show that adversarial training substantially improves the performances
of target embedding models under various settings.
| 2018 | Computation and Language |
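A toy sketch of the adversarial negative sampling described in the KBGAN abstract above, with a hypothetical scoring function standing in for the actual TransE/DistMult/ComplEx generators: the generator scores candidate corruptions of a positive triple, and a negative is sampled in proportion to those scores instead of uniformly.

```python
import numpy as np

rng = np.random.default_rng(0)
entities = ["paris", "france", "berlin", "germany", "banana"]

def generator_score(head, relation, tail):
    """Stand-in for a trained embedding model's plausibility score."""
    plausible = {("paris", "capital_of", "germany"),
                 ("berlin", "capital_of", "france")}
    return 2.0 if (head, relation, tail) in plausible else 0.1

positive = ("paris", "capital_of", "france")
candidates = [(positive[0], positive[1], e) for e in entities if e != positive[2]]

# Uniform sampling would treat all corruptions equally; here the generator
# concentrates probability on hard-to-discriminate negatives.
scores = np.array([generator_score(*c) for c in candidates])
probs = np.exp(scores) / np.exp(scores).sum()
idx = rng.choice(len(candidates), p=probs)
print("candidate corruptions:", candidates)
print("sampled negative:", candidates[idx], "with prob", round(float(probs[idx]), 2))
# Negatives like ("paris", "capital_of", "germany") are sampled most often,
# giving the discriminator a more informative training signal.
```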
Towards Automated ICD Coding Using Deep Learning | International Classification of Diseases (ICD) is an authoritative health care
classification system of different diseases and conditions for clinical and
management purposes. Considering the complicated and dedicated process to
assign correct codes to each patient admission based on overall diagnosis, we
propose a hierarchical deep learning model with attention mechanism which can
automatically assign ICD diagnostic codes given written diagnosis. We utilize
character-aware neural language models to generate hidden representations of
written diagnosis descriptions and ICD codes, and design an attention mechanism
to address the mismatch between the numbers of descriptions and corresponding
codes. Our experimental results show the strong potential of automated ICD
coding from diagnosis descriptions. Our best model achieves an F1 score of 0.53 and an area under the receiver operating characteristic curve of 0.90. These results outperform those achieved using a character-unaware encoding method or without the attention mechanism. This indicates that our proposed deep learning
model can code automatically in a reasonable way and provide a framework for
computer-auxiliary ICD coding.
| 2022 | Computation and Language |
Fine Grained Knowledge Transfer for Personalized Task-oriented Dialogue
Systems | Training a personalized dialogue system requires a lot of data, and the data
collected for a single user is usually insufficient. One common practice for
this problem is to share training dialogues between different users and train
multiple sequence-to-sequence dialogue models together with transfer learning.
However, current sequence-to-sequence transfer learning models operate on the
entire sentence, which might cause negative transfer if different personal
information from different users is mixed up. We propose a personalized decoder
model to transfer finer granularity phrase-level knowledge between different
users while keeping personal preferences of each user intact. A novel personal
control gate is introduced, enabling the personalized decoder to switch between
generating personalized phrases and shared phrases. The proposed personalized
decoder model can be easily combined with various deep models and can be
trained with reinforcement learning. Real-world experimental results
demonstrate that the phrase-level personalized decoder improves the BLEU over
multiple sentence-level transfer baseline models by as much as 7.5%.
| 2017 | Computation and Language |
MojiTalk: Generating Emotional Responses at Scale | Generating emotional language is a key step towards building empathetic
natural language processing agents. However, a major challenge for this line of
research is the lack of large-scale labeled training data, and previous studies
are limited to only small sets of human annotated sentiment labels.
Additionally, explicitly controlling the emotion and sentiment of generated
text is also difficult. In this paper, we take a more radical approach: we
exploit the idea of leveraging Twitter data that are naturally labeled with
emojis. More specifically, we collect a large corpus of Twitter conversations
that include emojis in the response, and assume the emojis convey the
underlying emotions of the sentence. We then introduce a reinforced conditional
variational encoder approach to train a deep generative model on these
conversations, which allows us to use emojis to control the emotion of the
generated text. Experimentally, we show in our quantitative and qualitative
analyses that the proposed models can successfully generate high-quality
abstractive conversation responses in accordance with designated emotions.
| 2018 | Computation and Language |
Discovering conversational topics and emotions associated with
Demonetization tweets in India | Social media platforms contain a great wealth of information, which provides opportunities to explore hidden patterns or unknown correlations and to understand people's satisfaction with what they are discussing. As one showcase, in this
paper, we summarize the data set of Twitter messages related to recent
demonetization of all Rs. 500 and Rs. 1000 notes in India and explore insights
from Twitter's data. Our proposed system automatically extracts the popular
latent topics in conversations regarding demonetization discussed in Twitter
via the Latent Dirichlet Allocation (LDA) based topic model and also identifies
the correlated topics across different categories. Additionally, it also
discovers people's opinions expressed through their tweets related to the event
under consideration via the emotion analyzer. The system also employs an
intuitive and informative visualization to show the uncovered insight.
Furthermore, we use an evaluation measure, Normalized Mutual Information (NMI),
to select the best LDA models. The obtained LDA results show that the tool can
be effectively used to extract discussion topics and summarize them for further
manual analysis.
| 2017 | Computation and Language |
Interpretable probabilistic embeddings: bridging the gap between topic
models and neural networks | We consider probabilistic topic models and more recent word embedding
techniques from a perspective of learning hidden semantic representations.
Inspired by a striking similarity of the two approaches, we merge them and
learn probabilistic embeddings with online EM-algorithm on word co-occurrence
data. The resulting embeddings perform on par with Skip-Gram Negative Sampling
(SGNS) on word similarity tasks and benefit in the interpretability of the
components. Next, we learn probabilistic document embeddings that outperform
paragraph2vec on a document similarity task and require less memory and time
for training. Finally, we employ multimodal Additive Regularization of Topic
Models (ARTM) to obtain a high sparsity and learn embeddings for other
modalities, such as timestamps and categories. We observe further improvement
of word similarity performance and meaningful inter-modality similarities.
| 2017 | Computation and Language |
Unsupervised Document Embedding With CNNs | We propose a new model for unsupervised document embedding. Leading existing
approaches either require complex inference or use recurrent neural networks
(RNN) that are difficult to parallelize. We take a different route and develop
a convolutional neural network (CNN) embedding model. Our CNN architecture is
fully parallelizable resulting in over 10x speedup in inference time over RNN
models. The parallelizable architecture enables training deeper models, where each successive layer has an increasingly larger receptive field and models longer-range semantic structure within the document. We additionally propose a fully
unsupervised learning algorithm to train this model based on stochastic forward
prediction. Empirical results on two public benchmarks show that our approach produces accuracy comparable to the state of the art at a fraction of the computational cost.
| 2018 | Computation and Language |
Automatic Extraction of Commonsense LocatedNear Knowledge | The LocatedNear relation is a kind of commonsense knowledge describing two physical objects that are typically found near each other in real life. In this paper, we study how to automatically extract this relationship using a sentence-level relation classifier and by aggregating the scores of entity pairs over a large corpus. We also release two benchmark datasets for evaluation and
future research.
| 2018 | Computation and Language |
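A minimal, hypothetical sketch of the two-stage pipeline described in the LocatedNear abstract above: a placeholder sentence-level classifier scores each sentence mentioning an object pair, and the scores are aggregated over the corpus to rank pairs. The classifier, corpus, and pairs are made up for the example.

```python
from collections import defaultdict

def sentence_score(sentence, pair):
    """Placeholder for a trained sentence-level LocatedNear classifier."""
    cues = {"on", "next", "beside", "near"}
    words = set(sentence.split())
    return 0.9 if (set(pair) <= words and words & cues) else 0.1

corpus = [
    "the lamp sits on the desk",
    "a lamp and a desk were sold separately",
    "the boat is near the dock",
]
pairs = [("lamp", "desk"), ("boat", "dock")]

scores = defaultdict(list)
for sent in corpus:
    for pair in pairs:
        if set(pair) <= set(sent.split()):          # the pair is mentioned
            scores[pair].append(sentence_score(sent, pair))

# Corpus-level aggregation: average the sentence-level scores per pair.
for pair, vals in scores.items():
    print(pair, "LocatedNear confidence:", round(sum(vals) / len(vals), 2))
```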
Syntax-Directed Attention for Neural Machine Translation | Attention mechanism, including global attention and local attention, plays a
key role in neural machine translation (NMT). Global attention attends to all
source words for word prediction. In comparison, local attention selectively
looks at fixed-window source words. However, alignment weights for the current
target word often decrease to the left and right by linear distance centering
on the aligned source position and neglect syntax-directed distance
constraints. In this paper, we extend local attention with syntax-distance
constraint, to focus on syntactically related source words with the predicted
target word, thus learning a more effective context vector for word prediction.
Moreover, we further propose a double-context NMT architecture, which consists of a global context vector and a syntax-directed context vector built over the global attention, to obtain better translation performance from the source representation. Experiments on the large-scale Chinese-to-English and English-to-German translation tasks show that the proposed approach achieves a
substantial and significant improvement over the baseline system.
| 2019 | Computation and Language |
Neural Natural Language Inference Models Enhanced with External
Knowledge | Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance. Although there exist relatively
large annotated data, can machines learn all knowledge needed to perform
natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge and how to
build NLI models to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets.
| 2020 | Computation and Language |
Fast Reading Comprehension with ConvNets | State-of-the-art deep reading comprehension models are dominated by recurrent
neural nets. Their sequential nature is a natural fit for language, but it also
precludes parallelization within an instance and often becomes the bottleneck for deploying such models in latency-critical scenarios. This is particularly
problematic for longer texts. Here we present a convolutional architecture as
an alternative to these recurrent architectures. Using simple dilated
convolutional units in place of recurrent ones, we achieve results comparable
to the state of the art on two question answering tasks, while at the same time
achieving up to two orders of magnitude speedups for question answering.
| 2017 | Computation and Language |
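A short illustration of the dilated-convolution idea from the abstract above, assuming PyTorch is available and using toy sizes rather than the paper's architecture: stacking 1-D convolutions with exponentially increasing dilation grows the receptive field quickly while every layer can be computed in parallel across positions.

```python
import torch
import torch.nn as nn

channels, kernel = 16, 3
layers = []
for dilation in [1, 2, 4, 8]:
    # Padding of `dilation` keeps the sequence length unchanged for kernel 3.
    layers += [nn.Conv1d(channels, channels, kernel,
                         dilation=dilation, padding=dilation), nn.ReLU()]
encoder = nn.Sequential(*layers)

x = torch.randn(2, channels, 100)            # (batch, channels, sequence)
print(encoder(x).shape)                       # same length: (2, 16, 100)

# Receptive field after stacking: 1 + sum of (kernel - 1) * dilation per layer.
rf = 1 + sum((kernel - 1) * d for d in [1, 2, 4, 8])
print("receptive field of the top layer:", rf, "tokens")
```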
Convolutional Neural Network with Word Embeddings for Chinese Word
Segmentation | The character-based sequence labeling framework is flexible and efficient for
Chinese word segmentation (CWS). Recently, many character-based neural models
have been applied to CWS. While they obtain good performance, they have two
obvious weaknesses. The first is that they rely heavily on manually designed bigram features, i.e., they are not good at capturing n-gram features automatically. The second is that they make no use of full word information.
For the first weakness, we propose a convolutional neural model, which is able
to capture rich n-gram features without any feature engineering. For the second
one, we propose an effective approach to integrate the proposed model with word
embeddings. We evaluate the model on two benchmark datasets: PKU and MSR.
Without any feature engineering, the model obtains competitive performance --
95.7% on PKU and 97.3% on MSR. Armed with word embeddings, the model achieves
state-of-the-art performance on both datasets -- 96.5% on PKU and 98.0% on MSR,
without using any external labeled resource.
| 2017 | Computation and Language |
SQLNet: Generating Structured Queries From Natural Language Without
Reinforcement Learning | Synthesizing SQL queries from natural language is a long-standing open
problem and has been attracting considerable interest recently. Toward solving
the problem, the de facto approach is to employ a sequence-to-sequence-style
model. Such an approach will necessarily require the SQL queries to be
serialized. Since the same SQL query may have multiple equivalent
serializations, training a sequence-to-sequence-style model is sensitive to which one is chosen. This phenomenon is documented as the "order-matters"
problem. Existing state-of-the-art approaches rely on reinforcement learning to
reward the decoder when it generates any of the equivalent serializations.
However, we observe that the improvement from reinforcement learning is
limited.
In this paper, we propose a novel approach, i.e., SQLNet, to fundamentally
solve this problem by avoiding the sequence-to-sequence structure when the
order does not matter. In particular, we employ a sketch-based approach where
the sketch contains a dependency graph so that one prediction can be done by
taking into consideration only the previous predictions that it depends on. In
addition, we propose a sequence-to-set model as well as the column attention
mechanism to synthesize the query based on the sketch. By combining all these
novel techniques, we show that SQLNet can outperform the prior art by 9% to 13%
on the WikiSQL task.
| 2017 | Computation and Language |
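To make the sketch-based idea from the SQLNet abstract above concrete, here is a toy, hypothetical illustration (not the actual SQLNet sketch or predictors): instead of generating the query token by token, independent predictors fill the slots of a fixed sketch, so the order of WHERE conditions never affects the result.

```python
# A simplified WikiSQL-style sketch with independent slots.
SKETCH = "SELECT {agg}({column}) FROM table WHERE {conditions}"

def assemble(slots):
    """Fill the sketch from slot predictions; condition order is irrelevant."""
    conds = " AND ".join(f"{c} {op} {repr(v)}" for c, op, v in slots["where"])
    return SKETCH.format(agg=slots["agg"], column=slots["column"],
                         conditions=conds or "1=1")

# Stand-in for per-slot model outputs (e.g. from column attention).
predicted_slots = {
    "agg": "COUNT",
    "column": "player",
    "where": [("team", "=", "Spain"), ("year", ">", 2010)],
}
print(assemble(predicted_slots))
# SELECT COUNT(player) FROM table WHERE team = 'Spain' AND year > 2010
```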
Digitising Cultural Complexity: Representing Rich Cultural Data in a Big
Data environment | One of the major terminological forces driving ICT integration in research
today is that of "big data." While the phrase sounds inclusive and integrative,
"big data" approaches are highly selective, excluding input that cannot be
effectively structured, represented, or digitised. Data of this complex sort is
precisely the kind that human activity produces, but the technological
imperative to enhance signal through the reduction of noise does not
accommodate this richness. Data and the computational approaches that
facilitate "big data" have acquired a perceived objectivity that belies their
curated, malleable, reactive, and performative nature. In an input environment
where anything can "be data" once it is entered into the system as "data," data
cleaning and processing, together with the metadata and information
architectures that structure and facilitate our cultural archives acquire a
capacity to delimit what data are. This engenders a process of simplification
that has major implications for the potential for future innovation within
research environments that depend on rich material yet are increasingly
mediated by digital technologies. This paper presents the preliminary findings
of the European-funded KPLEX (Knowledge Complexity) project which investigates
the delimiting effect digital mediation and datafication has on rich, complex
cultural data. The paper presents a systematic review of existing implicit
definitions of data, elaborating on the implications of these definitions and
highlighting the ways in which metadata and computational technologies can
restrict the interpretative potential of data. It sheds light on the gap
between analogue or augmented digital practices and fully computational ones,
and the strategies researchers have developed to deal with this gap. The paper
proposes a reconceptualisation of data as it is functionally employed within
digitally-mediated research so as to incorporate and acknowledge the richness
and complexity of our source materials.
| 2017 | Computation and Language |
Word, Subword or Character? An Empirical Study of Granularity in
Chinese-English NMT | Neural machine translation (NMT), a new approach to machine translation, has
been proved to outperform conventional statistical machine translation (SMT)
across a variety of language pairs. Translation is an open-vocabulary problem,
but most existing NMT systems operate with a fixed vocabulary, which makes them incapable of translating rare words. This problem can be alleviated by using different translation granularities, such as character, subword, and hybrid word-character. Translation involving Chinese is one of the most difficult tasks in machine translation; however, to the best of our knowledge, there has
not been any other work exploring which translation granularity is most
suitable for Chinese in NMT. In this paper, we conduct an extensive comparison
using Chinese-English NMT as a case study. Furthermore, we discuss the
advantages and disadvantages of various translation granularities in detail.
Our experiments show that subword model performs best for Chinese-to-English
translation with the vocabulary which is not so big while hybrid word-character
model is most suitable for English-to-Chinese translation. Moreover,
experiments of different granularities show that Hybrid_BPE method can achieve
best result on Chinese-to-English translation task.
| 2,017 | Computation and Language |
Evaluating prose style transfer with the Bible | In the prose style transfer task a system, provided with text input and a
target prose style, produces output which preserves the meaning of the input
text but alters the style. These systems require parallel data for evaluation
of results and usually make use of parallel data for training. Currently, there
are few publicly available corpora for this task. In this work, we identify a
high-quality source of aligned, stylistically distinct text in different
versions of the Bible. We provide a standardized split, into training,
development and testing data, of the public domain versions in our corpus. This
corpus is highly parallel since many Bible versions are included. Sentences are
aligned due to the presence of chapter and verse numbers within all versions of
the text. In addition to the corpus, we present the results, as measured by the
BLEU and PINC metrics, of several models trained on our data which can serve as
baselines for future research. While we present this dataset as a style transfer
corpus, we believe that it is of unmatched quality and may be useful for other
natural language tasks as well.
| 2,018 | Computation and Language |
QuickEdit: Editing Text & Translations by Crossing Words Out | We propose a framework for computer-assisted text editing. It applies to
translation post-editing and to paraphrasing. Our proposal relies on very
simple interactions: a human editor modifies a sentence by marking tokens they
would like the system to change. Our model then generates a new sentence which
reformulates the initial sentence by avoiding marked words. The approach builds
upon neural sequence-to-sequence modeling and introduces a neural network which
takes as input a sentence along with change markers. Our model is trained on
translation bitext by simulating post-edits. We demonstrate the advantage of
our approach for translation post-editing through simulated post-edits. We also
evaluate our model for paraphrasing through a user study.
| 2,018 | Computation and Language |
Robust Multilingual Part-of-Speech Tagging via Adversarial Training | Adversarial training (AT) is a powerful regularization method for neural
networks, aiming to achieve robustness to input perturbations. Yet, the
specific effects of the robustness obtained from AT are still unclear in the
context of natural language processing. In this paper, we propose and analyze a
neural POS tagging model that exploits AT. In our experiments on the Penn
Treebank WSJ corpus and the Universal Dependencies (UD) dataset (27 languages),
we find that AT not only improves the overall tagging accuracy, but also 1)
effectively prevents over-fitting in low-resource languages and 2) boosts tagging
accuracy for rare / unseen words. We also demonstrate that 3) the improved
tagging performance by AT contributes to the downstream task of dependency
parsing, and that 4) AT helps the model to learn cleaner word representations.
5) The proposed AT model is generally effective in different sequence labeling
tasks. These positive results motivate further use of AT for natural language
tasks.
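For readers unfamiliar with adversarial training in the text setting, the usual recipe perturbs the word embeddings in the direction of the loss gradient; the sketch below follows this generic formulation and is not the paper's code (the model, loss function, and epsilon value are placeholders):

```python
import torch

def adversarial_loss(model, embeddings, tags, loss_fn, epsilon=0.01):
    """Sketch of adversarial training on word embeddings: perturb the
    embeddings along the loss gradient (scaled to norm epsilon) and add
    the loss on the perturbed input to the clean loss."""
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = loss_fn(model(embeddings), tags)
    grad, = torch.autograd.grad(clean_loss, embeddings)
    # worst-case perturbation of bounded norm (FGSM-style)
    r_adv = epsilon * grad / (grad.norm() + 1e-12)
    adv_loss = loss_fn(model(embeddings + r_adv), tags)
    return clean_loss + adv_loss
```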
| 2,018 | Computation and Language |
From Word Segmentation to POS Tagging for Vietnamese | This paper presents an empirical comparison of two strategies for Vietnamese
Part-of-Speech (POS) tagging from unsegmented text: (i) a pipeline strategy
where we consider the output of a word segmenter as the input of a POS tagger,
and (ii) a joint strategy where we predict a combined segmentation and POS tag
for each syllable. We also make a comparison between state-of-the-art (SOTA)
feature-based and neural network-based models. On the benchmark Vietnamese
treebank (Nguyen et al., 2009), experimental results show that the pipeline
strategy produces better scores of POS tagging from unsegmented text than the
joint strategy, and the highest accuracy is obtained by using a feature-based
model.
| 2,017 | Computation and Language |
Classical Structured Prediction Losses for Sequence to Sequence Learning | There has been much recent work on training neural attention models at the
sequence-level using either reinforcement learning-style methods or by
optimizing the beam. In this paper, we survey a range of classical objective
functions that have been widely used to train linear models for structured
prediction and apply them to neural sequence to sequence models. Our
experiments show that these losses can perform surprisingly well by slightly
outperforming beam search optimization in a like for like setup. We also report
new state of the art results on both IWSLT'14 German-English translation as
well as Gigaword abstractive summarization. On the larger WMT'14 English-French
translation task, sequence-level training achieves 41.5 BLEU which is on par
with the state of the art.
| 2,018 | Computation and Language |
Dynamic Fusion Networks for Machine Reading Comprehension | This paper presents a novel neural model - Dynamic Fusion Network (DFN), for
machine reading comprehension (MRC). DFNs differ from most state-of-the-art
models in their use of a dynamic multi-strategy attention process, in which
passages, questions and answer candidates are jointly fused into attention
vectors, along with a dynamic multi-step reasoning module for generating
answers. With the use of reinforcement learning, for each input sample that
consists of a question, a passage and a list of candidate answers, an instance
of DFN with a sample-specific network architecture can be dynamically
constructed by determining what attention strategy to apply and how many
reasoning steps to take. Experiments show that DFNs achieve the best result
reported on RACE, a challenging MRC dataset that contains real human reading
questions in a wide variety of types. A detailed empirical analysis also
demonstrates that DFNs can produce attention vectors that summarize information
from questions, passages and answer candidates more effectively than other
popular MRC models.
| 2,018 | Computation and Language |
Unified Pragmatic Models for Generating and Following Instructions | We show that explicit pragmatic inference aids in correctly generating and
following natural language instructions for complex, sequential tasks. Our
pragmatics-enabled models reason about why speakers produce certain
instructions, and about how listeners will react upon hearing them. Like
previous pragmatic models, we use learned base listener and speaker models to
build a pragmatic speaker that uses the base listener to simulate the
interpretation of candidate descriptions, and a pragmatic listener that reasons
counterfactually about alternative descriptions. We extend these models to
tasks with sequential structure. Evaluation of language generation and
interpretation shows that pragmatic inference improves state-of-the-art
listener models (at correctly interpreting human instructions) and speaker
models (at producing instructions correctly interpreted by humans) in diverse
settings.
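A toy sketch of the base-listener/pragmatic-speaker computation described here (the candidate instructions, toy listener, and normalization are illustrative assumptions, not the paper's implementation):

```python
def pragmatic_speaker(candidates, base_listener, target_state):
    """Rerank candidate instructions by how likely the base listener is to
    recover the intended target state from each of them."""
    scores = {c: base_listener(c).get(target_state, 0.0) for c in candidates}
    total = sum(scores.values()) or 1.0
    return {c: s / total for c, s in scores.items()}  # normalized speaker distribution

# toy base listener: maps an instruction to a distribution over world states
def base_listener(instruction):
    if "left" in instruction:
        return {"go_left": 0.9, "go_right": 0.1}
    return {"go_left": 0.4, "go_right": 0.6}

print(pragmatic_speaker(["turn left", "go ahead"], base_listener, "go_left"))
```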
| 2,018 | Computation and Language |
Learning an Executable Neural Semantic Parser | This paper describes a neural semantic parser that maps natural language
utterances onto logical forms which can be executed against a task-specific
environment, such as a knowledge base or a database, to produce a response. The
parser generates tree-structured logical forms with a transition-based approach
which combines a generic tree-generation algorithm with domain-general
operations defined by the logical language. The generation process is modeled
by structured recurrent neural networks, which provide a rich encoding of the
sentential context and generation history for making predictions. To tackle
mismatches between natural language and logical form tokens, various attention
mechanisms are explored. Finally, we consider different training settings for
the neural semantic parser, including a fully supervised training where
annotated logical forms are given, weakly-supervised training where denotations
are provided, and distant supervision where only unlabeled sentences and a
knowledge base are available. Experiments across a wide range of datasets
demonstrate the effectiveness of our parser.
| 2,018 | Computation and Language |
DuReader: a Chinese Machine Reading Comprehension Dataset from
Real-world Applications | This paper introduces DuReader, a new large-scale, open-domain Chinese
machine reading comprehension (MRC) dataset, designed to address real-world MRC.
DuReader has three advantages over previous MRC datasets: (1) data sources:
questions and documents are based on Baidu Search and Baidu Zhidao; answers are
manually generated. (2) question types: it provides rich annotations for more
question types, especially yes-no and opinion questions, which leaves more
opportunity for the research community. (3) scale: it contains 200K questions,
420K answers and 1M documents; it is the largest Chinese MRC dataset so far.
Experiments show that human performance is well above current state-of-the-art
baseline systems, leaving plenty of room for the community to make
improvements. To help the community make these improvements, both DuReader and
baseline systems have been posted online. We also organize a shared competition
to encourage the exploration of more models. Since the release of the task,
there have been significant improvements over the baselines.
| 2,018 | Computation and Language |
Evidence Aggregation for Answer Re-Ranking in Open-Domain Question
Answering | A popular recent approach to answering open-domain questions is to first
search for question-related passages and then apply reading comprehension
models to extract answers. Existing methods usually extract answers from single
passages independently. But some questions require a combination of evidence
from across different sources to answer correctly. In this paper, we propose
two models which make use of multiple passages to generate their answers. Both
use an answer-reranking approach which reorders the answer candidates generated
by an existing state-of-the-art QA model. We propose two methods, namely,
strength-based re-ranking and coverage-based re-ranking, to make use of the
aggregated evidence from different passages to better determine the answer. Our
models have achieved state-of-the-art results on three public open-domain QA
datasets: Quasar-T, SearchQA and the open-domain version of TriviaQA, with
about 8 percentage points of improvement on the first two datasets.
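As an illustration of the strength-based re-ranking idea (aggregating the evidence for an answer candidate across passages; the exact scoring and normalization details here are assumptions rather than the paper's formulation):

```python
from collections import defaultdict

def strength_rerank(candidates):
    """candidates: list of (answer_text, passage_id, reader_score) triples
    produced by a base QA model on different retrieved passages.
    Aggregate evidence for identical answer strings across passages."""
    strength = defaultdict(float)
    for answer, _passage, score in candidates:
        strength[answer.lower().strip()] += score  # a pure count (+= 1.0) also works
    return sorted(strength.items(), key=lambda kv: kv[1], reverse=True)

cands = [("Barack Obama", "p1", 0.8), ("barack obama", "p2", 0.7), ("Joe Biden", "p3", 0.9)]
print(strength_rerank(cands))  # "barack obama" wins via aggregated evidence
```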
| 2,018 | Computation and Language |
On Extending Neural Networks with Loss Ensembles for Text Classification | Ensemble techniques are powerful approaches that combine several weak
learners to build a stronger one. As a meta-learning framework, ensemble
techniques can easily be applied to many machine learning methods. In this
paper we propose a neural network extended with an ensemble loss function for
text classification. The weight of each weak loss function is tuned within the
training phase through the gradient backpropagation used to optimize the
neural network. The approach is evaluated on several text classification
datasets. We also evaluate its performance in various environments with several
degrees of label noise. Experimental results indicate improved performance and
strong resilience to label noise in comparison with other methods.
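A minimal sketch of an ensemble loss whose member weights are tuned by gradient descent (PyTorch; the softmax weighting and the particular member losses are assumptions, not the authors' exact formulation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnsembleLoss(nn.Module):
    """Weighted combination of several weak loss functions; the weights are
    learnable parameters optimized jointly with the network."""
    def __init__(self, losses):
        super().__init__()
        self.losses = losses
        self.logits = nn.Parameter(torch.zeros(len(losses)))  # one weight per loss

    def forward(self, predictions, targets):
        weights = F.softmax(self.logits, dim=0)  # keep weights positive, summing to 1
        return sum(w * loss(predictions, targets) for w, loss in zip(weights, self.losses))

# toy usage with two weak losses over classification logits
ensemble = EnsembleLoss([nn.CrossEntropyLoss(), nn.MultiMarginLoss()])
loss = ensemble(torch.randn(4, 3), torch.tensor([0, 2, 1, 0]))
```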
| 2,017 | Computation and Language |
False Positive and Cross-relation Signals in Distant Supervision Data | Distant supervision (DS) is a well-established method for relation extraction
from text, based on the assumption that when a knowledge-base contains a
relation between a term pair, then sentences that contain that pair are likely
to express the relation. In this paper, we use the results of a crowdsourcing
relation extraction task to identify two problems with DS data quality: the
widely varying degree of false positives across different relations, and the
observed causal connection between relations that are not considered by the DS
method. The crowdsourcing data aggregation is performed using ambiguity-aware
CrowdTruth metrics, that are used to capture and interpret inter-annotator
disagreement. We also present preliminary results of using the crowd to enhance
DS training data for a relation classification model, without requiring the
crowd to annotate the entire set.
| 2,017 | Computation and Language |
Unsupervised patient representations from clinical notes with
interpretable classification decisions | We have two main contributions in this work: 1. We explore the use of a
stacked denoising autoencoder and a paragraph vector model to learn
task-independent dense patient representations directly from clinical notes. We
evaluate these representations by using them as features in multiple supervised
setups, and compare their performance with those of sparse representations. 2.
To understand and interpret the representations, we explore the best encoded
features within the patient representations obtained from the autoencoder
model. Further, we calculate the significance of the input features of the
trained classifiers when we use these pretrained representations as input.
| 2,017 | Computation and Language |
Controllable Abstractive Summarization | Current models for document summarization disregard user preferences such as
the desired length, style, the entities that the user might be interested in,
or how much of the document the user has already read. We present a neural
summarization model with a simple but effective mechanism to enable users to
specify these high level attributes in order to control the shape of the final
summaries to better suit their needs. With user input, our system can produce
high quality summaries that follow user preferences. Without user input, we set
the control variables automatically. On the full text CNN-Dailymail dataset, we
outperform state of the art abstractive systems (both in terms of F1-ROUGE1
40.38 vs. 39.53 and human evaluation).
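One common way to realize this kind of control (and, roughly, what this line of work describes) is to prepend special marker tokens encoding the desired attributes to the source document before it is fed to a standard seq2seq summarizer; the token vocabulary below is made up for illustration:

```python
def add_control_tokens(document_tokens, length_bucket=None, entities=(), style=None):
    """Prefix a source document with control tokens for a seq2seq summarizer.
    At training time the markers are derived from the reference summary;
    at test time the user (or a default policy) sets them."""
    prefix = []
    if length_bucket is not None:
        prefix.append(f"<len_{length_bucket}>")       # e.g. <len_2> = medium-length summary
    prefix.extend(f"<ent_{e}>" for e in entities)     # entities of interest
    if style is not None:
        prefix.append(f"<style_{style}>")
    return prefix + document_tokens

print(add_control_tokens(["the", "markets", "fell", "..."],
                         length_bucket=2, entities=["NASDAQ"]))
```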
| 2,018 | Computation and Language |
Weakly-supervised Semantic Parsing with Abstract Examples | Training semantic parsers from weak supervision (denotations) rather than
strong supervision (programs) complicates training in two ways. First, a large
search space of potential programs needs to be explored at training time to
find a correct program. Second, spurious programs that accidentally lead to a
correct denotation add noise to training. In this work we propose that in
closed worlds with clear semantic types, one can substantially alleviate these
problems by utilizing an abstract representation, where tokens in both the
language utterance and program are lifted to an abstract form. We show that
these abstractions can be defined with a handful of lexical rules and that they
result in sharing between different examples that alleviates the difficulties
in training. To test our approach, we develop the first semantic parser for
CNLVR, a challenging visual reasoning dataset, where the search space is large
and overcoming spuriousness is critical, because denotations are either TRUE or
FALSE, and thus random programs are likely to lead to a correct denotation. Our
method substantially improves performance, and reaches 82.5% accuracy, a 14.7%
absolute accuracy improvement compared to the best reported accuracy so far.
| 2,019 | Computation and Language |
Modeling Semantic Relatedness using Global Relation Vectors | Word embedding models such as GloVe rely on co-occurrence statistics from a
large corpus to learn vector representations of word meaning. These vectors
have proven to capture surprisingly fine-grained semantic and syntactic
information. While we may similarly expect that co-occurrence statistics can be
used to capture rich information about the relationships between different
words, existing approaches for modeling such relationships have mostly relied
on manipulating pre-trained word vectors. In this paper, we introduce a novel
method which directly learns relation vectors from co-occurrence statistics. To
this end, we first introduce a variant of GloVe, in which there is an explicit
connection between word vectors and PMI weighted co-occurrence vectors. We then
show how relation vectors can be naturally embedded into the resulting vector
space.
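For reference (standard background rather than the paper's exact variant), the GloVe objective over co-occurrence counts and the PMI quantity it is connected to can be written as:

```latex
% GloVe objective over co-occurrence counts X_{ij}
J = \sum_{i,j} f(X_{ij})\,\bigl(w_i^{\top}\tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij}\bigr)^2

% Pointwise mutual information between words i and j
\mathrm{PMI}(i,j) = \log \frac{p(i,j)}{p(i)\,p(j)}
```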
| 2,017 | Computation and Language |
Simulating Action Dynamics with Neural Process Networks | Understanding procedural language requires anticipating the causal effects of
actions, even when they are not explicitly stated. In this work, we introduce
Neural Process Networks to understand procedural text through (neural)
simulation of action dynamics. Our model complements existing memory
architectures with dynamic entity tracking by explicitly modeling actions as
state transformers. The model updates the states of the entities by executing
learned action operators. Empirical results demonstrate that our proposed model
can reason about the unstated causal effects of actions, allowing it to provide
more accurate contextual information for understanding and generating
procedural text, all while offering more interpretable internal representations
than existing alternatives.
| 2,018 | Computation and Language |
Supervised and Unsupervised Transfer Learning for Question Answering | Although transfer learning has been shown to be successful for tasks like
object and speech recognition, its applicability to question answering (QA) has
yet to be well-studied. In this paper, we conduct extensive experiments to
investigate the transferability of knowledge learned from a source QA dataset
to a target dataset using two QA models. The performance of both models on a
TOEFL listening comprehension test (Tseng et al., 2016) and MCTest (Richardson
et al., 2013) is significantly improved via a simple transfer learning
technique from MovieQA (Tapaswi et al., 2016). In particular, one of the models
achieves the state-of-the-art on all target datasets; for the TOEFL listening
comprehension test, it outperforms the previous best model by 7%. Finally, we
show that transfer learning is helpful even in unsupervised scenarios when
correct answers for target QA dataset examples are not available.
| 2,018 | Computation and Language |
A Deep Learning Approach for Expert Identification in Question Answering
Communities | In this paper, we describe an effective convolutional neural network
framework for identifying experts in question answering communities. The
approach uses a convolutional neural network and combines user feature
representations with question feature representations to compute scores, such
that the user with the highest score is identified as the expert for the
question. Unlike prior work, this method does not rely on measuring answer
content quality to identify experts, but requires only the question sentence
and user embedding features. Remarkably, our model can be applied to different
languages and different domains. The proposed framework is trained on two
datasets: Stack Overflow and Zhihu. The Top-1 accuracy results of our
experiments show that our framework outperforms the best baseline framework
for expert identification.
| 2,017 | Computation and Language |
Attention Focusing for Neural Machine Translation by Bridging Source and
Target Embeddings | In neural machine translation, a source sequence of words is encoded into a
vector from which a target sequence is generated in the decoding phase.
Differently from statistical machine translation, the associations between
source words and their possible target counterparts are not explicitly stored.
Source and target words are at the two ends of a long information processing
procedure, mediated by hidden states at both the source encoding and the target
decoding phases. This makes it possible that a source word is incorrectly
translated into a target word that is not any of its admissible equivalent
counterparts in the target language.
In this paper, we seek to somewhat shorten the distance between source and
target words in that procedure, and thus strengthen their association, by means
of a method we term bridging source and target word embeddings. We experiment
with three strategies: (1) a source-side bridging model, where source word
embeddings are moved one step closer to the output target sequence; (2) a
target-side bridging model, which explores the more relevant source word
embeddings for the prediction of the target sequence; and (3) a direct bridging
model, which directly connects source and target word embeddings, seeking to
minimize the errors of translating one by means of the other.
Experiments and analysis presented in this paper demonstrate that the
proposed bridging models are able to significantly improve the quality of both
sentence translation, in general, and alignment and translation of individual
source words with target words, in particular.
| 2,018 | Computation and Language |
A Sequential Neural Encoder with Latent Structured Description for
Modeling Sentences | In this paper, we propose a sequential neural encoder with latent structured
description (SNELSD) for modeling sentences. This model introduces latent
chunk-level representations into conventional sequential neural encoders, i.e.,
recurrent neural networks (RNNs) with long short-term memory (LSTM) units, to
consider the compositionality of languages in semantic modeling. An SNELSD
model has a hierarchical structure that includes a detection layer and a
description layer. The detection layer predicts the boundaries of latent word
chunks in an input sentence and derives a chunk-level vector for each word. The
description layer utilizes modified LSTM units to process these chunk-level
vectors in a recurrent manner and produces sequential encoding outputs. These
output vectors are further concatenated with word vectors or the outputs of a
chain LSTM encoder to obtain the final sentence representation. All the model
parameters are learned in an end-to-end manner without a dependency on
additional text chunking or syntax parsing. A natural language inference (NLI)
task and a sentiment analysis (SA) task are adopted to evaluate the performance
of our proposed model. The experimental results demonstrate the effectiveness
of the proposed SNELSD model on exploring task-dependent chunking patterns
during the semantic modeling of sentences. Furthermore, the proposed method
achieves better performance than conventional chain LSTMs and tree-structured
LSTMs on both tasks.
| 2,017 | Computation and Language |
Aicyber's System for NLPCC 2017 Shared Task 2: Voting of Baselines | This paper presents Aicyber's system for NLPCC 2017 shared task 2. It is
formed by a voting ensemble of three deep-learning-based systems trained on
character-enhanced word vectors and a well-known bag-of-words model.
| 2,017 | Computation and Language |
Tracking Typological Traits of Uralic Languages in Distributed Language
Representations | Although linguistic typology has a long history, computational approaches
have only recently gained popularity. The use of distributed representations in
computational linguistics has also become increasingly popular. A recent
development is to learn distributed representations of language, such that
typologically similar languages are spatially close to one another. Although
empirical successes have been shown for such language representations, they
have not been subjected to much typological probing. In this paper, we first
look at whether this type of language representation is empirically useful
for model transfer between Uralic languages in deep neural networks. We then
investigate which typological features are encoded in these representations by
attempting to predict features in the World Atlas of Language Structures, at
various stages of fine-tuning of the representations. We focus on Uralic
languages, and find that some typological traits can be automatically inferred
with accuracies well above a strong baseline.
| 2,017 | Computation and Language |
Investigating Inner Properties of Multimodal Representation and Semantic
Compositionality with Brain-based Componential Semantics | Multimodal models have been proven to outperform text-based approaches on
learning semantic representations. However, it still remains unclear what
properties are encoded in multimodal representations, in what aspects they
outperform single-modality representations, and what happens in the process of
semantic compositionality in different input modalities. Considering
that multimodal models are originally motivated by human concept
representations, we assume that correlating multimodal representations with
brain-based semantics would interpret their inner properties to answer the
above questions. To that end, we propose simple interpretation methods based on
brain-based componential semantics. First we investigate the inner properties
of multimodal representations by correlating them with corresponding
brain-based property vectors. Then we map the distributed vector space to the
interpretable brain-based componential space to explore the inner properties of
semantic compositionality. Ultimately, the present paper sheds light on the
fundamental questions of natural language understanding, such as how to
represent the meaning of words and how to combine word meanings into larger
units.
| 2,017 | Computation and Language |
Detecting and assessing contextual change in diachronic text documents
using context volatility | Terms in diachronic text corpora may exhibit a high degree of semantic
dynamics that is only partially captured by the common notion of semantic
change. The new measure of context volatility that we propose models the degree
by which terms change context in a text collection over time. The computation
of context volatility for a word relies on the significance-values of its
co-occurrent terms and the corresponding co-occurrence ranks in sequential time
spans. We define a baseline and present an efficient computational approach in
order to overcome issues related to the underlying data structure. Results are
evaluated both on synthetic documents that are used to simulate contextual
changes, and on a real example based on British newspaper texts.
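A rough sketch of the kind of computation involved (the exact volatility measure is the paper's contribution and is not reproduced here; this illustrative version simply measures how much the co-occurrence ranks of a term's context words vary across time spans):

```python
import statistics

def context_volatility(rank_history):
    """rank_history: dict mapping each co-occurring context term to the list of
    its co-occurrence ranks with the target word in consecutive time spans.
    Illustrative volatility: mean standard deviation of ranks across terms."""
    deviations = [statistics.pstdev(ranks) for ranks in rank_history.values()
                  if len(ranks) > 1]
    return sum(deviations) / len(deviations) if deviations else 0.0

# toy example: a stable context term vs. one whose rank fluctuates over time
print(context_volatility({"market": [1, 1, 2, 1], "crisis": [10, 2, 25, 4]}))
```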
| 2,017 | Computation and Language |
Dialogue Act Recognition via CRF-Attentive Structured Network | Dialogue Act Recognition (DAR) is a challenging problem in dialogue
interpretation, which aims to attach semantic labels to utterances and
characterize the speaker's intention. Currently, many existing approaches
formulate the DAR problem ranging from multi-classification to structured
prediction, which suffer from handcrafted feature extensions and attentive
contextual structural dependencies. In this paper, we consider the problem of
DAR from the viewpoint of extending richer Conditional Random Field (CRF)
structural dependencies without abandoning end-to-end training. We incorporate
hierarchical semantic inference with memory mechanism on the utterance
modeling. We then extend structured attention network to the linear-chain
conditional random field layer which takes into account both contextual
utterances and corresponding dialogue acts. The extensive experiments on two
major benchmark datasets Switchboard Dialogue Act (SWDA) and Meeting Recorder
Dialogue Act (MRDA) datasets show that our method achieves better performance
than other state-of-the-art solutions to the problem. Remarkably, our method
comes close to the human annotator's performance on SWDA, within a 2% gap.
| 2,017 | Computation and Language |
Words are Malleable: Computing Semantic Shifts in Political and Media
Discourse | Recently, researchers started to pay attention to the detection of temporal
shifts in the meaning of words. However, most (if not all) of these approaches
restricted their efforts to uncovering change over time, thus neglecting other
valuable dimensions such as social or political variability. We propose an
approach for detecting semantic shifts between different viewpoints--broadly
defined as a set of texts that share a specific metadata feature, which can be
a time-period, but also a social entity such as a political party. For each
viewpoint, we learn a semantic space in which each word is represented as a low
dimensional neural embedded vector. The challenge is to compare the meaning of
a word in one space to its meaning in another space and measure the size of the
semantic shifts. We compare the effectiveness of a measure based on optimal
transformations between the two spaces with a measure based on the similarity
of the neighbors of the word in the respective spaces. Our experiments
demonstrate that the combination of these two performs best. We show that the
semantic shifts not only occur over time, but also along different viewpoints
in a short period of time. For evaluation, we demonstrate how this approach
captures meaningful semantic shifts and can help improve other tasks such as
the contrastive viewpoint summarization and ideology detection (measured as
classification accuracy) in political texts. We also show that the two laws of
semantic change which were empirically shown to hold for temporal shifts also
hold for shifts across viewpoints. These laws state that frequent words are
less likely to shift meaning while words with many senses are more likely to do
so.
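A compact sketch of the two kinds of measures compared here, assuming two numpy embedding matrices indexed by a shared vocabulary (this is standard orthogonal-Procrustes alignment plus neighbor overlap, not the authors' exact code):

```python
import numpy as np

def procrustes_shift(A, B, idx):
    """Align space A to space B with an orthogonal map learned on the shared
    vocabulary rows, then return the cosine distance for word index idx."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt                          # orthogonal transformation A -> B
    a, b = A[idx] @ R, B[idx]
    return 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def neighbor_shift(A, B, idx, k=10):
    """Fraction of a word's k nearest neighbors that differ between spaces."""
    def topk(M):
        sims = M @ M[idx] / (np.linalg.norm(M, axis=1) * np.linalg.norm(M[idx]) + 1e-12)
        return set(np.argsort(-sims)[1:k + 1])   # skip the word itself
    return 1 - len(topk(A) & topk(B)) / k
```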
| 2,017 | Computation and Language |
Deep Temporal-Recurrent-Replicated-Softmax for Topical Trends over Time | Dynamic topic modeling facilitates the identification of topical trends over
time in temporal collections of unstructured documents. We introduce a novel
unsupervised neural dynamic topic model named the Recurrent Neural
Network-Replicated Softmax Model (RNN-RSM), where the discovered topics at each
time influence the topic discovery in the subsequent time steps. We account for
the temporal ordering of documents by explicitly modeling a joint distribution
of latent topical dependencies over time, using distributional estimators with
temporal recurrent connections. Applying RNN-RSM to 19 years of articles on NLP
research, we demonstrate that, compared to state-of-the-art topic models,
RNN-RSM shows better generalization, topic interpretation, evolution and
trends. We also introduce a metric (named SPAN) to quantify the capability of a
dynamic topic model to capture word evolution in topics over time.
| 2,018 | Computation and Language |
Unsupervised Morphological Expansion of Small Datasets for Improving
Word Embeddings | We present a language independent, unsupervised method for building word
embeddings using morphological expansion of text. Our model handles the problem
of data sparsity and yields improved word embeddings by relying on training
word embeddings on artificially generated sentences. We evaluate our method
using small-sized training sets on eleven test sets for the word similarity
task across seven languages. Further, for English, we evaluated the impact of
our approach using a large training set on three standard test sets. Our method
improved results across all languages.
| 2,017 | Computation and Language |
An Unsupervised Approach for Mapping between Vector Spaces | We present a language independent, unsupervised approach for transforming
word embeddings from source language to target language using a transformation
matrix. Our model handles the problem of data scarcity which is faced by many
languages in the world and yields improved word embeddings for words in the
target language by relying on transformed embeddings of words of the source
language. We initially evaluate our approach via word similarity tasks on a
similar language pair - Hindi as source and Urdu as the target language, while
we also evaluate our method on French and German as target languages and
English as the source language. Our approach improves the current
state-of-the-art results by 13% for French and 19% for German. For Urdu, we saw
an improvement of 16% over our initial baseline score. We further explore the
prospects of our
approach by applying it on multiple models of the same language and
transferring words between the two models, thus solving the problem of missing
words in a model. We evaluate this on word similarity and word analogy tasks.
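Under the usual assumption of a seed set of aligned word pairs with embedding matrices X (source) and Y (target), a linear mapping of this kind can be obtained by least squares; this is a generic sketch, not the authors' unsupervised procedure for obtaining the pairs:

```python
import numpy as np

def learn_mapping(X, Y):
    """Least-squares transformation matrix W such that X @ W ~ Y.
    X, Y: (n_pairs, dim) matrices of aligned source/target embeddings."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def translate(vec, W, target_matrix, target_words, k=5):
    """Map a source-space vector into the target space and return its
    k nearest target-space words by cosine similarity."""
    mapped = vec @ W
    sims = target_matrix @ mapped / (
        np.linalg.norm(target_matrix, axis=1) * np.linalg.norm(mapped) + 1e-12)
    return [target_words[i] for i in np.argsort(-sims)[:k]]
```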
| 2,017 | Computation and Language |
ParaNMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with
Millions of Machine Translations | We describe PARANMT-50M, a dataset of more than 50 million English-English
sentential paraphrase pairs. We generated the pairs automatically by using
neural machine translation to translate the non-English side of a large
parallel corpus, following Wieting et al. (2017). Our hope is that ParaNMT-50M
can be a valuable resource for paraphrase generation and can provide a rich
source of semantic knowledge to improve downstream natural language
understanding tasks. To show its utility, we use ParaNMT-50M to train
paraphrastic sentence embeddings that outperform all supervised systems on
every SemEval semantic textual similarity competition, in addition to showing
how it can be used for paraphrase generation.
| 2,018 | Computation and Language |
Detecting Egregious Conversations between Customers and Virtual Agents | Virtual agents are becoming a prominent channel of interaction in customer
service. Not all customer interactions are smooth, however, and some can become
almost comically bad. In such instances, a human agent might need to step in
and salvage the conversation. Detecting bad conversations is important since
disappointing customer service may threaten customer loyalty and impact
revenue. In this paper, we outline an approach to detecting such egregious
conversations, using behavioral cues from the user, patterns in agent
responses, and user-agent interaction. Using logs of two commercial systems, we
show that using these features improves the detection F1-score by around 20%
over using textual features alone. In addition, we show that those features are
common across two quite different domains and, arguably, universal.
| 2,018 | Computation and Language |
CMU LiveMedQA at TREC 2017 LiveQA: A Consumer Health Question Answering
System | In this paper, we present LiveMedQA, a question answering system that is
optimized for consumer health questions. On top of the general QA system
pipeline, we introduce several new features that aim to exploit domain-specific
knowledge and entity structures for better performance. This includes a
question type/focus analyzer based on a deep text classification model, a
tree-based knowledge graph for answer generation and a complementary
structure-aware searcher for answer retrieval. The LiveMedQA system is
evaluated in the TREC 2017 LiveQA medical subtask, where it received an average
score of 0.356 on a 3-point scale. Evaluation results revealed three
substantial drawbacks in the current LiveMedQA system, based on which we
provide a detailed discussion
and propose a few solutions that constitute the main focus of our subsequent
work.
| 2,017 | Computation and Language |
Finer Grained Entity Typing with TypeNet | We consider the challenging problem of entity typing over an extremely fine
grained set of types, wherein a single mention or entity can have many
simultaneous and often hierarchically-structured types. Despite the importance
of the problem, there is a relative lack of resources in the form of
fine-grained, deep type hierarchies aligned to existing knowledge bases. In
response, we introduce TypeNet, a dataset of entity types consisting of over
1941 types organized in a hierarchy, obtained by manually annotating a mapping
from 1081 Freebase types to WordNet. We also experiment with several models
comparable to state-of-the-art systems and explore techniques to incorporate a
structure loss on the hierarchy with the standard mention typing loss, as a
first step towards future research on this dataset.
| 2,017 | Computation and Language |
Go for a Walk and Arrive at the Answer: Reasoning Over Paths in
Knowledge Bases using Reinforcement Learning | Knowledge bases (KB), both automatically and manually constructed, are often
incomplete --- many valid facts can be inferred from the KB by synthesizing
existing information. A popular approach to KB completion is to infer new
relations by combinatory reasoning over the information found along other paths
connecting a pair of entities. Given the enormous size of KBs and the
exponential number of paths, previous path-based models have considered only
the problem of predicting a missing relation given two entities or evaluating
the truth of a proposed triple. Additionally, these methods have traditionally
used random paths between fixed entity pairs or more recently learned to pick
paths between them. We propose a new algorithm MINERVA, which addresses the
much more difficult and practical task of answering questions where the
relation is known, but only one entity is given. Since random walks are impractical in a
setting with combinatorially many destinations from a start node, we present a
neural reinforcement learning approach which learns how to navigate the graph
conditioned on the input query to find predictive paths. Empirically, this
approach obtains state-of-the-art results on several datasets, significantly
outperforming prior methods.
| 2,019 | Computation and Language |
Crowdsourcing Question-Answer Meaning Representations | We introduce Question-Answer Meaning Representations (QAMRs), which represent
the predicate-argument structure of a sentence as a set of question-answer
pairs. We also develop a crowdsourcing scheme to show that QAMRs can be labeled
with very little training, and gather a dataset with over 5,000 sentences and
100,000 questions. A detailed qualitative analysis demonstrates that the
crowd-generated question-answer pairs cover the vast majority of
predicate-argument relationships in existing datasets (including PropBank,
NomBank, QA-SRL, and AMR) along with many previously under-resourced ones,
including implicit arguments and relations. The QAMR data and annotation code
are made publicly available to enable future work on how best to model these
complex phenomena.
| 2,017 | Computation and Language |
An Encoder-Decoder Framework Translating Natural Language to Database
Queries | Machine translation is going through a radical revolution, driven by the
explosive development of deep learning techniques using Convolutional Neural
Network (CNN) and Recurrent Neural Network (RNN). In this paper, we consider a
special case of the machine translation problem, aiming to convert natural
language into Structured Query Language (SQL) for data retrieval over
relational database. Although generic CNN and RNN learn the grammar structure
of SQL when trained with sufficient samples, the accuracy and training
efficiency of the model could be dramatically improved, when the translation
model is deeply integrated with the grammar rules of SQL. We present a new
encoder-decoder framework, with a suite of new approaches, including new
semantic features fed into the encoder, grammar-aware states injected into the
memory of decoder, as well as recursive state management for sub-queries. These
techniques help the neural network better focus on understanding semantics of
operations in natural language and save the efforts on SQL grammar learning.
The empirical evaluation on real-world databases and queries shows that our
approach outperforms the state-of-the-art solution by a significant margin.
| 2,018 | Computation and Language |
ConvAMR: Abstract meaning representation parsing for legal document | Convolutional neural networks (CNN) have recently achieved remarkable
performance in a wide range of applications. In this research, we equip a
convolutional sequence-to-sequence (seq2seq) model with an efficient graph
linearization technique for abstract meaning representation parsing. Our
linearization method is better than the prior method at signaling turns in the
graph traversal. Additionally, the convolutional seq2seq model is more
appropriate and considerably faster than recurrent neural network models in
this task. Our method outperforms previous methods by a large margin on the
standard dataset LDC2014T12. Our results indicate that future work still has
room for improving parsing models using the graph linearization approach.
| 2,017 | Computation and Language |
Addressing Cross-Lingual Word Sense Disambiguation on Low-Density
Languages: Application to Persian | We explore the use of unsupervised methods in Cross-Lingual Word Sense
Disambiguation (CL-WSD) with the application of English to Persian. Our
proposed approach targets languages with scarce resources (low-density) by
exploiting word embeddings and the semantic similarity of words in context. We
evaluate the approach on a recent evaluation benchmark and compare it with the
state-of-the-art unsupervised system (CO-Graph). The results show that our
approach outperforms both the standard baseline and the CO-Graph system in both
of the task evaluation metrics (Out-Of-Five and Best result).
| 2,018 | Computation and Language |
A Generative Approach to Question Answering | Question Answering has come a long way from answer sentence selection and
relational QA to reading comprehension. We shift our attention to generative
question answering (gQA), in which a machine reads passages and answers
questions by learning to generate the answers. We frame the problem as a
generative task in which the encoder models the relationship between the
question and the passage and encodes them into a vector, thus allowing the
decoder to directly form an abstraction of the answer. Not being able to
retain facts and making repetitions are common mistakes that affect the
overall legibility of answers. To counter these issues, we employ a copying
mechanism and maintain a coverage vector in our model, respectively. Our
results on MS-MARCO demonstrate its superiority over baselines, and we also
show qualitative examples where we improve in terms of correctness and
readability.
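For readers unfamiliar with the coverage idea mentioned above, a common formulation (following the well-known coverage penalty of See et al.; this is generic background, not necessarily the exact variant used here) accumulates past attention and penalizes re-attending to the same source positions:

```python
import torch

def coverage_penalty(attention_steps):
    """attention_steps: list of attention distributions over source tokens,
    one per decoding step (each a 1-D tensor summing to 1). Penalize attending
    again to positions that are already 'covered' by earlier steps."""
    coverage = torch.zeros_like(attention_steps[0])
    penalty = 0.0
    for attn in attention_steps:
        penalty = penalty + torch.sum(torch.minimum(attn, coverage))
        coverage = coverage + attn          # running sum of past attention
    return penalty

steps = [torch.tensor([0.7, 0.2, 0.1]), torch.tensor([0.6, 0.3, 0.1])]
print(coverage_penalty(steps))  # nonzero: step 2 re-attends to token 0
```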
| 2,018 | Computation and Language |
Question Asking as Program Generation | A hallmark of human intelligence is the ability to ask rich, creative, and
revealing questions. Here we introduce a cognitive model capable of
constructing human-like questions. Our approach treats questions as formal
programs that, when executed on the state of the world, output an answer. The
model specifies a probability distribution over a complex, compositional space
of programs, favoring concise programs that help the agent learn in the current
context. We evaluate our approach by modeling the types of open-ended questions
generated by humans who were attempting to learn about an ambiguous situation
in a game. We find that our model predicts what questions people will ask, and
can creatively produce novel questions that were not present in the training
set. In addition, we compare a number of model variants, finding that both
question informativeness and complexity are important for producing human-like
questions.
| 2,017 | Computation and Language |
Phonological (un)certainty weights lexical activation | Spoken word recognition involves at least two basic computations. First is
matching acoustic input to phonological categories (e.g. /b/, /p/, /d/). Second
is activating words consistent with those phonological categories. Here we test
the hypothesis that the listener's probability distribution over lexical items
is weighted by the outcome of both computations: uncertainty about phonological
discretisation and the frequency of the selected word(s). To test this, we
record neural responses in auditory cortex using magnetoencephalography, and
model this activity as a function of the size and relative activation of
lexical candidates. Our findings indicate that towards the beginning of a word,
the processing system indeed weights lexical candidates by both phonological
certainty and lexical frequency; however, later into the word, activation is
weighted by frequency alone.
| 2,017 | Computation and Language |
Learning to Organize Knowledge and Answer Questions with N-Gram Machines | Though deep neural networks have great success in natural language
processing, they are limited in more knowledge-intensive AI tasks, such as
open-domain Question Answering (QA). Existing end-to-end deep QA models need to
process the entire text after observing the question, and therefore their
complexity in responding to a question is linear in the text size. This is
prohibitive for practical tasks such as QA from Wikipedia, a novel, or the Web.
We propose to solve this scalability issue by using symbolic meaning
representations, which can be indexed and retrieved efficiently with complexity
that is independent of the text size. We apply our approach, called the N-Gram
Machine (NGM), to three representative tasks. First as proof-of-concept, we
demonstrate that NGM successfully solves the bAbI tasks of synthetic text.
Second, we show that NGM scales to large corpora by experimenting on "life-long
bAbI", a special version of bAbI that contains millions of sentences. Lastly, on
the WikiMovies dataset, we use NGM to induce latent structure (i.e. schema) and
answer questions from natural language Wikipedia text, with only QA pairs as
weak supervision.
| 2,019 | Computation and Language |
Low-dimensional Embeddings for Interpretable Anchor-based Topic
Inference | The anchor words algorithm performs provably efficient topic model inference
by finding an approximate convex hull in a high-dimensional word co-occurrence
space. However, the existing greedy algorithm often selects poor anchor words,
reducing topic quality and interpretability. Rather than finding an approximate
convex hull in a high-dimensional space, we propose to find an exact convex
hull in a visualizable 2- or 3-dimensional space. Such low-dimensional
embeddings both improve topics and clearly show users why the algorithm selects
certain words.
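A rough sketch of the low-dimensional variant described here, assuming a word co-occurrence matrix and using PCA followed by an exact 2-D convex hull (the library choices and preprocessing are assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial import ConvexHull

def anchor_candidates(cooccurrence, vocab, n_dims=2):
    """Project row-normalized word co-occurrence vectors to n_dims with PCA
    and return the words on the exact convex hull as anchor candidates."""
    rows = cooccurrence / cooccurrence.sum(axis=1, keepdims=True)
    low = PCA(n_components=n_dims).fit_transform(rows)
    hull = ConvexHull(low)
    return [vocab[i] for i in hull.vertices], low

# toy example with a random co-occurrence matrix
rng = np.random.default_rng(0)
anchors, coords = anchor_candidates(rng.random((50, 200)) + 1e-6,
                                    [f"w{i}" for i in range(50)])
print(anchors)
```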
| 2,017 | Computation and Language |
Style Transfer in Text: Exploration and Evaluation | Style transfer is an important problem in natural language processing (NLP).
However, progress in language style transfer lags behind other domains, such as
computer vision, mainly because of the lack of parallel data and principled
evaluation metrics. In this paper, we propose to learn style transfer with
non-parallel data. We explore two models to achieve this goal, and the key idea
behind the proposed models is to learn separate content representations and
style representations using adversarial networks. We also propose novel
evaluation metrics which measure two aspects of style transfer: transfer
strength and content preservation. We assess our models and the evaluation
metrics on two tasks: paper-news title transfer, and positive-negative review
transfer. Results show that the proposed content preservation metric is highly
correlated with human judgments, and the proposed models are able to generate
sentences with higher style transfer strength and a similar content
preservation score compared to an auto-encoder.
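As an illustration of how a content preservation score of this kind can be computed (cosine similarity between simple averaged word-embedding sentence vectors; the embedding and pooling choices in the paper may differ):

```python
import numpy as np

def content_preservation(src_tokens, out_tokens, embeddings):
    """Cosine similarity between mean word embeddings of the source sentence
    and the style-transferred output; higher means more content is preserved."""
    def sent_vec(tokens):
        vecs = [embeddings[t] for t in tokens if t in embeddings]
        return (np.mean(vecs, axis=0) if vecs
                else np.zeros_like(next(iter(embeddings.values()))))
    a, b = sent_vec(src_tokens), sent_vec(out_tokens)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

emb = {"movie": np.array([1.0, 0.0]), "film": np.array([0.9, 0.1]),
       "great": np.array([0.0, 1.0])}
print(content_preservation(["great", "movie"], ["great", "film"], emb))
```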
| 2,017 | Computation and Language |
Automatically Extracting Action Graphs from Materials Science Synthesis
Procedures | Computational synthesis planning approaches have achieved recent success in
organic chemistry, where tabulated synthesis procedures are readily available
for supervised learning. The syntheses of inorganic materials, however, exist
primarily as natural language narratives contained within scientific journal
articles. This synthesis information must first be extracted from the text in
order to enable analogous synthesis planning methods for inorganic materials.
In this work, we present a system for automatically extracting structured
representations of synthesis procedures from the texts of materials science
journal articles that describe explicit, experimental syntheses of inorganic
compounds. We define the structured representation as a set of linked events
made up of extracted scientific entities and evaluate two unsupervised
approaches for extracting these structures on expert-annotated articles: a
strong heuristic baseline and a generative model of procedural text. We also
evaluate a variety of supervised models for extracting scientific entities. Our
results provide insight into the nature of the data and directions for further
work in this exciting new area of research.
| 2,017 | Computation and Language |
Is China Entering WTO or shijie maoyi zuzhi--a Corpus Study of English
Acronyms in Chinese Newspapers | This is one of the first studies that quantitatively examine the usage of
English acronyms (e.g. WTO) in Chinese texts. Using newspaper corpora, I try to
answer 1) for all instances of a concept that has an English acronym (e.g.
World Trade Organization), what percentage is expressed in the English acronym
(WTO), and what percentage in its Chinese translation (shijie maoyi zuzhi), and
2) what factors are at play in language users' choice between the English and
Chinese forms? Results show that different concepts have different percentages
of English acronym use (PercentOfEn), ranging from 2% to 98%. Linear models show
that PercentOfEn for individual concepts can be predicted by language economy
(how long the Chinese translation is), concept frequency, and whether the first
appearance of the concept in Chinese newspapers is the English acronym or its
Chinese translation (all p < .05).
| 2,017 | Computation and Language |
A Discourse-Level Named Entity Recognition and Relation Extraction
Dataset for Chinese Literature Text | Named Entity Recognition and Relation Extraction for Chinese literature text
is regarded as a highly difficult problem, partially because of the lack of
tagging sets. In this paper, we build a discourse-level dataset from hundreds
of Chinese literature articles for improving this task. To build a high quality
dataset, we propose two tagging methods to solve the problem of data
inconsistency, including a heuristic tagging method and a machine auxiliary
tagging method. Based on this corpus, we also introduce several widely used
models to conduct experiments. Experimental results not only show the
usefulness of the proposed dataset, but also provide baselines for further
research. The dataset is available at
https://github.com/lancopku/Chinese-Literature-NER-RE-Dataset
| 2,019 | Computation and Language |
Incorporating Syntactic Uncertainty in Neural Machine Translation with
Forest-to-Sequence Model | Incorporating syntactic information in Neural Machine Translation models is a
method to compensate their requirement for a large amount of parallel training
text, especially for low-resource language pairs. Previous work on using
syntactic information provided by (inevitably error-prone) parsers has been
promising. In this paper, we propose a forest-to-sequence Attentional Neural
Machine Translation model to make use of exponentially many parse trees of the
source sentence to compensate for the parser errors. Our method represents the
collection of parse trees as a packed forest, and learns a neural attentional
transduction model from the forest to the target sentence. Experiments on
English to German, Chinese and Persian translation show the superiority of our
method over the tree-to-sequence and vanilla sequence-to-sequence neural
translation models.
| 2,017 | Computation and Language |
Prior-aware Dual Decomposition: Document-specific Topic Inference for
Spectral Topic Models | Spectral topic modeling algorithms operate on matrices/tensors of word
co-occurrence statistics to learn topic-specific word distributions. This
approach removes the dependence on the original documents and produces
substantial gains in efficiency and provable topic inference, but at a cost:
the model can no longer provide information about the topic composition of
individual documents. Recently, Thresholded Linear Inverse (TLI) was proposed to
map the observed words of each document back to its topic composition. However,
its linear characteristics limit the inference quality without considering the
important prior information over topics. In this paper, we evaluate Simple
Probabilistic Inverse (SPI) method and novel Prior-aware Dual Decomposition
(PADD) that is capable of learning document-specific topic compositions in
parallel. Experiments show that PADD successfully leverages topic correlations
as a prior, notably outperforming TLI and learning quality topic compositions
comparable to Gibbs sampling on various data.
| 2,017 | Computation and Language |
Fast BTG-Forest-Based Hierarchical Sub-sentential Alignment | In this paper, we propose a novel BTG-forest-based alignment method. Based on
a fast unsupervised initialization of parameters using variational IBM models,
we synchronously parse parallel sentences top-down and align hierarchically
under the constraint of BTG. Our two-step method can achieve the same run-time
and comparable translation performance as fast_align while it yields smaller
phrase tables. Final SMT results show that our method even outperforms
fast_align in experiments on distantly related language pairs, e.g.,
English-Japanese.
| 2,017 | Computation and Language |