Titles | Abstracts | Years | Categories |
---|---|---|---|
A Mention-Ranking Model for Abstract Anaphora Resolution | Resolving abstract anaphora is an important but difficult task for text
understanding. Yet, with recent advances in representation learning this task
becomes a more tangible aim. A central property of abstract anaphora is that it
establishes a relation between the anaphor embedded in the anaphoric sentence
and its (typically non-nominal) antecedent. We propose a mention-ranking model
that learns how abstract anaphors relate to their antecedents with an
LSTM-Siamese Net. We overcome the lack of training data by generating
artificial anaphoric sentence--antecedent pairs. Our model outperforms
state-of-the-art results on shell noun resolution. We also report first
benchmark results on an abstract anaphora subset of the ARRAU corpus. This
corpus presents a greater challenge due to a mixture of nominal and pronominal
anaphors and a greater range of confounders. We found model variants that
outperform the baselines for nominal anaphors, without training on individual
anaphor data, but still lag behind for pronominal anaphors. Our model selects
syntactically plausible candidates and, when syntax is disregarded,
discriminates candidates using deeper features.
| 2017 | Computation and Language |
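As a rough illustration of the mention-ranking idea above (not the authors' exact architecture; the layer sizes, bilinear scorer, and margin are assumptions), a pairwise ranker with a shared Siamese LSTM encoder might look like:

```python
# Sketch: one shared LSTM encodes the anaphoric sentence and each candidate
# antecedent; a bilinear layer scores compatibility; training uses a
# max-margin ranking loss between gold and confounding candidates.
import torch
import torch.nn as nn

class SiameseRanker(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.bilinear = nn.Bilinear(hidden_dim, hidden_dim, 1)

    def encode(self, token_ids):
        _, (h, _) = self.encoder(self.embed(token_ids))
        return h[-1]                         # final hidden state: (batch, hidden)

    def forward(self, anaphor_ids, candidate_ids):
        return self.bilinear(self.encode(anaphor_ids),
                             self.encode(candidate_ids)).squeeze(-1)

model = SiameseRanker(vocab_size=10_000)
anaphor = torch.randint(0, 10_000, (1, 12))   # toy anaphoric sentence
gold = torch.randint(0, 10_000, (1, 8))       # true antecedent
negative = torch.randint(0, 10_000, (1, 8))   # a confounding candidate
# Max-margin ranking loss: the gold antecedent should outscore the negative.
loss = torch.clamp(1.0 - model(anaphor, gold) + model(anaphor, negative), min=0).mean()
loss.backward()
```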
Content-Based Table Retrieval for Web Queries | Understanding the connections between unstructured text and semi-structured
tables is an important yet neglected problem in natural language processing. In
this work, we focus on content-based table retrieval. Given a query, the task
is to find the most relevant table from a collection of tables. Further
progress towards improving this area requires powerful models of semantic
matching and richer training and evaluation resources. To remedy this, we
present a ranking-based approach, and implement both carefully designed
features and neural network architectures to measure the relevance between a
query and the content of a table. Furthermore, we release an open-domain
dataset that includes 21,113 web queries for 273,816 tables. We conduct
comprehensive experiments on both real-world and synthetic datasets. Results
verify the effectiveness of our approach and highlight the challenges of this
task.
| 2017 | Computation and Language |
Improving Semantic Relevance for Sequence-to-Sequence Learning of
Chinese Social Media Text Summarization | Current Chinese social media text summarization models are based on an
encoder-decoder framework. Although the summaries it generates are literally
similar to the source texts, they have low semantic relevance. In this work, our
goal is to improve semantic relevance between source texts and summaries for
Chinese social media summarization. We introduce a Semantic Relevance Based
neural model to encourage high semantic similarity between texts and summaries.
In our model, the source text is represented by a gated attention encoder,
while the summary representation is produced by a decoder. In addition, the
similarity score between the representations is maximized during training. Our
experiments show that the proposed model outperforms baseline systems on a
social media corpus.
| 2017 | Computation and Language |
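A minimal sketch of the kind of objective described above: the usual cross-entropy loss minus a weighted cosine similarity between source and summary representations. The pooling into single vectors and the weight `relevance_weight` are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: combine token-level NLL with a semantic-relevance term that rewards
# high cosine similarity between source and summary representations.
import torch
import torch.nn.functional as F

def semantic_relevance_loss(logits, target_ids, src_repr, summary_repr,
                            relevance_weight=0.5):
    # logits: (batch, seq_len, vocab); target_ids: (batch, seq_len)
    nll = F.cross_entropy(logits.transpose(1, 2), target_ids)
    relevance = F.cosine_similarity(src_repr, summary_repr, dim=-1).mean()
    return nll - relevance_weight * relevance   # maximizing similarity lowers the loss

logits = torch.randn(2, 7, 5000)
targets = torch.randint(0, 5000, (2, 7))
loss = semantic_relevance_loss(logits, targets,
                               torch.randn(2, 256), torch.randn(2, 256))
```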
The Algorithmic Inflection of Russian and Generation of Grammatically
Correct Text | We present a deterministic algorithm for Russian inflection. This algorithm
is implemented in a publicly available web-service www.passare.ru which
provides functions for inflection of single words, word matching and synthesis
of grammatically correct Russian text. The inflectional functions have been
tested against OpenCorpora, an annotated corpus of the Russian language.
| 2017 | Computation and Language |
Dynamic Integration of Background Knowledge in Neural NLU Systems | Common-sense and background knowledge are required to understand natural
language, but in most neural natural language understanding (NLU) systems, this
knowledge must be acquired from training corpora during learning, and then it
is static at test time. We introduce a new architecture for the dynamic
integration of explicit background knowledge in NLU models. A general-purpose
reading module reads background knowledge in the form of free-text statements
(together with task-specific text inputs) and yields refined word
representations to a task-specific NLU architecture that reprocesses the task
inputs with these representations. Experiments on document question answering
(DQA) and recognizing textual entailment (RTE) demonstrate the effectiveness
and flexibility of the approach. Analysis shows that our model learns to
exploit knowledge in a semantically appropriate way.
| 2018 | Computation and Language |
Advances in Joint CTC-Attention based End-to-End Speech Recognition with
a Deep CNN Encoder and RNN-LM | We present a state-of-the-art end-to-end Automatic Speech Recognition (ASR)
model. We learn to listen and write characters with a joint Connectionist
Temporal Classification (CTC) and attention-based encoder-decoder network. The
encoder is a deep Convolutional Neural Network (CNN) based on the VGG network.
The CTC network sits on top of the encoder and is jointly trained with the
attention-based decoder. During the beam search process, we combine the CTC
predictions, the attention-based decoder predictions and a separately trained
LSTM language model. We achieve a 5-10% error reduction compared to prior
systems on spontaneous Japanese and Chinese speech, and our end-to-end model
outperforms traditional hybrid ASR systems.
| 2017 | Computation and Language |
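The joint decoding described above ranks each beam hypothesis by interpolating CTC, attention-decoder, and language-model log-probabilities. A sketch with assumed weights (not the paper's tuned values):

```python
# Sketch: score a partial hypothesis during beam search by combining three
# log-probabilities with interpolation weights.
import math

def hypothesis_score(logp_ctc, logp_att, logp_lm, ctc_weight=0.3, lm_weight=0.1):
    return (ctc_weight * logp_ctc
            + (1.0 - ctc_weight) * logp_att
            + lm_weight * logp_lm)

# Ranking two toy hypotheses:
h1 = hypothesis_score(math.log(0.20), math.log(0.30), math.log(0.10))
h2 = hypothesis_score(math.log(0.25), math.log(0.20), math.log(0.15))
best = max((h1, "hyp1"), (h2, "hyp2"))
```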
Optimizing expected word error rate via sampling for speech recognition | State-level minimum Bayes risk (sMBR) training has become the de facto
standard for sequence-level training of speech recognition acoustic models. It
has an elegant formulation using the expectation semiring, and gives large
improvements in word error rate (WER) over models trained solely using
cross-entropy (CE) or connectionist temporal classification (CTC). sMBR
training optimizes the expected number of frames at which the reference and
hypothesized acoustic states differ. It may be preferable to optimize the
expected WER, but WER does not interact well with the expectation semiring, and
previous approaches based on computing expected WER exactly involve expanding
the lattices used during training. In this paper we show how to perform
optimization of the expected WER by sampling paths from the lattices used
during conventional sMBR training. The gradient of the expected WER is itself
an expectation, and so may be approximated using Monte Carlo sampling. We show
experimentally that optimizing WER during acoustic model training gives a 5%
relative improvement in WER over a well-tuned sMBR baseline on a 2-channel
query recognition task (Google Home).
| 2017 | Computation and Language |
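The key identity the abstract relies on is that the gradient of the expected WER is itself an expectation, so it can be estimated by sampling. A toy Monte Carlo sketch over a four-path "lattice" (the paths, probabilities, and WERs are stand-ins, not lattice data):

```python
# Sketch: REINFORCE-style estimate of d E[WER] / d logits by sampling paths.
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=4)                  # unnormalized path scores (toy lattice)
probs = np.exp(logits) / np.exp(logits).sum()
path_wer = np.array([0.0, 0.1, 0.3, 0.5])    # WER of each path vs. the reference

samples = rng.choice(len(probs), size=10_000, p=probs)
expected_wer = path_wer[samples].mean()      # Monte Carlo estimate of E[WER]

# grad of E[WER] = E[(WER - b) * grad log p(path)], with baseline b for variance
baseline = expected_wer
grad = np.zeros_like(logits)
for s in samples:
    dlogp = -probs.copy()
    dlogp[s] += 1.0                          # grad of log-softmax wrt logits
    grad += (path_wer[s] - baseline) * dlogp
grad /= len(samples)
```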
Learning to Embed Words in Context for Syntactic Tasks | We present models for embedding words in the context of surrounding words.
Such models, which we refer to as token embeddings, represent the
characteristics of a word that are specific to a given context, such as word
sense, syntactic category, and semantic role. We explore simple, efficient
token embedding models based on standard neural network architectures. We learn
token embeddings on a large amount of unannotated text and evaluate them as
features for part-of-speech taggers and dependency parsers trained on much
smaller amounts of annotated data. We find that predictors endowed with token
embeddings consistently outperform baseline predictors across a range of
context window and training set sizes.
| 2017 | Computation and Language |
Assigning personality/identity to a chatting machine for coherent
conversation generation | Endowing a chatbot with personality or an identity is quite challenging but
critical for delivering more realistic and natural conversations. In this paper, we
address the issue of generating responses that are coherent to a pre-specified
agent profile. We design a model consisting of three modules: a profile
detector to decide whether a post should be responded to using the profile and
which key should be addressed, a bidirectional decoder to generate responses
forward and backward starting from a selected profile value, and a position
detector that predicts a word position from which decoding should start given a
selected profile value. We show that general conversation data from social
media can be used to generate profile-coherent responses. Manual and automatic
evaluation shows that our model can deliver more coherent, natural, and
diversified responses.
| 2017 | Computation and Language |
Overview of the NLPCC 2017 Shared Task: Chinese News Headline
Categorization | In this paper, we give an overview of the shared task at the CCF Conference
on Natural Language Processing & Chinese Computing (NLPCC 2017): Chinese News
Headline Categorization. The dataset for this shared task consists of 18 classes,
with 12,000 short texts and corresponding labels for each class. The dataset
and example code can be accessed at
https://github.com/FudanNLP/nlpcc2017_news_headline_categorization.
| 2017 | Computation and Language |
Deriving a Representative Vector for Ontology Classes with Instance Word
Vector Embeddings | Selecting a representative vector for a set of vectors is a very common
requirement in many algorithmic tasks. Traditionally, the mean or median vector
is selected. Ontology classes are sets of homogeneous instance objects that can
be converted to a vector space by word vector embeddings. This study proposes a
methodology to derive a representative vector for ontology classes whose
instances were converted to the vector space. We start by deriving five
candidate vectors which are then used to train a machine learning model that
would calculate a representative vector for the class. We show that our
methodology outperforms the traditional mean and median vector
representations.
| 2019 | Computation and Language |
Depthwise Separable Convolutions for Neural Machine Translation | Depthwise separable convolutions reduce the number of parameters and
computation used in convolutional operations while increasing representational
efficiency. They have been shown to be successful in image classification
models, both in obtaining better models than previously possible for a given
parameter count (the Xception architecture) and considerably reducing the
number of parameters required to perform at a given level (the MobileNets
family of architectures). Recently, convolutional sequence-to-sequence networks
have been applied to machine translation tasks with good results. In this work,
we study how depthwise separable convolutions can be applied to neural machine
translation. We introduce a new architecture inspired by Xception and ByteNet,
called SliceNet, which enables a significant reduction of the parameter count
and amount of computation needed to obtain results like ByteNet, and, with a
similar parameter count, achieves new state-of-the-art results. In addition to
showing that depthwise separable convolutions perform well for machine
translation, we investigate the architectural changes that they enable: we
observe that thanks to depthwise separability, we can increase the length of
convolution windows, removing the need for filter dilation. We also introduce a
new "super-separable" convolution operation that further reduces the number of
parameters and computational cost for obtaining state-of-the-art results.
| 2017 | Computation and Language |
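Depthwise separable convolution itself is easy to pin down: a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution. A 1-D PyTorch sketch (channel counts and kernel size are illustrative) also shows the parameter savings the abstract builds on:

```python
# Sketch: depthwise separable convolution for sequences, as popularized by
# Xception/MobileNets and applied to translation in SliceNet.
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, channels_in, channels_out, kernel_size):
        super().__init__()
        # groups=channels_in makes each filter see only one input channel
        self.depthwise = nn.Conv1d(channels_in, channels_in, kernel_size,
                                   padding=kernel_size // 2, groups=channels_in)
        self.pointwise = nn.Conv1d(channels_in, channels_out, kernel_size=1)

    def forward(self, x):                    # x: (batch, channels, length)
        return self.pointwise(self.depthwise(x))

# Parameter comparison against a standard convolution:
sep = DepthwiseSeparableConv1d(256, 256, 9)
std = nn.Conv1d(256, 256, 9, padding=4)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(sep), "vs", count(std))          # far fewer parameters for sep
```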
Rethinking Skip-thought: A Neighborhood based Approach | We study the skip-thought model with neighborhood information as weak
supervision. More specifically, we propose a skip-thought neighbor model to
consider the adjacent sentences as a neighborhood. We train our skip-thought
neighbor model on a large corpus with continuous sentences, and then evaluate
the trained model on 7 tasks, which include semantic relatedness, paraphrase
detection, and classification benchmarks. Both quantitative comparison and
qualitative investigation are conducted. We empirically show that our
skip-thought neighbor model performs as well as the skip-thought model on
evaluation tasks. In addition, we found that incorporating an autoencoder path
in our model did not improve its performance, while it hurt the performance of
the skip-thought model.
| 2017 | Computation and Language |
Trimming and Improving Skip-thought Vectors | The skip-thought model has been proven to be effective at learning sentence
representations and capturing sentence semantics. In this paper, we propose a
suite of techniques to trim and improve it. First, we validate the hypothesis
that, given a current sentence, inferring the previous sentence and inferring
the next sentence provide similar supervision power; we therefore preserve only
one decoder, for predicting the next sentence, in our trimmed skip-thought
model. Second, we present a connection layer between the encoder and decoder to
help the model generalize better on semantic relatedness tasks. Third, we found
that a good word embedding initialization is also essential for learning better
sentence representations. We train our model unsupervised on a large corpus
with contiguous sentences, and then evaluate the trained model on 7 supervised
tasks, which include semantic relatedness, paraphrase detection, and text
classification benchmarks. We empirically show that our proposed model is a
faster, lighter-weight and equally powerful alternative to the original
skip-thought model.
| 2017 | Computation and Language |
Classification of Questions and Learning Outcome Statements (LOS) Into
Blooms Taxonomy (BT) By Similarity Measurements Towards Extracting Of
Learning Outcome from Learning Material | Bloom's Taxonomy (BT) has been used to classify the objectives of learning
outcome by dividing learning into three different domains: the cognitive
domain, the affective domain and the psychomotor domain. In this paper, we
introduce a new approach to classify questions and learning outcome
statements (LOS) into Bloom's taxonomy (BT) and to verify the BT verb lists, which
are cited and used by academicians to write questions and LOS. An
experiment was designed to investigate the semantic relationship between the
action verbs used in both questions and LOS to obtain a more accurate
classification of the levels of BT. A sample of 775 different action verbs
collected from different universities allows us to measure an accurate and
clear-cut cognitive level for each action verb. It is worth mentioning that
natural language processing techniques were used to develop our rules for
splitting the questions into chunks in order to extract the action verbs. Our
proposed solution was able to classify each action verb into a precise level of
the cognitive domain. We tested and evaluated our proposed solution using a
confusion matrix. The evaluation tests yielded 97% for the macro average of
precision and 90% for F1. Thus, the outcome of the research suggests that it is
crucial to analyse and verify the action verbs cited and used by academicians
to write LOS, and to classify their questions based on Bloom's taxonomy, in
order to obtain a definite and more accurate classification.
| 2017 | Computation and Language |
Articulation rate in Swedish child-directed speech increases as a
function of the age of the child even when surprisal is controlled for | In earlier work, we have shown that articulation rate in Swedish
child-directed speech (CDS) increases as a function of the age of the child,
even when utterance length and differences in articulation rate between
subjects are controlled for. In this paper we show at the utterance level in
spontaneous Swedish speech that i) for the youngest children, articulation rate
in CDS is lower than in adult-directed speech (ADS), ii) there is a significant
negative correlation between articulation rate and surprisal (the negative log
probability) in ADS, and iii) the increase in articulation rate in Swedish CDS
as a function of the age of the child holds, even when surprisal along with
utterance length and differences in articulation rate between speakers are
controlled for. These results indicate that adults adjust their articulation
rate to make it fit the linguistic capacity of the child.
| 2017 | Computation and Language |
Exploring Automated Essay Scoring for Nonnative English Speakers | Automated Essay Scoring (AES) has been quite popular and is being widely
used. However, a lack of appropriate methodology for rating nonnative English
speakers' essays has meant a lopsided advancement in this field. In this paper,
we report initial results of our experiments with nonnative AES that learns
from manual evaluation of nonnative essays. For this purpose, we conducted an
exercise in which essays written by nonnative English speakers in a test
environment were rated both manually and by the automated system designed for
the experiment. In the process, we experimented with a few features to learn
about nuances linked to nonnative evaluation. The proposed methodology of
automated essay evaluation has yielded a correlation coefficient of 0.750 with
the manual evaluation.
| 2018 | Computation and Language |
Generic Axiomatization of Families of Noncrossing Graphs in Dependency
Parsing | We present a simple encoding for unlabeled noncrossing graphs and show how
its latent counterpart helps us to represent several families of directed and
undirected graphs used in syntactic and semantic parsing of natural language as
context-free languages. The families are separated purely on the basis of
forbidden patterns in the latent encoding, eliminating the need to differentiate
the families of non-crossing graphs in inference algorithms: one algorithm
works for all when the search space can be controlled in parser input.
| 2017 | Computation and Language |
A Full Non-Monotonic Transition System for Unrestricted Non-Projective
Parsing | Restricted non-monotonicity has been shown beneficial for the projective
arc-eager dependency parser in previous research, as posterior decisions can
repair mistakes made in previous states due to the lack of information. In this
paper, we propose a novel, fully non-monotonic transition system based on the
non-projective Covington algorithm. As a non-monotonic system requires
exploration of erroneous actions during the training process, we develop
several non-monotonic variants of the recently defined dynamic oracle for the
Covington parser, based on tight approximations of the loss. Experiments on
datasets from the CoNLL-X and CoNLL-XI shared tasks show that a non-monotonic
dynamic oracle outperforms the monotonic version in the majority of languages.
| 2017 | Computation and Language |
Dialog Structure Through the Lens of Gender, Gender Environment, and
Power | Understanding how the social context of an interaction affects our dialog
behavior is of great interest to social scientists who study human behavior, as
well as to computer scientists who build automatic methods to infer those
social contexts. In this paper, we study the interaction of power, gender, and
dialog behavior in organizational interactions. In order to perform this study,
we first construct the Gender Identified Enron Corpus of emails, in which we
semi-automatically assign the gender of around 23,000 individuals who authored
around 97,000 email messages in the Enron corpus. This corpus, which is made
freely available, is orders of magnitude larger than previously existing gender
identified corpora in the email domain. Next, we use this corpus to perform a
large-scale data-oriented study of the interplay of gender and manifestations
of power. We argue that, in addition to one's own gender, the "gender
environment" of an interaction, i.e., the gender makeup of one's interlocutors,
also affects the way power is manifested in dialog. We focus especially on
manifestations of power in the dialog structure: both in a shallow sense that
disregards the textual content of messages (e.g., how often the participants
contribute and how often they get replies), and in the structure that is
expressed within the textual content (e.g., who issues requests and how they
are made, and whose requests get responses). We find
that both gender and gender environment affect the ways power is manifested in
dialog, resulting in patterns that reveal the underlying factors. Finally, we
show the utility of gender information in the problem of automatically
predicting the direction of power between pairs of participants in email
interactions.
| 2017 | Computation and Language |
Scientific document summarization via citation contextualization and
scientific discourse | The rapid growth of scientific literature has made it difficult for
researchers to quickly learn about the developments in their respective fields.
Scientific document summarization addresses this challenge by providing
summaries of the important contributions of scientific papers. We present a
framework for scientific summarization which takes advantage of the citations
and the scientific discourse structure. Citation texts often lack the evidence
and context to support the content of the cited paper and are even sometimes
inaccurate. We first address the problem of inaccuracy of the citation texts by
finding the relevant context from the cited paper. We propose three approaches
for contextualizing citations which are based on query reformulation, word
embeddings, and supervised learning. We then train a model to identify the
discourse facets for each citation. We finally propose a method for summarizing
scientific papers by leveraging the faceted citations and their corresponding
contexts. We evaluate our proposed method on two scientific summarization
datasets in the biomedical and computational linguistics domains. Extensive
evaluation results show that our methods can improve over the state of the art
by large margins.
| 2017 | Computation and Language |
SU-RUG at the CoNLL-SIGMORPHON 2017 shared task: Morphological
Inflection with Attentional Sequence-to-Sequence Models | This paper describes the Stockholm University/University of Groningen
(SU-RUG) system for the SIGMORPHON 2017 shared task on morphological
inflection. Our system is based on an attentional sequence-to-sequence neural
network model using Long Short-Term Memory (LSTM) cells, with joint training of
morphological inflection and the inverse transformation, i.e. lemmatization and
morphological analysis. Our system outperforms the baseline by a large
margin, and our submission ranks as the 4th best team for the track we
participated in (task 1, high-resource).
| 2017 | Computation and Language |
Candidate sentence selection for language learning exercises: from a
comprehensive framework to an empirical evaluation | We present a framework and its implementation relying on Natural Language
Processing methods, which aims at the identification of exercise item
candidates from corpora. The hybrid system combining heuristics and machine
learning methods includes a number of relevant selection criteria. We focus on
two fundamental aspects: linguistic complexity and the dependence of the
extracted sentences on their original context. Previous work on exercise
generation addressed these two criteria only to a limited extent, and a refined
overall candidate sentence selection framework appears also to be lacking. In
addition to a detailed description of the system, we present the results of an
empirical evaluation conducted with language teachers and learners which
indicate the usefulness of the system for educational purposes. We have
integrated our system into a freely available online learning platform.
| 2017 | Computation and Language |
Exploring the Syntactic Abilities of RNNs with Multi-task Learning | Recent work has explored the syntactic abilities of RNNs using the
subject-verb agreement task, which diagnoses sensitivity to sentence structure.
RNNs performed this task well in common cases, but faltered in complex
sentences (Linzen et al., 2016). We test whether these errors are due to
inherent limitations of the architecture or to the relatively indirect
supervision provided by most agreement dependencies in a corpus. We trained a
single RNN to perform both the agreement task and an additional task, either
CCG supertagging or language modeling. Multi-task training led to significantly
lower error rates, in particular on complex sentences, suggesting that RNNs
have the ability to evolve more sophisticated syntactic representations than
shown before. We also show that easily available agreement training data can
improve performance on other syntactic tasks, in particular when only a limited
amount of training data is available for those tasks. The multi-task paradigm
can also be leveraged to inject grammatical knowledge into language models.
| 2017 | Computation and Language |
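A minimal sketch of the multi-task setup described above: one shared RNN encoder with an agreement-classification head and an auxiliary language-modeling head, trained on a joint loss. All sizes and the toy targets are assumptions, not the paper's configuration.

```python
# Sketch: shared LSTM encoder, two task-specific output heads, joint loss.
import torch
import torch.nn as nn

class MultiTaskRNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.agreement_head = nn.Linear(hidden_dim, 2)    # singular vs. plural
        self.lm_head = nn.Linear(hidden_dim, vocab_size)  # auxiliary LM task

    def forward(self, ids):
        states, (h, _) = self.rnn(self.embed(ids))
        return self.agreement_head(h[-1]), self.lm_head(states)

model = MultiTaskRNN(vocab_size=10_000)
ids = torch.randint(0, 10_000, (4, 15))
agree_logits, lm_logits = model(ids)
# Joint loss: agreement loss (toy labels) + LM loss over next-token targets.
loss = (nn.functional.cross_entropy(agree_logits, torch.randint(0, 2, (4,)))
        + nn.functional.cross_entropy(lm_logits[:, :-1].transpose(1, 2), ids[:, 1:]))
loss.backward()
```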
Neural Domain Adaptation for Biomedical Question Answering | Factoid question answering (QA) has recently benefited from the development
of deep learning (DL) systems. Neural network models outperform traditional
approaches in domains where large datasets exist, such as SQuAD (ca. 100,000
questions) for Wikipedia articles. However, these systems have not yet been
applied to QA in more specific domains, such as biomedicine, because datasets
are generally too small to train a DL system from scratch. For example, the
BioASQ dataset for biomedical QA comprises fewer than 900 factoid (single
answer) and list (multiple answers) QA instances. In this work, we adapt a
neural QA system trained on a large open-domain dataset (SQuAD, source) to a
biomedical dataset (BioASQ, target) by employing various transfer learning
techniques. Our network architecture is based on a state-of-the-art QA system,
extended with biomedical word embeddings and a novel mechanism to answer list
questions. In contrast to existing biomedical QA systems, our system does not
rely on domain-specific ontologies, parsers or entity taggers, which are
expensive to create. Despite this fact, our systems achieve state-of-the-art
results on factoid questions and competitive results on list questions.
| 2017 | Computation and Language |
Acoustic data-driven lexicon learning based on a greedy pronunciation
selection framework | Speech recognition systems for irregularly-spelled languages like English
normally require hand-written pronunciations. In this paper, we describe a
system for automatically obtaining pronunciations of words for which
pronunciations are not available, but for which transcribed data exists. Our
method integrates information from the letter sequence and from the acoustic
evidence. The novel aspect of the problem that we address is how to prune
entries from such a lexicon (since, empirically, lexicons with too
many entries do not tend to be good for ASR performance). Experiments on
various ASR tasks show that, with the proposed framework, starting with an
initial lexicon of several thousand words, we are able to learn a lexicon which
performs close to a full expert lexicon in terms of WER performance on test
data, and is better than lexicons built using G2P alone or with a pruning
criterion based on pronunciation probability.
| 2017 | Computation and Language |
Semantic Entity Retrieval Toolkit | Unsupervised learning of low-dimensional, semantic representations of words
and entities has recently gained attention. In this paper we describe the
Semantic Entity Retrieval Toolkit (SERT) that provides implementations of our
previously published entity representation models. The toolkit provides a
unified interface to different representation learning algorithms, fine-grained
parsing configuration and can be used transparently with GPUs. In addition,
users can easily modify existing models or implement their own models in the
framework. After model training, SERT can be used to rank entities according to
a textual query and extract the learned entity/word representation for use in
downstream algorithms, such as clustering or recommendation.
| 2017 | Computation and Language |
Attention Is All You Need | The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks in an encoder-decoder configuration. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer, based
solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to be
superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014
English-to-German translation task, improving over the existing best results,
including ensembles by over 2 BLEU. On the WMT 2014 English-to-French
translation task, our model establishes a new single-model state-of-the-art
BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction
of the training costs of the best models from the literature. We show that the
Transformer generalizes well to other tasks by applying it successfully to
English constituency parsing both with large and limited training data.
| 2023 | Computation and Language |
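The Transformer's core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V; a few lines of NumPy reproduce it (the shapes below are illustrative):

```python
# Sketch: scaled dot-product attention over toy query/key/value matrices.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (n_queries, n_keys)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # (n_queries, d_v)

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))     # 3 queries of dimension 8
K = rng.normal(size=(5, 8))     # 5 keys
V = rng.normal(size=(5, 16))    # 5 values of dimension 16
out = scaled_dot_product_attention(Q, K, V)            # (3, 16)
```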
Verb Physics: Relative Physical Knowledge of Actions and Objects | Learning commonsense knowledge from natural language text is nontrivial due
to reporting bias: people rarely state the obvious, e.g., "My house is bigger
than me." However, while rarely stated explicitly, this trivial everyday
knowledge does influence the way people talk about the world, which provides
indirect clues to reason about the world. For example, a statement like, "Tyler
entered his house" implies that his house is bigger than Tyler.
In this paper, we present an approach to infer relative physical knowledge of
actions and objects along five dimensions (e.g., size, weight, and strength)
from unstructured natural language text. We frame knowledge acquisition as
joint inference over two closely related problems: learning (1) relative
physical knowledge of object pairs and (2) physical implications of actions
when applied to those object pairs. Empirical results demonstrate that it is
possible to extract knowledge of actions and objects from language and that
joint inference over different types of knowledge improves performance.
| 2017 | Computation and Language |
Encoding of phonology in a recurrent neural model of grounded speech | We study the representation and encoding of phonemes in a recurrent neural
network model of grounded speech. We use a model which processes images and
their spoken descriptions, and projects the visual and auditory representations
into the same semantic space. We perform a number of analyses on how
information about individual phonemes is encoded in the MFCC features extracted
from the speech signal, and the activations of the layers of the model. Via
experiments with phoneme decoding and phoneme discrimination we show that
phoneme representations are most salient in the lower layers of the model,
where low-level signals are processed at a fine-grained level, although a large
amount of phonological information is retained at the top recurrent layer. We
further find that the attention mechanism following the top recurrent layer
significantly attenuates encoding of phonology and makes the utterance
embeddings much more invariant to synonymy. Moreover, a hierarchical clustering
of phoneme representations learned by the network shows an organizational
structure of phonemes similar to those proposed in linguistics.
| 2018 | Computation and Language |
Query-by-Example Search with Discriminative Neural Acoustic Word
Embeddings | Query-by-example search often uses dynamic time warping (DTW) for comparing
queries and proposed matching segments. Recent work has shown that comparing
speech segments by representing them as fixed-dimensional vectors (acoustic
word embeddings) and measuring their vector distance (e.g., cosine distance)
can discriminate between words more accurately than DTW-based approaches. We
consider an approach to query-by-example search that embeds both the query and
database segments according to a neural model, followed by nearest-neighbor
search to find the matching segments. Earlier work on embedding-based
query-by-example, using template-based acoustic word embeddings, achieved
competitive performance. We find that our embeddings, based on recurrent neural
networks trained to optimize word discrimination, achieve substantial
improvements in performance and run-time efficiency over the previous
approaches.
| 2017 | Computation and Language |
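A sketch of the embedding-based search the abstract describes: once the query and database segments are embedded as fixed-dimensional vectors (random stand-ins below), matching reduces to cosine-similarity nearest-neighbor search instead of DTW:

```python
# Sketch: rank database segments by cosine similarity to the query embedding.
import numpy as np

def cosine_rank(query_emb, segment_embs, top_k=5):
    q = query_emb / np.linalg.norm(query_emb)
    S = segment_embs / np.linalg.norm(segment_embs, axis=1, keepdims=True)
    sims = S @ q                             # cosine similarity to every segment
    top = np.argsort(-sims)[:top_k]
    return list(zip(top.tolist(), sims[top].tolist()))

rng = np.random.default_rng(0)
query = rng.normal(size=128)                 # embedding of the spoken query
database = rng.normal(size=(1000, 128))      # embeddings of database segments
matches = cosine_rank(query, database)       # best-matching segment indices
```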
Attention-based Vocabulary Selection for NMT Decoding | Neural Machine Translation (NMT) models usually use large target vocabulary
sizes to capture most of the words in the target language. The vocabulary size
is a big factor when decoding new sentences as the final softmax layer
normalizes over all possible target words. To address this problem, it is
common to restrict the target vocabulary with candidate lists based on
the source sentence. Usually, the candidate lists are a combination of external
word-to-word alignments, phrase table entries and most frequent words. In this
work, we propose a simple yet novel approach to learn candidate lists
directly from the attention layer during NMT training. The candidate lists are
highly optimized for the current NMT model and do not need any external
computation of the candidate pool. We show significant decoding speedup
compared with using the entire vocabulary, without losing any translation
quality for two language pairs.
| 2017 | Computation and Language |
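A sketch of the general idea (the accumulation scheme and top-k cutoff below are illustrative assumptions, not the paper's exact statistic): attention mass between source words and emitted target words is accumulated during training, and decoding restricts the softmax to the top-scoring targets for the words in the source sentence:

```python
# Sketch: build per-source-word candidate lists from accumulated attention mass.
from collections import defaultdict

attention_mass = defaultdict(lambda: defaultdict(float))

def observe(source_word, target_word, weight):
    attention_mass[source_word][target_word] += weight

# Toy accumulation (in practice: at each decoding step, add each source
# position's attention weight to the emitted target word).
observe("Haus", "house", 0.9)
observe("Haus", "home", 0.4)
observe("Haus", "the", 0.1)

def candidates(source_sentence, k=2):
    vocab = set()
    for w in source_sentence:
        ranked = sorted(attention_mass[w].items(), key=lambda kv: -kv[1])
        vocab.update(t for t, _ in ranked[:k])
    return vocab

print(candidates(["Haus"]))   # restricted softmax vocabulary: {'house', 'home'}
```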
Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation.
| 2017 | Computation and Language |
A Supervised Approach to Extractive Summarisation of Scientific Papers | Automatic summarisation is a popular approach to reduce a document to its
main arguments. Recent research in the area has focused on neural approaches to
summarisation, which can be very data-hungry. However, few large datasets exist
and none for the traditionally popular domain of scientific publications, which
opens up challenging research avenues centered on encoding large, complex
documents. In this paper, we introduce a new dataset for summarisation of
computer science publications by exploiting a large resource of author-provided
summaries, and show straightforward ways of extending it further. We develop
models on the dataset making use of both neural sentence encoding and
traditionally used summarisation features and show that models which encode
sentences as well as their local and global context perform best, significantly
outperforming well-established baseline methods.
| 2017 | Computation and Language |
Modelling prosodic structure using Artificial Neural Networks | The ability to accurately perceive whether a speaker is asking a question or
is making a statement is crucial for any successful interaction. However,
learning and classifying tonal patterns has been a challenging task for
automatic speech recognition and for models of tonal representation, as tonal
contours are characterized by significant variation. This paper provides a
classification model of Cypriot Greek questions and statements. We evaluate two
state-of-the-art network architectures: a Long Short-Term Memory (LSTM) network
and a convolutional network (ConvNet). The ConvNet outperforms the LSTM in the
classification task and exhibits excellent performance, with 95%
classification accuracy.
| 2017 | Computation and Language |
Zero-Shot Relation Extraction via Reading Comprehension | We show that relation extraction can be reduced to answering simple reading
comprehension questions, by associating one or more natural-language questions
with each relation slot. This reduction has several advantages: we can (1)
learn relation-extraction models by extending recent neural
reading-comprehension techniques, (2) build very large training sets for those
models by combining relation-specific crowd-sourced questions with distant
supervision, and even (3) do zero-shot learning by extracting new relation
types that are only specified at test-time, for which we have no labeled
training examples. Experiments on a Wikipedia slot-filling task demonstrate
that the approach can generalize to new questions for known relation types with
high accuracy, and that zero-shot generalization to unseen relation types is
possible, at lower accuracy levels, setting the bar for future work on this
task.
| 2017 | Computation and Language |
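The reduction itself is simple to sketch: each relation slot gets one or more question templates, and an extractive reading-comprehension model answers them over the passage. `qa_model` below is a hypothetical stand-in for any such QA system, and the templates are invented examples:

```python
# Sketch: relation extraction reduced to reading comprehension via
# per-relation question templates.
QUESTION_TEMPLATES = {
    "educated_at": "Where did {} study?",
    "spouse": "Who is {} married to?",
}

def extract_relation(relation, entity, passage, qa_model):
    question = QUESTION_TEMPLATES[relation].format(entity)
    # qa_model is any extractive QA system; it returns a span from the
    # passage, or None when the question is unanswerable.
    return qa_model(question=question, context=passage)

# Zero-shot: a *new* relation type only needs a new question template,
# no labeled training examples:
QUESTION_TEMPLATES["birthplace"] = "Where was {} born?"
```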
An Exploration of Neural Sequence-to-Sequence Architectures for
Automatic Post-Editing | In this work, we explore multiple neural architectures adapted for the task
of automatic post-editing of machine translation output. We focus on neural
end-to-end models that combine both inputs $mt$ (raw MT output) and $src$
(source language input) in a single neural architecture, modeling $\{mt, src\}
\rightarrow pe$ directly. Apart from that, we investigate the influence of
hard-attention models which seem to be well-suited for monolingual tasks, as
well as combinations of both ideas. We report results on data sets provided
during the WMT-2016 shared task on automatic post-editing and can demonstrate
that dual-attention models that incorporate all available data in the APE
scenario in a single model improve on the best shared task system and on all
other published results after the shared task. Dual-attention models that are
combined with hard attention remain competitive despite applying fewer changes
to the input.
| 2017 | Computation and Language |
Identifying Condition-Action Statements in Medical Guidelines Using
Domain-Independent Features | This paper advances the state of the art in text understanding of medical
guidelines by releasing two new annotated clinical guidelines datasets, and
establishing baselines for using machine learning to extract condition-action
pairs. In contrast to prior work that relies on manually created rules, we
report experiments with several supervised machine learning techniques to
classify sentences as to whether they express conditions and actions. We show
the limitations and possible extensions of this work on text mining of medical
guidelines.
| 2017 | Computation and Language |
Transfer Learning for Neural Semantic Parsing | The goal of semantic parsing is to map natural language to a machine
interpretable meaning representation language (MRL). One of the constraints
that limits full exploration of deep learning technologies for semantic parsing
is the lack of sufficient annotated training data. In this paper, we propose
using sequence-to-sequence in a multi-task setup for semantic parsing with a
focus on transfer learning. We explore three multi-task architectures for
sequence-to-sequence modeling and compare their performance with an
independently trained model. Our experiments show that the multi-task setup
aids transfer learning from an auxiliary task with large labeled data to a
target task with smaller labeled data. We see absolute accuracy gains ranging
from 1.0% to 4.4% on our in-house dataset, and we also see good gains ranging
from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and
semantic auxiliary tasks.
| 2017 | Computation and Language |
Fine-grained human evaluation of neural versus phrase-based machine
translation | We compare three approaches to statistical machine translation (pure
phrase-based, factored phrase-based and neural) by performing a fine-grained
manual evaluation via error annotation of the systems' outputs. The error types
in our annotation are compliant with the multidimensional quality metrics
(MQM), and the annotation is performed by two annotators. Inter-annotator
agreement is high for such a task, and results show that the best performing
system (neural) reduces the errors produced by the worst system (phrase-based)
by 54%.
| 2017 | Computation and Language |
Idea density for predicting Alzheimer's disease from transcribed speech | Idea Density (ID) measures the rate at which ideas or elementary predications
are expressed in an utterance or in a text. Lower ID is found to be associated
with an increased risk of developing Alzheimer's disease (AD) (Snowdon et al.,
1996; Engelman et al., 2010). ID has been used in two different versions:
propositional idea density (PID) counts the expressed ideas and can be applied
to any text while semantic idea density (SID) counts pre-defined information
content units and is naturally more applicable to normative domains, such as
picture description tasks. In this paper, we develop DEPID, a novel
dependency-based method for computing PID, and its version DEPID-R, which
enables the exclusion of repeated ideas, a feature characteristic of AD
speech. We conduct
the first comparison of automatically extracted PID and SID in the diagnostic
classification task on two different AD datasets covering both closed-topic and
free-recall domains. While SID performs better on the normative dataset, adding
PID leads to a small but significant improvement (+1.7 F-score). On the
free-topic dataset, PID performs better than SID as expected (77.6 vs 72.3 in
F-score) but adding the features derived from the word embedding clustering
underlying the automatic SID increases the results considerably, leading to an
F-score of 84.8.
| 2017 | Computation and Language |
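For concreteness, here is a simplified POS-tag-based approximation of propositional idea density (propositions per word), in the spirit of classic PID counters; the paper's DEPID method instead counts dependency relations, which this sketch does not implement:

```python
# Sketch: approximate PID as the share of tokens whose POS tag typically
# introduces a proposition (verbs, adjectives, adverbs, adpositions,
# conjunctions), following the classic heuristic.
PROPOSITION_TAGS = {"VERB", "ADJ", "ADV", "ADP", "CCONJ", "SCONJ"}

def idea_density(tagged_tokens):
    """tagged_tokens: list of (word, universal_pos_tag) pairs."""
    propositions = sum(1 for _, tag in tagged_tokens if tag in PROPOSITION_TAGS)
    return propositions / max(len(tagged_tokens), 1)

sentence = [("the", "DET"), ("old", "ADJ"), ("man", "NOUN"),
            ("walked", "VERB"), ("slowly", "ADV"), ("home", "NOUN")]
print(idea_density(sentence))   # 3 propositions / 6 words = 0.5
```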
Neural Models for Key Phrase Detection and Question Generation | We propose a two-stage neural model to tackle question generation from
documents. First, our model estimates the probability that word sequences in a
document are ones that a human would pick when selecting candidate answers by
training a neural key-phrase extractor on the answers in a question-answering
corpus. Predicted key phrases then act as target answers and condition a
sequence-to-sequence question-generation model with a copy mechanism.
Empirically, our key-phrase extraction model significantly outperforms an
entity-tagging baseline and existing rule-based approaches. We further
demonstrate that our question generation system formulates fluent, answerable
questions from key phrases. This two-stage system could be used to augment or
generate reading comprehension datasets, which may be leveraged to improve
machine reading systems or in educational settings.
| 2018 | Computation and Language |
S-Net: From Answer Extraction to Answer Generation for Machine Reading
Comprehension | In this paper, we present a novel approach to machine reading comprehension
for the MS-MARCO dataset. Unlike the SQuAD dataset that aims to answer a
question with exact text spans in a passage, the MS-MARCO dataset defines the
task as answering a question from multiple passages, where the words in the
answer are not necessarily in the passages. We therefore develop an
extraction-then-synthesis framework to synthesize answers from extraction
results. Specifically, the answer extraction model is first employed to predict
the most important sub-spans from the passage as evidence, and the answer
synthesis model takes the evidence as additional features along with the
question and passage to further elaborate the final answers. We build the
answer extraction model with state-of-the-art neural networks for single
passage reading comprehension, and propose an additional task of passage
ranking to help answer extraction in multiple passages. The answer synthesis
model is based on sequence-to-sequence neural networks with the extracted
evidence as features. Experiments show that our extraction-then-synthesis
method outperforms state-of-the-art methods.
| 2018 | Computation and Language |
Towards a theory of word order. Comment on "Dependency distance: a new
perspective on syntactic patterns in natural language" by Haitao Liu et al | Comment on "Dependency distance: a new perspective on syntactic patterns in
natural language" by Haitao Liu et al
| 2017 | Computation and Language |
A Survey Of Cross-lingual Word Embedding Models | Cross-lingual representations of words enable us to reason about word meaning
in multilingual contexts and are a key facilitator of cross-lingual transfer
when developing natural language processing models for low-resource languages.
In this survey, we provide a comprehensive typology of cross-lingual word
embedding models. We compare their data requirements and objective functions.
The recurring theme of the survey is that many of the models presented in the
literature optimize for the same objectives, and that seemingly different
models are often equivalent modulo optimization strategies, hyper-parameters,
and such. We also discuss the different ways cross-lingual word embeddings are
evaluated, as well as future challenges and research horizons.
| 2019 | Computation and Language |
German in Flux: Detecting Metaphoric Change via Word Entropy | This paper explores the information-theoretic measure of entropy to detect
metaphoric change, transferring ideas from hypernym detection to research on
language change. We also build the first diachronic test set for German as a
standard for metaphoric change annotation. Our model shows high performance, is
unsupervised, language-independent and generalizable to other processes of
semantic change.
| 2017 | Computation and Language |
Extracting Formal Models from Normative Texts | We are concerned with the analysis of normative texts - documents based on
the deontic notions of obligation, permission, and prohibition. Our goal is to
make queries about these notions and verify that a text satisfies certain
properties concerning causality of actions and timing constraints. This
requires taking the original text and building a representation (model) of it
in a formal language, in our case the C-O Diagram formalism. We present an
experimental, semi-automatic aid that helps to bridge the gap between a
normative text in natural language and its C-O Diagram representation. Our
approach consists of using dependency structures obtained from the
state-of-the-art Stanford Parser, and applying our own rules and heuristics in
order to extract the relevant components. The result is a tabular data
structure where each sentence is split into suitable fields, which can then be
converted into a C-O Diagram. The process is not fully automatic, however, and
some post-editing by the user is generally required. We apply our tool and
perform experiments on documents from different domains, and report an initial
evaluation of the accuracy and feasibility of our approach.
| 2017 | Computation and Language |
Joint Extraction of Entities and Relations Based on a Novel Tagging
Scheme | Joint extraction of entities and relations is an important task in
information extraction. To tackle this problem, we first propose a novel
tagging scheme that can convert the joint extraction task to a tagging problem.
Then, based on our tagging scheme, we study different end-to-end models to
extract entities and their relations directly, without identifying entities and
relations separately. We conduct experiments on a public dataset produced by
the distant supervision method, and the experimental results show that the
tagging-based methods are better than most of the existing pipelined and joint
learning methods. Moreover, the end-to-end model proposed in this paper achieves the
best results on the public dataset.
| 2017 | Computation and Language |
Ensembling Factored Neural Machine Translation Models for Automatic
Post-Editing and Quality Estimation | This work presents a novel approach to Automatic Post-Editing (APE) and
Word-Level Quality Estimation (QE) using ensembles of specialized Neural
Machine Translation (NMT) systems. Word-level features that have proven
effective for QE are included as input factors, expanding the representation of
the original source and the machine translation hypothesis, which are used to
generate an automatically post-edited hypothesis. We train a suite of NMT
models that use different input representations, but share the same output
space. These models are then ensembled together, and tuned for both the APE and
the QE task. We thus attempt to connect the state-of-the-art approaches to APE
and QE within a single framework. Our models achieve state-of-the-art results
in both tasks, with the only difference being the tuning step, which learns
weights for each component of the ensemble.
| 2017 | Computation and Language |
Topic supervised non-negative matrix factorization | Topic models have been extensively used to organize and interpret the
contents of large, unstructured corpora of text documents. Although topic
models often perform well on traditional training vs. test set evaluations, it
is often the case that the results of a topic model do not align with human
interpretation. This interpretability fallacy is largely due to the
unsupervised nature of topic models, which prohibits any user guidance on the
results of a model. In this paper, we introduce a semi-supervised method called
topic supervised non-negative matrix factorization (TS-NMF) that enables the
user to provide labeled example documents to promote the discovery of more
meaningful semantic structure of a corpus. In this way, the results of TS-NMF
better match the intuition and desired labeling of the user. The core of TS-NMF
relies on solving a non-convex optimization problem for which we derive an
iterative algorithm that is shown to be monotonic and convergent to a local
optimum. We demonstrate the practical utility of TS-NMF on the Reuters and
PubMed corpora, and find that TS-NMF is especially useful for conceptual or
broad topics, where topic key terms are not well understood. Although
identifying an optimal latent structure for the data is not a primary objective
of the proposed approach, we find that TS-NMF achieves higher weighted Jaccard
similarity scores than the contemporary methods, (unsupervised) NMF and latent
Dirichlet allocation, at supervision rates as low as 10% to 20%.
| 2017 | Computation and Language |
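A sketch of the supervision idea under stated assumptions: standard multiplicative NMF updates with the document-topic matrix masked so that labeled documents can only load on their permitted topics. The update rules are the classic Frobenius NMF ones; TS-NMF's exact formulation may differ, and the sizes and mask below are illustrative.

```python
# Sketch: topic-supervised NMF as masked multiplicative updates.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((6, 30))                # documents x terms
mask = np.ones((6, 3))                 # documents x topics; 1 = allowed topic
mask[0] = [1, 0, 0]                    # doc 0 is labeled with topic 0 only
W = rng.random((6, 3)) * mask          # document-topic weights
H = rng.random((3, 30))                # topic-term weights

eps = 1e-9
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)   # standard multiplicative update
    W *= (X @ H.T) / (W @ H @ H.T + eps)
    W *= mask                              # enforce the supervision constraint
```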
Plan, Attend, Generate: Character-level Neural Machine Translation with
Planning in the Decoder | We investigate the integration of a planning mechanism into an
encoder-decoder architecture with an explicit alignment for character-level
machine translation. We develop a model that plans ahead when it computes
alignments between the source and target sequences, constructing a matrix of
proposed future alignments and a commitment vector that governs whether to
follow or recompute the plan. This mechanism is inspired by the strategic
attentive reader and writer (STRAW) model. Our proposed model is end-to-end
trainable with fully differentiable operations. We show that it outperforms a
strong baseline on three character-level decoder neural machine translation
tasks on the WMT'15 corpus. Our analysis demonstrates that our model can compute
qualitatively intuitive alignments and achieves superior performance with fewer
parameters.
| 2017 | Computation and Language |
Number game | CLARIN (Common Language Resources and Technology Infrastructure) is regarded
as one of the most important European research infrastructures, offering and
promoting a wide array of useful services for (digital) research in linguistics
and humanities. However, the assessment of the users for its core technical
development has been highly limited, therefore, it is unclear if the community
is thoroughly aware of the status-quo of the growing infrastructure. In
addition, CLARIN does not seem to be fully materialised marketing and business
plans and strategies despite its strong technical assets. This article analyses
the web traffic of the Virtual Language Observatory, one of the main web
applications of CLARIN and a symbol of pan-European re-search cooperation, to
evaluate the users and performance of the service in a transparent and
scientific way. It is envisaged that the paper can raise awareness of the
pressing issues on objective and transparent operation of the infrastructure
though Open Evaluation, and the synergy between marketing and technical
development. It also investigates the "science of web analytics" in an attempt
to document the research process for the purpose of reusability and
reproducibility, thus to find universal lessons for the use of a web analytics,
rather than to merely produce a statistical report of a particular website
which loses its value outside its context.
| 2017 | Computation and Language |
A Mixture Model for Learning Multi-Sense Word Embeddings | Word embeddings are now a standard technique for inducing meaning
representations for words. For getting good representations, it is important to
take into account different senses of a word. In this paper, we propose a
mixture model for learning multi-sense word embeddings. Our model generalizes
previous work in that it allows inducing different weights for different
senses of a word. The experimental results show that our model outperforms
previous models on standard evaluation tasks.
| 2017 | Computation and Language |
Bib2vec: An Embedding-based Search System for Bibliographic Information | We propose a novel embedding model that represents relationships among
several elements in bibliographic information with high representation ability
and flexibility. Based on this model, we present a novel search system that
shows the relationships among the elements in the ACL Anthology Reference
Corpus. The evaluation results show that our model can achieve a high
prediction ability and produce reasonable search results.
| 2017 | Computation and Language |
An Automatic Approach for Document-level Topic Model Evaluation | Topic models jointly learn topics and document-level topic distributions.
Extrinsic evaluation of topic models tends to focus exclusively on topic-level
evaluation, e.g. by assessing the coherence of topics. We demonstrate that
there can be large discrepancies between topic- and document-level model
quality, and that basing model evaluation on topic-level analysis can be highly
misleading. We propose a method for automatically predicting topic model
quality based on analysis of document-level topic allocations, and provide
empirical evidence for its robustness.
| 2017 | Computation and Language |
Towards Neural Phrase-based Machine Translation | In this paper, we present Neural Phrase-based Machine Translation (NPMT). Our
method explicitly models the phrase structures in output sequences using
Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence
modeling method. To mitigate the monotonic alignment requirement of SWAN, we
introduce a new layer to perform (soft) local reordering of input sequences.
Different from existing neural machine translation (NMT) approaches, NPMT does
not use attention-based decoding mechanisms. Instead, it directly outputs
phrases in a sequential order and can decode in linear time. Our experiments
show that NPMT achieves superior performance on the IWSLT 2014
German-English/English-German and IWSLT 2015 English-Vietnamese machine
translation tasks compared with strong NMT baselines. We also observe that our
method produces meaningful phrases in output languages.
| 2018 | Computation and Language |
Accelerating Innovation Through Analogy Mining | The availability of large idea repositories (e.g., the U.S. patent database)
could significantly accelerate innovation and discovery by providing people
with inspiration from solutions to analogous problems. However, finding useful
analogies in these large, messy, real-world repositories remains a persistent
challenge for human and automated methods alike. Previous approaches include
costly hand-created databases that have high relational structure (e.g.,
predicate calculus representations) but are very sparse. Simpler
machine-learning/information-retrieval similarity metrics can scale to large,
natural-language datasets, but struggle to account for structural similarity,
which is central to analogy. In this paper we explore the viability and value
of learning simpler structural representations, specifically, "problem
schemas", which specify the purpose of a product and the mechanisms by which it
achieves that purpose. Our approach combines crowdsourcing and recurrent neural
networks to extract purpose and mechanism vector representations from product
descriptions. We demonstrate that these learned vectors allow us to find
analogies with higher precision and recall than traditional
information-retrieval methods. In an ideation experiment, analogies retrieved
by our models significantly increased people's likelihood of generating
creative ideas compared to analogies retrieved by traditional methods. Our
results suggest a promising approach to enabling computational analogy at scale
is to learn and leverage weaker structural representations.
| 2,017 | Computation and Language |
Knowledge Transfer for Out-of-Knowledge-Base Entities: A Graph Neural
Network Approach | Knowledge base completion (KBC) aims to predict missing information in a
knowledge base. In this paper, we address the out-of-knowledge-base (OOKB)
entity problem in KBC: how to answer queries concerning test entities not
observed at training time. Existing embedding-based KBC models assume that all
test entities are available at training time, making it unclear how to obtain
embeddings for new entities without costly retraining. To solve the OOKB entity
problem without retraining, we use graph neural networks (Graph-NNs) to compute
the embeddings of OOKB entities, exploiting the limited auxiliary knowledge
provided at test time. The experimental results show the effectiveness of our
proposed model in the OOKB setting. Additionally, in the standard KBC setting in
which OOKB entities are not involved, our model achieves state-of-the-art
performance on the WordNet dataset. The code and dataset are available at
https://github.com/takuo-h/GNN-for-OOKB
| 2,018 | Computation and Language |
Detecting Large Concept Extensions for Conceptual Analysis | When performing a conceptual analysis of a concept, philosophers are
interested in all forms of expression of a concept in a text---be it direct or
indirect, explicit or implicit. In this paper, we experiment with topic-based
methods of automating the detection of concept expressions in order to
facilitate philosophical conceptual analysis. We propose six methods based on
LDA, and evaluate them on a new corpus of court decisions that we had annotated
by experts and non-experts. Our results indicate that these methods can yield
important improvements over the keyword heuristic, which is often used as a
concept detection heuristic in many contexts. While more work remains to be
done, this indicates that detecting concepts through topics can serve as a
general-purpose method for at least some forms of concept expression that are
not captured using naive keyword approaches.
| 2,017 | Computation and Language |
An Empirical Study of Mini-Batch Creation Strategies for Neural Machine
Translation | Training of neural machine translation (NMT) models usually uses mini-batches
for efficiency purposes. During the mini-batched training process, it is
necessary to pad shorter sentences in a mini-batch to be equal in length to the
longest sentence therein for efficient computation. Previous work has noted
that sorting the corpus based on the sentence length before making mini-batches
reduces the amount of padding and increases the processing speed. However,
despite the fact that mini-batch creation is an essential step in NMT training,
widely used NMT toolkits implement disparate strategies for doing so, which
have not been empirically validated or compared. This work investigates
mini-batch creation strategies with experiments over two different datasets.
Our results suggest that the choice of a mini-batch creation strategy has a
large effect on NMT training and some length-based sorting strategies do not
always work well compared with simple shuffling.
| 2,017 | Computation and Language |
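The padding overhead that the abstract above describes is easy to quantify. Below is a minimal sketch (plain Python over synthetic sentence lengths, not tied to any NMT toolkit) comparing the number of pad tokens required under random shuffling versus length-based sorting:

```python
import random

def padding_tokens(batches):
    """Pad tokens needed to equalize sentence lengths within each mini-batch."""
    return sum(len(b) * max(b) - sum(b) for b in batches)

def make_batches(lengths, batch_size):
    return [lengths[i:i + batch_size] for i in range(0, len(lengths), batch_size)]

random.seed(0)
lengths = [random.randint(5, 60) for _ in range(4096)]  # synthetic corpus

print("padding, shuffled:     ", padding_tokens(make_batches(lengths, 64)))
print("padding, length-sorted:", padding_tokens(make_batches(sorted(lengths), 64)))
```

Sorting sharply reduces padding, but, as the paper's experiments caution, less padding does not automatically translate into better-trained models.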
Topic Modeling for Classification of Clinical Reports | Electronic health records (EHRs) contain important clinical information about
patients. Efficient and effective use of this information could supplement or
even replace manual chart review as a means of studying and improving the
quality and safety of healthcare delivery. However, some of these clinical data
are in the form of free text and require pre-processing before use in automated
systems. A common free text data source is radiology reports, typically
dictated by radiologists to explain their interpretations. We sought to
demonstrate machine learning classification of computed tomography (CT) imaging
reports into binary outcomes, i.e. positive and negative for fracture, using
regular text classification and classifiers based on topic modeling. Topic
modeling provides interpretable themes (topic distributions) in reports, a
representation that is more compact than the commonly used bag-of-words
representation and can be processed faster than raw text in subsequent
automated processes. We demonstrate new classifiers based on this topic
modeling representation of the reports. Aggregate topic classifier (ATC) and
confidence-based topic classifier (CTC) use a single topic that is determined
from the training dataset based on different measures to classify the reports
on the test dataset. Alternatively, similarity-based topic classifier (STC)
measures the similarity between the reports' topic distributions to determine
the predicted class. Our proposed topic modeling-based classifier systems are
shown to be competitive with existing text classification techniques and
provide an efficient and interpretable representation.
| 2,017 | Computation and Language |
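As an illustration of the general idea, here is a minimal sketch in the spirit of the similarity-based topic classifier (STC): represent each report by its LDA topic distribution and label a test report by its most similar training report in topic space. The toy reports, labels, and the 2-topic LDA are assumptions for illustration, not the paper's setup:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

train_texts = ["acute fracture of the distal radius", "no acute fracture identified",
               "comminuted fracture with displacement", "normal study, no osseous injury"]
train_labels = np.array([1, 0, 1, 0])                 # 1 = positive for fracture
test_texts = ["displaced fracture of the ulna"]

vec = CountVectorizer()
X_train = vec.fit_transform(train_texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_train)
T_train = lda.transform(X_train)                      # per-report topic distributions
T_test = lda.transform(vec.transform(test_texts))

sims = cosine_similarity(T_test, T_train)             # similarity in topic space
print("predicted label:", train_labels[sims.argmax(axis=1)])
```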
Sub-domain Modelling for Dialogue Management with Hierarchical
Reinforcement Learning | Human conversation is inherently complex, often spanning many different
topics/domains. This makes policy learning for dialogue systems very
challenging. Standard flat reinforcement learning methods do not provide an
efficient framework for modelling such dialogues. In this paper, we focus on
the under-explored problem of multi-domain dialogue management. First, we
propose a new method for hierarchical reinforcement learning using the option
framework. Next, we show that the proposed architecture learns faster and
arrives at a better policy than the existing flat ones do. Moreover, we show
how pretrained policies can be adapted to more complex systems with an
additional set of new actions. In doing that, we show that our approach has the
potential to facilitate policy optimisation for more sophisticated multi-domain
dialogue systems.
| 2,017 | Computation and Language |
Improving text classification with vectors of reduced precision | This paper presents the analysis of the impact of a floating-point number
precision reduction on the quality of text classification. The precision
reduction of the vectors representing the data (e.g. TF-IDF representation in
our case) allows for a decrease of computing time and memory footprint on
dedicated hardware platforms. The impact of precision reduction on the
classification quality was evaluated on 5 corpora, using 4 different
classifiers. Also, dimensionality reduction was taken into account. Results
indicate that the precision reduction improves classification accuracy for most
cases (up to 25% of error reduction). In general, the reduction from 64 to 4
bits gives the best scores and ensures that the results will not be worse than
with the full floating-point representation.
| 2,017 | Computation and Language |
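The paper reduces the precision of floating-point feature values; the exact quantization scheme matters, but the effect can be sketched with simple uniform fixed-point quantization of a TF-IDF vector (an assumption for illustration, not necessarily the scheme used in the paper):

```python
import numpy as np

def reduce_precision(x, bits):
    """Uniformly quantize non-negative values to 2**bits levels on [0, max(x)]."""
    levels = 2 ** bits - 1
    scale = x.max() if x.max() > 0 else 1.0
    return np.round(x / scale * levels) / levels * scale

rng = np.random.default_rng(0)
tfidf = rng.random(10) * (rng.random(10) > 0.6)       # sparse-ish toy TF-IDF vector
for bits in (8, 4, 2):
    q = reduce_precision(tfidf, bits)
    print(f"{bits} bits, max abs error: {np.abs(tfidf - q).max():.4f}")
```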
THUMT: An Open Source Toolkit for Neural Machine Translation | This paper introduces THUMT, an open-source toolkit for neural machine
translation (NMT) developed by the Natural Language Processing Group at
Tsinghua University. THUMT implements the standard attention-based
encoder-decoder framework on top of Theano and supports three training
criteria: maximum likelihood estimation, minimum risk training, and
semi-supervised training. It features a visualization tool for displaying the
relevance between hidden states in neural networks and contextual words, which
helps to analyze the internal workings of NMT. Experiments on Chinese-English
datasets show that THUMT using minimum risk training significantly outperforms
GroundHog, a state-of-the-art toolkit for NMT.
| 2,017 | Computation and Language |
An online sequence-to-sequence model for noisy speech recognition | Generative models have long been the dominant approach for speech
recognition. The success of these models however relies on the use of
sophisticated recipes and complicated machinery that is not easily accessible
to non-practitioners. Recent innovations in Deep Learning have given rise to an
alternative - discriminative models called Sequence-to-Sequence models, that
can almost match the accuracy of state of the art generative models. While
these models are easy to train as they can be trained end-to-end in a single
step, they have a practical limitation that they can only be used for offline
recognition. This is because the models require that the entirety of the input
sequence be available at the beginning of inference, an assumption that is not
valid for instantaneous speech recognition. To address this problem, online
sequence-to-sequence models were recently introduced. These models are able to
start producing outputs as data arrives, once the model is confident enough
to output partial transcripts. These models, like sequence-to-sequence models, are
causal - the output produced by the model until any time, $t$, affects the
features that are computed subsequently. This makes the model inherently more
powerful than generative models that are unable to change features that are
computed from the data. This paper highlights two main contributions - an
improvement to online sequence-to-sequence model training, and its application
to noisy settings with mixed speech from two speakers.
| 2,017 | Computation and Language |
Extract with Order for Coherent Multi-Document Summarization | In this work, we aim at developing an extractive summarizer in the
multi-document setting. We implement a rank based sentence selection using
continuous vector representations along with key-phrases. Furthermore, we
propose a model to tackle summary coherence for increasing readability. We
conduct experiments on the Document Understanding Conference (DUC) 2004
datasets using the ROUGE toolkit. Our experiments demonstrate that the methods
bring significant improvements over state-of-the-art methods in terms of
informativity and coherence.
| 2,020 | Computation and Language |
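The rank-based sentence selection over continuous vectors can be sketched as a greedy, MMR-style loop: repeatedly take the highest-scoring sentence and penalize candidates similar to earlier picks. The vectors, scores, and penalty weight below are illustrative assumptions; the paper's coherence model is not reproduced here:

```python
import numpy as np

def greedy_select(vectors, scores, k, redundancy=0.6):
    """Pick k sentences by salience, penalizing similarity to earlier picks."""
    vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    adjusted = scores.astype(float).copy()
    picked = []
    for _ in range(k):
        i = int(np.argmax(adjusted))
        picked.append(i)
        adjusted[i] = -np.inf                          # never pick twice
        sim = vectors @ vectors[i]                     # cosine similarity to pick
        adjusted -= redundancy * np.maximum(sim, 0.0)  # redundancy penalty
    return picked

rng = np.random.default_rng(1)
sentence_vecs = rng.normal(size=(6, 50))               # toy sentence embeddings
salience = rng.random(6)                               # toy rank-based scores
print("selected sentence indices:", greedy_select(sentence_vecs, salience, k=3))
```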
Grounded Language Learning in a Simulated 3D World | We are increasingly surrounded by artificially intelligent technology that
takes decisions and executes actions on our behalf. This creates a pressing
need for general means to communicate with, instruct and guide artificial
agents, with human language the most compelling means for such communication.
To achieve this in a scalable fashion, agents must be able to relate language
to the world and to actions; that is, their understanding of language must be
grounded and embodied. However, learning grounded language is a notoriously
challenging problem in artificial intelligence research. Here we present an
agent that learns to interpret language in a simulated 3D environment where it
is rewarded for the successful execution of written instructions. Trained via a
combination of reinforcement and unsupervised learning, and beginning with
minimal prior knowledge, the agent learns to relate linguistic symbols to
emergent perceptual representations of its physical surroundings and to
pertinent sequences of actions. The agent's comprehension of language extends
beyond its prior experience, enabling it to apply familiar language to
unfamiliar situations and to interpret entirely novel instructions. Moreover,
the speed with which this agent learns new words increases as its semantic
knowledge grows. This facility for generalising and bootstrapping semantic
knowledge indicates the potential of the present approach for reconciling
ambiguous natural language with the complexity of the physical world.
| 2,017 | Computation and Language |
Graph-based Neural Multi-Document Summarization | We propose a neural multi-document summarization (MDS) system that
incorporates sentence relation graphs. We employ a Graph Convolutional Network
(GCN) on the relation graphs, with sentence embeddings obtained from Recurrent
Neural Networks as input node features. Through multiple layer-wise
propagation, the GCN generates high-level hidden sentence features for salience
estimation. We then use a greedy heuristic to extract salient sentences while
avoiding redundancy. In our experiments on DUC 2004, we consider three types of
sentence relation graphs and demonstrate the advantage of combining sentence
relations in graphs with the representation power of deep neural networks. Our
model improves upon traditional graph-based extractive approaches and the
vanilla GRU sequence model with no graph, and it achieves competitive results
against other state-of-the-art multi-document summarization systems.
| 2,017 | Computation and Language |
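A single GCN propagation layer of the kind the summarizer stacks is compact enough to write out. This is a generic dense NumPy sketch of the standard GCN rule (toy graph, feature, and weight shapes are assumptions), not the authors' implementation:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN step: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.triu((rng.random((5, 5)) > 0.5).astype(float), 1)
A = A + A.T                                            # toy sentence relation graph
H = rng.normal(size=(5, 8))                            # sentence embeddings (e.g. from an RNN)
W = rng.normal(size=(8, 4))                            # learnable layer weights
print(gcn_layer(A, H, W).shape)                        # (5, 4) hidden sentence features
```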
Neural-based Natural Language Generation in Dialogue using RNN
Encoder-Decoder with Semantic Aggregation | Natural language generation (NLG) is an important component in spoken
dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder
which is an extension of a Recurrent Neural Network-based Encoder-Decoder
architecture. The proposed Semantic Aggregator consists of two components: an
Aligner and a Refiner. The Aligner is a conventional attention calculated over
the encoded input information, while the Refiner is another attention or gating
mechanism stacked over the attentive Aligner in order to further select and
aggregate the semantic elements. The proposed model can be jointly trained on both
sentence planning and surface realization to produce natural language
utterances. The model was extensively assessed on four different NLG domains,
in which the experimental results showed that the proposed generator
consistently outperforms the previous methods on all the NLG domains.
| 2,017 | Computation and Language |
Cross-language Learning with Adversarial Neural Networks: Application to
Community Question Answering | We address the problem of cross-language adaptation for question-question
similarity reranking in community question answering, with the objective to
port a system trained on one input language to another input language given
labeled training data for the first language and only unlabeled data for the
second language. In particular, we propose to use adversarial training of
neural networks to learn high-level features that are discriminative for the
main learning task, and at the same time are invariant across the input
languages. The evaluation results show sizable improvements for our
cross-language adversarial neural network (CLANN) model over a strong
non-adversarial system.
| 2,017 | Computation and Language |
JaTeCS an open-source JAva TExt Categorization System | JaTeCS is an open source Java library that supports research on automatic
text categorization and other related problems, such as ordinal regression and
quantification, which are of special interest in opinion mining applications.
It covers all the steps of an experimental activity, from reading the corpus to
the evaluation of the experimental results. As JaTeCS is focused on text as the
main input data, it provides the user with many text-dedicated tools, e.g.:
data readers for many formats, including the most commonly used text corpora
and lexical resources, natural language processing tools, multi-language
support, methods for feature selection and weighting, the implementation of
many machine learning algorithms as well as wrappers for well-known external
software (e.g., SVM_light) which enable their full control from code. JaTeCS
supports its expansion by abstracting through interfaces many of the typical
tools and procedures used in text processing tasks. The library also provides a
number of "template" implementations of typical experimental setups (e.g.,
train-test, k-fold validation, grid-search optimization, randomized runs) which
enable fast realization of experiments just by connecting the templates with
data readers, learning algorithms and evaluation measures.
| 2,017 | Computation and Language |
Stance Detection in Turkish Tweets | Stance detection is a classification problem in natural language processing
where for a text and target pair, a class result from the set {Favor, Against,
Neither} is expected. It is similar to the sentiment analysis problem but
instead of the sentiment of the text author, the stance expressed for a
particular target is investigated in stance detection. In this paper, we
present a stance detection tweet data set for Turkish comprising stance
annotations of these tweets for two popular sports clubs as targets.
Additionally, we provide the evaluation results of SVM classifiers for each
target on this data set, where the classifiers use unigram, bigram, and hashtag
features. This study is significant as it presents one of the initial stance
detection data sets proposed so far and the first one for the Turkish language, to
the best of our knowledge. The data set and the evaluation results of the
corresponding SVM-based approaches will form plausible baselines for the
comparison of future studies on stance detection.
| 2,017 | Computation and Language |
Effective Spoken Language Labeling with Deep Recurrent Neural Networks | Understanding spoken language is a highly complex problem, which can be
decomposed into several simpler tasks. In this paper, we focus on Spoken
Language Understanding (SLU), the module of spoken dialog systems responsible
for extracting a semantic interpretation from the user utterance. The task is
treated as a labeling problem. In the past, SLU has been performed with a wide
variety of probabilistic models. The rise of neural networks, in the last
couple of years, has opened new interesting research directions in this domain.
Recurrent Neural Networks (RNNs) in particular are able not only to represent
several pieces of information as embeddings but also, thanks to their recurrent
architecture, to encode as embeddings relatively long contexts. Such long
contexts are in general out of reach for models previously used for SLU. In
this paper we propose novel RNN architectures for SLU which outperform
previous ones. Starting from a published idea as base block, we design new deep
RNNs achieving state-of-the-art results on two widely used corpora for SLU:
ATIS (Air Traveling Information System), in English, and MEDIA (Hotel
information and reservation in France), in French.
| 2,017 | Computation and Language |
A Generative Model of Group Conversation | Conversations with non-player characters (NPCs) in games are typically
confined to dialogue between a human player and a virtual agent, where the
conversation is initiated and controlled by the player. To create richer, more
believable environments for players, we need conversational behavior to reflect
initiative on the part of the NPCs, including conversations that include
multiple NPCs who interact with one another as well as the player. We describe
a generative computational model of group conversation between agents, an
abstract simulation of discussion in a small group setting. We define
conversational interactions in terms of rules for turn taking and interruption,
as well as belief change, sentiment change, and emotional response, all of
which are dependent on agent personality, context, and relationships. We
evaluate our model using a parameterized expressive range analysis, observing
correlations between simulation parameters and features of the resulting
conversations. This analysis confirms, for example, that character
personalities will predict how often they speak, and that heterogeneous groups
of characters will generate more belief change.
| 2,017 | Computation and Language |
Statistical Inferences for Polarity Identification in Natural Language | Information forms the basis for all human behavior, including the ubiquitous
decision-making that people constantly perform in their everyday lives. It is
thus the mission of researchers to understand how humans process information to
reach decisions. In order to facilitate this task, this work proposes a novel
method of studying the reception of granular expressions in natural language.
The approach utilizes LASSO regularization as a statistical tool to extract
decisive words from textual content and draw statistical inferences based on
the correspondence between the occurrences of words and an exogenous response
variable. Accordingly, the method immediately suggests significant implications
for social sciences and Information Systems research: everyone can now identify
text segments and word choices that are statistically relevant to authors or
readers and, based on this knowledge, test hypotheses from behavioral research.
We demonstrate the contribution of our method by examining how authors
communicate subjective information through narrative materials. This allows us
to answer the question of which words to choose when communicating negative
information. On the other hand, we show that investors trade not only upon
facts in financial disclosures but are distracted by filler words and
non-informative language. Practitioners - for example those in the fields of
investor communications or marketing - can exploit our insights to enhance
their writings based on the true perception of word choice.
| 2,019 | Computation and Language |
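The core of the method, fitting an L1-penalized regression of an exogenous response on word features so that only decisive words keep non-zero coefficients, can be sketched with scikit-learn. The toy documents, response values, and the alpha penalty are assumptions for illustration:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Lasso

docs = ["strong growth and record profit", "profit warning amid weak demand",
        "record revenue and strong outlook", "weak results and declining margin"]
response = np.array([1.2, -0.8, 1.0, -1.1])            # e.g. abnormal stock returns

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
model = Lasso(alpha=0.05).fit(X, response)             # L1 penalty zeroes most words

words = np.array(vec.get_feature_names_out())
kept = model.coef_ != 0
for word, coef in sorted(zip(words[kept], model.coef_[kept]), key=lambda t: -abs(t[1])):
    print(f"{word:>10s} {coef:+.3f}")                  # decisive words and their polarity
```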
RelNet: End-to-End Modeling of Entities & Relations | We introduce RelNet: a new model for relational reasoning. RelNet is a memory
augmented neural network which models entities as abstract memory slots and is
equipped with an additional relational memory which models relations between
all memory pairs. The model thus builds an abstract knowledge graph on the
entities and relations present in a document which can then be used to answer
questions about the document. It is trained end-to-end: the only supervision to
the model is in the form of correct answers to the questions. We test the model on
the 20 bAbI question-answering tasks with 10k examples per task and find that
it solves all the tasks with a mean error of 0.3%, achieving 0% error on 11 of
the 20 tasks.
| 2,017 | Computation and Language |
Explaining Recurrent Neural Network Predictions in Sentiment Analysis | Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown
to deliver insightful explanations in the form of input space relevances for
understanding feed-forward neural network classification decisions. In the
present work, we extend the usage of LRP to recurrent neural networks. We
propose a specific propagation rule applicable to multiplicative connections as
they arise in recurrent network architectures such as LSTMs and GRUs. We apply
our technique to a word-based bi-directional LSTM model on a five-class
sentiment prediction task, and evaluate the resulting LRP relevances both
qualitatively and quantitatively, obtaining better results than a
gradient-based related method which was used in previous work.
| 2,017 | Computation and Language |
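A sketch of how relevance is redistributed may help. Below is the standard epsilon LRP rule for a linear layer, plus the rule the paper proposes for multiplicative (gated) connections in LSTMs and GRUs, where relevance is routed entirely to the signal and none to the gate. Shapes and values are toy assumptions:

```python
import numpy as np

def lrp_linear(a, w, b, r_out, eps=1e-6):
    """Epsilon LRP rule for z = a @ w + b: redistribute r_out onto inputs a."""
    z = a @ w + b
    return (a[:, None] * w) @ (r_out / (z + eps * np.sign(z)))

def lrp_multiplicative(r_out):
    """Rule for gated products s = gate * signal: the signal takes all relevance."""
    return r_out, np.zeros_like(r_out)   # (relevance of signal, relevance of gate)

rng = np.random.default_rng(0)
a, w, b = rng.normal(size=4), rng.normal(size=(4, 3)), rng.normal(size=3)
r_in = lrp_linear(a, w, b, r_out=np.ones(3))
print("input relevances:", r_in.round(3))  # eps and the bias absorb a small share
```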
Automatic Quality Estimation for ASR System Combination | Recognizer Output Voting Error Reduction (ROVER) has been widely used for
system combination in automatic speech recognition (ASR). In order to select
the most appropriate words to insert at each position in the output
transcriptions, some ROVER extensions rely on critical information such as
confidence scores and other ASR decoder features. This information, which is
not always available, highly depends on the decoding process and sometimes
tends to overestimate the real quality of the recognized words. In this paper
we propose a novel variant of ROVER that takes advantage of ASR quality
estimation (QE) for ranking the transcriptions at "segment level" instead of:
i) relying on confidence scores, or ii) feeding ROVER with randomly ordered
hypotheses. We first introduce an effective set of features to compensate for
the absence of ASR decoder information. Then, we apply QE techniques to perform
accurate hypothesis ranking at segment-level before starting the fusion
process. The evaluation is carried out on two different tasks, in which we
respectively combine hypotheses coming from independent ASR systems and
multi-microphone recordings. In both tasks, it is assumed that the ASR decoder
information is not available. The proposed approach significantly outperforms
standard ROVER and it is competitive with two strong oracles that exploit
prior knowledge about the real quality of the hypotheses to be combined.
Compared to standard ROVER, the absolute WER improvements in the two
evaluation scenarios range from 0.5% to 7.3%.
| 2,017 | Computation and Language |
Jointly Learning Word Embeddings and Latent Topics | Word embedding models such as Skip-gram learn a vector-space representation
for each word, based on the local word collocation patterns that are observed
in a text corpus. Latent topic models, on the other hand, take a more global
view, looking at the word distributions across the corpus to assign a topic to
each word occurrence. These two paradigms are complementary in how they
represent the meaning of word occurrences. While some previous works have
already looked at using word embeddings for improving the quality of latent
topics, and conversely, at using latent topics for improving word embeddings,
such "two-step" methods cannot capture the mutual interaction between the two
paradigms. In this paper, we propose STE, a framework which can learn word
embeddings and latent topics in a unified manner. STE naturally obtains
topic-specific word embeddings, and thus addresses the issue of polysemy. At
the same time, it also learns the term distributions of the topics, and the
topic distributions of the documents. Our experimental results demonstrate that
the STE model can indeed generate useful topic-specific word embeddings and
coherent latent topics in an effective and efficient way.
| 2,017 | Computation and Language |
End-to-end Conversation Modeling Track in DSTC6 | End-to-end training of neural networks is a promising approach to automatic
construction of dialog systems using a human-to-human dialog corpus. Recently,
Vinyals et al. tested neural conversation models using OpenSubtitles. Lowe et
al. released the Ubuntu Dialogue Corpus for researching unstructured multi-turn
dialogue systems. Furthermore, the approach has been extended to accomplish
task oriented dialogs to provide information properly with natural
conversation. For example, Ghazvininejad et al. proposed a knowledge grounded
neural conversation model [3], which aims at combining
conversational dialogs with task-oriented knowledge using unstructured data
such as Twitter data for conversation and Foursquare data for external
knowledge. However, the task is still limited to a restaurant information
service, and has not yet been tested with a wide variety of dialog tasks. In
addition, it is still unclear how to create intelligent dialog systems that can
respond like a human agent.
In consideration of these problems, we proposed a challenge track to the 6th
dialog system technology challenges (DSTC6) using human-to-human dialog data to
mimic human dialog behaviors. The focus of the challenge track is to train
end-to-end conversation models from human-to-human conversation and accomplish
end-to-end dialog tasks in various situations assuming a customer service, in
which a system plays the role of a human agent and generates natural and
informative sentences in response to user's questions or comments given dialog
context.
| 2,018 | Computation and Language |
Personalization in Goal-Oriented Dialog | The main goal of modeling human conversation is to create agents which can
interact with people in both open-ended and goal-oriented scenarios. End-to-end
trained neural dialog systems are an important line of research for such
generalized dialog models as they do not resort to any situation-specific
handcrafting of rules. However, incorporating personalization into such systems
is a largely unexplored topic as there are no existing corpora to facilitate
such work. In this paper, we present a new dataset of goal-oriented dialogs
which are influenced by speaker profiles attached to them. We analyze the
shortcomings of an existing end-to-end dialog system based on Memory Networks
and propose modifications to the architecture which enable personalization. We
also investigate personalization in dialog as a multi-task learning problem,
and show that a single model which shares features among various profiles
outperforms separate models for each profile.
| 2,019 | Computation and Language |
Neural Machine Translation with Gumbel-Greedy Decoding | Previous neural machine translation models used some heuristic search
algorithms (e.g., beam search) in order to avoid solving the maximum a
posteriori problem over translation sentences at test time. In this paper, we
propose the Gumbel-Greedy Decoding which trains a generative network to predict
translation under a trained model. We solve such a problem using the
Gumbel-Softmax reparameterization, which makes our generative network
differentiable and trainable through standard stochastic gradient methods. We
empirically demonstrate that our proposed model is effective for generating
sequences of discrete words.
| 2,017 | Computation and Language |
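The reparameterization at the heart of the method is short enough to show. Here is a minimal NumPy sketch of Gumbel-Softmax sampling over decoder logits (toy logits and temperature are assumptions; in the paper this relaxation is what makes the generative network trainable by standard gradient methods):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Differentiable relaxation of sampling from a categorical distribution."""
    gumbel = -np.log(-np.log(rng.random(logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + gumbel) / tau                          # low tau -> near one-hot
    y = np.exp(y - y.max())
    return y / y.sum()

rng = np.random.default_rng(0)
logits = np.array([1.0, 3.0, 0.5])                       # toy decoder word scores
sample = gumbel_softmax(logits, tau=0.5, rng=rng)
print("soft one-hot sample:", sample.round(3))
print("greedy word index:  ", int(sample.argmax()))
```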
Named Entity Recognition with stack residual LSTM and trainable bias
decoding | Recurrent Neural Network models are the state-of-the-art for Named Entity
Recognition (NER). We present two innovations to improve the performance of
these models. The first innovation is the introduction of residual connections
between the layers of the stacked Recurrent Neural Network model to address the degradation
problem of deep neural networks. The second innovation is a bias decoding
mechanism that allows the trained system to adapt to non-differentiable and
externally computed objectives, such as the entity-based F-measure. Our work
improves the state-of-the-art results for both Spanish and English languages on
the standard train/development/test split of the CoNLL 2003 Shared Task NER
dataset.
| 2,017 | Computation and Language |
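The first innovation, residual connections between stacked recurrent layers, is straightforward to sketch in PyTorch. This is a generic illustration under assumed dimensions, not the authors' NER architecture (which also includes the trainable bias-decoding mechanism):

```python
import torch
import torch.nn as nn

class ResidualStackedLSTM(nn.Module):
    """Stacked LSTMs with residual connections between layers."""
    def __init__(self, dim, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.LSTM(dim, dim, batch_first=True) for _ in range(num_layers))

    def forward(self, x):
        for lstm in self.layers:
            out, _ = lstm(x)
            x = x + out                    # skip connection eases gradient flow
        return x

model = ResidualStackedLSTM(dim=64)
tokens = torch.randn(2, 10, 64)            # (batch, sequence length, embedding dim)
print(model(tokens).shape)                 # torch.Size([2, 10, 64])
```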
Comparison of Modified Kneser-Ney and Witten-Bell Smoothing Techniques
in Statistical Language Model of Bahasa Indonesia | Smoothing is one technique to overcome data sparsity in statistical language
models. Although in its mathematical definition there is no explicit dependency
upon specific natural language, different natures of natural languages result
in different effects of smoothing techniques. This is true for the Russian language,
as shown by Whittaker (1998). In this paper, we compared Modified Kneser-Ney
and Witten-Bell smoothing techniques in a statistical language model of Bahasa
Indonesia. We used training sets totaling 22M words that we extracted from the
Indonesian version of Wikipedia. As far as we know, this is the largest train
set used to build a statistical language model for Bahasa Indonesia. The
experiments with 3-gram, 5-gram, and 7-gram showed that Modified Kneser-Ney
consistently outperforms the Witten-Bell smoothing technique in terms of perplexity
values. It is interesting to note that our experiments showed that the 5-gram
model with Modified Kneser-Ney smoothing outperforms the 7-gram one. Meanwhile,
Witten-Bell smoothing consistently improves as the n-gram order increases.
| 2,017 | Computation and Language |
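The comparison can be reproduced in miniature with NLTK's language-model package. Note the caveats: NLTK ships interpolated Kneser-Ney rather than the modified variant the paper uses, and the toy Indonesian sentences below are assumptions, so this only illustrates the experimental procedure:

```python
from nltk.lm import KneserNeyInterpolated, WittenBellInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

corpus = [["saya", "makan", "nasi"], ["dia", "makan", "nasi", "goreng"],
          ["saya", "minum", "teh"]]                     # toy tokenized train set
test = [["saya", "makan", "nasi", "goreng"]]

for cls in (KneserNeyInterpolated, WittenBellInterpolated):
    train_grams, vocab = padded_everygram_pipeline(3, corpus)
    lm = cls(order=3)
    lm.fit(train_grams, vocab)
    test_grams, _ = padded_everygram_pipeline(3, test)
    trigrams = [g for sent in test_grams for g in sent if len(g) == 3]
    print(cls.__name__, "perplexity:", round(lm.perplexity(trigrams), 2))
```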
Encoder-Decoder Shift-Reduce Syntactic Parsing | Starting from NMT, encoder-decoder neu- ral networks have been used for many
NLP problems. Graph-based models and transition-based models borrowing the en-
coder components achieve state-of-the-art performance on dependency parsing and
constituent parsing, respectively. How- ever, there has not been work
empirically studying the encoder-decoder neural net- works for transition-based
parsing. We apply a simple encoder-decoder to this end, achieving comparable
results to the parser of Dyer et al. (2015) on standard de- pendency parsing,
and outperforming the parser of Vinyals et al. (2015) on con- stituent parsing.
| 2,017 | Computation and Language |
A Deep Neural Architecture for Sentence-level Sentiment Classification
in Twitter Social Networking | This paper introduces a novel deep learning framework including a
lexicon-based approach for sentence-level prediction of sentiment label
distribution. We propose to first apply semantic rules and then use a Deep
Convolutional Neural Network (DeepCNN) for character-level embeddings in order
to increase information for word-level embedding. After that, a Bidirectional
Long Short-Term Memory Network (Bi-LSTM) produces a sentence-wide feature
representation from the word-level embedding. We evaluate our approach on three
Twitter sentiment classification datasets. Experimental results show that our
model can improve the classification accuracy of sentence-level sentiment
analysis in Twitter social networking.
| 2,017 | Computation and Language |
Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context | Word embeddings, which represent a word as a point in a vector space, have
become ubiquitous to several NLP tasks. A recent line of work uses bilingual
(two languages) corpora to learn a different vector for each sense of a word,
by exploiting crosslingual signals to aid sense identification. We present a
multi-view Bayesian non-parametric algorithm which improves multi-sense word
embeddings by (a) using multilingual (i.e., more than two languages) corpora to
significantly improve sense embeddings beyond what one achieves with bilingual
information, and (b) using a principled approach to learn a variable number of
senses per word, in a data-driven manner. Ours is the first approach with the
ability to leverage multilingual corpora efficiently for multi-sense
representation learning. Experiments show that multilingual training
significantly improves performance over monolingual and bilingual training, by
allowing us to combine different parallel corpora to leverage multilingual
context. Multilingual training yields comparable performance to a state of the
art mono-lingual model trained on five times more training data.
| 2,017 | Computation and Language |
Automated text summarisation and evidence-based medicine: A survey of
two domains | The practice of evidence-based medicine (EBM) urges medical practitioners to
utilise the latest research evidence when making clinical decisions. Because of
the massive and growing volume of published research on various medical topics,
practitioners often find themselves overloaded with information. As such,
natural language processing research has recently commenced exploring
techniques for medical domain-specific automated text summarisation
(ATS), targeted towards the task of condensing large medical texts.
However, the development of effective summarisation techniques for this task
requires cross-domain knowledge. We present a survey of EBM, the
domain-specific needs for EBM, automated summarisation techniques, and how they
have been applied hitherto. We envision that this survey will serve as a first
resource for the development of future operational text summarisation
techniques for EBM.
| 2,017 | Computation and Language |
Automatic Synonym Discovery with Knowledge Bases | Recognizing entity synonyms from text has become a crucial task in many
entity-leveraging applications. However, discovering entity synonyms from
domain-specific text corpora (e.g., news articles, scientific papers) is rather
challenging. Current systems take an entity name string as input to find out
other names that are synonymous, ignoring the fact that oftentimes a name
string can refer to multiple entities (e.g., "apple" could refer to both Apple
Inc and the fruit apple). Moreover, most existing methods require training data
manually created by domain experts to construct supervised-learning systems. In
this paper, we study the problem of automatic synonym discovery with knowledge
bases, that is, identifying synonyms for knowledge base entities in a given
domain-specific corpus. The manually-curated synonyms for each entity stored in
a knowledge base not only form a set of name strings to disambiguate the
meaning for each other, but also can serve as "distant" supervision to help
determine important features for the task. We propose a novel framework, called
DPE, to integrate two kinds of mutually-complementing signals for synonym
discovery, i.e., distributional features based on corpus-level statistics and
textual patterns based on local contexts. In particular, DPE jointly optimizes
the two kinds of signals in conjunction with distant supervision, so that they
can mutually enhance each other in the training stage. At the inference stage,
both signals will be utilized to discover synonyms for the given entities.
Experimental results prove the effectiveness of the proposed framework.
| 2,017 | Computation and Language |
English-Japanese Neural Machine Translation with
Encoder-Decoder-Reconstructor | Neural machine translation (NMT) has recently become popular in the field of
machine translation. However, NMT suffers from the problem of repeating or
missing words in the translation. To address this problem, Tu et al. (2017)
proposed an encoder-decoder-reconstructor framework for NMT using
back-translation. In this method, they selected the best forward translation
model in the same manner as Bahdanau et al. (2015), and then trained a
bi-directional translation model as fine-tuning. Their experiments show that it
offers a significant improvement in BLEU scores in the Chinese-English translation
task. We confirm that our re-implementation also shows the same tendency and
alleviates the problem of repeating and missing words in the translation on an
English-Japanese task as well. In addition, we evaluate the effectiveness of
pre-training by comparing it with a jointly-trained model of forward
translation and back-translation.
| 2,017 | Computation and Language |
Generative Encoder-Decoder Models for Task-Oriented Spoken Dialog
Systems with Chatting Capability | Generative encoder-decoder models offer great promise in developing
domain-general dialog systems. However, they have mainly been applied to
open-domain conversations. This paper presents a practical and novel framework
for building task-oriented dialog systems based on encoder-decoder models. This
framework enables encoder-decoder models to accomplish slot-value independent
decision-making and interact with external databases. Moreover, this paper
shows the flexibility of the proposed method by interleaving chatting
capability with a slot-filling system for better out-of-domain recovery. The
models were trained on both real-user data from a bus information system and
human-human chat data. Results show that the proposed framework achieves good
performance in both offline evaluation metrics and in task success rate with
human users.
| 2,017 | Computation and Language |
Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog | A number of recent works have proposed techniques for end-to-end learning of
communication protocols among cooperative multi-agent populations, and have
simultaneously found the emergence of grounded human-interpretable language in
the protocols developed by the agents, all learned without any human
supervision!
In this paper, using a Task and Tell reference game between two agents as a
testbed, we present a sequence of 'negative' results culminating in a
'positive' one -- showing that while most agent-invented languages are
effective (i.e. achieve near-perfect task rewards), they are decidedly not
interpretable or compositional.
In essence, we find that natural language does not emerge 'naturally',
despite the semblance of ease of natural-language-emergence that one may gather
from recent literature. We discuss how it is possible to coax the invented
languages to become more and more human-like and compositional by increasing
restrictions on how two agents may communicate.
| 2,017 | Computation and Language |
Neural Question Answering at BioASQ 5B | This paper describes our submission to the 2017 BioASQ challenge. We
participated in Task B, Phase B which is concerned with biomedical question
answering (QA). We focus on factoid and list questions, using an extractive QA
model; that is, we restrict our system to output substrings of the provided
text snippets. At the core of our system, we use FastQA, a state-of-the-art
neural QA system. We extended it with biomedical word embeddings and changed
its answer layer to be able to answer list questions in addition to factoid
questions. We pre-trained the model on a large-scale open-domain QA dataset,
SQuAD, and then fine-tuned the parameters on the BioASQ training set. With our
approach, we achieve state-of-the-art results on factoid questions and
competitive results on list questions.
| 2,017 | Computation and Language |
The Minor Fall, the Major Lift: Inferring Emotional Valence of Musical
Chords through Lyrics | We investigate the association between musical chords and lyrics by analyzing
a large dataset of user-contributed guitar tablatures. Motivated by the idea
that the emotional content of chords is reflected in the words used in
corresponding lyrics, we analyze associations between lyrics and chord
categories. We also examine the usage patterns of chords and lyrics in
different musical genres, historical eras, and geographical regions. Our
overall results confirm a previously known association between Major chords
and positive valence. We also report a wide variation in this association
across regions, genres, and eras. Our results suggest the possible existence of
different emotional associations for other types of chords.
| 2,017 | Computation and Language |
Memory-augmented Chinese-Uyghur Neural Machine Translation | Neural machine translation (NMT) has achieved notable performance recently.
However, this approach has not been widely applied to the translation task
between Chinese and Uyghur, partly due to the limited parallel data resource
and the large proportion of rare words caused by the agglutinative nature of
Uyghur. In this paper, we collect ~200,000 sentence pairs and show that with
this medium-scale database, an attention-based NMT can perform very well on
Chinese-Uyghur/Uyghur-Chinese translation. To tackle rare words, we propose a
novel memory structure to assist the NMT inference. Our experiments
demonstrated that the memory-augmented NMT (M-NMT) outperforms both the vanilla
NMT and the phrase-based statistical machine translation (SMT). Interestingly,
the memory structure provides an elegant way for dealing with words that are
out of vocabulary.
| 2,017 | Computation and Language |
CoNLL-SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection
in 52 Languages | The CoNLL-SIGMORPHON 2017 shared task on supervised morphological generation
required systems to be trained and tested in each of 52 typologically diverse
languages. In sub-task 1, submitted systems were asked to predict a specific
inflected form of a given lemma. In sub-task 2, systems were given a lemma and
some of its specific inflected forms, and asked to complete the inflectional
paradigm by predicting all of the remaining inflected forms. Both sub-tasks
included high, medium, and low-resource conditions. Sub-task 1 received 24
system submissions, while sub-task 2 received 3 system submissions. Following
the success of neural sequence-to-sequence models in the SIGMORPHON 2016 shared
task, all but one of the submissions included a neural component. The results
show that high performance can be achieved with small training datasets, so
long as models have appropriate inductive bias or make use of additional
unlabeled data or synthetic data. However, different biasing and data
augmentation resulted in disjoint sets of inflected forms being predicted
correctly, suggesting that there is room for future improvement.
| 2,017 | Computation and Language |
Named Entity Disambiguation for Noisy Text | We address the task of Named Entity Disambiguation (NED) for noisy text. We
present WikilinksNED, a large-scale NED dataset of text fragments from the web,
which is significantly noisier and more challenging than existing news-based
datasets. To capture the limited and noisy local context surrounding each
mention, we design a neural model and train it with a novel method for sampling
informative negative examples. We also describe a new way of initializing word
and entity embeddings that significantly improves performance. Our model
significantly outperforms existing state-of-the-art methods on WikilinksNED
while achieving comparable performance on a smaller newswire dataset.
| 2,017 | Computation and Language |
The E2E Dataset: New Challenges For End-to-End Generation | This paper describes the E2E data, a new dataset for training end-to-end,
data-driven natural language generation systems in the restaurant domain, which
is ten times bigger than existing, frequently used datasets in this area. The
E2E dataset poses new challenges: (1) its human reference texts show more
lexical richness and syntactic variation, including discourse phenomena; (2)
generating from this set requires content selection. As such, learning from
this dataset promises more natural, varied and less template-like system
utterances. We also establish a baseline on this dataset, which illustrates
some of the difficulties associated with this data.
| 2,017 | Computation and Language |
Generating Appealing Brand Names | Providing appealing brand names to newly launched products, newly formed
companies or for renaming existing companies is highly important as it can play
a crucial role in deciding its success or failure. In this work, we propose a
computational method to generate appealing brand names based on the description
of such entities. We use quantitative scores for readability, pronounceability,
memorability and uniqueness of the generated names to rank order them. A set of
diverse appealing names is recommended to the user for the brand naming task.
Experimental results show that the names generated by our approach are more
appealing than names that prior approaches and recruited humans could come up with.
| 2,017 | Computation and Language |
Data-driven Natural Language Generation: Paving the Road to Success | We argue that there are currently two major bottlenecks to the commercial use
of statistical machine learning approaches for natural language generation
(NLG): (a) The lack of reliable automatic evaluation metrics for NLG, and (b)
The scarcity of high quality in-domain corpora. We address the first problem by
thoroughly analysing current evaluation metrics and motivating the need for a
new, more reliable metric. The second problem is addressed by presenting a
novel framework for developing and evaluating a high quality corpus for NLG
training.
| 2,017 | Computation and Language |