Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64, 1.99k-2.02k) | Categories (stringclasses, 1 value)
---|---|---|---|
Neural Machine Translation Training in a Multi-Domain Scenario | In this paper, we explore alternative ways to train a neural machine
translation system in a multi-domain scenario. We investigate data
concatenation (with fine tuning), model stacking (multi-level fine tuning),
data selection and multi-model ensemble. Our findings show that the best
translation quality can be achieved by building an initial system on a
concatenation of available out-of-domain data and then fine-tuning it on
in-domain data. Model stacking works best when training begins with the
furthest out-of-domain data and the model is incrementally fine-tuned with the
next furthest domain and so on. Data selection did not give the best results,
but can be considered as a decent compromise between training time and
translation quality. A weighted ensemble of different individual models
performed better than data selection, and is beneficial in scenarios where there is no time to fine-tune an already trained model.
| 2017 | Computation and Language |
A Simple LSTM model for Transition-based Dependency Parsing | We present a simple LSTM-based transition-based dependency parser. Our model
is composed of a single LSTM hidden layer replacing the hidden layer in the
usual feed-forward network architecture. We also propose a new initialization
method that uses the pre-trained weights from a feed-forward neural network to
initialize our LSTM-based model. We also show that using dropout on the input
layer has a positive effect on performance. Our final parser achieves a 93.06%
unlabeled and 91.01% labeled attachment score on the Penn Treebank. We
additionally replace LSTMs with GRUs and Elman units in our model and explore
the effectiveness of our initialization method on individual gates constituting
all three types of RNN units.
| 2017 | Computation and Language |
Unsupervised Terminological Ontology Learning based on Hierarchical
Topic Modeling | In this paper, we present hierarchical relation-based latent Dirichlet
allocation (hrLDA), a data-driven hierarchical topic model for extracting
terminological ontologies from a large number of heterogeneous documents. In
contrast to traditional topic models, hrLDA relies on noun phrases instead of
unigrams, considers syntax and document structures, and enriches topic
hierarchies with topic relations. Through a series of experiments, we
demonstrate the superiority of hrLDA over existing topic models, especially for
building hierarchies. Furthermore, we illustrate the robustness of hrLDA in the
settings of noisy data sets, which are likely to occur in many practical
scenarios. Our ontology evaluation results show that ontologies extracted from
hrLDA are very competitive with the ontologies created by domain experts.
| 2017 | Computation and Language |
PersonaBank: A Corpus of Personal Narratives and Their Story Intention
Graphs | We present a new corpus, PersonaBank, consisting of 108 personal stories from
weblogs that have been annotated with their Story Intention Graphs, a deep
representation of the fabula of a story. We describe the topics of the stories
and the basis of the Story Intention Graph representation, as well as the
process of annotating the stories to produce the Story Intention Graphs and the
challenges of adapting the tool to this new personal narrative domain. We also
discuss how the corpus can be used in applications that retell the story using
different styles of tellings, co-tellings, or as a content planner.
| 2017 | Computation and Language |
Argument Strength is in the Eye of the Beholder: Audience Effects in
Persuasion | Americans spend about a third of their time online, with many participating
in online conversations on social and political issues. We hypothesize that
social media arguments on such issues may be more engaging and persuasive than
traditional media summaries, and that particular types of people may be more or
less convinced by particular styles of argument, e.g. emotional arguments may
resonate with some personalities while factual arguments resonate with others.
We report a set of experiments testing at large scale how audience variables
interact with argument style to affect the persuasiveness of an argument, an
under-researched topic within natural language processing. We show that belief
change is affected by personality factors, with conscientious, open and
agreeable people being more convinced by emotional arguments.
| 2017 | Computation and Language |
Automating Direct Speech Variations in Stories and Games | Dialogue authoring in large games requires not only content creation but the
subtlety of its delivery, which can vary from character to character. Manually
authoring this dialogue can be tedious, time-consuming, or even altogether
infeasible. This paper utilizes a rich narrative representation for modeling
dialogue and an expressive natural language generation engine for realizing it,
and expands upon a translation tool that bridges the two. We add functionality
to the translator to allow direct speech to be modeled by the narrative
representation, whereas the original translator supports only narratives told
by a third person narrator. We show that we can perform character substitution
in dialogues. We implement and evaluate a potential application to dialogue
implementation: generating dialogue for games with big, dynamic, or
procedurally-generated open worlds. We present a pilot study on human
perceptions of the personalities of characters using direct speech, assuming
unknown personality types at the time of authoring.
| 2017 | Computation and Language |
Paradigm Completion for Derivational Morphology | The generation of complex derived word forms has been an overlooked problem
in NLP; we fill this gap by applying neural sequence-to-sequence models to the
task. We overview the theoretical motivation for a paradigmatic treatment of
derivational morphology, and introduce the task of derivational paradigm
completion as a parallel to inflectional paradigm completion. State-of-the-art
neural models, adapted from the inflection task, are able to learn a range of
derivation patterns, and outperform a non-neural baseline by 16.4%. However,
due to semantic, historical, and lexical considerations involved in
derivational morphology, future work will be needed to achieve performance
parity with inflection-generating systems.
| 2017 | Computation and Language |
Cross-lingual, Character-Level Neural Morphological Tagging | Even for common NLP tasks, sufficient supervision is not available in many
languages -- morphological tagging is no exception. In the work presented here,
we explore a transfer learning scheme, whereby we train character-level
recurrent neural taggers to predict morphological taggings for high-resource
languages and low-resource languages together. Learning joint character
representations among multiple related languages successfully enables knowledge
transfer from the high-resource languages to the low-resource ones, improving
accuracy by up to 30%.
| 2020 | Computation and Language |
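A minimal sketch of the joint training idea in the cross-lingual tagging abstract above, assuming a PyTorch implementation: a single character-level BiLSTM tagger with one shared character vocabulary serves high- and low-resource languages together, so character representations are learned jointly. The layer sizes, vocabulary, and tag inventory are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SharedCharTagger(nn.Module):
    """Character-level BiLSTM tagger shared across related languages (illustrative sizes)."""
    def __init__(self, n_chars=200, n_tags=40, char_dim=64, hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)  # shared across languages
        self.encoder = nn.LSTM(char_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)  # shared morphological tag inventory

    def forward(self, char_ids):            # (batch, seq_len) character ids
        h, _ = self.encoder(self.char_emb(char_ids))
        return self.out(h)                  # (batch, seq_len, n_tags) tag scores

# Joint training step over a mixed high-/low-resource batch (synthetic stand-in data).
model = SharedCharTagger()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)
for chars, tags in [(torch.randint(1, 200, (8, 30)), torch.randint(0, 40, (8, 30)))]:
    logits = model(chars)
    loss = loss_fn(logits.view(-1, logits.size(-1)), tags.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```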
An Empirical Study of Discriminative Sequence Labeling Models for
Vietnamese Text Processing | This paper presents an empirical study of two widely-used sequence prediction
models, Conditional Random Fields (CRFs) and Long Short-Term Memory Networks
(LSTMs), on two fundamental tasks for Vietnamese text processing, including
part-of-speech tagging and named entity recognition. We show that a strong lower bound for labeling accuracy can be obtained by relying only on simple word-based features with minimal hand-crafted feature engineering, achieving performance scores of 90.65% and 86.03% on the standard test sets for the two tasks, respectively. In particular, we demonstrate empirically the surprising effectiveness of word embeddings in both tasks, with both models. We point out that the state-of-the-art LSTM model does not always significantly outperform the traditional CRF model, especially on moderate-sized data sets. Finally, we give some suggestions and discussion on the efficient use of sequence labeling models in practical applications.
| 2017 | Computation and Language |
Look-ahead Attention for Generation in Neural Machine Translation | The attention model has become a standard component in neural machine
translation (NMT) and it guides translation process by selectively focusing on
parts of the source sentence when predicting each target word. However, we find that the generation of a target word depends not only on the source sentence but also relies heavily on the previously generated target words, especially the distant ones, which are difficult to model with recurrent neural networks. To address this problem, we propose a novel
look-ahead attention mechanism for generation in NMT, which aims at directly
capturing the dependency relationship between target words. We further design
three patterns to integrate our look-ahead attention into the conventional
attention model. Experiments on NIST Chinese-to-English and WMT
English-to-German translation tasks show that our proposed look-ahead attention
mechanism achieves substantial improvements over state-of-the-art baselines.
| 2017 | Computation and Language |
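A rough sketch of the core idea in the look-ahead attention abstract above, assuming a PyTorch setting: a generic dot-product attention over the hidden states of already generated target words. This is illustrative only and not the paper's exact formulation or its three integration patterns.

```python
import torch
import torch.nn.functional as F

def look_ahead_context(decoder_state, past_states):
    """Attend from the current decoder state over previously generated target states.

    decoder_state: (batch, dim)      current decoder hidden state
    past_states:   (batch, t, dim)   hidden states of already generated target words
    returns:       (batch, dim)      context summarizing relevant previous target words
    """
    scores = torch.bmm(past_states, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, t)
    weights = F.softmax(scores, dim=1)
    return torch.bmm(weights.unsqueeze(1), past_states).squeeze(1)

# Example: a target-side context that can be combined with the usual source-side context.
batch, t, dim = 4, 7, 256
ctx_target = look_ahead_context(torch.randn(batch, dim), torch.randn(batch, t, dim))
print(ctx_target.shape)  # torch.Size([4, 256])
```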
TANKER: Distributed Architecture for Named Entity Recognition and
Disambiguation | Named Entity Recognition and Disambiguation (NERD) systems have recently been
widely researched to deal with the significant growth of the Web. NERD systems
are crucial for several Natural Language Processing (NLP) tasks such as
summarization, understanding, and machine translation. However, there is no
standard interface specification, i.e., these systems may vary significantly in how they export their outputs or process their inputs. Thus, when a company wants to deploy more than one NERD system, the process is quite laborious and prone to failure. In addition, industrial solutions demand
critical requirements, e.g., large-scale processing, completeness, versatility,
and licenses. Commonly, these requirements impose limitations that cause good NERD models to be ignored by companies. This paper presents TANKER, a
distributed architecture which aims to overcome scalability, reliability and
failure tolerance limitations related to industrial needs by combining NERD
systems. To this end, TANKER relies on a micro-services oriented architecture,
which enables agile development and delivery of complex enterprise
applications. In addition, TANKER provides a standardized API which makes it possible to combine several NERD systems at once.
| 2017 | Computation and Language |
Fighting with the Sparsity of Synonymy Dictionaries | Graph-based synset induction methods, such as MaxMax and Watset, induce
synsets by performing a global clustering of a synonymy graph. However, such
methods are sensitive to the structure of the input synonymy graph: sparseness
of the input dictionary can substantially reduce the quality of the extracted
synsets. In this paper, we propose two different approaches designed to
alleviate the incompleteness of the input dictionaries. The first one performs
a pre-processing of the graph by adding missing edges, while the second one
performs a post-processing by merging similar synset clusters. We evaluate
these approaches on two datasets for the Russian language and discuss their
impact on the performance of synset induction methods. Finally, we perform an
extensive error analysis of each approach and discuss prominent alternative
methods for coping with the problem of the sparsity of the synonymy
dictionaries.
| 2018 | Computation and Language |
Fast(er) Exact Decoding and Global Training for Transition-Based
Dependency Parsing via a Minimal Feature Set | We first present a minimal feature set for transition-based dependency
parsing, continuing a recent trend started by Kiperwasser and Goldberg (2016a)
and Cross and Huang (2016a) of using bi-directional LSTM features. We plug our
minimal feature set into the dynamic-programming framework of Huang and Sagae
(2010) and Kuhlmann et al. (2011) to produce the first implementation of
worst-case O(n^3) exact decoders for arc-hybrid and arc-eager transition
systems. With our minimal features, we also present O(n^3) global training
methods. Finally, using ensembles including our new parsers, we achieve the
best unlabeled attachment score reported (to our knowledge) on the Chinese
Treebank and the "second-best-in-class" result on the English Penn Treebank.
| 2017 | Computation and Language |
LangPro: Natural Language Theorem Prover | LangPro is an automated theorem prover for natural language
(https://github.com/kovvalsky/LangPro). Given a set of premises and a
hypothesis, it is able to prove semantic relations between them. The prover is
based on a version of analytic tableau method specially designed for natural
logic. The proof procedure operates on logical forms that preserve linguistic
expressions to a large extent. This property makes the logical forms easily obtainable from syntactic trees, in particular, Combinatory Categorial Grammar derivation trees. The nature of the proofs is deductive and transparent. On the FraCaS and SICK textual entailment datasets, the prover achieves results comparable to the state of the art.
| 2017 | Computation and Language |
Learning Fine-Grained Knowledge about Contingent Relations between
Everyday Events | Much of the user-generated content on social media is provided by ordinary
people telling stories about their daily lives. We develop and test a novel
method for learning fine-grained common-sense knowledge from these stories
about contingent (causal and conditional) relationships between everyday
events. This type of knowledge is useful for text and story understanding,
information extraction, question answering, and text summarization. We test and
compare different methods for learning contingency relations, and compare what
is learned from topic-sorted story collections vs. general-domain stories. Our
experiments show that using topic-specific datasets enables learning
finer-grained knowledge about events and results in significant improvement
over the baselines. An evaluation on Amazon Mechanical Turk shows 82% of the
relations between events that we learn from topic-sorted stories are judged as
contingent.
| 2016 | Computation and Language |
Inference of Fine-Grained Event Causality from Blogs and Films | Human understanding of narrative is mainly driven by reasoning about causal
relations between events and thus recognizing them is a key capability for
computational models of language understanding. Computational work in this area
has approached this via two different routes: by focusing on acquiring a
knowledge base of common causal relations between events, or by attempting to
understand a particular story or macro-event, along with its storyline. In this
position paper, we focus on the knowledge acquisition approach and claim that
newswire is a relatively poor source for learning fine-grained causal relations
between everyday events. We describe experiments using an unsupervised method
to learn causal relations between events in the narrative genres of
first-person narratives and film scene descriptions. We show that our method
learns fine-grained causal relations, judged by humans as likely to be causal
over 80% of the time. We also demonstrate that the learned event pairs do not
exist in publicly available event-pair datasets extracted from newswire.
| 2017 | Computation and Language |
Inferring Narrative Causality between Event Pairs in Films | To understand narrative, humans draw inferences about the underlying
relations between narrative events. Cognitive theories of narrative
understanding define these inferences as four different types of causality, ranging from pairs of events A, B where A physically causes B (X drop, X
break), to pairs of events where A causes emotional state B (Y saw X, Y felt
fear). Previous work on learning narrative relations from text has either
focused on "strict" physical causality, or has been vague about what relation
is being learned. This paper learns pairs of causal events from a corpus of
film scene descriptions which are action rich and tend to be told in
chronological order. We show that event pairs induced using our methods are of
high quality and are judged to have a stronger causal relation than event pairs
from Rel-grams.
| 2017 | Computation and Language |
Unsupervised Induction of Contingent Event Pairs from Film Scenes | Human engagement in narrative is partially driven by reasoning about
discourse relations between narrative events, and the expectations about what
is likely to happen next that result from such reasoning. Researchers in NLP
have tackled modeling such expectations from a range of perspectives, including
treating it as the inference of the contingent discourse relation, or as a type
of common-sense causal reasoning. Our approach is to model likelihood between
events by drawing on several of these lines of previous work. We implement and
evaluate different unsupervised methods for learning event pairs that are
likely to be contingent on one another. We refine event pairs that we learn
from a corpus of film scene descriptions utilizing web search counts, and
evaluate our results by collecting human judgments of contingency. Our results
indicate that the use of web search counts increases the average accuracy of
our best method to 85.64% over a baseline of 50%, as compared to an average
accuracy of 75.15% without web search.
| 2013 | Computation and Language |
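A toy sketch loosely related to the contingent event pair induction described in the abstract above, assuming that co-occurrence statistics over ordered event pairs are the core signal: a simplified pointwise-mutual-information-style score over adjacent events. This is illustrative only; it is not the paper's exact measure and omits the web-count refinement.

```python
import math
from collections import Counter

def contingency_scores(event_sequences):
    """Score ordered pairs of adjacent events with a PMI-style statistic (toy version)."""
    unigrams, pairs, total = Counter(), Counter(), 0
    for seq in event_sequences:
        unigrams.update(seq)
        total += len(seq)
        pairs.update(zip(seq, seq[1:]))  # count pairs only in their order of occurrence
    n_pairs = sum(pairs.values())
    return {
        (a, b): math.log((c / n_pairs) / ((unigrams[a] / total) * (unigrams[b] / total)))
        for (a, b), c in pairs.items()
    }

scenes = [["grab gun", "aim gun", "fire gun"], ["aim gun", "fire gun", "fall down"]]
print(sorted(contingency_scores(scenes).items(), key=lambda kv: -kv[1])[:3])
```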
Identifying Products in Online Cybercrime Marketplaces: A Dataset for
Fine-grained Domain Adaptation | One weakness of machine-learned NLP models is that they typically perform
poorly on out-of-domain data. In this work, we study the task of identifying
products being bought and sold in online cybercrime forums, which exhibits
particularly challenging cross-domain effects. We formulate a task that
represents a hybrid of slot-filling information extraction and named entity
recognition and annotate data from four different forums. Each of these forums
constitutes its own "fine-grained domain" in that the forums cover different
market sectors with different properties, even though all forums are in the
broad domain of cybercrime. We characterize these domain differences in the
context of a learning-based system: supervised models see decreased accuracy
when applied to new forums, and standard techniques for semi-supervised
learning and domain adaptation have limited effectiveness on this data, which
suggests the need to improve these techniques. We release a dataset of 1,938
annotated posts from across the four forums.
| 2017 | Computation and Language |
Human and Machine Judgements for Russian Semantic Relatedness | Semantic relatedness of terms represents similarity of meaning by a numerical
score. On the one hand, humans easily make judgments about semantic
relatedness. On the other hand, this kind of information is useful in language
processing systems. While semantic relatedness has been extensively studied for
English using numerous language resources, such as associative norms, human
judgments, and datasets generated from lexical databases, no evaluation
resources of this kind have been available for Russian to date. Our
contribution addresses this problem. We present five language resources of
different scale and purpose for Russian semantic relatedness, each being a list
of triples (word_i, word_j, relatedness_ij). Four of them are designed for
evaluation of systems for computing semantic relatedness, complementing each
other in terms of the semantic relation type they represent. These benchmarks
were used to organize a shared task on Russian semantic relatedness, which
attracted 19 teams. We use one of the best approaches identified in this
competition to generate the fifth high-coverage resource, the first open
distributional thesaurus of Russian. Multiple evaluations of this thesaurus,
including a large-scale crowdsourcing study involving native speakers, indicate
its high accuracy.
| 2016 | Computation and Language |
Learning Lexico-Functional Patterns for First-Person Affect | Informal first-person narratives are a unique resource for computational
models of everyday events and people's affective reactions to them. People
blogging about their day tend not to explicitly say I am happy. Instead they
describe situations from which other humans can readily infer their affective
reactions. However, current sentiment dictionaries are missing much of the
information needed to make similar inferences. We build on recent work that
models affect in terms of lexical predicate functions and affect on the
predicate's arguments. We present a method to learn proxies for these functions
from first-person narratives. We construct a novel fine-grained test set, and
show that the patterns we learn improve our ability to predict first-person
affective reactions to everyday events, from a Stanford sentiment baseline of
.67F to .75F.
| 2017 | Computation and Language |
Transfer Learning across Low-Resource, Related Languages for Neural
Machine Translation | We present a simple method to improve neural translation of a low-resource
language pair using parallel data from a related, also low-resource, language
pair. The method is based on the transfer method of Zoph et al., but whereas
their method ignores any source vocabulary overlap, ours exploits it. First, we
split words using Byte Pair Encoding (BPE) to increase vocabulary overlap.
Then, we train a model on the first language pair and transfer its parameters,
including its source word embeddings, to another model and continue training on
the second language pair. Our experiments show that transfer learning helps
word-based translation only slightly, but when used on top of a much stronger
BPE baseline, it yields larger improvements of up to 4.3 BLEU.
| 2017 | Computation and Language |
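A small sketch of the parameter-transfer step described in the transfer-learning abstract above, assuming NumPy arrays for the embedding matrices: BPE merges shared between the two language pairs let overlapping subwords reuse the parent model's source embeddings, while new subwords are initialized randomly. The function names, shapes, and toy vocabularies are illustrative, not the authors' code.

```python
import numpy as np

def transfer_source_embeddings(parent_emb, parent_vocab, child_vocab, dim=512, seed=0):
    """Copy parent source-embedding rows for BPE subwords shared with the child vocabulary."""
    rng = np.random.default_rng(seed)
    child_emb = rng.normal(scale=0.01, size=(len(child_vocab), dim))  # default init for new subwords
    shared = 0
    for token, child_idx in child_vocab.items():
        if token in parent_vocab:                       # overlap created by shared BPE splits
            child_emb[child_idx] = parent_emb[parent_vocab[token]]
            shared += 1
    print(f"transferred {shared}/{len(child_vocab)} subword embeddings")
    return child_emb

parent_vocab = {"de@@": 0, "ment": 1, "trans@@": 2, "fer": 3}
child_vocab = {"trans@@": 0, "fer": 1, "niz@@": 2}
child_emb = transfer_source_embeddings(np.zeros((4, 512)), parent_vocab, child_vocab)
```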
R$^3$: Reinforced Reader-Ranker for Open-Domain Question Answering | In recent years researchers have achieved considerable success applying
neural network methods to question answering (QA). These approaches have
achieved state of the art results in simplified closed-domain settings such as
the SQuAD (Rajpurkar et al., 2016) dataset, which provides a pre-selected
passage, from which the answer to a given question may be extracted. More
recently, researchers have begun to tackle open-domain QA, in which the model
is given a question and access to a large corpus (e.g., Wikipedia) instead of a
pre-selected passage (Chen et al., 2017a). This setting is more complex as it
requires large-scale search for relevant passages by an information retrieval
component, combined with a reading comprehension model that "reads" the
passages to generate an answer to the question. Performance in this setting
lags considerably behind closed-domain performance. In this paper, we present a
novel open-domain QA system called Reinforced Ranker-Reader $(R^3)$, based on
two algorithmic innovations. First, we propose a new pipeline for open-domain
QA with a Ranker component, which learns to rank retrieved passages in terms of
likelihood of generating the ground-truth answer to a given question. Second,
we propose a novel method that jointly trains the Ranker along with an
answer-generation Reader model, based on reinforcement learning. We report
extensive experimental results showing that our method significantly improves
on the state of the art for multiple open-domain QA datasets.
| 2017 | Computation and Language |
Glyph-aware Embedding of Chinese Characters | Given the advantage and recent success of English character-level and
subword-unit models in several NLP tasks, we consider the equivalent modeling
problem for Chinese. Chinese script is logographic and many Chinese logograms
are composed of common substructures that provide semantic, phonetic and
syntactic hints. In this work, we propose to explicitly incorporate the visual
appearance of a character's glyph in its representation, resulting in a novel
glyph-aware embedding of Chinese characters. Being inspired by the success of
convolutional neural networks in computer vision, we use them to incorporate
the spatio-structural patterns of Chinese glyphs as rendered in raw pixels. In
the context of two basic Chinese NLP tasks of language modeling and word
segmentation, the model learns to represent each character's task-relevant
semantic and syntactic information in the character-level embedding.
| 2018 | Computation and Language |
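A minimal sketch of a glyph-aware character embedding as described in the abstract above, assuming PyTorch and a pre-rendered bitmap per character: a small CNN maps the rendered glyph pixels to an embedding vector. The rendering step, image size, and network shape are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GlyphEmbedding(nn.Module):
    """Map a rendered character glyph (1x24x24 bitmap) to an embedding vector."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 24 -> 12
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 6
        )
        self.proj = nn.Linear(64 * 6 * 6, emb_dim)

    def forward(self, glyphs):               # (batch, 1, 24, 24) rendered character images
        feats = self.conv(glyphs).flatten(1)
        return self.proj(feats)               # (batch, emb_dim) glyph-aware character embeddings

# Example: embeddings for a batch of 10 rendered character glyphs (random pixels as stand-ins).
emb = GlyphEmbedding()(torch.rand(10, 1, 24, 24))
print(emb.shape)  # torch.Size([10, 128])
```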
Linguistic Reflexes of Well-Being and Happiness in Echo | Different theories posit different sources for feelings of well-being and
happiness. Appraisal theory grounds our emotional responses in our goals and
desires and their fulfillment, or lack of fulfillment. Self Determination
theory posits that the basis for well-being rests on our assessment of our
competence, autonomy, and social connection. And surveys that measure happiness
empirically note that people require their basic needs to be met for food and
shelter, but beyond that tend to be happiest when socializing, eating or having
sex. We analyze a corpus of private microblogs from a well-being application
called ECHO, where users label each written post about daily events with a
happiness score between 1 and 9. Our goal is to ground the linguistic
descriptions of events that users experience in theories of well-being and
happiness, and then examine the extent to which different theoretical accounts
can explain the variance in the happiness scores. We show that recurrent event
types, such as OBLIGATION and INCOMPETENCE, which affect people's feelings of
well-being, are not captured in current lexical or semantic resources.
| 2017 | Computation and Language |
Seq2SQL: Generating Structured Queries from Natural Language using
Reinforcement Learning | A significant amount of the world's knowledge is stored in relational
databases. However, the ability for users to retrieve facts from a database is
limited due to a lack of understanding of query languages such as SQL. We
propose Seq2SQL, a deep neural network for translating natural language
questions to corresponding SQL queries. Our model leverages the structure of
SQL queries to significantly reduce the output space of generated queries.
Moreover, we use rewards from in-the-loop query execution over the database to
learn a policy to generate unordered parts of the query, which we show are less
suitable for optimization via cross entropy loss. In addition, we will publish
WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL
queries distributed across 24241 tables from Wikipedia. This dataset is
required to train our model and is an order of magnitude larger than comparable
datasets. By applying policy-based reinforcement learning with a query
execution environment to WikiSQL, our model Seq2SQL outperforms attentional
sequence to sequence models, improving execution accuracy from 35.9% to 59.4%
and logical form accuracy from 23.4% to 48.3%.
| 2017 | Computation and Language |
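A compact sketch of the reinforcement signal described in the Seq2SQL abstract above, assuming PyTorch: the reward comes from comparing the result of executing a generated query against the result of the ground-truth query, and a simple REINFORCE-style loss weights the sequence log-probability by that reward. The execution stub, toy table, and names are hypothetical placeholders, not WikiSQL or the authors' code.

```python
import torch

def execution_reward(pred_sql, gold_sql, execute):
    """+1 if executing the generated query matches the gold result, -1 if not, -2 on error."""
    try:
        return 1.0 if execute(pred_sql) == execute(gold_sql) else -1.0
    except Exception:
        return -2.0

def reinforce_loss(token_log_probs, reward):
    """REINFORCE-style loss: negative sequence log-likelihood scaled by the scalar reward."""
    return -reward * token_log_probs.sum()

# Toy usage with an in-memory "database" (hypothetical stand-in for real query execution).
rows = [("Alice", 30), ("Bob", 25)]
execute = lambda sql: sorted(name for name, age in rows if age > 26) if "age > 26" in sql else []
reward = execution_reward("SELECT name WHERE age > 26", "SELECT name WHERE age > 26", execute)
log_probs = torch.log(torch.tensor([0.9, 0.8, 0.7]))   # stand-in for decoder token log-probs
print(reinforce_loss(log_probs, reward))                # minimizing this raises the likelihood of rewarded queries
```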
Order-Planning Neural Text Generation From Structured Data | Generating texts from structured data (e.g., a table) is important for
various natural language processing tasks such as question answering and dialog
systems. In recent studies, researchers use neural language models and
encoder-decoder frameworks for table-to-text generation. However, these neural
network-based approaches do not model the order of contents during text
generation. When a human writes a summary based on a given table, he or she
would probably consider the content order before wording; in a biography, for example, a person's nationality is typically mentioned before their occupation. In this paper, we propose an order-planning text generation model to capture the relationships between different fields and use those relationships to make the generated text more fluent. We conduct experiments on the WikiBio dataset and achieve significantly higher performance
than previous methods in terms of BLEU, ROUGE, and NIST scores.
| 2017 | Computation and Language |
Variational Inference for Logical Inference | Functional Distributional Semantics is a framework that aims to learn, from
text, semantic representations which can be interpreted in terms of truth. Here
we make two contributions to this framework. The first is to show how a type of
logical inference can be performed by evaluating conditional probabilities. The
second is to make these calculations tractable by means of a variational
approximation. This approximation also enables faster convergence during
training, allowing us to close the gap with state-of-the-art vector space
models when evaluating on semantic similarity. We demonstrate promising
performance on two tasks.
| 2017 | Computation and Language |
Semantic Composition via Probabilistic Model Theory | Semantic composition remains an open problem for vector space models of
semantics. In this paper, we explain how the probabilistic graphical model used
in the framework of Functional Distributional Semantics can be interpreted as a
probabilistic version of model theory. Building on this, we explain how various
semantic phenomena can be recast in terms of conditional probabilities in the
graphical model. This connection between formal semantics and machine learning
is helpful in both directions: it gives us an explicit mechanism for modelling
context-dependent meanings (a challenge for formal semantics), and also gives
us well-motivated techniques for composing distributed representations (a
challenge for distributional semantics). We present results on two datasets
that go beyond word similarity, showing how these semantically-motivated
techniques improve on the performance of vector models.
| 2017 | Computation and Language |
Making "fetch" happen: The influence of social and linguistic context on
nonstandard word growth and decline | In an online community, new words come and go: today's "haha" may be replaced
by tomorrow's "lol." Changes in online writing are usually studied as a social
process, with innovations diffusing through a network of individuals in a
speech community. But unlike other types of innovation, language change is
shaped and constrained by the system in which it takes part. To investigate the
links between social and structural factors in language change, we undertake a
large-scale analysis of nonstandard word growth in the online community Reddit.
We find that dissemination across many linguistic contexts is a sign of growth:
words that appear in more linguistic contexts grow faster and survive longer.
We also find that social dissemination likely plays a less important role in
explaining word growth and decline than previously hypothesized.
| 2018 | Computation and Language |
Query-by-example Spoken Term Detection using Attention-based Multi-hop
Networks | Retrieving spoken content with spoken queries, or query-by-example spoken
term detection (STD), is attractive because it makes possible the matching of
signals directly on the acoustic level without transcribing them into text.
Here, we propose an end-to-end query-by-example STD model based on an
attention-based multi-hop network, whose input is a spoken query and an audio
segment containing several utterances; the output states whether the audio
segment includes the query. The model can be trained in either a supervised
scenario using labeled data, or in an unsupervised fashion. In the supervised
scenario, we find that the attention mechanism and multiple hops improve
performance, and that the attention weights indicate the time span of the
detected terms. In the unsupervised setting, the model mimics the behavior of
the existing query-by-example STD system, yielding performance comparable to
the existing system but with a lower search time complexity.
| 2018 | Computation and Language |
MIT-QCRI Arabic Dialect Identification System for the 2017 Multi-Genre
Broadcast Challenge | In order to successfully annotate the Arabic speech content found in
open-domain media broadcasts, it is essential to be able to process a diverse
set of Arabic dialects. For the 2017 Multi-Genre Broadcast challenge (MGB-3)
there were two possible tasks: Arabic speech recognition, and Arabic Dialect
Identification (ADI). In this paper, we describe our efforts to create an ADI
system for the MGB-3 challenge, with the goal of distinguishing amongst four
major Arabic dialects, as well as Modern Standard Arabic. Our research focused on dialect variability and domain mismatches between the training and test domains. In order to achieve a robust ADI system, we explored both Siamese neural network models, to learn similarities and dissimilarities among Arabic dialects, and i-vector post-processing, to adapt to domain mismatches. Both acoustic and linguistic features were used for the final MGB-3 submissions,
with the best primary system achieving 75% accuracy on the official 10hr test
set.
| 2017 | Computation and Language |
End-to-end Learning for Short Text Expansion | Effectively making sense of short texts is a critical task for many real
world applications such as search engines, social media services, and
recommender systems. The task is particularly challenging as a short text
contains very sparse information, often too sparse for a machine learning
algorithm to pick up useful signals. A common practice for analyzing short text
is to first expand it with external information, which is usually harvested
from a large collection of longer texts. In literature, short text expansion
has been done with all kinds of heuristics. We propose an end-to-end solution
that automatically learns how to expand short text to optimize a given learning
task. A novel deep memory network is proposed to automatically find relevant
information from a collection of longer documents and reformulate the short
text through a gating mechanism. Using short text classification as a
demonstrating task, we show that the deep memory network significantly
outperforms classical text expansion methods with comprehensive experiments on
real world data sets.
| 2017 | Computation and Language |
Arc-Standard Spinal Parsing with Stack-LSTMs | We present a neural transition-based parser for spinal trees, a dependency
representation of constituent trees. The parser uses Stack-LSTMs that compose
constituent nodes with dependency-based derivations. In experiments, we show
that this model adapts to different styles of dependency relations, but this
choice has little effect for predicting constituent structure, suggesting that
LSTMs induce useful states by themselves.
| 2017 | Computation and Language |
Patterns versus Characters in Subword-aware Neural Language Modeling | Words in some natural languages can have a composite structure. Elements of
this structure include the root (that could also be composite), prefixes and
suffixes with which various nuances and relations to other words can be
expressed. Thus, in order to build a proper word representation one must take
into account its internal structure. From a corpus of texts we extract a set of
frequent subwords and from the latter set we select patterns, i.e. subwords
which encapsulate information on character $n$-gram regularities. The selection
is made using the pattern-based Conditional Random Field model with $l_1$
regularization. Further, for every word we construct a new sequence over an
alphabet of patterns. The new alphabet's symbols capture a stronger local statistical context than individual characters, and therefore allow better representations in ${\mathbb{R}}^n$ and serve as better building blocks for word
representation. In the task of subword-aware language modeling, pattern-based
models outperform character-based analogues by 2-20 perplexity points. Also, a
recurrent neural network in which a word is represented as a sum of embeddings
of its patterns is on par with a competitive and significantly more
sophisticated character-based convolutional architecture.
| 2017 | Computation and Language |
Grasping the Finer Point: A Supervised Similarity Network for Metaphor
Detection | The ubiquity of metaphor in our everyday communication makes it an important
problem for natural language understanding. Yet, the majority of metaphor
processing systems to date rely on hand-engineered features and there is still
no consensus in the field as to which features are optimal for this task. In
this paper, we present the first deep learning architecture designed to capture
metaphorical composition. Our results demonstrate that it outperforms the
existing approaches in the metaphor identification task.
| 2017 | Computation and Language |
Challenging Language-Dependent Segmentation for Arabic: An Application
to Machine Translation and Part-of-Speech Tagging | Word segmentation plays a pivotal role in improving any Arabic NLP
application. Therefore, a lot of research has been spent in improving its
accuracy. Off-the-shelf tools, however, are: i) complicated to use and ii)
domain/dialect dependent. We explore three language-independent alternatives to
morphological segmentation using: i) data-driven sub-word units, ii) characters
as a unit of learning, and iii) word embeddings learned using a character CNN
(Convolution Neural Network). On the tasks of Machine Translation and POS
tagging, we found these methods to achieve close to, and occasionally surpass
state-of-the-art performance. In our analysis, we show that a neural machine
translation system is sensitive to the ratio of source and target tokens, and that a ratio close to 1 or greater gives optimal performance.
| 2017 | Computation and Language |
Investigating how well contextual features are captured by
bi-directional recurrent neural network models | Learning algorithms for natural language processing (NLP) tasks traditionally
rely on manually defined relevant contextual features. On the other hand,
neural network models using an only distributional representation of words have
been successfully applied for several NLP tasks. Such models learn features
automatically and avoid explicit feature engineering. Across several domains,
neural models become a natural choice specifically when limited characteristics
of data are known. However, this flexibility comes at the cost of
interpretability. In this paper, we define three different methods to
investigate the ability of bi-directional recurrent neural networks (RNNs) in
capturing contextual features. In particular, we analyze RNNs for sequence
tagging tasks. We perform a comprehensive analysis on general as well as
biomedical domain datasets. Our experiments focus on important contextual words
as features, which can easily be extended to analyze various other feature
types. We also investigate positional effects of context words and show how the
developed methods can be used for error analysis.
| 2017 | Computation and Language |
Disentangling ASR and MT Errors in Speech Translation | The main aim of this paper is to investigate automatic quality assessment for
spoken language translation (SLT). More precisely, we investigate SLT errors
that can be due to transcription (ASR) or to translation (MT) modules. This
paper investigates automatic detection of SLT errors using a single classifier
based on joint ASR and MT features. We evaluate both 2-class (good/bad) and
3-class (good/badASR/badMT) labeling tasks. The 3-class problem requires disentangling ASR and MT errors in the speech translation output, and we propose two label extraction methods for this non-trivial step. This enables - as a by-product - a qualitative analysis of the SLT errors and their origin (are they due to the transcription or the translation step?) on our large in-house corpus
for French-to-English speech translation.
| 2017 | Computation and Language |
Understanding the Logical and Semantic Structure of Large Documents | Current language understanding approaches focus on small documents, such as
newswire articles, blog posts, product reviews and discussion forum entries.
Understanding and extracting information from large documents like legal
briefs, proposals, technical manuals and research articles is still a
challenging task. We describe a framework that can analyze a large document and help people locate particular information within it. We aim
to automatically identify and classify semantic sections of documents and
assign consistent and human-understandable labels to similar sections across
documents. A key contribution of our research is modeling the logical and
semantic structure of an electronic document. We apply machine learning
techniques, including deep learning, in our prototype system. We also make
available a dataset of information about a collection of scholarly articles
from the arXiv eprints collection that includes a wide range of metadata for
each article, including a table of contents, section labels, section
summarizations and more. We hope that this dataset will be a useful resource
for the machine learning and NLP communities in information retrieval,
content-based question answering and language modeling.
| 2017 | Computation and Language |
From Review to Rating: Exploring Dependency Measures for Text
Classification | Various text analysis techniques exist, which attempt to uncover unstructured
information from text. In this work, we explore using statistical dependence
measures for textual classification, representing text as word vectors. Student
satisfaction scores on a 3-point scale and their free text comments written
about university subjects are used as the dataset. We have compared two textual
representations: a frequency word representation and term frequency
relationship to word vectors, and found that word vectors provide greater accuracy. However, these word vectors have a large number of features, which
aggravates the burden of computational complexity. Thus, we explored using a
non-linear dependency measure for feature selection by maximizing the
dependence between the text reviews and corresponding scores. Our quantitative
and qualitative analysis on a student satisfaction dataset shows that our
approach achieves comparable accuracy to the full feature vector, while being
an order of magnitude faster in testing. These text analysis and feature
reduction techniques can be used for other textual data applications such as
sentiment analysis.
| 2018 | Computation and Language |
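A brief sketch of dependence-based feature selection in the spirit of the approach above, assuming scikit-learn: mutual information is used here as the non-linear dependency measure between features and scores, which is a stand-in rather than the specific measure the authors maximize, and the data is synthetic.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in: 200 reviews as 300-dimensional word-vector features, 3-point scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))
y = rng.integers(1, 4, size=200)            # satisfaction scores 1-3

# Keep the 50 features with the highest estimated dependence on the score.
selector = SelectKBest(score_func=mutual_info_classif, k=50).fit(X, y)
X_small = selector.transform(X)
print(X.shape, "->", X_small.shape)          # (200, 300) -> (200, 50)
```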
Hypothesis Testing based Intrinsic Evaluation of Word Embeddings | We introduce the cross-match test - an exact, distribution-free, high-dimensional hypothesis test - as an intrinsic evaluation metric for word
embeddings. We show that cross-match is an effective means of measuring
distributional similarity between different vector representations and of
evaluating the statistical significance of different vector embedding models.
Additionally, we find that cross-match can be used to provide a quantitative
measure of linguistic similarity for selecting bridge languages for machine
translation. We demonstrate that the results of the hypothesis test align with
our expectations and note that the framework of two sample hypothesis testing
is not limited to word embeddings and can be extended to all vector
representations.
| 2017 | Computation and Language |
Learning Word Embeddings from the Portuguese Twitter Stream: A Study of
some Practical Aspects | This paper describes a preliminary study for producing and distributing a
large-scale database of embeddings from the Portuguese Twitter stream. We start
by experimenting with a relatively small sample and focusing on three
challenges: volume of training data, vocabulary size and intrinsic evaluation
metrics. Using a single GPU, we were able to scale up vocabulary size from 2048
words embedded and 500K training examples to 32768 words over 10M training
examples while keeping a stable validation loss and an approximately linear trend in training time per epoch. We also observed that using less than 50% of the
available training examples for each vocabulary size might result in
overfitting. Results on intrinsic evaluation show promising performance for a
vocabulary size of 32768 words. Nevertheless, intrinsic evaluation metrics
suffer from over-sensitivity to their corresponding cosine similarity
thresholds, indicating that a wider range of metrics need to be developed to
track progress.
| 2017 | Computation and Language |
Getting Reliable Annotations for Sarcasm in Online Dialogues | The language used in online forums differs in many ways from that of
traditional language resources such as news. One difference is the use and
frequency of nonliteral, subjective dialogue acts such as sarcasm. Whether the
aim is to develop a theory of sarcasm in dialogue, or engineer automatic
methods for reliably detecting sarcasm, a major challenge is simply the
difficulty of getting enough reliably labelled examples. In this paper we
describe our work on methods for achieving highly reliable sarcasm annotations
from untrained annotators on Mechanical Turk. We explore the use of a number of
common statistical reliability measures, such as Kappa, Karger's, Majority
Class, and EM. We show that more sophisticated measures do not appear to yield
better results for our data than simple measures such as assuming that the
correct label is the one that a majority of Turkers apply.
| 2017 | Computation and Language |
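A tiny sketch of the simple majority-class aggregation that the study above found competitive with more sophisticated reliability measures, in plain Python; the annotation format (one list of Turker labels per item) is an assumption for illustration.

```python
from collections import Counter

def majority_label(labels):
    """Return the label chosen by the most annotators, and its share of the votes."""
    (label, count), = Counter(labels).most_common(1)
    return label, count / len(labels)

item_annotations = {
    "post_1": ["sarcastic", "sarcastic", "not_sarcastic", "sarcastic", "sarcastic"],
    "post_2": ["not_sarcastic", "not_sarcastic", "sarcastic", "not_sarcastic", "not_sarcastic"],
}
for item, labels in item_annotations.items():
    label, agreement = majority_label(labels)
    print(item, label, f"{agreement:.0%} agreement")
```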
A Unified Query-based Generative Model for Question Generation and
Question Answering | We propose a query-based generative model for solving both tasks of question
generation (QG) and question answering (QA). The model follows the classic encoder-decoder framework. The encoder takes a passage and a query as input, then performs query understanding by matching the query with the passage from multiple perspectives. The decoder is an attention-based Long Short-Term Memory (LSTM) model with copy and coverage mechanisms. In the QG task, a question is generated by the system given the passage and the target answer, whereas in the QA task, the answer is generated given the question and the passage. During the training stage, we leverage a policy-gradient reinforcement learning algorithm to overcome exposure bias, a major problem resulting from sequence learning with cross-entropy loss. For the QG task, our experiments show higher performance than the state-of-the-art results. When used as additional training data, the automatically generated questions even improve the performance of a strong extractive QA system. In addition, our model shows better performance than the state-of-the-art baselines on the
generative QA task.
| 2018 | Computation and Language |
Do latent tree learning models identify meaningful structure in
sentences? | Recent work on the problem of latent tree learning has made it possible to
train neural networks that learn to both parse a sentence and use the resulting
parse to interpret the sentence, all without exposure to ground-truth parse
trees at training time. Surprisingly, these models often perform better at
sentence understanding tasks than models that use parse trees from conventional
parsers. This paper aims to investigate what these latent tree learning models
learn. We replicate two such models in a shared codebase and find that (i) only
one of these models outperforms conventional tree-structured models on sentence
classification, (ii) its parsing strategies are not especially consistent
across random restarts, (iii) the parses it produces tend to be shallower than
standard Penn Treebank (PTB) parses, and (iv) they do not resemble those of PTB
or any other semantic or syntactic formalism that the authors are aware of.
| 2018 | Computation and Language |
Learning Neural Word Salience Scores | Measuring the salience of a word is an essential step in numerous NLP tasks.
Heuristic approaches such as tfidf have been used so far to estimate the
salience of words. We propose Neural Word Salience (NWS) scores, which, unlike heuristics, are learnt from a corpus. Specifically, we learn word salience scores such that, using pre-trained word embeddings as the input, they can accurately predict the words that appear in a sentence, given the words that appear in the sentences preceding or succeeding that sentence. Experimental
results on sentence similarity prediction show that the learnt word salience
scores perform comparably or better than some of the state-of-the-art
approaches for representing sentences on benchmark datasets for sentence
similarity, while using only a fraction of the training and prediction times
required by prior methods. Moreover, our NWS scores positively correlate with
psycholinguistic measures such as concreteness and imageability, implying a close connection to salience as perceived by humans.
| 2017 | Computation and Language |
Satirical News Detection and Analysis using Attention Mechanism and
Linguistic Features | Satirical news is considered to be entertainment, but it is potentially
deceptive and harmful. Although the genre is embedded in the article, not everyone can recognize the satirical cues and may therefore believe the news to be true. We observe that satirical cues are often reflected in certain paragraphs rather than in the whole document. Existing works only consider document-level features to detect satire, which can be limiting. We consider paragraph-level linguistic features to unveil the satire, incorporating a neural network and an attention mechanism. We investigate the difference between paragraph-level
features and document-level features, and analyze them on a large satirical
news dataset. The evaluation shows that the proposed model detects satirical
news effectively and reveals what features are important at which level.
| 2017 | Computation and Language |
Compositional Approaches for Representing Relations Between Words: A
Comparative Study | Identifying the relations that exist between words (or entities) is important
for various natural language processing tasks such as relational search,
noun-modifier classification and analogy detection. A popular approach to
represent the relations between a pair of words is to extract from a corpus the patterns in which the two words co-occur, and to assign each word pair a vector of pattern frequencies. Despite the simplicity of this approach, it suffers
from data sparseness, information scalability and linguistic creativity as the
model is unable to handle previously unseen word pairs in a corpus. In
contrast, a compositional approach for representing relations between words
overcomes these issues by using the attributes of each individual word to
indirectly compose a representation for the common relations that hold between
the two words. This study aims to compare different operations for creating
relation representations from word-level representations. We investigate the
performance of the compositional methods by measuring the relational
similarities using several benchmark datasets for word analogy. Moreover, we
evaluate the different relation representations in a knowledge base completion
task.
| 2017 | Computation and Language |
Using $k$-way Co-occurrences for Learning Word Embeddings | Co-occurrences between two words provide useful insights into the semantics
of those words. Consequently, numerous prior works on word embedding learning have used co-occurrences between two words as the training signal for learning
word embeddings. However, in natural language texts it is common for multiple
words to be related and co-occurring in the same context. We extend the notion
of co-occurrences to cover $k(\geq\!\!2)$-way co-occurrences among a set of
$k$-words. Specifically, we prove a theoretical relationship between the joint
probability of $k(\geq\!\!2)$ words, and the sum of $\ell_2$ norms of their
embeddings. Next, we propose a learning objective motivated by our theoretical
result that utilises $k$-way co-occurrences for learning word embeddings. Our
experimental results show that the derived theoretical relationship does indeed
hold empirically, and despite data sparsity, for some smaller $k$ values,
$k$-way embeddings perform comparably or better than $2$-way embeddings in a
range of tasks.
| 2017 | Computation and Language |
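A small sketch of extracting k-way co-occurrences from text with a sliding context window, in plain Python, related to the abstract above; the window size, the use of unordered k-sets, and the overlapping-window counting are assumptions for illustration, not the exact extraction procedure used in the paper.

```python
from collections import Counter
from itertools import combinations

def k_way_cooccurrences(tokenized_sentences, k=3, window=5):
    """Count unordered k-sets of words that co-occur within a sliding window (toy version)."""
    counts = Counter()
    for tokens in tokenized_sentences:
        for start in range(len(tokens)):
            ctx = sorted(set(tokens[start:start + window]))
            counts.update(combinations(ctx, k))
    return counts

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
]
print(k_way_cooccurrences(corpus, k=3).most_common(3))
```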
Optimizing for Measure of Performance in Max-Margin Parsing | Many statistical learning problems in the area of natural language processing
including sequence tagging, sequence segmentation and syntactic parsing has
been successfully approached by means of structured prediction methods. An
appealing property of the corresponding discriminative learning algorithms is
their ability to integrate the loss function of interest directly into the
optimization process, which potentially can increase the resulting performance
accuracy. Here, we demonstrate on the example of constituency parsing how to
optimize for F1-score in the max-margin framework of structural SVM. In
particular, the optimization is with respect to the original (not binarized)
trees.
| 2017 | Computation and Language |
Sequence Prediction with Neural Segmental Models | Segments that span contiguous parts of inputs, such as phonemes in speech,
named entities in sentences, actions in videos, occur frequently in sequence
prediction problems. Segmental models, a class of models that explicitly
hypothesizes segments, have allowed the exploration of rich segment features
for sequence prediction. However, segmental models suffer from slow decoding,
hampering the use of computationally expensive features.
In this thesis, we introduce discriminative segmental cascades, a multi-pass
inference framework that allows us to improve accuracy by adding higher-order
features and neural segmental features while maintaining efficiency. We also
show that instead of including more features to obtain better accuracy,
segmental cascades can be used to speed up training and decoding.
Segmental models, similarly to conventional speech recognizers, are typically
trained in multiple stages. In the first stage, a frame classifier is trained
with manual alignments, and then in the second stage, segmental models are
trained with manual alignments and the outputs of the frame classifier. However, obtaining manual alignments is time-consuming and expensive. We
explore end-to-end training for segmental models with various loss functions,
and show how end-to-end training with marginal log loss can eliminate the need
for detailed manual alignments.
We draw the connections between the marginal log loss and a popular
end-to-end training approach called connectionist temporal classification. We
present a unifying framework for various end-to-end graph search-based models,
such as hidden Markov models, connectionist temporal classification, and
segmental models. Finally, we discuss possible extensions of segmental models
to large-vocabulary sequence prediction tasks.
| 2018 | Computation and Language |
The Voynich Manuscript is Written in Natural Language: The Pahlavi
Hypothesis | The late medieval Voynich Manuscript (VM) has resisted decryption and was
considered a meaningless hoax or an unsolvable cipher. Here, we provide
evidence that the VM is written in natural language by establishing a relation between the Voynich alphabet and the Iranian Pahlavi script. Many of the Voynich
characters are upside-down versions of their Pahlavi counterparts, which may be
an effect of different writing directions. Other Voynich letters can be
explained as ligatures or departures from Pahlavi with the intent to cope with
known problems due to the stupendous ambiguity of Pahlavi text. While a
translation of the VM text is not attempted here, we can confirm the
Voynich-Pahlavi relation at the character level by the transcription of many
words from the VM illustrations and from parts of the main text. Many of the
transcribed words can be identified as terms from Zoroastrian cosmology, which
is in line with the use of Pahlavi script in Zoroastrian communities from
medieval times.
| 2017 | Computation and Language |
A Neural Language Model for Dynamically Representing the Meanings of
Unknown Words and Entities in a Discourse | This study addresses the problem of identifying the meaning of unknown words
or entities in a discourse with respect to the word embedding approaches used
in neural language models. We propose a method for on-the-fly construction and
exploitation of word embeddings in both the input and output layers of a neural
model by tracking contexts. This extends the dynamic entity representation used
in Kobayashi et al. (2016) and incorporates a copy mechanism proposed
independently by Gu et al. (2016) and Gulcehre et al. (2016). In addition, we
construct a new task and dataset called Anonymized Language Modeling for
evaluating the ability to capture word meanings while reading. Experiments
conducted using our novel dataset show that the proposed variant of RNN
language model outperformed the baseline model. Furthermore, the experiments
also demonstrate that dynamic updates of an output layer help a model predict
reappearing entities, whereas those of an input layer are effective to predict
words following reappearing entities.
| 2,017 | Computation and Language |
Spoken English Intelligibility Remediation with PocketSphinx Alignment
and Feature Extraction Improves Substantially over the State of the Art | We use automatic speech recognition to assess spoken English learner
pronunciation based on the authentic intelligibility of the learners' spoken
responses, as determined from support vector machine (SVM) classifier or deep
learning neural network model predictions of transcription correctness. Using
numeric features produced by PocketSphinx alignment mode and many recognition
passes searching for the substitution and deletion of each expected phoneme and
insertion of unexpected phonemes in sequence, the SVM models achieve 82 percent
agreement with the accuracy of Amazon Mechanical Turk crowdworker
transcriptions, up from 75 percent reported by multiple independent
researchers. Using such features with SVM classifier probability prediction
models can help computer-aided pronunciation teaching (CAPT) systems provide
intelligibility remediation.
| 2,018 | Computation and Language |
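A minimal sketch of the SVM probability-prediction step described in the abstract above, assuming a hypothetical matrix of numeric alignment features and binary transcription-correctness labels; the feature values, sizes, and labels are illustrative placeholders, not the authors' actual PocketSphinx outputs or data:

```python
# Sketch: SVM with probability outputs over hypothetical alignment features.
# Each row stands in for numeric scores from forced alignment (illustrative only).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                    # hypothetical per-response alignment features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # stand-in for "transcribed correctly"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)

# Probability of intelligibility (transcription correctness) for each held-out response.
probs = clf.predict_proba(X_te)[:, 1]
print("mean predicted intelligibility:", probs.mean())
print("agreement with held-out labels:", clf.score(X_te, y_te))
```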
Information-Propogation-Enhanced Neural Machine Translation by Relation
Model | Although sequence-to-sequence neural machine translation (NMT) models have
achieved state-of-the-art performance in recent years, it is widely
recognized that recurrent neural network (RNN) units struggle to capture
long-distance state information, which means an RNN can hardly model features
with long-term dependencies as the sequence becomes longer. Similarly,
convolutional neural networks (CNNs) have recently been introduced into NMT
for speed, but a CNN focuses on capturing local features of the sequence. To
relieve this issue, we incorporate a relation network into the standard
encoder-decoder framework to enhance information propagation in the neural
network, ensuring that the information of the source sentence can flow into
the decoder adequately. Experiments show that the proposed framework
outperforms the statistical MT model and the state-of-the-art NMT model
significantly on two data sets of different scales.
| 2,018 | Computation and Language |
Depression and Self-Harm Risk Assessment in Online Forums | Users suffering from mental health conditions often turn to online resources
for support, including specialized online support communities or general
communities such as Twitter and Reddit. In this work, we present a neural
framework for supporting and studying users in both types of communities. We
propose methods for identifying posts in support communities that may indicate
a risk of self-harm, and demonstrate that our approach outperforms strong
previously proposed methods for identifying such posts. Self-harm is closely
related to depression, which makes identifying depressed users on general
forums a crucial related task. We introduce a large-scale general forum dataset
("RSDD") consisting of users with self-reported depression diagnoses matched
with control users. We show how our method can be applied to effectively
identify depressed users from their use of language alone. We demonstrate that
our method outperforms strong baselines on this general forum dataset.
| 2,017 | Computation and Language |
Measuring the Similarity of Sentential Arguments in Dialog | When people converse about social or political topics, similar arguments are
often paraphrased by different speakers, across many different conversations.
Debate websites produce curated summaries of arguments on such topics; these
summaries typically consist of lists of sentences that represent frequently
paraphrased propositions, or labels capturing the essence of one particular
aspect of an argument, e.g. Morality or Second Amendment. We call these
frequently paraphrased propositions ARGUMENT FACETS. Like these curated sites,
our goal is to induce and identify argument facets across multiple
conversations, and produce summaries. However, we aim to do this automatically.
We frame the problem as consisting of two steps: we first extract sentences
that express an argument from raw social media dialogs, and then rank the
extracted arguments in terms of their similarity to one another. Sets of
similar arguments are used to represent argument facets. We show here that we
can predict ARGUMENT FACET SIMILARITY with a correlation averaging 0.63
compared to a human topline averaging 0.68 over three debate topics, easily
beating several reasonable baselines.
| 2,018 | Computation and Language |
Language Modeling by Clustering with Word Embeddings for Text
Readability Assessment | We present a clustering-based language model using word embeddings for text
readability prediction. Presumably, a Euclidean semantic space hypothesis
holds true for word embeddings whose training is done by observing word
co-occurrences. We argue that clustering with word embeddings in the metric
space should yield feature representations in a higher semantic space
appropriate for text regression. Also, by representing features in terms of
histograms, our approach can naturally address documents of varying lengths. An
empirical evaluation using the Common Core Standards corpus reveals that the
features formed on our clustering-based language model significantly improve
the previously known results for the same corpus in readability prediction. We
also evaluate the task of sentence matching based on semantic relatedness using
the Wiki-SimpleWiki corpus and find that our features lead to superior matching
performance.
| 2,017 | Computation and Language |
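A minimal sketch of the clustering-plus-histogram featurization described above, assuming toy random word vectors in place of trained embeddings; the cluster count and vocabulary are placeholders, not the paper's configuration:

```python
# Sketch: cluster word embeddings, then represent each document as a normalized
# histogram of cluster assignments (toy vectors stand in for trained embeddings).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "quantum", "entropy", "reads"]
emb = {w: rng.normal(size=16) for w in vocab}        # placeholder word embeddings

k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.array(list(emb.values())))
word2cluster = {w: c for w, c in zip(emb, km.labels_)}

def doc_histogram(tokens):
    """Normalized histogram over embedding clusters; handles varying document lengths."""
    hist = np.zeros(k)
    for t in tokens:
        if t in word2cluster:
            hist[word2cluster[t]] += 1
    return hist / max(hist.sum(), 1)

print(doc_histogram("the cat sat on the mat".split()))
print(doc_histogram("quantum entropy".split()))
```

The resulting fixed-length histograms can then be fed to any regressor for readability prediction.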
A Semi-Supervised Approach to Detecting Stance in Tweets | Stance classification aims to identify, for a particular issue under
discussion, whether the speaker or author of a conversational turn has Pro
(Favor) or Con (Against) stance on the issue. Detecting stance in tweets is a
new task proposed for SemEval-2016 Task6, involving predicting stance for a
dataset of tweets on the topics of abortion, atheism, climate change, feminism
and Hillary Clinton. Given the small size of the dataset, our team created our
own topic-specific training corpus by developing a set of high precision
hashtags for each topic that were used to query the Twitter API, with the aim
of developing a large training corpus without additional human labeling of
tweets for stance. The hashtags selected for each topic were predicted to be
stance-bearing on their own. Experimental results demonstrate good performance
for our features for opinion-target pairs based on generalizing dependency
features using sentiment lexicons.
| 2,016 | Computation and Language |
Towards Neural Machine Translation with Latent Tree Attention | Building models that take advantage of the hierarchical structure of language
without a priori annotation is a longstanding goal in natural language
processing. We introduce such a model for the task of machine translation,
pairing a recurrent neural network grammar encoder with a novel attentional
RNNG decoder and applying policy gradient reinforcement learning to induce
unsupervised tree structures on both the source and target. When trained on
character-level datasets with no explicit segmentation or parse annotation, the
model learns a plausible segmentation and shallow parse, obtaining performance
close to an attentional baseline.
| 2,017 | Computation and Language |
"Having 2 hours to write a paper is fun!": Detecting Sarcasm in
Numerical Portions of Text | Sarcasm occurring due to the presence of numerical portions in text has been
quoted as an error made by automatic sarcasm detection approaches in the past.
We present a first study in detecting sarcasm in numbers, as in the case of the
sentence 'Love waking up at 4 am'. We analyze the challenges of the problem,
and present Rule-based, Machine Learning and Deep Learning approaches to detect
sarcasm in numerical portions of text. Our Deep Learning approach outperforms
four past works for sarcasm detection and Rule-based and Machine learning
approaches on a dataset of tweets, obtaining an F1-score of 0.93. This shows
that special attention to text containing numbers may be useful to improve
the state of the art in sarcasm detection.
| 2,017 | Computation and Language |
Translating Terminological Expressions in Knowledge Bases with Neural
Machine Translation | Our work presented in this paper focuses on the translation of terminological
expressions represented in semantically structured resources, like ontologies
or knowledge graphs. The challenge of translating ontology labels or
terminological expressions documented in knowledge bases lies in the highly
specific vocabulary and the lack of contextual information, which can guide a
machine translation system to translate ambiguous words into the targeted
domain. Due to these challenges, we evaluate the translation quality of
domain-specific expressions in the medical and financial domain with
statistical as well as neural machine translation methods, and experiment with
domain adaptation of the translation models using terminological expressions
only. Furthermore, we perform experiments on the injection of external
terminological expressions into the translation systems. Through these
experiments, we observed a significant advantage in domain adaptation for the
domain-specific resource in the medical and financial domain and the benefit of
subword models over word-based neural machine translation models for
terminology translation.
| 2,019 | Computation and Language |
Leveraging Discourse Information Effectively for Authorship Attribution | We explore techniques to maximize the effectiveness of discourse information
in the task of authorship attribution. We present a novel method to embed
discourse features in a Convolutional Neural Network text classifier, which
achieves a state-of-the-art result by a substantial margin. We empirically
investigate several featurization methods to understand the conditions under
which discourse features contribute non-trivial performance gains, and analyze
discourse embeddings.
| 2,017 | Computation and Language |
Cynical Selection of Language Model Training Data | The Moore-Lewis method of "intelligent selection of language model training
data" is very effective, cheap, efficient... and also has structural problems.
(1) The method defines relevance by playing language models trained on the
in-domain and the out-of-domain (or data pool) corpora against each other. This
powerful idea-- which we set out to preserve-- treats the two corpora as the
opposing ends of a single spectrum. This lack of nuance does not allow for the
two corpora to be very similar. In the extreme case where they come from the
same distribution, all of the sentences have a Moore-Lewis score of zero, so
there is no resulting ranking. (2) The selected sentences are not guaranteed to
be able to model the in-domain data, nor to even cover the in-domain data. They
are simply well-liked by the in-domain model; this is necessary, but not
sufficient. (3) There is no way to tell what is the optimal number of sentences
to select, short of picking various thresholds and building the systems.
We present a greedy, lazy, approximate, and generally efficient
information-theoretic method of accomplishing the same goal using only
vocabulary counts. The method has the following properties: (1) Is responsive
to the extent to which two corpora differ. (2) Quickly reaches near-optimal
vocabulary coverage. (3) Takes into account what has already been selected. (4)
Does not involve defining any kind of domain, nor any kind of classifier. (5)
Knows approximately when to stop. This method can be used as an
inherently-meaningful measure of similarity, as it measures the bits of
information to be gained by adding one text to another.
| 2,017 | Computation and Language |
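For context, the Moore-Lewis cross-entropy-difference scoring that the abstract critiques can be sketched as follows, assuming simple add-one-smoothed unigram language models; the toy corpora are invented for illustration, and this is not the cynical selection method itself:

```python
# Sketch: Moore-Lewis-style data selection via cross-entropy difference.
# Sentences with the lowest H_in(s) - H_out(s) are the most "in-domain like".
import math
from collections import Counter

def unigram_lm(sentences):
    counts = Counter(w for s in sentences for w in s.split())
    total = sum(counts.values())
    vocab = len(counts) + 1                       # +1 for unseen words (add-one smoothing)
    return lambda w: (counts[w] + 1) / (total + vocab)

def cross_entropy(sentence, lm):
    words = sentence.split()
    return -sum(math.log2(lm(w)) for w in words) / max(len(words), 1)

in_domain = ["the patient shows symptoms", "dosage of the drug"]
data_pool = ["the parliament voted today", "the patient received the drug",
             "stock prices fell sharply"]

lm_in, lm_out = unigram_lm(in_domain), unigram_lm(data_pool)
scored = sorted(data_pool, key=lambda s: cross_entropy(s, lm_in) - cross_entropy(s, lm_out))
print(scored)   # lowest score first: most in-domain-like sentences
```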
A Deep Reinforcement Learning Chatbot | We present MILABOT: a deep reinforcement learning chatbot developed by the
Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize
competition. MILABOT is capable of conversing with humans on popular small talk
topics through both speech and text. The system consists of an ensemble of
natural language generation and retrieval models, including template-based
models, bag-of-words models, sequence-to-sequence neural network and latent
variable neural network models. By applying reinforcement learning to
crowdsourced data and real-world user interactions, the system has been trained
to select an appropriate response from the models in its ensemble. The system
has been evaluated through A/B testing with real-world users, where it
performed significantly better than many competing systems. Due to its machine
learning architecture, the system is likely to improve with additional data.
| 2,017 | Computation and Language |
Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture.
| 2,018 | Computation and Language |
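A simplified sketch of the kind of recurrence the abstract describes: every matrix multiplication depends only on the inputs, so it can be computed for all timesteps at once, leaving only cheap element-wise work in the sequential loop. This is an illustrative reduction, not the paper's exact SRU formulation:

```python
# Sketch: a light recurrent unit whose matrix multiplies are independent of the
# previous hidden state, leaving only element-wise work in the sequential loop.
import numpy as np

def sru_like(x, W, Wf, bf, Wr, br):
    """x: (T, d_in); returns hidden states (T, d). Illustrative, not the exact SRU."""
    xt = x @ W                                    # candidate states for all timesteps, (T, d)
    f = 1.0 / (1.0 + np.exp(-(x @ Wf + bf)))      # forget gates, (T, d)
    r = 1.0 / (1.0 + np.exp(-(x @ Wr + br)))      # highway gates, (T, d)

    c = np.zeros(xt.shape[1])
    hs = []
    for t in range(x.shape[0]):                   # only cheap element-wise ops remain here
        c = f[t] * c + (1.0 - f[t]) * xt[t]
        hs.append(r[t] * np.tanh(c) + (1.0 - r[t]) * xt[t])
    return np.stack(hs)

rng = np.random.default_rng(0)
T, d_in, d = 5, 8, 8
shapes = [(d_in, d), (d_in, d), (d,), (d_in, d), (d,)]
params = [rng.normal(size=s) * 0.1 for s in shapes]
print(sru_like(rng.normal(size=(T, d_in)), *params).shape)   # (5, 8)
```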
A Statistical Comparison of Some Theories of NP Word Order | A frequent object of study in linguistic typology is the order of elements
{demonstrative, adjective, numeral, noun} in the noun phrase. The goal is to
predict the relative frequencies of these orders across languages. Here we use
Poisson regression to statistically compare some prominent accounts of this
variation. We compare feature systems derived from Cinque (2005) to feature
systems given in Cysouw (2010) and Dryer (in prep). In this setting, we do not
find clear reasons to prefer the model of Cinque (2005) or Dryer (in prep), but
we find both of these models have substantially better fit to the typological
data than the model from Cysouw (2010).
| 2,017 | Computation and Language |
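The comparison described above amounts to fitting Poisson regressions to typological order counts under different feature systems and comparing model fit. A minimal sketch with statsmodels, using invented counts and invented binary feature matrices rather than the actual feature systems of Cinque, Cysouw, or Dryer:

```python
# Sketch: compare feature systems by Poisson-regression fit to order frequencies.
# Counts and feature matrices are invented for illustration only.
import numpy as np
import statsmodels.api as sm

counts = np.array([120, 45, 8, 60, 3, 14])          # hypothetical frequencies of 6 NP orders
feats_a = np.array([[1, 0], [1, 1], [0, 1],          # hypothetical feature system A
                    [1, 0], [0, 0], [0, 1]])
feats_b = np.array([[1], [0], [0], [1], [0], [1]])   # hypothetical feature system B

def fit_poisson(X, y):
    model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
    return model.fit()

res_a, res_b = fit_poisson(feats_a, counts), fit_poisson(feats_b, counts)
# Lower AIC indicates the feature system explains the typological counts better.
print("AIC system A:", res_a.aic)
print("AIC system B:", res_b.aic)
```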
Globally Normalized Reader | Rapid progress has been made towards question answering (QA) systems that can
extract answers from text. Existing neural approaches make use of expensive
bi-directional attention mechanisms or score all possible answer spans,
limiting scalability. We propose instead to cast extractive QA as an iterative
search problem: select the answer's sentence, start word, and end word. This
representation reduces the space of each search step and allows computation to
be conditionally allocated to promising search paths. We show that globally
normalizing the decision process and back-propagating through beam search makes
this representation viable and learning efficient. We empirically demonstrate
the benefits of this approach using our model, Globally Normalized Reader
(GNR), which achieves the second highest single model performance on the
Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster
than bi-attention-flow. We also introduce a data-augmentation method to produce
semantically valid examples by aligning named entities to a knowledge base and
swapping them with new entities of the same type. This method improves the
performance of all models considered in this work and is of independent
interest for a variety of NLP tasks.
| 2,017 | Computation and Language |
Combining LSTM and Latent Topic Modeling for Mortality Prediction | There is a great need for technologies that can predict the mortality of
patients in intensive care units with both high accuracy and accountability. We
present joint end-to-end neural network architectures that combine long
short-term memory (LSTM) and a latent topic model to simultaneously train a
classifier for mortality prediction and learn latent topics indicative of
mortality from textual clinical notes. For topic interpretability, the topic
modeling layer has been carefully designed as a single-layer network with
constraints inspired by LDA. Experiments on the MIMIC-III dataset show that our
models significantly outperform prior models that are based on LDA topics in
mortality prediction. However, we achieve limited success with our method for
interpreting topics from the trained models by looking at the neural network
weights.
| 2,017 | Computation and Language |
CLaC at SemEval-2016 Task 11: Exploring linguistic and psycho-linguistic
Features for Complex Word Identification | This paper describes the system deployed by the CLaC-EDLK team to the
"SemEval 2016, Complex Word Identification task". The goal of the task is to
identify if a given word in a given context is "simple" or "complex". Our
system relies on linguistic features and cognitive complexity. We used several
supervised models, however the Random Forest model outperformed the others.
Overall our best configuration achieved a G-score of 68.8% in the task, ranking
our system 21 out of 45.
| 2,017 | Computation and Language |
Semi-Supervised Instance Population of an Ontology using Word Vector
Embeddings | In many modern day systems such as information extraction and knowledge
management agents, ontologies play a vital role in maintaining the concept
hierarchies of the selected domain. However, ontology population has become a
problematic process due to its heavy reliance on manual human
intervention. Word embeddings have become a popular topic in natural language
processing due to their ability to cope with
semantic sensitivity. Hence, in this study, we propose a novel way of
semi-supervised ontology population through word embeddings as the basis. We
built several models including traditional benchmark models and new types of
models which are based on word embeddings. Finally, we ensemble them together
to come up with a synergistic model with better accuracy. We demonstrate that
our ensemble model can outperform the individual models.
| 2,019 | Computation and Language |
Steering Output Style and Topic in Neural Response Generation | We propose simple and flexible training and decoding methods for influencing
output style and topic in neural encoder-decoder based language generation.
This capability is desirable in a variety of applications, including
conversational systems, where successful agents need to produce language in a
specific style and generate responses steered by a human puppeteer or external
knowledge. We decompose the neural generation process into empirically easier
sub-problems: a faithfulness model and a decoding method based on
selective-sampling. We also describe training and sampling algorithms that bias
the generation process with a specific language style restriction, or a topic
restriction. Human evaluation results show that our proposed methods are able
to restrict style and topic without degrading output quality in conversational
tasks.
| 2,017 | Computation and Language |
Abductive Matching in Question Answering | We study question-answering over semi-structured data. We introduce a new way
to apply the technique of semantic parsing by applying machine learning only to
provide annotations that the system infers to be missing; all the other parsing
logic is in the form of manually authored rules. In effect, the machine
learning is used to provide non-syntactic matches, a step that is ill-suited to
manual rules. The advantage of this approach is in its debuggability and in its
transparency to the end-user. We demonstrate the effectiveness of the approach
by achieving state-of-the-art performance of 40.42% accuracy on a standard
benchmark dataset over tables from Wikipedia.
| 2,017 | Computation and Language |
AppTechMiner: Mining Applications and Techniques from Scientific
Articles | This paper presents AppTechMiner, a rule-based information extraction
framework that automatically constructs a knowledge base of all application
areas and problem solving techniques. Techniques include tools, methods,
datasets or evaluation metrics. We also categorize individual research articles
based on their application areas and the techniques proposed/improved in the
article. Our system achieves high average precision (~82%) and recall (~84%) in
knowledge base creation. It also performs well in application and technique
assignment to an individual article (average accuracy ~66%). Finally, we
present two use cases: a trivial information retrieval system
and an extensive temporal analysis of the usage of techniques and application
areas. At present, we demonstrate the framework for the domain of computational
linguistics but this can be easily generalized to any other field of research.
| 2,017 | Computation and Language |
Debbie, the Debate Bot of the Future | Chatbots are a rapidly expanding application of dialogue systems with
companies switching to bot services for customer support, and new applications
for users interested in casual conversation. One style of casual conversation
is argument: many people love nothing more than a good argument. Moreover,
there are a number of existing corpora of argumentative dialogues, annotated
for agreement and disagreement, stance, sarcasm and argument quality. This
paper introduces Debbie, a novel arguing bot, that selects arguments from
conversational corpora, and aims to use them appropriately in context. We
present an initial working prototype of Debbie, with some preliminary
evaluation and describe future work.
| 2,017 | Computation and Language |
Data-Driven Dialogue Systems for Social Agents | In order to build dialogue systems to tackle the ambitious task of holding
social conversations, we argue that we need a data driven approach that
includes insight into human conversational chit chat, and which incorporates
different natural language processing modules. Our strategy is to analyze and
index large corpora of social media data, including Twitter conversations,
online debates, dialogues between friends, and blog posts, and then to couple
this data retrieval with modules that perform tasks such as sentiment and style
analysis, topic modeling, and summarization. We aim for personal assistants
that can learn more nuanced human language, and to grow from task-oriented
agents to more personable social bots.
| 2,017 | Computation and Language |
KnowNER: Incremental Multilingual Knowledge in Named Entity Recognition | KnowNER is a multilingual Named Entity Recognition (NER) system that
leverages different degrees of external knowledge. A novel modular framework
divides the knowledge into four categories according to the depth of knowledge
they convey. Each category consists of a set of features automatically
generated from different information sources (such as a knowledge-base, a list
of names or document-specific semantic annotations) and is used to train a
conditional random field (CRF). Since those information sources are usually
multilingual, KnowNER can be easily trained for a wide range of languages. In
this paper, we show that the incorporation of deeper knowledge systematically
boosts accuracy and compare KnowNER with state-of-the-art NER approaches across
three languages (i.e., English, German and Spanish), performing amongst
state-of-the-art systems in all of them.
| 2,017 | Computation and Language |
Capturing Long-range Contextual Dependencies with Memory-enhanced
Conditional Random Fields | Despite successful applications across a broad range of NLP tasks,
conditional random fields ("CRFs"), in particular the linear-chain variant, are
only able to model local features. While this has important benefits in terms
of inference tractability, it limits the ability of the model to capture
long-range dependencies between items. Attempts to extend CRFs to capture
long-range dependencies have largely come at the cost of computational
complexity and approximate inference. In this work, we propose an extension to
CRFs by integrating external memory, taking inspiration from memory networks,
thereby allowing CRFs to incorporate information far beyond neighbouring steps.
Experiments across two tasks show substantial improvements over strong CRF and
LSTM baselines.
| 2,017 | Computation and Language |
Small-footprint Keyword Spotting Using Deep Neural Network and
Connectionist Temporal Classifier | Mainly to address the lack of keyword-specific data, we propose
a Keyword Spotting (KWS) system using a Deep Neural Network (DNN) and a
Connectionist Temporal Classifier (CTC) on power-constrained small-footprint
mobile devices, taking full advantage of the large general corpora available
from continuous speech recognition. The DNN directly predicts the posteriors
of the phoneme units of any personally customized key-phrase, and the CTC
produces a confidence score for the given phoneme sequence as the responsive
decision-making mechanism. The CTC-KWS achieves competitive performance in
comparison with purely DNN-based keyword-specific KWS, without increasing
computational complexity.
| 2,017 | Computation and Language |
Cross-lingual Word Segmentation and Morpheme Segmentation as Sequence
Labelling | This paper presents our segmentation system developed for the MLP 2017 shared
tasks on cross-lingual word segmentation and morpheme segmentation. We model
both word and morpheme segmentation as character-level sequence labelling
tasks. The prevalent bidirectional recurrent neural network with conditional
random fields as the output interface is adapted as the baseline system, which
is further improved via ensemble decoding. Our universal system is applied to
and extensively evaluated on all the official data sets without any
language-specific adjustment. The official evaluation results indicate that the
proposed model achieves outstanding accuracies for both word and morpheme
segmentation on all the languages of various types when compared to the other
participating systems.
| 2,017 | Computation and Language |
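The reduction at the heart of the abstract above, treating segmentation as character-level sequence labelling, can be sketched independently of the neural tagger: decode a predicted label sequence back into segments. The labels below are hand-written stand-ins for tagger output:

```python
# Sketch: recover word (or morpheme) boundaries from per-character B/I labels,
# as produced by a character-level sequence-labelling model.
def labels_to_segments(chars, labels):
    """'B' starts a new segment, 'I' continues the current one."""
    segments, current = [], ""
    for ch, lab in zip(chars, labels):
        if lab == "B" and current:
            segments.append(current)
            current = ch
        else:
            current += ch
    if current:
        segments.append(current)
    return segments

chars = list("unhappiness")
labels = ["B", "I", "B", "I", "I", "I", "I", "B", "I", "I", "I"]   # stand-in predictions
print(labels_to_segments(chars, labels))   # ['un', 'happi', 'ness']
```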
Language Models of Spoken Dutch | In Flanders, all TV shows are subtitled. However, the process of subtitling
is a very time-consuming one and can be sped up by providing the output of a
speech recognizer run on the audio of the TV show, prior to the subtitling.
Naturally, this speech recognition will perform much better if the employed
language model is adapted to the register and the topic of the program. We
present several language models trained on subtitles of television shows
provided by the Flemish public-service broadcaster VRT. This data was gathered
in the context of the STON project, which aims to facilitate the
process of subtitling TV shows. One model is trained on all available data (46M
word tokens), but we also trained models on a specific type of TV show or
domain/topic. Language models of spoken language are quite rare due to the lack
of training data. The size of this corpus is relatively large for a corpus of
spoken language (compare with e.g. CGN which has 9M words), but still rather
small for a language model. Thus, in practice it is advised to interpolate
these models with a large background language model trained on written
language. The models can be freely downloaded on
http://www.esat.kuleuven.be/psi/spraak/downloads/.
| 2,017 | Computation and Language |
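The interpolation advice in the abstract above is simple to sketch: mix the domain model with a large background model and pick the weight by held-out perplexity. The toy unigram probabilities below are invented for illustration:

```python
# Sketch: linear interpolation of a domain language model with a background model,
# choosing the mixing weight by held-out perplexity (toy unigram probabilities).
import math

p_domain = {"goal": 0.05, "match": 0.04, "the": 0.10}      # invented probabilities
p_background = {"goal": 0.01, "match": 0.005, "the": 0.08}
UNK = 1e-6                                                   # floor for unseen words

def p_mix(word, lam):
    return lam * p_domain.get(word, UNK) + (1 - lam) * p_background.get(word, UNK)

def perplexity(tokens, lam):
    return math.exp(-sum(math.log(p_mix(w, lam)) for w in tokens) / len(tokens))

held_out = "the goal the match".split()
best = min((perplexity(held_out, lam), lam) for lam in [i / 10 for i in range(11)])
print("best lambda:", best[1], "perplexity:", round(best[0], 2))
```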
SYSTRAN Purely Neural MT Engines for WMT2017 | This paper describes SYSTRAN's systems submitted to the WMT 2017 shared news
translation task for English-German, in both translation directions. Our
systems are built using OpenNMT, an open-source neural machine translation
system, implementing sequence-to-sequence models with LSTM encoder/decoders and
attention. We experimented using monolingual data automatically
back-translated. Our resulting models are further hyper-specialised with an
adaptation technique that finely tunes models according to the evaluation test
sentences.
| 2,017 | Computation and Language |
OpenNMT: Open-source Toolkit for Neural Machine Translation | We introduce an open-source toolkit for neural machine translation (NMT) to
support research into model architectures, feature representations, and source
modalities, while maintaining competitive performance, modularity and
reasonable training requirements.
| 2,017 | Computation and Language |
StarSpace: Embed All The Things! | We present StarSpace, a general-purpose neural embedding model that can solve
a wide variety of problems: labeling tasks such as text classification, ranking
tasks such as information retrieval/web search, collaborative filtering-based
or content-based recommendation, embedding of multi-relational graphs, and
learning word, sentence or document level embeddings. In each case the model
works by embedding those entities comprised of discrete features and comparing
them against each other -- learning similarities dependent on the task.
Empirical results on a number of tasks show that StarSpace is highly
competitive with existing methods, whilst also being generally applicable to
new cases where those methods are not.
| 2,017 | Computation and Language |
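A minimal sketch of the core operation described above: embed an entity as a combination of its discrete-feature embeddings and compare entities by similarity. The feature vocabulary and vectors are random placeholders, not a trained StarSpace model:

```python
# Sketch: StarSpace-style entity embedding as a sum of discrete-feature vectors,
# compared with cosine similarity (random vectors stand in for trained ones).
import numpy as np

rng = np.random.default_rng(0)
feature_emb = {f: rng.normal(size=32) for f in
               ["word:neural", "word:translation", "word:recipe", "label:nlp", "label:food"]}

def embed(features):
    """Entity embedding = sum of the embeddings of its discrete features."""
    return np.sum([feature_emb[f] for f in features], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doc = embed(["word:neural", "word:translation"])
labels = {"label:nlp": embed(["label:nlp"]), "label:food": embed(["label:food"])}
ranked = sorted(labels, key=lambda l: cosine(doc, labels[l]), reverse=True)
print(ranked)   # label ranked most similar to the document first
```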
Human Associations Help to Detect Conventionalized Multiword Expressions | In this paper we show that if we want to obtain human evidence about
conventionalization of some phrases, we should ask native speakers about
associations they have to a given phrase and its component words. We have shown
that if component words of a phrase have each other as frequent associations,
then this phrase can be considered as conventionalized. Another type of
conventionalized phrases can be revealed using two factors: low entropy of
phrase associations and low intersection of component word and phrase
associations. The association experiments were performed for the Russian
language.
| 2,017 | Computation and Language |
Hash Embeddings for Efficient Word Representations | We present hash embeddings, an efficient method for representing words in a
continuous vector form. A hash embedding may be seen as an interpolation
between a standard word embedding and a word embedding created using a random
hash function (the hashing trick). In hash embeddings each token is represented
by $k$ $d$-dimensional embedding vectors and one $k$-dimensional weight
vector. The final $d$-dimensional representation of the token is the product of
the two. Rather than fitting the embedding vectors for each token these are
selected by the hashing trick from a shared pool of $B$ embedding vectors. Our
experiments show that hash embeddings can easily deal with huge vocabularies
consisting of millions of tokens. When using a hash embedding there is no need
to create a dictionary before training nor to perform any kind of vocabulary
pruning after training. We show that models trained using hash embeddings
exhibit at least the same level of performance as models trained using regular
embeddings across a wide range of tasks. Furthermore, the number of parameters
needed by such an embedding is only a fraction of what is required by a regular
embedding. Since standard embeddings and embeddings constructed using the
hashing trick are actually just special cases of a hash embedding, hash
embeddings can be considered an extension and improvement over the existing
regular embedding types.
| 2,017 | Computation and Language |
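A minimal sketch of the lookup described above: each token hashes to $k$ component vectors in a shared pool of $B$ vectors, which are combined by a small importance-weight vector. The hash functions, table sizes, and random parameters below are placeholders, not the paper's trained setup:

```python
# Sketch: hash embedding lookup -- k hashed component vectors from a shared pool,
# combined by a k-dimensional importance-weight vector (all parameters random here).
import hashlib
import numpy as np

B, k, d, W = 1000, 2, 16, 5000          # pool size, hashes per token, dim, weight-table size
rng = np.random.default_rng(0)
pool = rng.normal(size=(B, d))          # shared pool of B component vectors (trainable)
weights = rng.normal(size=(W, k))       # per-token importance weights (trainable)

def h(token, salt, mod):
    digest = hashlib.md5(f"{salt}:{token}".encode()).hexdigest()
    return int(digest, 16) % mod

def hash_embedding(token):
    """Final d-dim vector = weighted sum of k hashed component vectors."""
    comp = np.stack([pool[h(token, i, B)] for i in range(k)])   # (k, d)
    w = weights[h(token, "w", W)]                               # (k,)
    return w @ comp                                             # (d,)

print(hash_embedding("translation").shape)                        # (16,)
print(np.allclose(hash_embedding("cat"), hash_embedding("cat")))   # deterministic lookup
```

No dictionary is needed before training: any string maps to valid indices, which is what lets the method handle huge vocabularies.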
Affective Neural Response Generation | Existing neural conversational models process natural language primarily on a
lexico-syntactic level, thereby ignoring one of the most crucial components of
human-to-human dialogue: its affective content. We take a step in this
direction by proposing three novel ways to incorporate affective/emotional
aspects into long short term memory (LSTM) encoder-decoder neural conversation
models: (1) affective word embeddings, which are cognitively engineered, (2)
affect-based objective functions that augment the standard cross-entropy loss,
and (3) affectively diverse beam search for decoding. Experiments show that
these techniques improve the open-domain conversational prowess of
encoder-decoder networks by enabling them to produce emotionally rich responses
that are more interesting and natural.
| 2,017 | Computation and Language |
Refining Source Representations with Relation Networks for Neural
Machine Translation | Although neural machine translation (NMT) with the encoder-decoder framework
has achieved great success in recent times, it still suffers from some
drawbacks: RNNs tend to forget old information which is often useful, and the
encoder only operates over words without considering word relationships. To
solve these problems, we introduce a relation network (RN) into NMT to refine
the encoding representations of the source. In our method, the RN first
augments the representation of each source word with its neighbors and reasons
over all the possible pairwise relations between them. Then the source
representations and all the relations are fed to the attention module and the
decoder together, keeping the main encoder-decoder architecture unchanged.
Experiments on two Chinese-to-English data sets in different scales both show
that our method can outperform the competitive baselines significantly.
| 2,018 | Computation and Language |
Addressee and Response Selection in Multi-Party Conversations with
Speaker Interaction RNNs | In this paper, we study the problem of addressee and response selection in
multi-party conversations. Understanding multi-party conversations is
challenging because of complex speaker interactions: multiple speakers exchange
messages with each other, playing different roles (sender, addressee,
observer), and these roles vary across turns. To tackle this challenge, we
propose the Speaker Interaction Recurrent Neural Network (SI-RNN). Whereas the
previous state-of-the-art system updated speaker embeddings only for the
sender, SI-RNN uses a novel dialog encoder to update speaker embeddings in a
role-sensitive way. Additionally, unlike the previous work that selected the
addressee and response separately, SI-RNN selects them jointly by viewing the
task as a sequence prediction problem. Experimental results show that SI-RNN
significantly improves the accuracy of addressee and response selection,
particularly in complex conversations with many speakers and responses to
distant messages many turns in the past.
| 2,017 | Computation and Language |
Empower Sequence Labeling with Task-Aware Neural Language Model | Linguistic sequence labeling is a general modeling approach that encompasses
a variety of problems, such as part-of-speech tagging and named entity
recognition. Recent advances in neural networks (NNs) make it possible to build
reliable models without handcrafted features. However, in many cases, it is
hard to obtain sufficient annotations to train these models. In this study, we
develop a novel neural framework to extract abundant knowledge hidden in raw
texts to empower the sequence labeling task. Besides word-level knowledge
contained in pre-trained word embeddings, character-aware neural language
models are incorporated to extract character-level knowledge. Transfer learning
techniques are further adopted to mediate different components and guide the
language model towards the key knowledge. Compared to previous methods, this
task-specific knowledge allows us to adopt a more concise model and conduct
more efficient training. Different from most transfer learning methods, the
proposed framework does not rely on any additional supervision. It extracts
knowledge from self-contained order information of training sequences.
Extensive experiments on benchmark datasets demonstrate the effectiveness of
leveraging character-level knowledge and the efficiency of co-training. For
example, on the CoNLL03 NER task, model training completes in about 6 hours on
a single GPU, reaching F1 score of 91.71$\pm$0.10 without using any extra
annotation.
| 2,017 | Computation and Language |
Assessing State-of-the-Art Sentiment Models on State-of-the-Art
Sentiment Datasets | There has been a good amount of progress in sentiment analysis over the past
10 years, including the proposal of new methods and the creation of benchmark
datasets. In some papers, however, there is a tendency to compare models only
on one or two datasets, either because of time constraints or because the model
is tailored to a specific task. Accordingly, it is hard to understand how well
a certain model generalizes across different tasks and datasets. In this paper,
we contribute to this situation by comparing several models on six different
benchmarks, which belong to different domains and additionally have different
levels of granularity (binary, 3-class, 4-class and 5-class). We show that
Bi-LSTMs perform well across datasets and that both LSTMs and Bi-LSTMs are
particularly good at fine-grained sentiment tasks (i.e., with more than two
classes). Incorporating sentiment information into word embeddings during
training gives good results for datasets that are lexically similar to the
training data. With our experiments, we contribute to a better understanding of
the performance of different model architectures on different data sets.
Consequently, we detect novel state-of-the-art results on the SenTube datasets.
| 2,017 | Computation and Language |
Dialogue Act Sequence Labeling using Hierarchical encoder with CRF | Dialogue Act recognition associates dialogue acts (i.e., semantic labels) with
utterances in a conversation. The problem of associating semantic labels to
utterances can be treated as a sequence labeling problem. In this work, we
build a hierarchical recurrent neural network using bidirectional LSTM as a
base unit and the conditional random field (CRF) as the top layer to classify
each utterance into its corresponding dialogue act. The hierarchical network
learns representations at multiple levels, i.e., word level, utterance level,
and conversation level. The conversation level representations are input to the
CRF layer, which takes into account not only all previous utterances but also
their dialogue acts, thus modeling the dependency among both, labels and
utterances, an important consideration of natural dialogue. We validate our
approach on two different benchmark data sets, Switchboard and Meeting Recorder
Dialogue Act, and show performance improvement over the state-of-the-art
methods by $2.2\%$ and $4.1\%$ absolute points, respectively. It is worth
noting that the inter-annotator agreement on the Switchboard data set is $84\%$,
and our method is able to achieve an accuracy of about $79\%$ despite being
trained on noisy data.
| 2,017 | Computation and Language |
Flexible End-to-End Dialogue System for Knowledge Grounded Conversation | In knowledge grounded conversation, domain knowledge plays an important role
in a special domain such as Music. The response of knowledge grounded
conversation might contain multiple answer entities or no entity at all.
Although existing generative question answering (QA) systems can be applied to
knowledge grounded conversation, they either have at most one entity in a
response or cannot deal with out-of-vocabulary entities. We propose a fully
data-driven generative dialogue system GenDS that is capable of generating
responses based on input message and related knowledge base (KB). To generate
arbitrary number of answer entities even when these entities never appear in
the training set, we design a dynamic knowledge enquirer which selects
different answer entities at different positions in a single response,
according to different local context. It does not rely on the representations
of entities, enabling our model to deal with out-of-vocabulary entities. We
collect a human-human conversation dataset (ConversMusic) with knowledge
annotations. The proposed method is evaluated on ConversMusic and a public
question answering dataset. Our proposed GenDS system outperforms baseline
methods significantly in terms of the BLEU, entity accuracy, entity recall and
human evaluation. Moreover, the experiments also demonstrate that GenDS works
better even on small datasets.
| 2,017 | Computation and Language |
Natural Language Inference over Interaction Space | The Natural Language Inference (NLI) task requires an agent to determine the
logical relationship between a natural language premise and a natural language
hypothesis. We introduce Interactive Inference Network (IIN), a novel class of
neural network architectures that is able to achieve high-level understanding
of the sentence pair by hierarchically extracting semantic features from
interaction space. We show that an interaction tensor (attention weight)
contains semantic information to solve natural language inference, and a denser
interaction tensor contains richer semantic information. One instance of such
architecture, Densely Interactive Inference Network (DIIN), demonstrates the
state-of-the-art performance on large-scale NLI corpora and a large-scale
NLI-like corpus. It is noteworthy that DIIN achieves a greater than 20% error
reduction on the challenging Multi-Genre NLI (MultiNLI) dataset with respect to
the strongest published system.
| 2,018 | Computation and Language |
Linguistic Features of Genre and Method Variation in Translation: A
Computational Perspective | In this paper we describe the use of text classification methods to
investigate genre and method variation in an English - German translation
corpus. For this purpose we use linguistically motivated features representing
texts using a combination of part-of-speech tags arranged in bigrams, trigrams,
and 4-grams. The classification method used in this paper is a Bayesian
classifier with Laplace smoothing. We use the output of the classifiers to
carry out an extensive feature analysis on the main difference between genres
and methods of translation.
| 2,017 | Computation and Language |
A Review of Evaluation Techniques for Social Dialogue Systems | In contrast with goal-oriented dialogue, social dialogue has no clear measure
of task success. Consequently, evaluation of these systems is notoriously hard.
In this paper, we review current evaluation methods, focusing on automatic
metrics. We conclude that turn-based metrics often ignore the context and do
not account for the fact that several replies are valid, while end-of-dialogue
rewards are mainly hand-crafted. Both lack grounding in human perceptions.
| 2,017 | Computation and Language |
Analyzing Hidden Representations in End-to-End Automatic Speech
Recognition Systems | Neural models have become ubiquitous in automatic speech recognition systems.
While neural networks are typically used as acoustic models in more complex
systems, recent studies have explored end-to-end speech recognition systems
based on neural networks, which can be trained to directly predict text from
input acoustic features. Although such systems are conceptually elegant and
simpler than traditional systems, it is less obvious how to interpret the
trained models. In this work, we analyze the speech representations learned by
a deep end-to-end model that is based on convolutional and recurrent layers,
and trained with a connectionist temporal classification (CTC) loss. We use a
pre-trained model to generate frame-level features which are given to a
classifier that is trained on frame classification into phones. We evaluate
representations from different layers of the deep model and compare their
quality for predicting phone labels. Our experiments shed light on important
aspects of the end-to-end model such as layer depth, model complexity, and
other design choices.
| 2,017 | Computation and Language |
Method for Aspect-Based Sentiment Annotation Using Rhetorical Analysis | This paper fills a gap in aspect-based sentiment analysis and aims to present
a new method for preparing and analysing texts concerning opinion and
generating user-friendly descriptive reports in natural language. We present a
comprehensive set of techniques derived from Rhetorical Structure Theory and
sentiment analysis to extract aspects from textual opinions and then build an
abstractive summary of a set of opinions. Moreover, we propose aspect-aspect
graphs to evaluate the importance of aspects and to filter out unimportant ones
from the summary. Additionally, the paper presents a prototype solution of data
flow with interesting and valuable results. The proposed method's results
proved the high accuracy of aspect detection when applied to the gold standard
dataset.
| 2,017 | Computation and Language |
Using NLU in Context for Question Answering: Improving on Facebook's
bAbI Tasks | For the next step in human to machine interaction, Artificial Intelligence
(AI) should interact predominantly using natural language because, if it
worked, it would be the fastest way to communicate. Facebook's toy tasks (bAbI)
provide a useful benchmark to compare implementations for conversational AI.
While the published experiments so far have been based on exploiting the
distributional hypothesis with machine learning, our model exploits natural
language understanding (NLU) with the decomposition of language based on Role
and Reference Grammar (RRG) and the brain-based Patom theory. Our combinatorial
system for conversational AI based on linguistics has many advantages: passing
bAbI task tests without parsing or statistics while increasing scalability. Our
model validates both the training and test data to find 'garbage' input and
output (GIGO). It is not rules-based, nor does it use parts of speech, but
instead relies on meaning. While Deep Learning is difficult to debug and fix,
every step in our model can be understood and changed like any non-statistical
computer program. Deep Learning's lack of explicable reasoning has raised
opposition to AI, partly due to fear of the unknown. To support the goals of
AI, we propose extended tasks to use human-level statements with tense, aspect
and voice, and embedded clauses with junctures: and answers to be natural
language generation (NLG) instead of keywords. While machine learning permits
invalid training data to produce incorrect test responses, our system cannot
because the context tracking would need to be intentionally broken. We believe
no existing learning systems can currently solve these extended natural
language tests. There appears to be a knowledge gap between NLP researchers and
linguists, but ongoing competitive results such as these promise to narrow that
gap.
| 2,017 | Computation and Language |