Titles | Abstracts | Years | Categories
---|---|---|---
Emergent Translation in Multi-Agent Communication | While most machine translation systems to date are trained on large parallel
corpora, humans learn language in a different way: by being grounded in an
environment and interacting with other humans. In this work, we propose a
communication game where two agents, native speakers of their own respective
languages, jointly learn to solve a visual referential task. We find that the
ability to understand and translate a foreign language emerges as a means to
achieve shared goals. The emergent translation is interactive and multimodal,
and crucially does not require parallel corpora, but only monolingual,
independent text and corresponding images. Our proposed translation model
achieves this by grounding the source and target languages into a shared visual
modality, and outperforms several baselines on both word-level and
sentence-level translation tasks. Furthermore, we show that agents in a
multilingual community learn to translate better and faster than in a bilingual
communication setting.
| 2,018 | Computation and Language |
Adapting general-purpose speech recognition engine output for
domain-specific natural language question answering | Speech-based natural language question-answering interfaces to enterprise
systems are gaining a lot of attention. General-purpose speech engines can be
integrated with NLP systems to provide such interfaces. Usually,
general-purpose speech engines are trained on a large `general' corpus. However,
when such engines are used for specific domains, they may not recognize
domain-specific words well, and may produce erroneous output. Further, the
accent and the environmental conditions in which the speaker speaks a sentence
may induce the speech engine to inaccurately recognize certain words. The
subsequent natural language question-answering does not produce the requisite
results as the question does not accurately represent what the speaker
intended. Thus, the speech engine's output may need to be adapted for a domain
before further natural language processing is carried out. We present two
mechanisms for such an adaptation, one based on evolutionary development and
the other based on machine learning, and show how we can repair the
speech-output to make the subsequent natural language question-answering
better.
| 2,017 | Computation and Language |
OhioState at IJCNLP-2017 Task 4: Exploring Neural Architectures for
Multilingual Customer Feedback Analysis | This paper describes our systems for IJCNLP 2017 Shared Task on Customer
Feedback Analysis. We experimented with simple neural architectures that gave
competitive performance on certain tasks. This includes shallow CNN and
Bi-Directional LSTM architectures with Facebook's Fasttext as a baseline model.
Our best-performing model was among the top 5 systems under the Exact-Accuracy and
Micro-Average-F1 metrics for the Spanish (85.28% for both) and French (70% and
73.17%, respectively) tasks, and outperformed all the other models on the comment
(87.28%) and meaningless (51.85%) tags under the Micro-Average-F1-by-Tags metric
for the French task.
| 2,017 | Computation and Language |
Embedding-Based Speaker Adaptive Training of Deep Neural Networks | An embedding-based speaker adaptive training (SAT) approach is proposed and
investigated in this paper for deep neural network acoustic modeling. In this
approach, speaker embedding vectors, which are constant for a particular
speaker, are mapped through a control network to layer-dependent element-wise
affine transformations to canonicalize the internal feature representations at
the output of hidden layers of a main network. The control network for
generating the speaker-dependent mappings is jointly estimated with the main
network for the overall speaker adaptive acoustic modeling. Experiments on
large vocabulary continuous speech recognition (LVCSR) tasks show that the
proposed SAT scheme can yield superior performance over the widely-used
speaker-aware training using i-vectors with speaker-adapted input features.
| 2,017 | Computation and Language |
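A minimal sketch of the layer-dependent element-wise affine transformation described in the abstract above, written in PyTorch. The control-network parameterization (layer sizes, the sigmoid squashing of scales) is an assumption for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ControlNetwork(nn.Module):
    """Maps a speaker embedding to per-layer element-wise scales and biases."""
    def __init__(self, spk_dim, hidden_dim, num_layers):
        super().__init__()
        # One scale/bias head per hidden layer of the main network.
        self.scale_heads = nn.ModuleList(
            [nn.Linear(spk_dim, hidden_dim) for _ in range(num_layers)])
        self.bias_heads = nn.ModuleList(
            [nn.Linear(spk_dim, hidden_dim) for _ in range(num_layers)])

    def forward(self, spk_emb):
        # Squashing scales to (0, 2) is an illustrative choice, not the paper's.
        scales = [2 * torch.sigmoid(h(spk_emb)) for h in self.scale_heads]
        biases = [h(spk_emb) for h in self.bias_heads]
        return scales, biases

class MainNetwork(nn.Module):
    """Acoustic model whose hidden activations are canonicalized per speaker."""
    def __init__(self, in_dim, hidden_dim, out_dim, num_layers):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)])
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, scales, biases):
        for layer, s, b in zip(self.layers, scales, biases):
            x = torch.relu(layer(x))
            x = s * x + b  # layer-dependent element-wise affine transform
        return self.out(x)
```

Both networks would be trained jointly, as the abstract describes, with the speaker embedding held fixed for all frames of a given speaker.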
SLING: A framework for frame semantic parsing | We describe SLING, a framework for parsing natural language into semantic
frames. SLING supports general transition-based, neural-network parsing with
bidirectional LSTM input encoding and a Transition Based Recurrent Unit (TBRU)
for output decoding. The parsing model is trained end-to-end using only the
text tokens as input. The transition system has been designed to output frame
graphs directly without any intervening symbolic representation. The SLING
framework includes an efficient and scalable frame store implementation as well
as a neural network JIT compiler for fast inference during parsing. SLING is
implemented in C++ and it is available for download on GitHub.
| 2,017 | Computation and Language |
Unsupervised Context-Sensitive Spelling Correction of English and Dutch
Clinical Free-Text with Word and Character N-Gram Embeddings | We present an unsupervised context-sensitive spelling correction method for
clinical free-text that uses word and character n-gram embeddings. Our method
generates misspelling replacement candidates and ranks them according to their
semantic fit, by calculating a weighted cosine similarity between the
vectorized representation of a candidate and the misspelling context. To tune
the parameters of this model, we generate self-induced spelling error corpora.
We perform our experiments for two languages. For English, we greatly
outperform off-the-shelf spelling correction tools on a manually annotated
MIMIC-III test set, and counter the frequency bias of a noisy channel model,
showing that neural embeddings can be successfully exploited to improve upon
the state-of-the-art. For Dutch, we also outperform an off-the-shelf spelling
correction tool on manually annotated clinical records from the Antwerp
University Hospital, but can offer no empirical evidence that our method
counters the frequency bias of a noisy channel model in this case as well.
However, both our context-sensitive model and our implementation of the noisy
channel model obtain high scores on the test set, establishing a
state-of-the-art for Dutch clinical spelling correction with the noisy channel
model.
| 2,017 | Computation and Language |
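A minimal sketch of the ranking step described above, assuming precomputed word vectors and an external candidate generator; the unweighted context averaging is a simplification of the paper's weighted cosine similarity:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rank_candidates(candidates, context_words, vectors):
    """Rank replacement candidates by semantic fit with the misspelling context.

    candidates    -- list of candidate correction strings (however generated)
    context_words -- words surrounding the misspelling
    vectors       -- dict mapping word -> np.ndarray embedding
    """
    ctx = [vectors[w] for w in context_words if w in vectors]
    if not ctx:
        return candidates
    # Unweighted mean as a stand-in for the paper's weighted context vector.
    context_vec = np.mean(ctx, axis=0)
    scored = [(cosine(vectors[c], context_vec), c)
              for c in candidates if c in vectors]
    return [c for _, c in sorted(scored, reverse=True)]
```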
Findings of the Second Shared Task on Multimodal Machine Translation and
Multilingual Image Description | We present the results from the second shared task on multimodal machine
translation and multilingual image description. Nine teams submitted 19 systems
to two tasks. The multimodal translation task, in which the source sentence is
supplemented by an image, was extended with a new language (French) and two new
test sets. The multilingual image description task was changed such that at
test time, only the image is given. Compared to last year, multimodal systems
improved, but text-only systems remain competitive.
| 2,017 | Computation and Language |
Multi-Task Label Embedding for Text Classification | Multi-task learning in text classification leverages implicit correlations
among related tasks to extract common features and yield performance gains.
However, most previous works treat the labels of each task as independent and
meaningless one-hot vectors, which causes a loss of potential information and
makes it difficult for these models to jointly learn three or more tasks. In
this paper, we propose Multi-Task Label Embedding to convert labels in text
classification into semantic vectors, thereby turning the original tasks into
vector matching tasks. We implement unsupervised, supervised and
semi-supervised models of Multi-Task Label Embedding, all utilizing semantic
correlations among tasks and making it particularly convenient to scale and
transfer as more tasks are involved. Extensive experiments on five benchmark
datasets for text classification show that our models can effectively improve
performances of related tasks with semantic representations of labels and
additional information from each other.
| 2,017 | Computation and Language |
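A minimal sketch of the vector-matching view described above: a text is classified by picking the label whose semantic vector is closest to the text vector. The toy vectors and cosine similarity are illustrative placeholders, not the paper's trained models:

```python
import numpy as np

def predict(text_vec, label_vecs):
    """Return the label whose semantic vector best matches the text vector."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(label_vecs, key=lambda lbl: cos(text_vec, label_vecs[lbl]))

# Toy usage: a new task simply contributes new label vectors, which is what
# makes the approach easy to scale and transfer across tasks.
labels = {"positive": np.array([0.9, 0.1]), "negative": np.array([0.1, 0.9])}
print(predict(np.array([0.8, 0.3]), labels))  # -> "positive"
```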
Multi-Task Learning for Speaker-Role Adaptation in Neural Conversation
Models | Building a persona-based conversation agent is challenging owing to the lack
of large amounts of speaker-specific conversation data for model training. This
paper addresses the problem by proposing a multi-task learning approach to
training neural conversation models that leverages both conversation data
across speakers and other types of data pertaining to the speaker and speaker
roles to be modeled. Experiments show that our approach leads to significant
improvements over baseline model quality, generating responses that more
precisely capture speakers' traits and speaking styles. The model offers the
benefits of being algorithmically simple and easy to implement, and not relying
on large quantities of data representing specific individual speakers.
| 2,017 | Computation and Language |
Recognizing Explicit and Implicit Hate Speech Using a Weakly Supervised
Two-path Bootstrapping Approach | In the wake of a polarizing election, social media is laden with hateful
content. To address various limitations of supervised hate speech
classification methods including corpus bias and huge cost of annotation, we
propose a weakly supervised two-path bootstrapping approach for an online hate
speech detection model leveraging large-scale unlabeled data. This system
significantly outperforms hate speech detection systems that are trained in a
supervised manner using manually annotated data. Applying this model on a large
quantity of tweets collected before, after, and on election day reveals
motivations and patterns of inflammatory language.
| 2,018 | Computation and Language |
Detecting Online Hate Speech Using Context Aware Models | In the wake of a polarizing election, the cyber world is laden with hate
speech. Context accompanying a hate speech text is useful for identifying hate
speech, which however has been largely overlooked in existing datasets and hate
speech detection models. In this paper, we provide an annotated corpus of hate
speech with context information well kept. Then we propose two types of hate
speech detection models that incorporate context information, a logistic
regression model with context features and a neural network model with learning
components for context. Our evaluation shows that both models outperform a
strong baseline by around 3% to 4% in F1 score, and combining these two models
further improves the performance by another 7% in F1 score.
| 2,018 | Computation and Language |
A Semantically Motivated Approach to Compute ROUGE Scores | ROUGE is one of the first and most widely used evaluation metrics for text
summarization. However, its assessment merely relies on surface similarities
between peer and model summaries. Consequently, ROUGE is unable to fairly
evaluate abstractive summaries including lexical variations and paraphrasing.
Exploring the effectiveness of lexical resource-based models to address this
issue, we adopt a graph-based algorithm into ROUGE to capture the semantic
similarities between peer and model summaries. Our semantically motivated
approach computes ROUGE scores based on both lexical and semantic similarities.
Experiment results over TAC AESOP datasets indicate that exploiting the
lexico-semantic similarity of the words used in summaries would significantly
help ROUGE correlate better with human judgments.
| 2,017 | Computation and Language |
Local Word Vectors Guiding Keyphrase Extraction | Automated keyphrase extraction is a fundamental textual information
processing task concerned with the selection of representative phrases from a
document that summarize its content. This work presents a novel unsupervised
method for keyphrase extraction, whose main innovation is the use of local word
embeddings (in particular GloVe vectors), i.e., embeddings trained from the
single document under consideration. We argue that such local representations of
words and keyphrases are able to accurately capture their semantics in the
context of the document they are part of, and therefore can help in improving
keyphrase extraction quality. Empirical results offer evidence that indeed
local representations lead to better keyphrase extraction results compared both
to embeddings trained on very large third-party corpora or on larger corpora
consisting of several documents of the same scientific field, and to other
state-of-the-art unsupervised keyphrase extraction methods.
| 2,018 | Computation and Language |
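A rough sketch of the idea, using gensim's Word2Vec as a stand-in for the locally trained GloVe vectors the paper uses; candidate generation and the exact ranking scheme are simplified:

```python
import numpy as np
from gensim.models import Word2Vec

def rank_keyphrases(sentences, candidates, dim=50):
    """Train embeddings on a single document (list of token lists) and rank
    candidate phrases by similarity to the mean document vector."""
    model = Word2Vec(sentences, vector_size=dim, min_count=1, window=5)
    doc_vec = np.mean([model.wv[w] for s in sentences for w in s], axis=0)

    def phrase_vec(phrase):
        words = [w for w in phrase.split() if w in model.wv]
        return np.mean([model.wv[w] for w in words], axis=0) if words else None

    scored = []
    for c in candidates:
        v = phrase_vec(c)
        if v is not None:
            sim = float(v @ doc_vec /
                        (np.linalg.norm(v) * np.linalg.norm(doc_vec) + 1e-9))
            scored.append((sim, c))
    return [c for _, c in sorted(scored, reverse=True)]
```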
Verb Pattern: A Probabilistic Semantic Representation on Verbs | Verbs are important in semantic understanding of natural language.
Traditional verb representations, such as FrameNet, PropBank, VerbNet, focus on
verbs' roles. These roles are too coarse to represent verbs' semantics. In this
paper, we introduce verb patterns to represent verbs' semantics, such that each
pattern corresponds to a single meaning of the verb. First we analyze the
principles for verb patterns: generality and specificity. Then we propose a
nonparametric model based on description length. Experimental results prove the
high effectiveness of verb patterns. We further apply verb patterns to
context-aware conceptualization, to show that verb patterns are helpful in
semantic-related tasks.
| 2,017 | Computation and Language |
Is space a word, too? | For words, rank-frequency distributions have long been heralded for adherence
to a potentially-universal phenomenon known as Zipf's law. The hypothetical
form of this empirical phenomenon was refined by Benoît Mandelbrot to that
which is presently referred to as the Zipf-Mandelbrot law. Parallel to this,
Herbert Simon proposed a selection model potentially explaining Zipf's law.
However, a significant dispute between Simon and Mandelbrot, notable empirical
exceptions, and the lack of a strong empirical connection between Simon's model
and the Zipf-Mandelbrot law have left the questions of universality and
mechanistic generation open. We offer a resolution to these issues by
exhibiting how the dark matter of word segmentation, i.e., space, punctuation,
etc., connects the Zipf-Mandelbrot law to Simon's mechanistic process. This
explains Mandelbrot's refinement as no more than a fudge factor, accommodating
the effects of the exclusion of the rank-frequency dark matter. Thus,
integrating these non-word objects resolves a more-generalized rank-frequency
law. Since this relies upon the integration of space, etc., we find support for
the hypothesis that $all$ are generated by common processes, indicating from a
physical perspective that space is a word, too.
| 2,017 | Computation and Language |
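For reference, the two rank-frequency forms discussed in the abstract above, in their standard parameterization (the abstract itself gives no formulas, so the notation below is the conventional one, not the paper's):

```latex
% Zipf's law: frequency of the word with rank r
f(r) \propto \frac{1}{r^{\alpha}}, \qquad \alpha \approx 1

% Zipf-Mandelbrot refinement; the shift \beta is the "fudge factor"
% the abstract refers to
f(r) \propto \frac{1}{(r + \beta)^{\alpha}}
```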
Text Coherence Analysis Based on Deep Neural Network | In this paper, we propose a novel deep coherence model (DCM) using a
convolutional neural network architecture to capture the text coherence. The
text coherence problem is investigated with a new perspective of learning
sentence distributional representation and text coherence modeling
simultaneously. In particular, the model captures the interactions between
sentences by computing the similarities of their distributional
representations. Further, it can be easily trained in an end-to-end fashion.
The proposed model is evaluated on a standard Sentence Ordering task. The
experimental results demonstrate its effectiveness and promise in coherence
assessment showing a significant improvement over the state-of-the-art by a
wide margin.
| 2,017 | Computation and Language |
How big is big enough? Unsupervised word sense disambiguation using a
very large corpus | In this paper, the problem of disambiguating a target word for Polish is
approached by searching for related words with known meaning. These relatives
are used to build a training corpus from unannotated text. This technique is
improved by proposing new rich sources of replacements that substitute the
traditional requirement of monosemy with heuristics based on wordnet relations.
The naïve Bayesian classifier has been modified to account for an unknown
distribution of senses. A corpus of 600 million web documents (594 billion
tokens), gathered by the NEKST search engine, allows us to assess the
relationship between training set size and disambiguation accuracy. The
classifier is evaluated using both a wordnet baseline and a corpus with 17,314
manually annotated occurrences of 54 ambiguous words.
| 2,017 | Computation and Language |
Bringing Semantic Structures to User Intent Detection in Online Medical
Queries | The Internet has revolutionized healthcare by offering medical information
ubiquitously to patients via web search. The healthcare status and complex medical
information needs of patients are expressed diversely and implicitly in their
medical text queries. Aiming to better capture a focused picture of users'
medical-related information search and shed insight on their healthcare
information access strategies, it is challenging yet rewarding to detect
structured user intentions from their diversely expressed medical text queries.
We introduce a graph-based formulation to explore structured concept
transitions for effective user intent detection in medical queries, where each
node represents a medical concept mention and each directed edge indicates a
medical concept transition. A deep model based on multi-task learning is
introduced to extract structured semantic transitions from user queries, where
the model extracts word-level medical concept mentions as well as
sentence-level concept transitions collectively. A customized graph-based
mutual transfer loss function is designed to impose explicit constraints and
further exploit the contribution of mentioning a medical concept word to the
implication of a semantic transition. We observe an 8% relative improvement in
AUC and 23% relative reduction in coverage error by comparing the proposed
model with the best baseline model for the concept transition inference task on
real-world medical text queries.
| 2,017 | Computation and Language |
A First Step in Combining Cognitive Event Features and Natural Language
Representations to Predict Emotions | We explore the representational space of emotions by combining methods from
different academic fields. Cognitive science has proposed appraisal theory as a
view on human emotion with previous research showing how human-rated abstract
event features can predict fine-grained emotions and capture the similarity
space of neural patterns in mentalizing brain regions. At the same time,
natural language processing (NLP) has demonstrated how transfer and multitask
learning can be used to cope with scarcity of annotated data for text modeling.
The contribution of this work is to show that appraisal theory can be
combined with NLP for mutual benefit. First, fine-grained emotion prediction
can be improved to human-level performance by using NLP representations in
addition to appraisal features. Second, using the appraisal features as
auxiliary targets during training can improve predictions even when only text
is available as input. Third, we obtain a representation with a similarity
matrix that better correlates with the neural activity across regions. Best
results are achieved when the model is trained to simultaneously predict
appraisals, emotions and emojis using a shared representation.
While these results are preliminary, the integration of cognitive
neuroscience and NLP techniques opens up an interesting direction for future
research.
| 2,017 | Computation and Language |
Testing the limits of unsupervised learning for semantic similarity | Semantic Similarity between two sentences can be defined as a way to
determine how related or unrelated two sentences are. The task of Semantic
Similarity, in terms of distributed representations, can be thought of as
generating sentence embeddings (dense vectors) that take both the context and
the meaning of a sentence into account. Such embeddings can be produced by
multiple methods; in this paper we evaluate LSTM autoencoders for generating
them. Unsupervised algorithms (autoencoders specifically) simply try to
recreate their inputs, but they can be forced to learn order (and, to some
extent, inherent meaning) by creating proper bottlenecks. We evaluate how well
algorithms trained just on plain English sentences can learn to capture
Semantic Similarity, without giving them any explicit sense of what the
meaning of a sentence is.
| 2,017 | Computation and Language |
Attending to All Mention Pairs for Full Abstract Biological Relation
Extraction | Most work in relation extraction forms a prediction by looking at a short
span of text within a single sentence containing a single entity pair mention.
However, many relation types, particularly in biomedical text, are expressed
across sentences or require a large context to disambiguate. We propose a model
to consider all mention and entity pairs simultaneously in order to make a
prediction. We encode full paper abstracts using an efficient self-attention
encoder and form pairwise predictions between all mentions with a bi-affine
operation. Entity-pair-wise pooling aggregates mention pair scores to make a
final prediction while alleviating training noise by performing within-document
multi-instance learning. We improve our model's performance by jointly training
the model to predict named entities and adding an additional corpus of weakly
labeled data. We demonstrate our model's effectiveness by achieving the state
of the art on the Biocreative V Chemical Disease Relation dataset for models
without KB resources, outperforming ensembles of models which use hand-crafted
features and additional linguistic resources.
| 2,017 | Computation and Language |
Content Based Document Recommender using Deep Learning | With the recent advancements in information technology there has been a huge
surge in the amount of data available. However, information retrieval technology
has not been able to keep up with this pace of information generation, resulting
in excessive time spent retrieving relevant information. Even though systems
exist for assisting users in searching a database and in filtering and
recommending relevant information, recommendation systems that use the content
of documents still have a long way to mature. Here we present a Deep
Learning based supervised approach to recommend similar documents based on the
similarity of content. We combine the C-DSSM model with Word2Vec distributed
representations of words to create a novel model that classifies a document pair
as relevant/irrelevant by assigning a score to it. Using our model, retrieval of
documents can be done in O(1) time and the memory complexity is O(n), where n
is the number of documents.
| 2,017 | Computation and Language |
Deep Health Care Text Classification | Health related social media mining is a valuable apparatus for the early
recognition of diverse adverse medical conditions. Most of the
existing methods are based on machine learning with knowledge-based learning.
This working note presents Recurrent Neural Network (RNN) and Long
Short-Term Memory (LSTM) based embeddings for automatic health text
classification in social media mining. For each task, two systems are built
that classify the tweets at the tweet level. RNN and LSTM are used for
extracting features, and a non-linear activation function at the last layer
facilitates distinguishing tweets of different categories. The experiments
are conducted on the 2nd Social Media Mining for Health Applications Shared
Task at AMIA 2017. The experimental results are considerable, and the proposed
method is appropriate for health text classification, primarily because it
does not rely on any feature engineering mechanisms.
| 2,018 | Computation and Language |
Combining Lexical Features and a Supervised Learning Approach for Arabic
Sentiment Analysis | The importance of building sentiment analysis tools for Arabic social media
has been recognized during the past couple of years, especially with the rapid
increase in the number of Arabic social media users. One of the main
difficulties in tackling this problem is that text within social media is
mostly colloquial, with many dialects being used within social media platforms.
In this paper, we present a set of features that were integrated with a machine
learning based sentiment analysis model and applied on Egyptian, Saudi,
Levantine, and MSA Arabic social media datasets. Many of the proposed features
were derived through the use of an Arabic Sentiment Lexicon. The model also
presents emoticon based features, as well as input text related features such
as the number of segments within the text, the length of the text, whether the
text ends with a question mark or not, etc. We show that the presented features
result in increased accuracy across six of the seven benchmark datasets we
experimented with. Since the developed model outperforms all existing Arabic
sentiment analysis systems that have publicly available datasets, we can state
that this model represents the state of the art in Arabic sentiment analysis.
| 2,017 | Computation and Language |
NileTMRG at SemEval-2017 Task 4: Arabic Sentiment Analysis | This paper describes two systems that were used by the authors for addressing
Arabic Sentiment Analysis as part of SemEval-2017, task 4. The authors
participated in three Arabic related subtasks which are: Subtask A (Message
Polarity Classification), Subtask B (Topic-Based Message Polarity
classification) and Subtask D (Tweet quantification) using the team name of
NileTMRG. For subtask A, we made use of our previously developed sentiment
analyzer which we augmented with a scored lexicon. For subtasks B and D, we
used an ensemble of three different classifiers. The first classifier was a
convolutional neural network for which we trained (word2vec) word embeddings.
The second classifier consisted of a MultiLayer Perceptron, while the third
classifier was a Logistic regression model that takes the same input as the
second classifier. Voting between the three classifiers was used to determine
the final outcome. The output from task B, was quantified to produce the
results for task D. In all three Arabic related tasks in which NileTMRG
participated, the team ranked at number one.
| 2,017 | Computation and Language |
BENGAL: An Automatic Benchmark Generator for Entity Recognition and
Linking | The manual creation of gold standards for named entity recognition and entity
linking is time- and resource-intensive. Moreover, recent works show that such
gold standards contain a large proportion of mistakes in addition to being
difficult to maintain. We hence present BENGAL, a novel automatic generation of
such gold standards as a complement to manually created benchmarks. The main
advantage of our benchmarks is that they can be readily generated at any time.
They are also cost-effective while being guaranteed to be free of annotation
errors. We compare the performance of 11 tools on benchmarks in English
generated by BENGAL and on 16 benchmarks created manually. We show that our
approach can be ported easily across languages by presenting results achieved
by 4 tools on both Brazilian Portuguese and Spanish. Overall, our results
suggest that our automatic benchmark generation approach can create varied
benchmarks that have characteristics similar to those of existing benchmarks.
Our approach is open-source. Our experimental results are available at
http://faturl.com/bengalexpinlg and the code at
https://github.com/dice-group/BENGAL.
| 2,018 | Computation and Language |
Clickbait Identification using Neural Networks | This paper presents the results of our participation in the Clickbait
Detection Challenge 2017. The system relies on a fusion of neural networks,
incorporating different types of available information. It does not require
any linguistic preprocessing, and hence generalizes more easily to new domains
and languages. The final combined model achieves a mean squared error of
0.0428, an accuracy of 0.826, and an F1 score of 0.564. According to the
official evaluation metric, the system ranked 6th of the 13 participating teams.
| 2,017 | Computation and Language |
Linking Tweets with Monolingual and Cross-Lingual News using Transformed
Word Embeddings | Social media platforms have grown into an important medium to spread
information about an event published by the traditional media, such as news
articles. Grouping such diverse sources of information that discuss the same
topic from varied perspectives provides new insights. However, the gap in word
usage between informal social media content such as tweets and diligently
written content (e.g. news articles) makes such grouping difficult. In this paper, we
propose a transformation framework to bridge the word usage gap between tweets
and online news articles across languages by leveraging their word embeddings.
Using our framework, word embeddings extracted from tweets and news articles
are aligned closer to each other across languages, thus facilitating the
identification of similarity between news articles and tweets. Experimental
results show a notable improvement over baselines for monolingual tweets and
news articles comparison, while new findings are reported for cross-lingual
comparison.
| 2,017 | Computation and Language |
A Simple Text Analytics Model To Assist Literary Criticism: comparative
approach and example on James Joyce against Shakespeare and the Bible | Literary analysis, criticism or studies is a largely valued field with
dedicated journals and researchers, and remains mostly within the scope of the
humanities. Text analytics is the computer-aided process of deriving information
from texts. In this article we describe a simple and generic model for
performing literary analysis using text analytics. The method relies on
statistical measures of: 1) token and sentence sizes and 2) Wordnet synset
features. These measures are then used in Principal Component Analysis where
the texts to be analyzed are observed against Shakespeare and the Bible,
regarded as reference literature. The model is validated by analyzing selected
works from James Joyce (1882-1941), one of the most important writers of the
20th century. We discuss the consistency of this approach, the reasons why we
did not use other techniques (e.g. part-of-speech tagging) and the ways by
which the analysis model might be adapted and enhanced.
| 2,017 | Computation and Language |
Exploring the Use of Text Classification in the Legal Domain | In this paper, we investigate the application of text classification methods
to support law professionals. We present several experiments applying machine
learning techniques to predict with high accuracy the ruling of the French
Supreme Court and the law area to which a case belongs. We also investigate
the influence of the time period in which a ruling was made on the form of the
case description and the extent to which we need to mask information in a full
case ruling to automatically obtain training and test data that resembles case
descriptions. We developed a mean probability ensemble system combining the
output of multiple SVM classifiers. We report results of 98% average F1 score
in predicting a case ruling, 96% F1 score for predicting the law area of a
case, and 87.07% F1 score on estimating the date of a ruling.
| 2,017 | Computation and Language |
Non-Projective Dependency Parsing with Non-Local Transitions | We present a novel transition system, based on the Covington non-projective
parser, introducing non-local transitions that can directly create arcs
involving nodes to the left of the current focus positions. This avoids the
need for long sequences of No-Arc transitions to create long-distance arcs,
thus alleviating error propagation. The resulting parser outperforms the
original version and achieves the best accuracy on the Stanford Dependencies
conversion of the Penn Treebank among greedy transition-based algorithms.
| 2,018 | Computation and Language |
ALL-IN-1: Short Text Classification with One Model for All Languages | We present ALL-IN-1, a simple model for multilingual text classification that
does not require any parallel data. It is based on a traditional Support Vector
Machine classifier exploiting multilingual word embeddings and character
n-grams. Our model is simple, easily extendable yet very effective, overall
ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer
feedback analysis in four languages: English, French, Japanese and Spanish.
| 2,017 | Computation and Language |
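A minimal sketch of a classifier in the same spirit as ALL-IN-1, assuming scikit-learn: character n-gram TF-IDF features concatenated with averaged pre-trained multilingual word embeddings, fed to a linear SVM. The feature set and hyperparameters are illustrative, not the authors' exact configuration:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer
from sklearn.feature_extraction.text import TfidfVectorizer

def avg_embedding(texts, vectors, dim=300):
    """Average pre-trained multilingual word embeddings per document
    (dim must match the embedding dimensionality; 300 is assumed here)."""
    out = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        vecs = [vectors[w] for w in t.lower().split() if w in vectors]
        if vecs:
            out[i] = np.mean(vecs, axis=0)
    return out

def build_model(vectors):
    features = make_union(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 5)),  # char n-grams
        FunctionTransformer(lambda X: avg_embedding(X, vectors)),
    )
    return make_pipeline(features, LinearSVC())

# model = build_model(multilingual_vectors)  # vectors: word -> np.ndarray
# model.fit(train_texts, train_labels)
# predictions = model.predict(test_texts)
```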
Streaming Small-Footprint Keyword Spotting using Sequence-to-Sequence
Models | We develop streaming keyword spotting systems using a recurrent neural
network transducer (RNN-T) model: an all-neural, end-to-end trained,
sequence-to-sequence model which jointly learns acoustic and language model
components. Our models are trained to predict either phonemes or graphemes as
subword units, thus allowing us to detect arbitrary keyword phrases, without
any out-of-vocabulary words. In order to adapt the models to the requirements
of keyword spotting, we propose a novel technique which biases the RNN-T system
towards a specific keyword of interest.
Our systems are compared against a strong sequence-trained, connectionist
temporal classification (CTC) based "keyword-filler" baseline, which is
augmented with a separate phoneme language model. Overall, our RNN-T system
with the proposed biasing technique significantly improves performance over the
baseline system.
| 2,017 | Computation and Language |
Impact of Coreference Resolution on Slot Filling | In this paper, we demonstrate the importance of coreference resolution for
natural language processing on the example of the TAC Slot Filling shared task.
We illustrate the strengths and weaknesses of automatic coreference resolution
systems and provide experimental results to show that they improve performance
in the slot filling end-to-end setting. Finally, we publish KBPchains, a
resource containing automatically extracted coreference chains from the TAC
source corpus in order to support other researchers working on this topic.
| 2,017 | Computation and Language |
Understanding Early Word Learning in Situated Artificial Agents | Neural network-based systems can now learn to locate the referents of words
and phrases in images, answer questions about visual scenes, and execute
symbolic instructions as first-person actors in partially-observable worlds. To
achieve this so-called grounded language learning, models must overcome
challenges that infants face when learning their first words. While it is
notable that models with no meaningful prior knowledge overcome these
obstacles, researchers currently lack a clear understanding of how they do so,
a problem that we attempt to address in this paper. For maximum control and
generality, we focus on a simple neural network-based language learning agent,
trained via policy-gradient methods, which can interpret single-word
instructions in a simulated 3D world. Whilst the goal is not to explicitly
model infant word learning, we take inspiration from experimental paradigms in
developmental psychology and apply some of these to the artificial agent,
exploring the conditions under which established human biases and learning
effects emerge. We further propose a novel method for visualising semantic
representations in the agent.
| 2,019 | Computation and Language |
CANDiS: Coupled & Attention-Driven Neural Distant Supervision | Distant Supervision for Relation Extraction uses heuristically aligned text
data with an existing knowledge base as training data. The unsupervised nature
of this technique allows it to scale to web-scale relation extraction tasks, at
the expense of noise in the training data. Previous work has explored
relationships among instances of the same entity-pair to reduce this noise, but
relationships among instances across entity-pairs have not been fully
exploited. We explore the use of inter-instance couplings based on verb-phrase
and entity type similarities. We propose a novel technique, CANDiS, which casts
distant supervision using inter-instance coupling into an end-to-end neural
network model. CANDiS incorporates an attention module at the instance-level to
model the multi-instance nature of this problem. CANDiS outperforms existing
state-of-the-art techniques on a standard benchmark dataset.
| 2,017 | Computation and Language |
BridgeNets: Student-Teacher Transfer Learning Based on Recursive Neural
Networks and its Application to Distant Speech Recognition | Despite the remarkable progress achieved on automatic speech recognition,
recognizing far-field speeches mixed with various noise sources is still a
challenging task. In this paper, we introduce novel student-teacher transfer
learning, BridgeNet which can provide a solution to improve distant speech
recognition. There are two key features in BridgeNet. First, BridgeNet extends
traditional student-teacher frameworks by providing multiple hints from a
teacher network. Hints are not limited to the soft labels from a teacher
network. The teacher's intermediate feature representations can better guide a
student network to learn how to denoise or dereverberate noisy input. Second,
the proposed recursive architecture in the BridgeNet can iteratively improve
denoising and recognition performance. The experimental results of BridgeNet
showed significant improvements in tackling the distant speech recognition
problem, where it achieved up to 13.24% relative WER reductions on AMI corpus
compared to a baseline neural network without teacher's hints.
| 2,018 | Computation and Language |
Tensor network language model | We propose a new statistical model suitable for machine learning of systems
with long-distance correlations such as natural languages. The model is based
on a directed acyclic graph decorated with multi-linear tensor maps at the
vertices and vector spaces on the edges, called a tensor network. Such tensor networks
have been previously employed for effective numerical computation of the
renormalization group flow on the space of effective quantum field theories and
lattice models of statistical mechanics. We provide explicit algebro-geometric
analysis of the parameter moduli space for tree graphs, discuss model
properties and applications such as statistical translation.
| 2,017 | Computation and Language |
One-shot and few-shot learning of word embeddings | Standard deep learning systems require thousands or millions of examples to
learn a concept, and cannot integrate new concepts easily. By contrast, humans
have an incredible ability to do one-shot or few-shot learning. For instance,
from just hearing a word used in a sentence, humans can infer a great deal
about it, by leveraging what the syntax and semantics of the surrounding words
tell us. Here, we draw inspiration from this to highlight a simple technique
by which deep recurrent networks can similarly exploit their prior knowledge to
learn a useful representation for a new word from little data. This could make
natural language processing systems much more flexible, by allowing them to
learn continually from the new words they encounter.
| 2,018 | Computation and Language |
Deep Residual Learning for Small-Footprint Keyword Spotting | We explore the application of deep residual learning and dilated convolutions
to the keyword spotting task, using the recently-released Google Speech
Commands Dataset as our benchmark. Our best residual network (ResNet)
implementation significantly outperforms Google's previous convolutional neural
networks in terms of accuracy. By varying model depth and width, we can achieve
compact models that also outperform previous small-footprint variants. To our
knowledge, we are the first to examine these approaches for keyword spotting,
and our results establish an open-source state-of-the-art reference to support
the development of future speech-based interfaces.
| 2,018 | Computation and Language |
A Study of All-Convolutional Encoders for Connectionist Temporal
Classification | Connectionist temporal classification (CTC) is a popular sequence prediction
approach for automatic speech recognition that is typically used with models
based on recurrent neural networks (RNNs). We explore whether deep
convolutional neural networks (CNNs) can be used effectively instead of RNNs as
the "encoder" in CTC. CNNs lack an explicit representation of the entire
sequence, but have the advantage that they are much faster to train. We present
an exploration of CNNs as encoders for CTC models, in the context of
character-based (lexicon-free) automatic speech recognition. In particular, we
explore a range of one-dimensional convolutional layers, which are particularly
efficient. We compare the performance of our CNN-based models against typical
RNN-based models in terms of training time, decoding time, model size and word
error rate (WER) on the Switchboard Eval2000 corpus. We find that our CNN-based
models are close in performance to LSTMs, while not matching them, and are much
faster to train and decode.
| 2,018 | Computation and Language |
Inducing Regular Grammars Using Recurrent Neural Networks | Grammar induction is the task of learning a grammar from a set of examples.
Recently, neural networks have been shown to be powerful learning machines that
can identify patterns in streams of data. In this work we investigate their
effectiveness in inducing a regular grammar from data, without any assumptions
about the grammar. We train a recurrent neural network to distinguish between
strings that are in or outside a regular language, and utilize an algorithm for
extracting the learned finite-state automaton. We apply this method to several
regular languages and find unexpected results regarding the connections between
the network's states that may be regarded as evidence for generalization.
| 2,018 | Computation and Language |
Topic Based Sentiment Analysis Using Deep Learning | In this paper, we tackle Sentiment Analysis conditioned on a Topic in
Twitter data using Deep Learning. We propose a 2-tier approach: In the first
phase we create our own Word Embeddings and see that they do perform better
than state-of-the-art embeddings when used with standard classifiers. We then
perform inference on these embeddings to learn more about a word with respect
to all the topics being considered, and also the top n-influencing words for
each topic. In the second phase we use these embeddings to predict the
sentiment of the tweet with respect to a given topic, and all other topics
under discussion.
| 2,017 | Computation and Language |
Phase Conductor on Multi-layered Attentions for Machine Comprehension | Attention models have been intensively studied to improve NLP tasks such as
machine comprehension via both question-aware passage attention model and
self-matching attention model. Our research proposes phase conductor
(PhaseCond) for attention models in two meaningful ways. First, PhaseCond, an
architecture of multi-layered attention models, consists of multiple phases
each implementing a stack of attention layers producing passage representations
and a stack of inner or outer fusion layers regulating the information flow.
Second, we extend and improve the dot-product attention function for PhaseCond
by simultaneously encoding multiple question and passage embedding layers from
different perspectives. We demonstrate the effectiveness of our proposed model
PhaseCond on the SQuAD dataset, showing that our model significantly
outperforms both state-of-the-art single-layered and multiple-layered attention
models. We deepen our results with new findings via both detailed qualitative
analysis and visualized examples showing the dynamic changes through
multi-layered attention models.
| 2,017 | Computation and Language |
A Dual Encoder Sequence to Sequence Model for Open-Domain Dialogue
Modeling | Ever since the successful application of sequence to sequence learning for
neural machine translation systems, interest has surged in its applicability
towards language generation in other problem domains. Recent work has
investigated the use of these neural architectures towards modeling open-domain
conversational dialogue, where it has been found that although these models are
capable of learning a good distributional language model, dialogue coherence is
still of concern. Unlike translation, conversation is much more a one-to-many
mapping from utterance to a response, and it is even more pressing that the
model be aware of the preceding flow of conversation. In this paper we propose
to tackle this problem by introducing previous conversational context in terms
of latent representations of dialogue acts over time. We inject the latent
context representations into a sequence to sequence neural network in the form
of dialog acts using a second encoder to enhance the quality and the coherence
of the conversations generated. The main task of this research work is to show
that adding latent variables that capture discourse relations does indeed
result in more coherent responses when compared to conventional sequence to
sequence models.
| 2,017 | Computation and Language |
Personalized word representations Carrying Personalized Semantics
Learned from Social Network Posts | Distributed word representations have been shown to be very useful in various
natural language processing (NLP) application tasks. These word vectors learned
from huge corpora very often carry both semantic and syntactic information of
words. However, it is well known that each individual user has his own language
patterns because of different factors such as interested topics, friend groups,
social activities, wording habits, etc., which may imply some kind of
personalized semantics. With such personalized semantics, the same word may
imply something slightly different for different users. For example, the word
"Cappuccino" may imply "Leisure", "Joy", or "Excellent" for a user enjoying
coffee, but only a kind of drink for someone else. Such personalized semantics
of course cannot be carried by the standard universal word vectors trained with
huge corpora produced by many people. In this paper, we propose a framework to
train different personalized word vectors for different users based on the very
successful continuous skip-gram model using the social network data posted by
many individual users. In this framework, universal background word vectors are
first learned from the background corpora, and then adapted by the personalized
corpus for each individual user to learn the personalized word vectors. We use
two application tasks to evaluate the quality of the personalized word vectors
obtained in this way, the user prediction task and the sentence completion
task. These personalized word vectors were shown to carry some personalized
semantics and offer improved performance on these two evaluation tasks.
| 2,017 | Computation and Language |
Path-Based Attention Neural Model for Fine-Grained Entity Typing | Fine-grained entity typing aims to assign entity mentions in the free text
with types arranged in a hierarchical structure. Traditional distant
supervision based methods employ a structured data source as a weak supervision
and do not need hand-labeled data, but they neglect the label noise in the
automatically labeled training corpus. Although recent studies use many
features to prune wrong data ahead of training, they suffer from error
propagation and bring much complexity. In this paper, we propose an end-to-end
typing model, called the path-based attention neural model (PAN), to learn
noise-robust performance by leveraging the hierarchical structure of types.
Experiments demonstrate its effectiveness.
| 2,018 | Computation and Language |
Evaluation of Automatic Video Captioning Using Direct Assessment | We present Direct Assessment, a method for manually assessing the quality of
automatically-generated captions for video. Evaluating the accuracy of video
captions is particularly difficult because for any given video clip there is no
definitive ground truth or correct answer against which to measure. Automatic
metrics for comparing automatic video captions against a manual caption such as
BLEU and METEOR, drawn from techniques used in evaluating machine translation,
were used in the TRECVid video captioning task in 2016 but these are shown to
have weaknesses. The work presented here brings human assessment into the
evaluation by crowdsourcing how well a caption describes a video. We
automatically degrade the quality of some sample captions which are assessed
manually and from this we are able to rate the quality of the human assessors,
a factor we take into account in the evaluation. Using data from the TRECVid
video-to-text task in 2016, we show how our direct assessment method is
replicable and robust and should scale to cases where there are many caption-generation
techniques to be evaluated.
| 2,018 | Computation and Language |
Finding Dominant User Utterances And System Responses in Conversations | There are several dialog frameworks which allow manual specification of
intents and rule based dialog flow. The rule based framework provides good
control to dialog designers at the expense of being more time consuming and
laborious. The job of a dialog designer can be reduced if we could identify
pairs of user intents and corresponding responses automatically from prior
conversations between users and agents. In this paper we propose an approach to
find these frequent user utterances (which serve as examples for intents) and
corresponding agent responses. We propose a novel SimCluster algorithm that
extends standard K-means algorithm to simultaneously cluster user utterances
and agent utterances by taking their adjacency information into account. The
method also aligns these clusters to provide pairs of intents and response
groups. We compare our results with those produced by using simple K-means
clustering on a real dataset and observe up to 10% absolute improvement in
F1-scores. Through our experiments on a synthetic dataset, we show that our
algorithm gains more of an advantage over the K-means algorithm when the data
has large variance.
| 2,017 | Computation and Language |
JESC: Japanese-English Subtitle Corpus | In this paper we describe the Japanese-English Subtitle Corpus (JESC). JESC
is a large Japanese-English parallel corpus covering the underrepresented
domain of conversational dialogue. It consists of more than 3.2 million
examples, making it the largest freely available dataset of its kind. The
corpus was assembled by crawling and aligning subtitles found on the web. The
assembly process incorporates a number of novel preprocessing elements to
ensure high monolingual fluency and accurate bilingual alignments. We summarize
its contents and evaluate its quality using human experts and baseline machine
translation (MT) systems.
| 2,018 | Computation and Language |
Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system.
| 2,017 | Computation and Language |
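A minimal sketch of a shared-normalization objective in the spirit described above: candidate-span scores from several paragraphs of the same document are normalized in one softmax, so confidences are comparable across paragraphs. Span scoring and no-answer handling are omitted, and the function signature is hypothetical:

```python
import torch
import torch.nn.functional as F

def shared_norm_loss(paragraph_scores, gold_index):
    """paragraph_scores: list of 1-D tensors of candidate-span scores, one per
    sampled paragraph of the same document.
    gold_index: position of the gold span in the concatenated score vector."""
    all_scores = torch.cat(paragraph_scores)       # normalize jointly across paragraphs
    log_probs = F.log_softmax(all_scores, dim=0)
    return -log_probs[gold_index]                  # negative log-likelihood
```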
Learning neural trans-dimensional random field language models with
noise-contrastive estimation | Trans-dimensional random field language models (TRF LMs) where sentences are
modeled as a collection of random fields, have shown close performance with
LSTM LMs in speech recognition and are computationally more efficient in
inference. However, the training efficiency of neural TRF LMs is not
satisfactory, which limits the scalability of TRF LMs to large training corpora.
In this paper, several techniques on both model formulation and parameter
estimation are proposed to improve the training efficiency and the performance
of neural TRF LMs. First, TRFs are reformulated in the form of exponential
tilting of a reference distribution. Second, noise-contrastive estimation (NCE)
is introduced to jointly estimate the model parameters and normalization
constants. Third, we extend the neural TRF LMs by marrying the deep
convolutional neural network (CNN) and the bidirectional LSTM into the
potential function to extract the deep hierarchical features and
bidirectionally sequential features. Utilizing all the above techniques enables
the successful and efficient training of neural TRF LMs on a 40x larger
training set with only 1/3 of the training time, and further reduces the WER by
a relative 4.7% on top of a strong LSTM LM baseline.
| 2,017 | Computation and Language |
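To make the reformulation above concrete, exponential tilting of a reference distribution takes the standard form below, where $q$ is the reference LM, $\phi(x;\theta)$ the neural potential, and $Z_\theta$ the normalization constant estimated jointly with the parameters via NCE (the notation is the conventional one, not necessarily the paper's):

```latex
p_\theta(x) \;=\; \frac{1}{Z_\theta}\, q(x)\, \exp\bigl(\phi(x;\theta)\bigr)
```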
Sequence-to-Sequence ASR Optimization via Reinforcement Learning | Despite the success of sequence-to-sequence approaches in automatic speech
recognition (ASR) systems, the models still suffer from several problems,
mainly due to the mismatch between the training and inference conditions. In
the sequence-to-sequence architecture, the model is trained to predict the
grapheme of the current time-step given the input of speech signal and the
ground-truth grapheme history of the previous time-steps. However, it remains
unclear how well the model approximates real-world speech during inference.
Thus, generating the whole transcription from scratch based on previous
predictions is complicated and errors can propagate over time. Furthermore, the
model is optimized to maximize the likelihood of training data instead of error
rate evaluation metrics that actually quantify recognition quality. This paper
presents an alternative strategy for training sequence-to-sequence ASR models
by adopting the idea of reinforcement learning (RL). Unlike the standard
training scheme with maximum likelihood estimation, our proposed approach
utilizes the policy gradient algorithm. We can (1) sample the whole
transcription based on the model's prediction in the training process and (2)
directly optimize the model with negative Levenshtein distance as the reward.
Experimental results demonstrate that we significantly improved the performance
compared to a model trained only with maximum likelihood estimation.
| 2,018 | Computation and Language |
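A minimal sketch of the reward and policy-gradient loss described above. The `model.sample()` call is a hypothetical API, and a real training loop would add a baseline, batching, and possibly a mixed maximum-likelihood/RL objective:

```python
import torch

def levenshtein(a, b):
    """Edit distance between two token sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def policy_gradient_loss(sampled_ids, log_probs, reference_ids):
    """REINFORCE loss with negative edit distance as the reward."""
    reward = -float(levenshtein(sampled_ids, reference_ids))
    # Maximizing expected reward == minimizing -(reward * sum of log-probs).
    return -reward * log_probs.sum()

# hyp_ids, hyp_log_probs = model.sample(speech_features)   # hypothetical API
# loss = policy_gradient_loss(hyp_ids, hyp_log_probs, ref_ids)
# loss.backward()
```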
Understanding Hidden Memories of Recurrent Neural Networks | Recurrent neural networks (RNNs) have been successfully applied to various
natural language processing (NLP) tasks and achieved better results than
conventional methods. However, the lack of understanding of the mechanisms
behind their effectiveness limits further improvements on their architectures.
In this paper, we present a visual analytics method for understanding and
comparing RNN models for NLP tasks. We propose a technique to explain the
function of individual hidden state units based on their expected response to
input texts. We then co-cluster hidden state units and words based on the
expected response and visualize co-clustering results as memory chips and word
clouds to provide more structured knowledge on RNNs' hidden states. We also
propose a glyph-based sequence visualization based on aggregate information to
analyze the behavior of an RNN's hidden state at the sentence-level. The
usability and effectiveness of our method are demonstrated through case studies
and reviews from domain experts.
| 2,017 | Computation and Language |
Conceptual Text Summarizer: A new model in continuous vector space | Traditional methods of summarization are not cost-effective or feasible
today. Extractive summarization is a process that helps to extract the most
important sentences from a text automatically and generates a short informative
summary. In this work, we propose an unsupervised method to summarize Persian
texts. This method is a novel hybrid approach that clusters the concepts of the
text using deep learning and traditional statistical methods. First we produce
a word embedding based on the Hamshahri2 corpus and a dictionary of word
frequencies. Then the proposed algorithm extracts the keywords of the document,
clusters its concepts, and finally ranks the sentences to produce the summary.
We evaluated the proposed method on Pasokh single-document corpus using the
ROUGE evaluation measure. Without using any hand-crafted features, our proposed
method achieves state-of-the-art results. We compared our unsupervised method
with the best supervised Persian methods and achieved an overall improvement of
7.5% in ROUGE-2 recall.
| 2,018 | Computation and Language |
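The abstract above describes the ranking step only at a high level. Below is a generic extractive-ranking sketch, not the paper's exact algorithm: each sentence is scored by the cosine similarity between its averaged word vectors and the document centroid, and the top-k sentences are kept. The embedding table, tokenization, and the value of k are illustrative assumptions.

```python
import numpy as np

def sentence_vector(sentence, embeddings, dim=100):
    """Average the word vectors of a sentence; zeros for unknown words."""
    vecs = [embeddings[w] for w in sentence.split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def rank_sentences(sentences, embeddings, k=3, dim=100):
    """Return the k sentences closest (by cosine) to the document centroid."""
    sent_vecs = np.array([sentence_vector(s, embeddings, dim) for s in sentences])
    centroid = sent_vecs.mean(axis=0)
    norms = np.linalg.norm(sent_vecs, axis=1) * (np.linalg.norm(centroid) + 1e-8) + 1e-8
    scores = sent_vecs @ centroid / norms
    top = np.argsort(-scores)[:k]
    return [sentences[i] for i in sorted(top)]  # keep original document order
```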
Machine Translation of Low-Resource Spoken Dialects: Strategies for
Normalizing Swiss German | The goal of this work is to design a machine translation (MT) system for a
low-resource family of dialects, collectively known as Swiss German, which are
widely spoken in Switzerland but seldom written. We collected a significant
number of parallel written resources to start with, up to a total of about 60k
words. Moreover, we identified several other promising data sources for Swiss
German. Then, we designed and compared three strategies for normalizing Swiss
German input in order to address the regional diversity. We found that
character-based neural MT was the best solution for text normalization. In
combination with phrase-based statistical MT, our solution reached 36% BLEU
score when translating from the Bernese dialect. This value, however, decreases
as the test data becomes more distant from the training data, both geographically
and topically. These resources and normalization techniques are a first step
towards full MT of Swiss German dialects.
| 2,018 | Computation and Language |
Unsupervised Neural Machine Translation | In spite of the recent success of neural machine translation (NMT) in
standard benchmarks, the lack of large parallel corpora poses a major practical
problem for many language pairs. There have been several proposals to alleviate
this issue with, for instance, triangulation and semi-supervised learning
techniques, but they still require a strong cross-lingual signal. In this work,
we completely remove the need of parallel data and propose a novel method to
train an NMT system in a completely unsupervised manner, relying on nothing but
monolingual corpora. Our model builds upon the recent work on unsupervised
embedding mappings, and consists of a slightly modified attentional
encoder-decoder model that can be trained on monolingual corpora alone using a
combination of denoising and backtranslation. Despite the simplicity of the
approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014
French-to-English and German-to-English translation. The model can also profit
from small parallel corpora, and attains 21.81 and 15.24 points when combined
with 100,000 parallel sentences, respectively. Our implementation is released
as an open source project.
| 2,018 | Computation and Language |
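The denoising objective mentioned above trains the shared encoder-decoder to reconstruct a sentence from a corrupted copy of itself. A common corruption in this line of work is word dropout plus a bounded local shuffle; the exact noise model and its parameters below are assumptions, not taken from the paper.

```python
import random

def add_noise(tokens, drop_prob=0.1, shuffle_window=3, rng=random):
    """Corrupt a token sequence for denoising autoencoding:
    randomly drop words, then apply a bounded local shuffle."""
    # 1) Word dropout (keep at least one token).
    kept = [t for t in tokens if rng.random() > drop_prob] or [rng.choice(tokens)]
    # 2) Local shuffle: each token moves at most (shuffle_window - 1) positions.
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda p: p[0])]

sentence = "the cat sat on the mat".split()
print(add_noise(sentence))  # e.g. ['the', 'sat', 'cat', 'on', 'mat']
```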
Creation of an Annotated Corpus of Spanish Radiology Reports | This paper presents a new annotated corpus of 513 anonymized radiology
reports written in Spanish. Reports were manually annotated with entities,
negation and uncertainty terms, and relations. The corpus was conceived as an
evaluation resource for named entity recognition and relation extraction
algorithms, and as input for the use of supervised methods. Biomedical
annotated resources are scarce due to confidentiality issues and associated
costs. This work provides some guidelines that could help other researchers to
undertake similar tasks.
| 2,017 | Computation and Language |
Indirect Supervision for Relation Extraction using Question-Answer Pairs | Automatic relation extraction (RE) for types of interest is of great
importance for interpreting massive text corpora in an efficient manner.
Traditional RE models have relied heavily on human-annotated corpora for
training, which is costly to produce and becomes an obstacle when scaling to
more relation types. Thus, more RE systems have shifted to being built upon
training data automatically acquired by linking to
knowledge bases (distant supervision). However, due to the incompleteness of
knowledge bases and the context-agnostic labeling, the training data collected
via distant supervision (DS) can be very noisy. In recent years, as increasing
attention has been brought to tackling question-answering (QA) tasks, user
feedback or datasets of such tasks become more accessible. In this paper, we
propose a novel framework, ReQuest, to leverage question-answer pairs as an
indirect source of supervision for relation extraction, and study how to use
such supervision to reduce noise induced from DS. Our model jointly embeds
relation mentions, types, QA entity mention pairs and text features in two
low-dimensional spaces (RE and QA), where objects with same relation types or
semantically similar question-answer pairs have similar representations. Shared
features connect these two spaces, carrying clearer semantic knowledge from
both sources. ReQuest then uses these learned embeddings to estimate the types
of test relation mentions. We formulate a global objective function and adopt a
novel margin-based QA loss to reduce noise in DS by exploiting semantic
evidence from the QA dataset. Our experiments achieve an average 11%
improvement in F1 score on two public RE datasets combined with the TREC QA
dataset.
| 2,017 | Computation and Language |
Adversarial Advantage Actor-Critic Model for Task-Completion Dialogue
Policy Learning | This paper presents a new method --- adversarial advantage actor-critic
(Adversarial A2C), which significantly improves the efficiency of dialogue
policy learning in task-completion dialogue systems. Inspired by generative
adversarial networks (GAN), we train a discriminator to differentiate
responses/actions generated by dialogue agents from responses/actions by
experts. Then, we incorporate the discriminator as another critic into the
advantage actor-critic (A2C) framework, to encourage the dialogue agent to
explore state-action regions where the agent takes actions similar
to those of the experts. Experimental results in a movie-ticket booking domain
show that the proposed Adversarial A2C can accelerate policy exploration
efficiently.
| 2,018 | Computation and Language |
A generalized parsing framework for Abstract Grammars | This technical report presents a general framework for parsing a variety of
grammar formalisms. We develop a grammar formalism, called an Abstract Grammar,
which is general enough to represent grammars at many levels of the hierarchy,
including Context Free Grammars, Minimalist Grammars, and Generalized
Context-free Grammars. We then develop a single parsing framework which is
capable of parsing grammars which are at least up to GCFGs on the hierarchy.
Our parsing framework exposes a grammar interface, so that it can parse any
particular grammar formalism that can be reduced to an Abstract Grammar.
| 2,018 | Computation and Language |
Improving Social Media Text Summarization by Learning Sentence Weight
Distribution | Recently, encoder-decoder models are widely used in social media text
summarization. However, these models sometimes erroneously select noisy words
from irrelevant sentences as part of a summary, which degrades performance. In
order to suppress irrelevant sentences and focus on key information, we propose
an effective approach based on learning a sentence weight distribution. In our
model, we build a multi-layer perceptron to predict sentence weights. During
training, we use the ROUGE score as a stand-in for the true sentence weight, and
try to minimize the gap between these estimated weights and the predicted weights. In
this way, we encourage our model to focus on the key sentences, which have high
relevance with the summary. Experimental results show that our approach
outperforms baselines on a large-scale social media corpus.
| 2,017 | Computation and Language |
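The training target described above is a ROUGE score computed between each source sentence and the reference summary. The abstract does not name the ROUGE variant, so the sketch below uses plain ROUGE-1 recall (unigram recall) as an illustrative stand-in for the target weight the MLP regresses.

```python
from collections import Counter

def rouge1_recall(sentence, reference):
    """Fraction of reference-summary unigrams covered by a sentence."""
    sent_counts = Counter(sentence.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum(min(c, sent_counts[w]) for w, c in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0

# One target weight per source sentence; the MLP is trained to regress these.
reference = "storm closes coastal roads"
sentences = ["heavy storm closes several coastal roads", "tickets on sale friday"]
weights = [rouge1_recall(s, reference) for s in sentences]
print(weights)  # [1.0, 0.0]
```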
Shallow Discourse Parsing with Maximum Entropy Model | In recent years, more research has been devoted to subtasks of
complete shallow discourse parsing, such as identifying discourse
connectives and their arguments. There is a need for a full
discourse parser that pulls these subtasks together, so we develop a discourse
parser that turns free text into discourse relations. The parser includes a
connective identifier, an argument identifier, a sense classifier, and a
non-explicit identifier, connected in a pipeline. Each component applies
a maximum entropy model with abundant lexical and syntactic features extracted
from the Penn Discourse Treebank (PDTB). The head-based representation of the PDTB is
adopted in the argument identifier, which turns the problem of identifying
the arguments of a discourse connective into finding the head and end of the
arguments. In the non-explicit identifier, contextual features such as
high-frequency words that reflect the discourse relation are
introduced to improve its performance. Compared with
other methods, our experiments achieve considerable performance.
| 2,017 | Computation and Language |
A Sequential Matching Framework for Multi-turn Response Selection in
Retrieval-based Chatbots | We study the problem of response selection for multi-turn conversation in
retrieval-based chatbots. The task requires matching a response candidate with
a conversation context, whose challenges include how to recognize important
parts of the context, and how to model the relationships among utterances in
the context. Existing matching methods may lose important information in
contexts: they can be interpreted within a unified framework in which contexts
are transformed into fixed-length vectors without any interaction with responses
before matching. This analysis motivates us to propose a new matching framework
that can sufficiently carry the important information in contexts into matching
and, at the same time, model the relationships among utterances. The new
framework, which we call a sequential matching framework (SMF), lets each
utterance in a context interact with a response candidate in the first step
and transforms each pair into a matching vector. The matching vectors are then
accumulated following the order of the utterances in the context with a
recurrent neural network (RNN) which models the relationships among the
utterances. The context-response matching is finally calculated with the hidden
states of the RNN. Under SMF, we propose a sequential convolutional network and
sequential attention network and conduct experiments on two public data sets to
test their performance. Experimental results show that both models can
significantly outperform the state-of-the-art matching methods. We also show
that the models are interpretable with visualizations that provide us insights
on how they capture and leverage the important information in contexts for
matching.
| 2,017 | Computation and Language |
Grammar Induction for Minimalist Grammars using Variational Bayesian
Inference : A Technical Report | The following technical report presents a formal approach to probabilistic
minimalist grammar parameter estimation. We describe a formalization of a
minimalist grammar. We then present an algorithm for the application of
variational Bayesian inference to this formalization.
| 2,019 | Computation and Language |
A Neural-Symbolic Approach to Design of CAPTCHA | CAPTCHAs based on reading text are susceptible to machine-learning-based
attacks due to recent significant advances in deep learning (DL). To address
this, this paper promotes image/visual-captioning-based CAPTCHAs, which are
robust against machine-learning-based attacks. To develop
image/visual-captioning-based CAPTCHAs, this paper proposes a new image
captioning architecture by exploiting tensor product representations (TPR), a
structured neural-symbolic framework developed in cognitive science over the
past 20 years, with the aim of integrating DL with explicit language structures
and rules. We call it the Tensor Product Generation Network (TPGN). The key
ideas of TPGN are: 1) unsupervised learning of role-unbinding vectors of words
via a TPR-based deep neural network, and 2) integration of TPR with typical DL
architectures including Long Short-Term Memory (LSTM) models. The novelty of
our approach lies in its ability to generate a sentence and extract partial
grammatical structure of the sentence by using role-unbinding vectors, which
are obtained in an unsupervised manner. Experimental results demonstrate the
effectiveness of the proposed approach.
| 2,018 | Computation and Language |
Whodunnit? Crime Drama as a Case for Natural Language Understanding | In this paper we argue that crime drama exemplified in television programs
such as CSI:Crime Scene Investigation is an ideal testbed for approximating
real-world natural language understanding and the complex inferences associated
with it. We propose to treat crime drama as a new inference task, capitalizing
on the fact that each episode poses the same basic question (i.e., who
committed the crime) and naturally provides the answer when the perpetrator is
revealed. We develop a new dataset based on CSI episodes, formalize perpetrator
identification as a sequence labeling problem, and develop an LSTM-based model
which learns from multi-modal data. Experimental results show that an
incremental inference strategy is key to making accurate guesses as well as
learning from representations fusing textual, visual, and acoustic input.
| 2,017 | Computation and Language |
Unsupervised Machine Translation Using Monolingual Corpora Only | Machine translation has recently achieved impressive performance thanks to
recent advances in deep learning and the availability of large-scale parallel
corpora. There have been numerous attempts to extend these successes to
low-resource language pairs, yet they still require tens of thousands of parallel
sentences. In this work, we take this research direction to the extreme and
investigate whether it is possible to learn to translate even without any
parallel data. We propose a model that takes sentences from monolingual corpora
in two different languages and maps them into the same latent space. By
learning to reconstruct in both languages from this shared feature space, the
model effectively learns to translate without using any labeled data. We
demonstrate our model on two widely used datasets and two language pairs,
reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French
datasets, without using even a single parallel sentence at training time.
| 2,018 | Computation and Language |
Summarizing Dialogic Arguments from Social Media | Online argumentative dialog is a rich source of information on popular
beliefs and opinions that could be useful to companies as well as governmental
or public policy agencies. Compact, easy to read, summaries of these dialogues
would thus be highly valuable. A priori, it is not even clear what form such a
summary should take. Previous work on summarization has primarily focused on
summarizing written texts, where the notion of an abstract of the text is well
defined. We collect gold standard training data consisting of five human
summaries for each of 161 dialogues on the topics of Gay Marriage, Gun Control
and Abortion. We present several different computational models aimed at
identifying segments of the dialogues whose content should be used for the
summary, using linguistic features and Word2vec features with both SVMs and
Bidirectional LSTMs. We show that we can identify the most important arguments
by using the dialog context with a best F-measure of 0.74 for gun control, 0.71
for gay marriage, and 0.67 for abortion.
| 2,017 | Computation and Language |
DCN+: Mixed Objective and Deep Residual Coattention for Question
Answering | Traditional models for question answering optimize using cross entropy loss,
which encourages exact answers at the cost of penalizing nearby or overlapping
answers that are sometimes equally accurate. We propose a mixed objective that
combines cross entropy loss with self-critical policy learning. The objective
uses rewards derived from word overlap to solve the misalignment between
evaluation metric and optimization objective. In addition to the mixed
objective, we improve dynamic coattention networks (DCN) with a deep residual
coattention encoder that is inspired by recent work in deep self-attention and
residual networks. Our proposals improve model performance across question
types and input lengths, especially for long questions that require the
ability to capture long-term dependencies. On the Stanford Question Answering
Dataset, our model achieves state-of-the-art results with 75.1% exact match
accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy
and 86.0% F1.
| 2,017 | Computation and Language |
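The reward in the mixed objective above is derived from word overlap between a sampled answer and the ground truth. A minimal token-level F1 in the style of the SQuAD evaluation script is sketched below; the exact normalization used by the authors is an assumption here.

```python
from collections import Counter

def overlap_f1(prediction, ground_truth):
    """Token-level F1 between a predicted answer span and the gold answer."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Nearby-but-not-exact answers still receive partial credit,
# unlike exact-match cross entropy.
print(overlap_f1("the eiffel tower", "eiffel tower"))  # 0.8
```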
Neural Wikipedian: Generating Textual Summaries from Knowledge Base
Triples | Most people do not interact with Semantic Web data directly. Unless they have
the expertise to understand the underlying technology, they need textual or
visual interfaces to help them make sense of it. We explore the problem of
generating natural language summaries for Semantic Web data. This is
non-trivial, especially in an open-domain context. To address this problem, we
explore the use of neural networks. Our system encodes the information from a
set of triples into a vector of fixed dimensionality and generates a textual
summary by conditioning the output on the encoded vector. We train and evaluate
our models on two corpora of loosely aligned Wikipedia snippets and DBpedia and
Wikidata triples with promising results.
| 2,017 | Computation and Language |
Keyword-based Query Comprehending via Multiple Optimized-Demand
Augmentation | In this paper, we consider the problem of machine reading task when the
questions are in the form of keywords, rather than natural language. In recent
years, researchers have achieved significant success on machine reading
comprehension tasks, such as SQuAD and TriviaQA. These datasets provide a
natural language question sentence and a pre-selected passage, and the goal is
to answer the question according to the passage. However, in the situation of
interacting with machines by means of text, people are more likely to raise a
query in the form of several keywords rather than a complete sentence.
Keyword-based query comprehension is a new challenge, because small variations
to a question may completely change its semantic meaning and thus yield
different answers. In this paper, we propose a novel neural network system that
consists of a Demand Optimization Model, based on passage-attention neural
machine translation, and a Reader Model that can find the answer given the
optimized question. The Demand Optimization Model optimizes the original query
and outputs multiple reconstructed questions; the Reader Model then takes the
new questions as input and locates the answers in the passage. To make
predictions robust, an evaluation mechanism scores the reconstructed
questions so that the final answer strikes a good balance between the quality of
the Demand Optimization Model and that of the Reader Model. Experimental results on
several datasets show that our framework significantly improves multiple strong
baselines on this challenging task.
| 2,017 | Computation and Language |
Improved Text Language Identification for the South African Languages | Virtual assistants and text chatbots have recently been gaining popularity.
Given the short message nature of text-based chat interactions, the language
identification systems of these bots might only have 15 or 20 characters to
make a prediction. However, accurate text language identification is important,
especially in the early stages of many multilingual natural language processing
pipelines.
This paper investigates the use of a naive Bayes classifier to accurately
predict the language family that a piece of text belongs to, combined with a
lexicon-based classifier to distinguish the specific South African language
that the text is written in. This approach leads to a 31% reduction in the
language detection error.
In the spirit of reproducible research, the training and testing datasets as
well as the code are published on GitHub. We hope they will be useful for
creating a text language identification shared task for South African languages.
| 2,017 | Computation and Language |
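The two-stage classifier described above (a naive Bayes model that picks the language family, then a lexicon lookup that picks the specific South African language) can be approximated in a few lines with scikit-learn. The training snippets, family labels, and lexicons below are toy placeholders, not the published dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Stage 1: character n-gram naive Bayes over language families (toy data).
train_texts = ["die kat sit op die mat", "the cat sits on the mat",
               "umntwana uyadlala", "ngwana o a bapala"]
train_families = ["germanic", "germanic", "nguni", "sotho-tswana"]
family_clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)), MultinomialNB())
family_clf.fit(train_texts, train_families)

# Stage 2: within the predicted family, pick the language whose lexicon
# covers the most tokens of the input.
lexicons = {
    "germanic": {"afrikaans": {"die", "kat", "op"}, "english": {"the", "cat", "on"}},
    "nguni": {"isizulu": {"umntwana"}, "isixhosa": {"umntwana"}},
    "sotho-tswana": {"setswana": {"ngwana"}, "sepedi": {"ngwana"}},
}

def identify(text):
    family = family_clf.predict([text])[0]
    tokens = set(text.lower().split())
    return max(lexicons[family], key=lambda lang: len(tokens & lexicons[family][lang]))

print(identify("die kat slaap op die mat"))  # -> afrikaans (toy example)
```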
Paraphrase Generation with Deep Reinforcement Learning | Automatic generation of paraphrases from a given sentence is an important yet
challenging task in natural language processing (NLP), and plays a key role in
a number of applications such as question answering, search, and dialogue. In
this paper, we present a deep reinforcement learning approach to paraphrase
generation. Specifically, we propose a new framework for the task, which
consists of a \textit{generator} and an \textit{evaluator}, both of which are
learned from data. The generator, built as a sequence-to-sequence learning
model, can produce paraphrases given a sentence. The evaluator, constructed as
a deep matching model, can judge whether two sentences are paraphrases of each
other. The generator is first trained by deep learning and then further
fine-tuned by reinforcement learning in which the reward is given by the
evaluator. For the learning of the evaluator, we propose two methods based on
supervised learning and inverse reinforcement learning respectively, depending
on the type of available training data. An empirical study shows that the learned
evaluator can guide the generator to produce more accurate paraphrases.
Experimental results demonstrate that the proposed models (the generators)
outperform the state-of-the-art methods in paraphrase generation in both
automatic evaluation and human evaluation.
| 2,018 | Computation and Language |
Towards Automatic Generation of Entertaining Dialogues in Chinese
Crosstalks | Crosstalk, also known by its Chinese name xiangsheng, is a traditional
Chinese comedic performing art featuring jokes and funny dialogues, and one of
China's most popular cultural elements. It is typically in the form of a
dialogue between two performers for the purpose of bringing laughter to the
audience, with one person acting as the leading comedian and the other as the
supporting role. Though general dialogue generation has been widely explored in
previous studies, it is unknown whether such entertaining dialogues can be
automatically generated or not. In this paper, we for the first time
investigate the possibility of automatic generation of entertaining dialogues
in Chinese crosstalks. Given the utterance of the leading comedian in each
dialogue, our task aims to generate the replying utterance of the supporting
role. We propose a humor-enhanced translation model to address this task and
human evaluation results demonstrate the efficacy of our proposed model. The
feasibility of automatic entertaining dialogue generation is also verified.
| 2,017 | Computation and Language |
Improving Neural Machine Translation through Phrase-based Forced
Decoding | Compared to traditional statistical machine translation (SMT), neural machine
translation (NMT) often sacrifices adequacy for the sake of fluency. We propose
a method to combine the advantages of traditional SMT and NMT by exploiting an
existing phrase-based SMT model to compute the phrase-based decoding cost for
an NMT output and then using this cost to rerank the n-best NMT outputs. The
main challenge in implementing this approach is that NMT outputs may not be in
the search space of the standard phrase-based decoding algorithm, because the
search space of phrase-based SMT is limited by the phrase-based translation
rule table. We propose a soft forced decoding algorithm, which can always
successfully find a decoding path for any NMT output. We show that using the
forced decoding cost to rerank the NMT outputs can successfully improve
translation quality on four different language pairs.
| 2,017 | Computation and Language |
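The reranking step described above combines the NMT model score of each n-best candidate with the phrase-based forced-decoding cost. A minimal sketch of that combination follows; the interpolation weight and the sign convention of the cost are assumptions, not the paper's tuned values.

```python
def rerank(nbest, forced_decoding_cost, weight=0.5):
    """Pick the candidate maximizing a weighted sum of NMT log-probability
    and negated phrase-based forced-decoding cost.

    nbest: list of (translation, nmt_log_prob) pairs.
    forced_decoding_cost: callable returning a non-negative cost.
    """
    def combined(item):
        translation, nmt_score = item
        return nmt_score - weight * forced_decoding_cost(translation)
    return max(nbest, key=combined)[0]

# Toy usage with a stand-in cost function.
nbest = [("he goes home", -1.2), ("he go home", -1.0)]
cost = lambda t: 0.0 if t == "he goes home" else 2.0
print(rerank(nbest, cost))  # "he goes home"
```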
Semantic Structure and Interpretability of Word Embeddings | Dense word embeddings, which encode the semantic meanings of words in low-dimensional
vector spaces, have become very popular in natural language
processing (NLP) research due to their state-of-the-art performance in many
NLP tasks. Word embeddings are substantially successful in capturing semantic
relations among words, so a meaningful semantic structure must be present in
the respective vector spaces. However, in many cases, this semantic structure
is broadly and heterogeneously distributed across the embedding dimensions,
which makes interpretation a big challenge. In this study, we propose a
statistical method to uncover the latent semantic structure in the dense word
embeddings. To perform our analysis we introduce a new dataset (SEMCAT) that
contains more than 6500 words semantically grouped under 110 categories. We
further propose a method to quantify the interpretability of the word
embeddings; the proposed method is a practical alternative to the classical
word intrusion test that requires human intervention.
| 2,018 | Computation and Language |
Generalization without systematicity: On the compositional skills of
sequence-to-sequence recurrent networks | Humans can understand and produce new utterances effortlessly, thanks to
their compositional skills. Once a person learns the meaning of a new verb
"dax," he or she can immediately understand the meaning of "dax twice" or "sing
and dax." In this paper, we introduce the SCAN domain, consisting of a set of
simple compositional navigation commands paired with the corresponding action
sequences. We then test the zero-shot generalization capabilities of a variety
of recurrent neural networks (RNNs) trained on SCAN with sequence-to-sequence
methods. We find that RNNs can make successful zero-shot generalizations when
the differences between training and test commands are small, so that they can
apply "mix-and-match" strategies to solve the task. However, when
generalization requires systematic compositional skills (as in the "dax"
example above), RNNs fail spectacularly. We conclude with a proof-of-concept
experiment in neural machine translation, suggesting that lack of systematicity
might be partially responsible for neural networks' notorious training data
thirst.
| 2,018 | Computation and Language |
JSUT corpus: free large-scale Japanese speech corpus for end-to-end
speech synthesis | Thanks to improvements in machine learning techniques, including deep
learning, a free large-scale speech corpus that can be shared between academic
institutions and commercial companies plays an important role. However, such a
corpus for Japanese speech synthesis does not exist. In this paper, we designed
a novel Japanese speech corpus, named the "JSUT corpus," that is aimed at
achieving end-to-end speech synthesis. The corpus consists of 10 hours of
reading-style speech data and its transcription and covers all of the main
pronunciations of daily-use Japanese characters. In this paper, we describe how
we designed and analyzed the corpus. The corpus is freely available online.
| 2,017 | Computation and Language |
Learning with Latent Language | The named concepts and compositional operators present in natural language
provide a rich source of information about the kinds of abstractions humans use
to navigate the world. Can this linguistic background knowledge improve the
generality and efficiency of learned classifiers and control policies? This
paper aims to show that using the space of natural language strings as a
parameter space is an effective way to capture natural task structure. In a
pretraining phase, we learn a language interpretation model that transforms
inputs (e.g. images) into outputs (e.g. labels) given natural language
descriptions. To learn a new concept (e.g. a classifier), we search directly in
the space of descriptions to minimize the interpreter's loss on training
examples. Crucially, our models do not require language data to learn these
concepts: language is used only in pretraining to impose structure on
subsequent learning. Results on image classification, text editing, and
reinforcement learning show that, in all settings, models with a linguistic
parameterization outperform those without.
| 2,017 | Computation and Language |
Evaluating Discourse Phenomena in Neural Machine Translation | For machine translation to tackle discourse phenomena, models must have
access to extra-sentential linguistic context. There has been recent interest
in modelling context in neural machine translation (NMT), but models have been
principally evaluated with standard automatic metrics, poorly adapted to
evaluating discourse phenomena. In this article, we present hand-crafted,
discourse test sets, designed to test the models' ability to exploit previous
source and target sentences. We investigate the performance of recently
proposed multi-encoder NMT models trained on subtitles for English to French.
We also explore a novel way of exploiting context from the previous sentence.
Despite gains using BLEU, multi-encoder models give limited improvement in the
handling of discourse phenomena: 50% accuracy on our coreference test set and
53.5% for coherence/cohesion (compared to a non-contextual baseline of 50%). A
simple strategy of decoding the concatenation of the previous and current
sentence leads to good performance, and our novel strategy of multi-encoding
and decoding of two sentences leads to the best performance (72.5% for
coreference and 57% for coherence/cohesion), highlighting the importance of
target-side context.
| 2,018 | Computation and Language |
Uncovering Latent Style Factors for Expressive Speech Synthesis | Prosodic modeling is a core problem in speech synthesis. The key challenge is
producing desirable prosody from textual input containing only phonetic
information. In this preliminary study, we introduce the concept of "style
tokens" in Tacotron, a recently proposed end-to-end neural speech synthesis
model. Using style tokens, we aim to extract independent prosodic styles from
training data. We show that without annotation data or an explicit supervision
signal, our approach can automatically learn a variety of prosodic variations
in a purely data-driven way. Importantly, each style token corresponds to a
fixed style factor regardless of the given text sequence. As a result, we can
control the prosodic style of synthetic speech in a somewhat predictable and
globally consistent way.
| 2,017 | Computation and Language |
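The abstract above does not spell out how the style tokens are combined, so the sketch below uses one common formulation as an assumption: a softmax-weighted sum over a learned token bank yields a single style embedding that conditions the synthesizer.

```python
import numpy as np

def style_embedding(attention_logits, token_bank):
    """Combine a bank of learned style tokens into a single style vector.

    attention_logits: shape (num_tokens,), produced elsewhere (e.g. inferred
        from reference audio or chosen by hand at synthesis time).
    token_bank: shape (num_tokens, style_dim), the learned style tokens.
    """
    weights = np.exp(attention_logits - attention_logits.max())
    weights /= weights.sum()
    return weights @ token_bank

rng = np.random.default_rng(0)
bank = rng.normal(size=(10, 16))            # 10 style tokens of dimension 16
logits = np.eye(10)[3] * 5.0                # strongly favor token 3
print(style_embedding(logits, bank).shape)  # (16,)
```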
Text Annotation Graphs: Annotating Complex Natural Language Phenomena | This paper introduces a new web-based software tool for annotating text, Text
Annotation Graphs, or TAG. It provides functionality for representing complex
relationships between words and word phrases that are not available in other
software tools, including the ability to define and visualize relationships
between the relationships themselves (semantic hypergraphs). Additionally, we
include an approach to representing text annotations in which annotation
subgraphs, or semantic summaries, are used to show relationships outside of the
sequential context of the text itself. Users can use these subgraphs to quickly
find similar structures within the current document or external annotated
documents. Initially, TAG was developed to support information extraction tasks
on a large database of biomedical articles. However, our software is flexible
enough to support a wide range of annotation tasks for any domain. Examples are
provided that showcase TAG's capabilities on morphological parsing and event
extraction tasks. The TAG software is available at:
https://github.com/CreativeCodingLab/TextAnnotationGraphs.
| 2,018 | Computation and Language |
Just ASK: Building an Architecture for Extensible Self-Service Spoken
Language Understanding | This paper presents the design of the machine learning architecture that
underlies the Alexa Skills Kit (ASK), a large-scale Spoken Language
Understanding (SLU) Software Development Kit (SDK) that enables developers to
extend the capabilities of Amazon's virtual assistant, Alexa. At Amazon, the
infrastructure powers over 25,000 skills deployed through the ASK, as well as
AWS's Amazon Lex SLU Service. The ASK emphasizes flexibility, predictability
and a rapid iteration cycle for third party developers. It imposes inductive
biases that allow it to learn robust SLU models from extremely small and sparse
datasets and, in doing so, removes significant barriers to entry for software
developers and dialogue systems researchers.
| 2,018 | Computation and Language |
Extracting an English-Persian Parallel Corpus from Comparable Corpora | Parallel data are an important part of a reliable Statistical Machine
Translation (SMT) system. The more of these data are available, the better the
quality of the SMT system. However, for some language pairs such as
Persian-English, parallel sources of this kind are scarce. In this paper, a
bidirectional method is proposed to extract parallel sentences from English and
Persian document-aligned Wikipedia. Two machine translation systems are
employed to translate from Persian to English and the reverse, after which an IR
system is used to measure the similarity of the translated sentences. Adding
the extracted sentences to the training data of the existing SMT systems is
shown to improve the quality of the translation. Furthermore, the proposed
method slightly outperforms the one-directional approach. The extracted corpus
consists of about 200,000 sentences which have been sorted by their degree of
similarity calculated by the IR system and is freely available for public
access on the Web.
| 2,019 | Computation and Language |
SRL4ORL: Improving Opinion Role Labeling using Multi-task Learning with
Semantic Role Labeling | For over a decade, machine learning has been used to extract
opinion-holder-target structures from text to answer the question "Who
expressed what kind of sentiment towards what?". Recent neural approaches do
not outperform the state-of-the-art feature-based models for Opinion Role
Labeling (ORL). We suspect this is due to the scarcity of labeled training data
and address this issue using different multi-task learning (MTL) techniques
with a related task which has substantially more data, i.e. Semantic Role
Labeling (SRL). We show that two MTL models improve significantly over the
single-task model for labeling of both holders and targets, on the development
and the test sets. We found that the vanilla MTL model, which makes predictions
using only shared ORL and SRL features, performs best. With deeper analysis,
we determine what works and what might be done to make further improvements for
ORL.
| 2,018 | Computation and Language |
Multi-Mention Learning for Reading Comprehension with Neural Cascades | Reading comprehension is a challenging task, especially when executed over
longer documents or across multiple evidence documents, where the answer is likely to
reoccur. Existing neural architectures typically do not scale to the entire
evidence, and hence, resort to selecting a single passage in the document
(either via truncation or other means), and carefully searching for the answer
within that passage. However, in some cases, this strategy can be suboptimal,
since by focusing on a specific passage, it becomes difficult to leverage
multiple mentions of the same answer throughout the document. In this work, we
take a different approach by constructing lightweight models that are combined
in a cascade to find the answer. Each submodel consists only of feed-forward
networks equipped with an attention mechanism, making it trivially
parallelizable. We show that our approach can scale to approximately an order
of magnitude larger evidence documents and can aggregate information at the
representation level from multiple mentions of each answer candidate across the
document. Empirically, our approach achieves state-of-the-art performance on
both the Wikipedia and web domains of the TriviaQA dataset, outperforming more
complex, recurrent architectures.
| 2,018 | Computation and Language |
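The cascade above aggregates evidence from multiple mentions of each answer candidate. The paper aggregates at the representation level; the simpler score-level variant below, a log-sum-exp (soft maximum) over per-mention scores, is shown only to illustrate the idea.

```python
import numpy as np

def aggregate_candidate_scores(mention_scores):
    """Combine per-mention scores into one score per answer candidate.

    mention_scores: dict mapping candidate string -> list of scores, one per
        mention of that candidate anywhere in the evidence documents.
    """
    return {cand: float(np.logaddexp.reduce(scores))
            for cand, scores in mention_scores.items()}

scores = {"nile": [2.1, 1.8, 0.3], "amazon": [2.5]}
agg = aggregate_candidate_scores(scores)
print(max(agg, key=agg.get))  # "nile": repeated mentions outweigh one stronger mention
```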
A Comparison of Feature-Based and Neural Scansion of Poetry | Automatic analysis of poetic rhythm is a challenging task that involves
linguistics, literature, and computer science. When the language to be analyzed
is known, rule-based systems or data-driven methods can be used. In this paper,
we analyze poetic rhythm in English and Spanish. We show that the
representations of data learned from character-based neural models are more
informative than the ones from hand-crafted features, and that a
Bi-LSTM+CRF model produces state-of-the-art accuracy on scansion of poetry in
two languages. Results also show that the information about whole word
structure, and not just independent syllables, is highly informative for
performing scansion.
| 2,021 | Computation and Language |
Towards Neural Machine Translation with Partially Aligned Corpora | While neural machine translation (NMT) has become the new paradigm, the
parameter optimization requires large-scale parallel data which is scarce in
many domains and language pairs. In this paper, we address a new translation
scenario in which there exist only monolingual corpora and phrase pairs. We
propose a new method towards translation with partially aligned sentence pairs
which are derived from the phrase pairs and monolingual corpora. To make full
use of the partially aligned corpora, we adapt the conventional NMT training
method in two aspects. On one hand, different generation strategies are
designed for aligned and unaligned target words. On the other hand, a different
objective function is designed to model the partially aligned parts. The
experiments demonstrate that our method can achieve a relatively good result in
such a translation scenario, and tiny bitexts can boost translation quality to
a large extent.
| 2,017 | Computation and Language |
Dual Language Models for Code Switched Speech Recognition | In this work, we present a simple and elegant approach to language modeling
for bilingual code-switched text. Since code-switching is a blend of two or
more different languages, a standard bilingual language model can be improved
upon by using structures of the monolingual language models. We propose a novel
technique called dual language models, which involves building two
complementary monolingual language models and combining them using a
probabilistic model for switching between the two. We evaluate the efficacy of
our approach using a conversational Mandarin-English speech corpus. We prove
the robustness of our model by showing significant improvements in perplexity
measures over the standard bilingual language model without the use of any
external information. Similar consistent improvements are also reflected in
automatic speech recognition error rates.
| 2,018 | Computation and Language |
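The dual language model above combines two monolingual LMs with a probabilistic model for switching between them. Its exact parameterization is not given in the abstract, so the sketch below uses toy unigram LMs, a known language tag per word, and a single Bernoulli switch probability as illustrative assumptions.

```python
import math

def dual_lm_logprob(tokens, lm_a, lm_b, lang_of, p_switch=0.1):
    """Log-probability of a code-switched sentence under two monolingual
    unigram LMs plus a Bernoulli switch model between consecutive tokens.

    lm_a, lm_b: dicts mapping word -> probability in language A / B.
    lang_of: dict mapping word -> 'A' or 'B' (assumed known here).
    """
    logp = 0.0
    prev_lang = None
    for w in tokens:
        lang = lang_of[w]
        prob = (lm_a if lang == "A" else lm_b).get(w, 1e-6)
        if prev_lang is not None:
            trans = p_switch if lang != prev_lang else 1.0 - p_switch
            logp += math.log(trans)
        logp += math.log(prob)
        prev_lang = lang
    return logp

lm_en = {"i": 0.05, "like": 0.02}     # toy English unigram LM
lm_zh = {"拉面": 0.01}                 # toy Mandarin unigram LM
lang = {"i": "A", "like": "A", "拉面": "B"}
print(dual_lm_logprob(["i", "like", "拉面"], lm_en, lm_zh, lang))
```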
Compressing Word Embeddings via Deep Compositional Code Learning | Natural language processing (NLP) models often require a massive number of
parameters for word embeddings, resulting in a large storage or memory
footprint. Deploying neural NLP models to mobile devices requires compressing
the word embeddings without any significant sacrifices in performance. For this
purpose, we propose to construct the embeddings with few basis vectors. For
each word, the composition of basis vectors is determined by a hash code. To
maximize the compression rate, we adopt the multi-codebook quantization
approach instead of a binary coding scheme. Each code is composed of multiple
discrete numbers, such as (3, 2, 1, 8), where the value of each component is
limited to a fixed range. We propose to directly learn the discrete codes in an
end-to-end neural network by applying the Gumbel-softmax trick. Experiments
show the compression rate achieves 98% in a sentiment analysis task and 94% ~
99% in machine translation tasks without performance loss. In both tasks, the
proposed method can improve the model performance by slightly lowering the
compression rate. Compared to other approaches such as character-level
segmentation, the proposed method is language-independent and does not require
modifications to the network architecture.
| 2,017 | Computation and Language |
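The reconstruction implied by the abstract above is that each word's embedding is composed from a few basis vectors selected by its discrete code. The sketch below assumes the composition is a simple sum over codebooks (the code and codebook sizes are illustrative); learning the codes with the Gumbel-softmax trick is omitted.

```python
import numpy as np

def decode_embedding(code, codebooks):
    """Rebuild a word embedding from its discrete code.

    code: tuple of M integers, e.g. (3, 2, 1, 8), one index per codebook.
    codebooks: array of shape (M, K, dim): M codebooks of K basis vectors each.
    """
    return sum(codebooks[m, idx] for m, idx in enumerate(code))

M, K, dim = 4, 16, 300                        # 4 codebooks of 16 vectors each
codebooks = np.random.default_rng(0).normal(size=(M, K, dim))
word_code = (3, 2, 1, 8)                      # stored per word instead of 300 floats
embedding = decode_embedding(word_code, codebooks)
print(embedding.shape)                        # (300,)
# Storage per word drops from dim floats to M small integers, which is where
# the high compression rates reported in the abstract come from.
```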
One Model to Rule them all: Multitask and Multilingual Modelling for
Lexical Analysis | When learning a new skill, you take advantage of your preexisting skills and
knowledge. For instance, if you are a skilled violinist, you will likely have
an easier time learning to play cello. Similarly, when learning a new language
you take advantage of the languages you already speak. For instance, if your
native language is Norwegian and you decide to learn Dutch, the lexical overlap
between these two languages will likely benefit your rate of language
acquisition. This thesis deals with the intersection of learning multiple tasks
and learning multiple languages in the context of Natural Language Processing
(NLP), which can be defined as the study of computational processing of human
language. Although these two types of learning may seem different on the
surface, we will see that they share many similarities.
The traditional approach in NLP is to consider a single task for a single
language at a time. However, recent advances allow for broadening this
approach, by considering data for multiple tasks and languages simultaneously.
This is an important approach to explore further as the key to improving the
reliability of NLP, especially for low-resource languages, is to take advantage
of all relevant data whenever possible. In doing so, the hope is that in the
long term, low-resource languages can benefit from the advances made in NLP
which are currently to a large extent reserved for high-resource languages.
This, in turn, may then have positive consequences for, e.g., language
preservation, as speakers of minority languages will face less
pressure to use high-resource languages. In the short term, answering the
specific research questions posed should be of use to NLP researchers working
towards the same goal.
| 2,017 | Computation and Language |
Learning Filterbanks from Raw Speech for Phone Recognition | We train a bank of complex filters that operates on the raw waveform and is
fed into a convolutional neural network for end-to-end phone recognition. These
time-domain filterbanks (TD-filterbanks) are initialized as an approximation of
mel-filterbanks, and then fine-tuned jointly with the remaining convolutional
architecture. We perform phone recognition experiments on TIMIT and show that
for several architectures, models trained on TD-filterbanks consistently
outperform their counterparts trained on comparable mel-filterbanks. We get our
best performance by learning all front-end steps, from pre-emphasis up to
averaging. Finally, we observe that the filters at convergence have an
asymmetric impulse response, and that some of them remain almost analytic.
| 2,018 | Computation and Language |
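The initialization described above starts the learnable front end from an approximation of mel-filterbanks. A common construction of the standard triangular mel filters is sketched below as background; the paper's actual filters are complex and operate in the time domain, so this is not the authors' code.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=40, n_fft=512, sample_rate=16000):
    """Triangular mel filters over the magnitude-spectrum bins."""
    low, high = hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0)
    mel_points = np.linspace(low, high, n_filters + 2)   # evenly spaced in mel
    hz_points = mel_to_hz(mel_points)
    bins = np.floor((n_fft + 1) * hz_points / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):                     # rising slope
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):                    # falling slope
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

print(mel_filterbank().shape)  # (40, 257)
```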
"Attention" for Detecting Unreliable News in the Information Age | An Unreliable news is any piece of information which is false or misleading,
deliberately spread to promote political, ideological and financial agendas.
Recently the problem of unreliable news has got a lot of attention as the
number instances of using news and social media outlets for propaganda have
increased rapidly. This poses a serious threat to society, which calls for
technology to automatically and reliably identify unreliable news sources. This
paper is an effort made in this direction to build systems for detecting
unreliable news articles. In this paper, various NLP algorithms were built and
evaluated on Unreliable News Data 2017 dataset. Variants of hierarchical
attention networks (HAN) are presented for encoding and classifying news
articles which achieve the best results of 0.944 ROC-AUC. Finally, Attention
layer weights are visualized to understand and give insight into the decisions
made by HANs. The results obtained are very promising and encouraging to deploy
and use these systems in the real world to mitigate the problem of unreliable
news.
| 2,017 | Computation and Language |
Predicting Discharge Medications at Admission Time Based on Deep
Learning | Predicting discharge medications right after a patient is admitted is an
important clinical decision, which provides physicians with guidance on what
type of medication regimen to plan for and what changes to the initial
medication may occur during an inpatient stay. It also facilitates the medication
reconciliation process by easing the detection of medication discrepancies at
discharge time, improving patient safety. However, since the information
available upon admission is limited and patients' condition may evolve during
an inpatient stay, these predictions could be a difficult decision for
physicians to make. In this work, we investigate how to leverage deep learning
technologies to assist physicians in predicting discharge medications based on
information documented in the admission note. We build a convolutional neural
network which takes an admission note as input and predicts the medications
placed on the patient at discharge time. Our method is able to distill semantic
patterns from unstructured and noisy texts, and is capable of capturing the
pharmacological correlations among medications. We evaluate our method on 25K
patient visits and compare with 4 strong baselines. Our method demonstrates a
20% increase in macro-averaged F1 score over the best baseline.
| 2,017 | Computation and Language |
Language as a matrix product state | We propose a statistical model for natural language that begins by
considering language as a monoid, then representing it in complex matrices with
a compatible translation invariant probability measure. We interpret the
probability measure as arising via the Born rule from a translation invariant
matrix product state.
| 2,017 | Computation and Language |
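As background for the abstract above: in the usual matrix-product-state picture, a translation-invariant MPS assigns one matrix per vocabulary symbol, shared across all positions, and the Born rule turns the contracted product into a probability. The boundary vectors and the normalization below are generic MPS conventions assumed for illustration, not notation quoted from the paper.

```latex
p(w_1 w_2 \cdots w_n) \;=\;
  \frac{\bigl|\langle \ell \mid A_{w_1} A_{w_2} \cdots A_{w_n} \mid r \rangle\bigr|^{2}}
       {\sum_{v_1,\dots,v_n} \bigl|\langle \ell \mid A_{v_1} A_{v_2} \cdots A_{v_n} \mid r \rangle\bigr|^{2}},
  \qquad A_w \in \mathbb{C}^{D \times D} \text{ for every vocabulary item } w.
```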
Deep Stacking Networks for Low-Resource Chinese Word Segmentation with
Transfer Learning | In recent years, neural networks have proven to be effective in Chinese word
segmentation. However, this promising performance relies on large-scale
training data. Neural networks with conventional architectures cannot achieve
the desired results in low-resource datasets due to the lack of labelled
training data. In this paper, we propose a deep stacking framework to improve
the performance on word segmentation tasks with insufficient data by
integrating datasets from diverse domains. Our framework consists of two parts,
domain-based models and deep stacking networks. The domain-based models are
used to learn knowledge from different datasets. The deep stacking networks are
designed to integrate domain-based models. To reduce model conflicts, we
innovatively add communication paths among models and design various structures
of deep stacking networks, including Gaussian-based Stacking Networks,
Concatenate-based Stacking Networks, Sequence-based Stacking Networks and
Tree-based Stacking Networks. We conduct experiments on six low-resource
datasets from various domains. Our proposed framework shows significant
performance improvements on all datasets compared with several strong
baselines.
| 2,017 | Computation and Language |
Towards Linguistically Generalizable NLP Systems: A Workshop and Shared
Task | This paper presents a summary of the first Workshop on Building
Linguistically Generalizable Natural Language Processing Systems, and the
associated Build It Break It, The Language Edition shared task. The goal of
this workshop was to bring together researchers in NLP and linguistics with a
shared task aimed at testing the generalizability of NLP systems beyond the
distributions of their training data. We describe the motivation, setup, and
participation of the shared task, provide discussion of some highlighted
results, and discuss lessons learned.
| 2,017 | Computation and Language |
Learning Word Embeddings from Speech | In this paper, we propose a novel deep neural network architecture,
Sequence-to-Sequence Audio2Vec, for unsupervised learning of fixed-length
vector representations of audio segments excised from a speech corpus, where
the vectors contain semantic information pertaining to the segments, and are
close to other vectors in the embedding space if their corresponding segments
are semantically similar. The design of the proposed model is based on the RNN
Encoder-Decoder framework, and borrows the methodology of continuous skip-grams
for training. The learned vector representations are evaluated on 13 widely
used word similarity benchmarks, and achieved competitive results to that of
GloVe. The biggest advantage of the proposed model is its capability of
extracting semantic information of audio segments taken directly from raw
speech, without relying on any other modalities such as text or images, which
are challenging and expensive to collect and annotate.
| 2,017 | Computation and Language |
Robust Speech Recognition Using Generative Adversarial Networks | This paper describes a general, scalable, end-to-end framework that uses the
generative adversarial network (GAN) objective to enable robust speech
recognition. Encoders trained with the proposed approach enjoy improved
invariance by learning to map noisy audio to the same embedding space as that
of clean audio. Unlike previous methods, the new framework does not rely on
domain expertise or simplifying assumptions as are often needed in signal
processing, and directly encourages robustness in a data-driven way. We show
the new approach improves simulated far-field speech recognition of vanilla
sequence-to-sequence models without specialized front-ends or preprocessing.
| 2,017 | Computation and Language |