Titles | Abstracts | Years | Categories
---|---|---|---|
Technical Report: Adjudication of Coreference Annotations via Answer Set
Optimization | We describe the first automatic approach for merging coreference annotations
obtained from multiple annotators into a single gold standard. This merging is
subject to certain linguistic hard constraints and optimization criteria that
prefer solutions with minimal divergence from annotators. The representation
involves an equivalence relation over a large number of elements. We use Answer
Set Programming to describe two representations of the problem and four
objective functions suitable for different datasets. We provide two
structurally different real-world benchmark datasets based on the METU-Sabanci
Turkish Treebank and we report our experiences in using the Gringo, Clasp, and
Wasp tools for computing optimal adjudication results on these datasets.
| 2018 | Computation and Language |
Adapting predominant and novel sense discovery algorithms for
identifying corpus-specific sense differences | Word senses are not static and may have temporal, spatial or corpus-specific
scopes. Identifying such scopes could greatly benefit existing WSD systems.
In this paper, while studying corpus specific word senses, we adapt three
existing predominant and novel-sense discovery algorithms to identify these
corpus-specific senses. We make use of text data available in the form of
millions of digitized books and newspaper archives as two different sources of
corpora and propose automated methods to identify corpus-specific word senses
at various time points. We conduct an extensive and thorough human judgment
experiment to rigorously evaluate and compare the performance of these
approaches. After adaptation, the outputs of the three algorithms are in the same
format and their accuracy results are comparable, with roughly 45-60% of the
reported corpus-specific senses judged as genuine.
| 2018 | Computation and Language |
Emerging Language Spaces Learned From Massively Multilingual Corpora | Translations capture important information about languages that can be used
as implicit supervision in learning linguistic properties and semantic
representations. In an information-centric view, translated texts may be
considered as semantic mirrors of the original text and the significant
variations that we can observe across various languages can be used to
disambiguate a given expression using the linguistic signal that is grounded in
translation. Parallel corpora consisting of massive amounts of human
translations with a large linguistic variation can be applied to increase
abstractions and we propose the use of highly multilingual machine translation
models to find language-independent meaning representations. Our initial
experiments show that neural machine translation models can indeed learn in
such a setup, and the learning algorithm picks up information about the relation
between languages in order to optimize transfer learning with
shared parameters. The model creates a continuous language space that
represents relationships in terms of geometric distances, which we can
visualize to illustrate how languages cluster according to language families
and groups. Does this open the door for new ideas of data-driven language
typology with promising models and techniques in empirical cross-linguistic
research?
| 2018 | Computation and Language |
A Unified Deep Learning Architecture for Abuse Detection | Hate speech, offensive language, sexism, racism and other types of abusive
behavior have become a common phenomenon in many online social media platforms.
In recent years, such diverse abusive behaviors have been manifesting with
increased frequency and levels of intensity. This is due to the openness and
willingness of popular media platforms, such as Twitter and Facebook, to host
content on sensitive or controversial topics. However, these platforms have not
adequately addressed the problem of online abusive behavior, and their
responsiveness to the effective detection and blocking of such inappropriate
behavior remains limited.
In the present paper, we study this complex problem by following a more
holistic approach, which considers the various aspects of abusive behavior. To
make the approach tangible, we focus on Twitter data and analyze user and
textual properties from different angles of abusive posting behavior. We
propose a deep learning architecture, which utilizes a wide variety of
available metadata, and combines it with automatically-extracted hidden
patterns within the text of the tweets, to detect multiple abusive behavioral
norms which are highly inter-related. We apply this unified architecture in a
seamless, transparent fashion to detect different types of abusive behavior
(hate speech, sexism vs. racism, bullying, sarcasm, etc.) without the need for
any tuning of the model architecture for each task. We test the proposed
approach with multiple datasets addressing different and multiple abusive
behaviors on Twitter. Our results demonstrate that it largely outperforms the
state-of-the-art methods (between 21 and 45% improvement in AUC, depending on the
dataset).
| 2018 | Computation and Language |
Disunited Nations? A Multiplex Network Approach to Detecting Preference
Affinity Blocs using Texts and Votes | This paper contributes to an emerging literature that models votes and text
in tandem to better understand polarization of expressed preferences. It
introduces a new approach to estimate preference polarization in
multidimensional settings, such as international relations, based on
developments in the natural language processing and network science literatures
-- namely word embeddings, which retain valuable syntactical qualities of human
language, and community detection in multilayer networks, which locates densely
connected actors across multiple, complex networks. We find that the employment
of these tools in tandem helps to better estimate states' foreign policy
preferences expressed in UN votes and speeches beyond that permitted by votes
alone. The utility of these located affinity blocs is demonstrated through an
application to conflict onset in International Relations, though these tools
will be of interest to all scholars faced with the measurement of preferences
and polarization in multidimensional settings.
| 2019 | Computation and Language |
Goal-Oriented Chatbot Dialog Management Bootstrapping with Transfer
Learning | Goal-Oriented (GO) Dialogue Systems, colloquially known as goal oriented
chatbots, help users achieve a predefined goal (e.g. book a movie ticket)
within a closed domain. A first step is to understand the user's goal by using
natural language understanding techniques. Once the goal is known, the bot must
manage a dialogue to achieve that goal, which is conducted with respect to a
learnt policy. The success of the dialogue system depends on the quality of the
policy, which is in turn reliant on the availability of high-quality training
data for the policy learning method, for instance Deep Reinforcement Learning.
Due to the domain specificity, the amount of available data is typically too
low to allow the training of good dialogue policies. In this paper we introduce
a transfer learning method to mitigate the effects of the low in-domain data
availability. Our transfer-learning-based approach improves the bot's success
rate by 20% in relative terms for distant domains and more than doubles it
for close domains, compared to the model without transfer learning. Moreover,
the transfer-learning chatbots learn the policy 5 to 10 times faster.
Finally, as the transfer learning approach is complementary to additional
processing such as warm-starting, we show that their joint application gives
the best outcomes.
| 2018 | Computation and Language |
Submodularity-Inspired Data Selection for Goal-Oriented Chatbot Training
Based on Sentence Embeddings | Spoken language understanding (SLU) systems, such as goal-oriented chatbots
or personal assistants, rely on an initial natural language understanding (NLU)
module to determine the intent and to extract the relevant information from the
user queries they take as input. SLU systems usually help users to solve
problems in relatively narrow domains and require a large amount of in-domain
training data. This leads to significant data availability issues that inhibit
the development of successful systems. To alleviate this problem, we propose a
technique of data selection in the low-data regime that enables us to train
with fewer labeled sentences and thus with smaller labelling costs.
We propose a submodularity-inspired data ranking function, the ratio-penalty
marginal gain, for selecting data points to label based only on the information
extracted from the textual embedding space. We show that the distances in the
embedding space are a viable source of information that can be used for data
selection. Our method outperforms two known active learning techniques and
enables cost-efficient training of the NLU unit. Moreover, our proposed
selection technique does not need the model to be retrained in between the
selection steps, making it time efficient as well.
| 2018 | Computation and Language |
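The abstract above describes selecting sentences to label using only distances in a sentence-embedding space, ranked by a submodularity-inspired marginal gain. As an illustration only, here is a minimal greedy sketch that maximizes the classic facility-location objective over cosine similarities; it is not the paper's ratio-penalty marginal gain, and the embedding dimensionality and selection size are arbitrary assumptions.

```python
import numpy as np

def greedy_facility_location(embeddings, k):
    """Greedily pick k points maximizing f(S) = sum_i max_{j in S} sim(i, j),
    a standard submodular objective over embedding similarities. Illustrative
    stand-in, not the paper's ratio-penalty marginal gain."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = X @ X.T                                   # pairwise cosine similarity
    best_cover = np.zeros(len(X))                   # max similarity to the selected set
    selected = []
    for _ in range(k):
        # marginal gain of adding each candidate column j
        gains = np.maximum(best_cover[:, None], sim).sum(axis=0) - best_cover.sum()
        gains[selected] = -np.inf                   # never reselect a point
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected

# Usage: choose 10 sentences to label from 1,000 hypothetical 128-d embeddings
picked = greedy_facility_location(np.random.randn(1000, 128), k=10)
```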
Order matters: Distributional properties of speech to young children
bootstraps learning of semantic representations | Some researchers claim that language acquisition is critically dependent on
experiencing linguistic input in order of increasing complexity. We set out to
test this hypothesis using a simple recurrent neural network (SRN) trained to
predict word sequences in CHILDES, a 5-million-word corpus of speech directed
to children. First, we demonstrated that age-ordered CHILDES exhibits a gradual
increase in linguistic complexity. Next, we compared the performance of two
groups of SRNs trained on CHILDES which had either been age-ordered or not.
Specifically, we assessed learning of grammatical and semantic structure and
showed that training on age-ordered input facilitates learning of semantic, but
not of sequential structure. We found that this advantage is eliminated when
the models were trained on input with utterance boundary information removed.
| 2018 | Computation and Language |
Densely Connected Bidirectional LSTM with Applications to Sentence
Classification | Deep neural networks have recently been shown to achieve highly competitive
performance in many computer vision tasks due to their ability to explore a
much larger hypothesis space. However, since most deep architectures like
stacked RNNs tend to suffer from the vanishing-gradient and overfitting
problems, their effects are still understudied in many NLP tasks. Inspired by
this, we propose a novel multi-layer RNN model called densely connected
bidirectional long short-term memory (DC-Bi-LSTM) in this paper, which
essentially represents each layer by the concatenation of its hidden state and
all preceding layers' hidden states, followed by recursively passing each
layer's representation to all subsequent layers. We evaluate our proposed model
on five benchmark datasets of sentence classification. DC-Bi-LSTM with depth up
to 20 can be successfully trained and obtains significant improvements over the
traditional Bi-LSTM with the same or even fewer parameters. Moreover, our model
has promising performance compared with the state-of-the-art approaches.
| 2018 | Computation and Language |
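To make the dense layer-wise connections described above concrete, the following is a minimal PyTorch sketch: each Bi-LSTM layer reads the concatenation of the original input and all preceding layers' hidden states. Layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenselyConnectedBiLSTM(nn.Module):
    """Sketch of dense connections between Bi-LSTM layers: layer k receives
    the concatenation of the input and the outputs of layers 1..k-1."""
    def __init__(self, input_size=100, hidden_size=64, num_layers=5):
        super().__init__()
        self.layers = nn.ModuleList()
        in_size = input_size
        for _ in range(num_layers):
            self.layers.append(
                nn.LSTM(in_size, hidden_size, batch_first=True, bidirectional=True)
            )
            in_size += 2 * hidden_size      # next layer also sees this layer's output

    def forward(self, x):                   # x: (batch, seq_len, input_size)
        features = x
        for lstm in self.layers:
            out, _ = lstm(features)         # (batch, seq_len, 2 * hidden_size)
            features = torch.cat([features, out], dim=-1)
        return features                     # concatenation of input and all layers

# Usage: a batch of 8 sentences, 20 tokens each, 100-d embeddings
h = DenselyConnectedBiLSTM()(torch.randn(8, 20, 100))
```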
Left-Center-Right Separated Neural Network for Aspect-based Sentiment
Analysis with Rotatory Attention | Deep learning techniques have achieved success in aspect-based sentiment
analysis in recent years. However, there are two important issues that still
remain to be further studied, i.e., 1) how to efficiently represent the target
especially when the target contains multiple words; 2) how to utilize the
interaction between target and left/right contexts to capture the most
important words in them. In this paper, we propose an approach, called
left-center-right separated neural network with rotatory attention (LCR-Rot),
to better address the two problems. Our approach has two characteristics: 1) it
has three separated LSTMs, i.e., left, center and right LSTMs, corresponding to
three parts of a review (left context, target phrase and right context); 2) it
has a rotatory attention mechanism which models the relation between target and
left/right contexts. The target2context attention is used to capture the most
indicative sentiment words in left/right contexts. Subsequently, the
context2target attention is used to capture the most important word in the
target. This leads to a two-side representation of the target: left-aware
target and right-aware target. We compare our approach on three benchmark
datasets with ten related methods proposed recently. The results show that our
approach significantly outperforms the state-of-the-art techniques.
| 2018 | Computation and Language |
DeepType: Multilingual Entity Linking by Neural Type System Evolution | The wealth of structured (e.g. Wikidata) and unstructured data about the
world available today presents an incredible opportunity for tomorrow's
Artificial Intelligence. So far, integrating these two different modalities
has been a difficult process, involving many decisions concerning how best to
represent the information so that it will be captured or useful, and
hand-labeling large amounts of data. DeepType overcomes this challenge by
explicitly integrating symbolic information into the reasoning process of a
neural network with a type system. First we construct a type system, and
second, we use it to constrain the outputs of a neural network to respect the
symbolic structure. We achieve this by reformulating the design problem into a
mixed integer problem: create a type system and subsequently train a neural
network with it. In this reformulation discrete variables select which
parent-child relations from an ontology are types within the type system, while
continuous variables control a classifier fit to the type system. The original
problem cannot be solved exactly, so we propose a 2-step algorithm: 1)
heuristic search or stochastic optimization over discrete variables that define
a type system informed by an Oracle and a Learnability heuristic, 2) gradient
descent to fit classifier parameters. We apply DeepType to the problem of
Entity Linking on three standard datasets (i.e. WikiDisamb30, CoNLL (YAGO), TAC
KBP 2010) and find that it outperforms all existing solutions by a wide margin,
including approaches that rely on a human-designed type system or recent deep
learning-based entity embeddings, while explicitly using symbolic information
lets it integrate new entities without retraining.
| 2018 | Computation and Language |
Heuristic Feature Selection for Clickbait Detection | We study feature selection as a means to optimize the baseline clickbait
detector employed at the Clickbait Challenge 2017. The challenge's task is to
score the "clickbaitiness" of a given Twitter tweet on a scale from 0 (no
clickbait) to 1 (strong clickbait). Unlike most other approaches submitted to
the challenge, the baseline approach is based on manual feature engineering and
does not compete out of the box with many of the deep learning-based
approaches. We show that scaling up feature selection efforts to heuristically
identify better-performing feature subsets catapults the performance of the
baseline classifier to second rank overall, beating 12 other competing
approaches and improving over the baseline performance by 20%. This
demonstrates that traditional classification approaches can still keep up with
deep learning on this task.
| 2018 | Computation and Language |
Semantic projection: recovering human knowledge of multiple, distinct
object features from word embeddings | The words of a language reflect the structure of the human mind, allowing us
to transmit thoughts between individuals. However, language can represent only
a subset of our rich and detailed cognitive architecture. Here, we ask what
kinds of common knowledge (semantic memory) are captured by word meanings
(lexical semantics). We examine a prominent computational model that represents
words as vectors in a multidimensional space, such that proximity between
word-vectors approximates semantic relatedness. Because related words appear in
similar contexts, such spaces - called "word embeddings" - can be learned from
patterns of lexical co-occurrences in natural language. Despite their
popularity, a fundamental concern about word embeddings is that they appear to
be semantically "rigid": inter-word proximity captures only overall similarity,
yet human judgments about object similarities are highly context-dependent and
involve multiple, distinct semantic features. For example, dolphins and
alligators appear similar in size, but differ in intelligence and
aggressiveness. Could such context-dependent relationships be recovered from
word embeddings? To address this issue, we introduce a powerful, domain-general
solution: "semantic projection" of word-vectors onto lines that represent
various object features, like size (the line extending from the word "small" to
"big"), intelligence (from "dumb" to "smart"), or danger (from "safe" to
"dangerous"). This method, which is intuitively analogous to placing objects
"on a mental scale" between two extremes, recovers human judgments across a
range of object categories and properties. We thus show that word embeddings
inherit a wealth of common knowledge from word co-occurrence statistics and can
be flexibly manipulated to express context-dependent meanings.
| 2018 | Computation and Language |
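The semantic-projection idea above reduces to projecting word vectors onto the direction running between two pole words. A minimal sketch, using a single antonym pair per feature (the paper aggregates several pairs) and hypothetical random vectors in place of trained embeddings:

```python
import numpy as np

def semantic_projection(word_vecs, objects, low_word, high_word):
    """Score objects by projecting their vectors onto the line from a 'low'
    pole word to a 'high' pole word (e.g. "small" -> "big" for size).
    `word_vecs` is assumed to map words to numpy vectors (e.g. GloVe)."""
    axis = word_vecs[high_word] - word_vecs[low_word]   # feature direction
    axis = axis / np.linalg.norm(axis)
    return {obj: float(word_vecs[obj] @ axis) for obj in objects}

# Usage with hypothetical embeddings: a higher score means closer to the "big" pole
vecs = {w: np.random.randn(300) for w in ["small", "big", "dolphin", "alligator"]}
print(semantic_projection(vecs, ["dolphin", "alligator"], "small", "big"))
```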
Chemical-protein relation extraction with ensembles of SVM, CNN, and RNN
models | Text mining the relations between chemicals and proteins is an increasingly
important task. The CHEMPROT track at BioCreative VI aims to promote the
development and evaluation of systems that can automatically detect the
chemical-protein relations in running text (PubMed abstracts). This manuscript
describes our submission, which is an ensemble of three systems, including a
Support Vector Machine, a Convolutional Neural Network, and a Recurrent Neural
Network. Their output is combined using a decision based on majority voting or
stacking. Our CHEMPROT system obtained 0.7266 in precision and 0.5735 in recall
for an f-score of 0.6410, demonstrating the effectiveness of machine
learning-based approaches for automatic relation extraction from biomedical
literature. Our submission achieved the highest performance in the task during
the 2017 challenge.
| 2018 | Computation and Language |
DP-GAN: Diversity-Promoting Generative Adversarial Network for
Generating Informative and Diversified Text | Existing text generation methods tend to produce repeated and "boring"
expressions. To tackle this problem, we propose a new text generation model,
called Diversity-Promoting Generative Adversarial Network (DP-GAN). The
proposed model assigns low reward for repeatedly generated text and high reward
for "novel" and fluent text, encouraging the generator to produce diverse and
informative text. Moreover, we propose a novel language-model based
discriminator, which can better distinguish novel text from repeated text
without the saturation problem compared with existing classifier-based
discriminators. The experimental results on review generation and dialogue
generation tasks demonstrate that our model can generate substantially more
diverse and informative text than existing baselines. The code is available at
https://github.com/lancopku/DPGAN
| 2018 | Computation and Language |
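As a rough illustration of the language-model-based reward described above (not the authors' exact formulation), the sketch below scores a sentence by its per-token surprisal under a discriminator language model, so repetitive, highly predictable text earns a low reward. `lm_prob` is an assumed callable, not part of any particular library.

```python
import math

def lm_novelty_reward(sentence_tokens, lm_prob):
    """Average per-token surprisal under a discriminator LM, used as a reward:
    repeated, predictable text scores low; fluent but novel text scores higher.
    `lm_prob(prefix, token)` is assumed to return P(token | prefix)."""
    nll = 0.0
    for i, tok in enumerate(sentence_tokens):
        nll += -math.log(max(lm_prob(sentence_tokens[:i], tok), 1e-12))
    return nll / max(len(sentence_tokens), 1)
```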
Manuscripts in Time and Space: Experiments in Scriptometrics on an Old
French Corpus | Witnesses of medieval literary texts, preserved in manuscript, are layered
objects, being almost exclusively copies of copies. This results in multiple
and hard to distinguish linguistic strata -- the author's scripta interacting
with the scriptae of the various scribes -- in a context where literary written
language is already a dialectal hybrid. Moreover, no single linguistic
phenomenon allows us to distinguish between different scriptae, and only the
combination of multiple characteristics is likely to be significant [9] -- but
which ones? The most common approach is to search for these features in a set
of previously selected texts that are supposed to be representative of a given
scripta. This can induce a circularity, in which texts are used to select
features that in turn characterise them as belonging to a linguistic area. To
counter this issue, this paper offers an unsupervised and corpus-based
approach, in which clustering methods are applied to an Old French corpus to
identify main divisions and groups. Ultimately, scriptometric profiles are
built for each of them.
| 2018 | Computation and Language |
Interactive Grounded Language Acquisition and Generalization in a 2D
World | We build a virtual agent for learning language in a 2D maze-like world. The
agent sees images of the surrounding environment, listens to a virtual teacher,
and takes actions to receive rewards. It interactively learns the teacher's
language from scratch based on two language use cases: sentence-directed
navigation and question answering. It simultaneously learns the visual
representations of the world, the language, and the action control. By
disentangling language grounding from other computational routines and sharing
a concept detection function between language grounding and prediction, the
agent reliably interpolates and extrapolates to interpret sentences that
contain new word combinations or new words missing from training sentences. The
new words are transferred from the answers of language prediction. Such a
language ability is trained and evaluated on a population of over 1.6 million
distinct sentences consisting of 119 object words, 8 color words, 9
spatial-relation words, and 50 grammatical words. The proposed model
significantly outperforms five comparison methods for interpreting zero-shot
sentences. In addition, we demonstrate human-interpretable intermediate outputs
of the model in the appendix.
| 2018 | Computation and Language |
Quantitative Fine-Grained Human Evaluation of Machine Translation
Systems: a Case Study on English to Croatian | This paper presents a quantitative fine-grained manual evaluation approach to
comparing the performance of different machine translation (MT) systems. We
build upon the well-established Multidimensional Quality Metrics (MQM) error
taxonomy and implement a novel method that assesses whether the differences in
performance for MQM error types between different MT systems are statistically
significant. We conduct a case study for English-to-Croatian, a language
direction that involves translating into a morphologically rich language, for
which we compare three MT systems belonging to different paradigms: pure
phrase-based, factored phrase-based and neural. First, we design an
MQM-compliant error taxonomy tailored to the relevant linguistic phenomena of
Slavic languages, which made the annotation process feasible and accurate.
Errors in MT outputs were then annotated by two annotators following this
taxonomy. Subsequently, we carried out a statistical analysis which showed that
the best-performing system (neural) reduces the errors produced by the worst
system (pure phrase-based) by more than half (54%). Moreover, we conducted an
additional analysis of agreement errors in which we distinguished between short
(phrase-level) and long distance (sentence-level) errors. We discovered that
phrase-based MT approaches are of limited use for long distance agreement
phenomena, for which neural MT was found to be especially effective.
| 2018 | Computation and Language |
Diverse Beam Search for Increased Novelty in Abstractive Summarization | Text summarization condenses a text to a shorter version while retaining the
important information. Abstractive summarization is a recent development that
generates new phrases, rather than simply copying or rephrasing sentences
within the original text. Recently neural sequence-to-sequence models have
achieved good results in the field of abstractive summarization, which opens
new possibilities and applications for industrial purposes. However, most
practitioners observe that these models still use large parts of the original
text in the output summaries, making them often similar to extractive
frameworks. To address this drawback, we first introduce a new metric to
measure how much of a summary is extracted from the input text. Secondly, we
present a novel method, that relies on a diversity factor in computing the
neural network loss, to improve the diversity of the summaries generated by any
neural abstractive model implementing beam search. Finally, we show that this
method not only makes the system less extractive, but also improves the overall
ROUGE score of state-of-the-art methods by at least 2 points.
| 2018 | Computation and Language |
Question-Answer Selection in User to User Marketplace Conversations | Sellers in user to user marketplaces can be inundated with questions from
potential buyers. Answers are often already available in the product
description. We collected a dataset of around 590K such questions and answers
from conversations in an online marketplace. We propose a question answering
system that selects a sentence from the product description using a
neural-network ranking model. We explore multiple encoding strategies, with
recurrent neural networks and feed-forward attention layers yielding good
results. This paper presents a demo to interactively pose buyer questions and
visualize the ranking scores of product description sentences from live online
listings.
| 2018 | Computation and Language |
Decoding-History-Based Adaptive Control of Attention for Neural Machine
Translation | The attention-based sequence-to-sequence model has proved successful in Neural
Machine Translation (NMT). However, the attention without consideration of
decoding history, which includes the past information in the decoder and the
attention mechanism, often causes much repetition. To address this problem, we
propose the decoding-history-based Adaptive Control of Attention (ACA) for the
NMT model. ACA learns to control the attention by keeping track of the decoding
history and the current information with a memory vector, so that the model can
take the translated contents and the current information into consideration.
Experiments on Chinese-English and English-Vietnamese translation
have demonstrated that our model significantly outperforms the
strong baselines. The analysis shows that our model is capable of generating
translation with less repetition and higher accuracy. The code will be
available at https://github.com/lancopku
| 2018 | Computation and Language |
Byte-Level Recursive Convolutional Auto-Encoder for Text | This article proposes to auto-encode text at byte-level using convolutional
networks with a recursive architecture. The motivation is to explore whether it
is possible to have scalable and homogeneous text generation at byte-level in a
non-sequential fashion through the simple task of auto-encoding. We show that
non-sequential text generation from a fixed-length representation is not only
possible, but also achieves much better auto-encoding results than recurrent
networks. The proposed model is a multi-stage deep convolutional
encoder-decoder framework using residual connections, containing up to 160
parameterized layers. Each encoder or decoder contains a shared group of
modules that consists of either pooling or upsampling layers, making the
network recursive in terms of abstraction levels in representation. Results for
6 large-scale paragraph datasets are reported, in 3 languages including Arabic,
Chinese and English. Analyses are conducted to study several properties of the
proposed model.
| 2018 | Computation and Language |
A Neurobiologically Motivated Analysis of Distributional Semantic Models | The pervasive use of distributional semantic models or word embeddings in a
variety of research fields is due to their remarkable ability to represent the
meanings of words for both practical application and cognitive modeling.
However, little is known about what kind of information is encoded in
text-based word vectors. This lack of understanding is particularly problematic
when word vectors are regarded as a model of semantic representation for
abstract concepts. This paper attempts to reveal the internal information of
distributional word vectors through an analysis using Binder et al.'s (2016)
brain-based vectors, explicitly structured conceptual representations based on
neurobiologically motivated attributes. In the analysis, the mapping from
text-based vectors to brain-based vectors is trained and prediction performance
is evaluated by comparing the estimated and original brain-based vectors. The
analysis demonstrates that social and cognitive information is better encoded
in text-based word vectors, but emotional information is not. This result is
discussed in terms of embodied theories for abstract concepts.
| 2018 | Computation and Language |
Texygen: A Benchmarking Platform for Text Generation Models | We introduce Texygen, a benchmarking platform to support research on
open-domain text generation models. Texygen not only implements a majority
of text generation models, but also covers a set of metrics that evaluate the
diversity, the quality and the consistency of the generated texts. The Texygen
platform could help standardize the research on text generation and facilitate
the sharing of fine-tuned open-source implementations among researchers for
their work. As a consequence, this would help improve the reproducibility
and reliability of future research work in text generation.
| 2018 | Computation and Language |
Improving Variational Encoder-Decoders in Dialogue Generation | Variational encoder-decoders (VEDs) have shown promising results in dialogue
generation. However, the latent variable distributions are usually approximated
by a much simpler model than the powerful RNN structure used for encoding and
decoding, yielding the KL-vanishing problem and inconsistent training
objective. In this paper, we separate the training step into two phases: The
first phase learns to autoencode discrete texts into continuous embeddings,
from which the second phase learns to generalize latent representations by
reconstructing the encoded embedding. In this case, latent variables are
sampled by transforming Gaussian noise through multi-layer perceptrons and are
trained with a separate VED model, which has the potential of realizing a much
more flexible distribution. We compare our model with current popular models
and the experiment demonstrates substantial improvement in both metric-based
and human evaluations.
| 2018 | Computation and Language |
Système de traduction automatique statistique Anglais-Arabe | Machine translation (MT) is the process of translating text written in a
source language into text in a target language. In this article, we present our
English-Arabic statistical machine translation system. First, we present the
general process for setting up a statistical machine translation system, then
we describe the tools as well as the different corpora we used to build our MT
system. Our system was evaluated in terms of the BLEU score (24.51%).
| 2018 | Computation and Language |
Investigations on Knowledge Base Embedding for Relation Prediction and
Extraction | We report an evaluation of the effectiveness of the existing knowledge base
embedding models for relation prediction and for relation extraction on a wide
range of benchmarks. We also describe a new benchmark, much larger and more
complex than previous ones, which we introduce to help validate the
effectiveness of both tasks. The results demonstrate that knowledge base
embedding models are generally effective for relation prediction but unable to
give improvements for the state-of-the-art neural relation extraction model with
the existing strategies, while pointing out the limitations of existing methods.
| 2018 | Computation and Language |
Non-Projective Dependency Parsing via Latent Heads Representation (LHR) | In this paper, we introduce a novel approach based on a bidirectional
recurrent autoencoder to perform globally optimized non-projective dependency
parsing via semi-supervised learning. The syntactic analysis is completed at
the end of the neural process that generates a Latent Heads Representation
(LHR), without any algorithmic constraint and with a linear complexity. The
resulting "latent syntactic structure" can be used directly in other semantic
tasks. The LHR is transformed into the usual dependency tree by computing a simple
vector similarity. We believe that our model has the potential to compete with
much more complex state-of-the-art parsing architectures.
| 2018 | Computation and Language |
An Empirical Evaluation of Deep Learning for ICD-9 Code Assignment using
MIMIC-III Clinical Notes | Background and Objective: Code assignment is of paramount importance at many
levels in modern hospitals, from ensuring an accurate billing process to creating
a valid record of patient care history. However, the coding process is tedious
and subjective, and it requires medical coders with extensive training. This
study aims to evaluate the performance of deep-learning-based systems to
automatically map clinical notes to ICD-9 medical codes. Methods: The
evaluations of this research are focused on end-to-end learning methods without
manually defined rules. Traditional machine learning algorithms, as well as
state-of-the-art deep learning methods such as Recurrent Neural Networks and
Convolution Neural Networks, were applied to the Medical Information Mart for
Intensive Care (MIMIC-III) dataset. An extensive set of experiments was run
across different settings of the tested algorithms. Results: Findings showed
that the deep learning-based methods outperformed other conventional machine
learning methods. From our assessment, the best models could predict the top 10
ICD-9 codes with 0.6957 F1 and 0.8967 accuracy and could estimate the top 10
ICD-9 categories with 0.7233 F1 and 0.8588 accuracy. Our implementation also
outperformed existing work under certain evaluation metrics. Conclusion: A set
of standard metrics was utilized in assessing the performance of ICD-9 code
assignment on MIMIC-III dataset. All the developed evaluation tools and
resources are available online, which can be used as a baseline for further
research.
| 2019 | Computation and Language |
Polisis: Automated Analysis and Presentation of Privacy Policies Using
Deep Learning | Privacy policies are the primary channel through which companies inform users
about their data collection and sharing practices. These policies are often
long and difficult to comprehend. Short notices based on information extracted
from privacy policies have been shown to be useful but face a significant
scalability hurdle, given the number of policies and their evolution over time.
Companies, users, researchers, and regulators still lack usable and scalable
tools to cope with the breadth and depth of privacy policies. To address these
hurdles, we propose an automated framework for privacy policy analysis
(Polisis). It enables scalable, dynamic, and multi-dimensional queries on
natural language privacy policies. At the core of Polisis is a privacy-centric
language model, built with 130K privacy policies, and a novel hierarchy of
neural-network classifiers that accounts for both high-level aspects and
fine-grained details of privacy practices. We demonstrate Polisis' modularity
and utility with two applications supporting structured and free-form querying.
The structured querying application is the automated assignment of privacy
icons from privacy policies. With Polisis, we can achieve an accuracy of 88.4%
on this task. The second application, PriBot, is the first free-form
question-answering system for privacy policies. We show that PriBot can produce
a correct answer among its top-3 results for 82% of the test questions. Using
an MTurk user study with 700 participants, we show that at least one of
PriBot's top-3 answers is relevant to users for 89% of the test questions.
| 2018 | Computation and Language |
Unsupervised word sense disambiguation in dynamic semantic spaces | In this paper, we are mainly concerned with the ability to quickly and
automatically distinguish word senses in dynamic semantic spaces in which new
terms and new senses appear frequently. Such spaces are built "on the fly"
from constantly evolving data sets such as Wikipedia, repositories of patent
grants and applications, or large sets of legal documents for Technology
Assisted Review and e-discovery. This immediacy rules out supervision as well
as the use of a priori training sets. We show that the various senses of a term
can be automatically made apparent with a simple clustering algorithm, each
sense being a vector in the semantic space. While we only consider here
semantic spaces built by using random vectors, this algorithm should work with
any kind of embedding, provided meaningful similarities between terms can be
computed and fulfill at least the two basic conditions that terms with
close meanings have high similarities and terms with unrelated meanings have
near-zero similarities.
| 2018 | Computation and Language |
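A minimal sketch of the kind of "simple clustering" sense induction described above: each occurrence of a target term is represented by the average embedding of its context words, and the occurrences are clustered so that each centroid acts as one sense vector. The window size, k-means choice, and input format are illustrative assumptions (the paper works with random-vector semantic spaces).

```python
import numpy as np
from sklearn.cluster import KMeans

def induce_senses(occurrences, word_vecs, n_senses=2, window=5):
    """Cluster occurrences of a target term into senses. `occurrences` is an
    assumed list of (tokens, index_of_term) pairs and `word_vecs` an assumed
    dict of word -> vector; both are hypothetical input formats."""
    dim = len(next(iter(word_vecs.values())))
    ctx = []
    for tokens, i in occurrences:
        neighbours = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        vecs = [word_vecs[w] for w in neighbours if w in word_vecs]
        ctx.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
    km = KMeans(n_clusters=n_senses, n_init=10).fit(np.vstack(ctx))
    return km.cluster_centers_, km.labels_   # one centroid per induced sense
```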
Learning from Past Mistakes: Improving Automatic Speech Recognition
Output via Noisy-Clean Phrase Context Modeling | Automatic speech recognition (ASR) systems often make unrecoverable errors
due to subsystem pruning (acoustic, language and pronunciation models); for
example pruning words due to acoustics using short-term context, prior to
rescoring with long-term context based on linguistics. In this work we model
ASR as a phrase-based noisy transformation channel and propose an error
correction system that can learn from the aggregate errors of all the
independent modules constituting the ASR and attempt to invert them. The
proposed system can exploit long-term context using a neural network language
model and can better choose between existing ASR output possibilities as well
as re-introduce previously pruned or unseen (out-of-vocabulary) phrases. It
provides corrections under poorly performing ASR conditions without degrading
any accurate transcriptions; such corrections are greater on top of
out-of-domain and mismatched data ASR. Our system consistently provides
improvements over the baseline ASR, even when baseline is further optimized
through recurrent neural network language model rescoring. This demonstrates
that any ASR improvements can be exploited independently and that our proposed
system can potentially still provide benefits on highly optimized ASR. Finally,
we present an extensive analysis of the type of errors corrected by our system.
| 2019 | Computation and Language |
Enhance word representation for out-of-vocabulary on Ubuntu dialogue
corpus | The Ubuntu dialogue corpus is the largest publicly available dialogue corpus,
making it feasible to build end-to-end deep neural network models directly from
the conversation data. One challenge of Ubuntu dialogue corpus is the large
number of out-of-vocabulary words. In this paper we propose a method that
combines the general pre-trained word embedding vectors with those generated on
the task-specific training set to address this issue. We integrated character
embedding into Chen et al.'s Enhanced LSTM method (ESIM) and used it to evaluate
the effectiveness of our proposed method. For the task of next utterance
selection, the proposed method has demonstrated a significant performance
improvement over the original ESIM, and the new model has achieved
state-of-the-art results on both Ubuntu dialogue corpus and Douban conversation
corpus. In addition, we investigated the performance impact of end-of-utterance
and end-of-turn token tags.
| 2018 | Computation and Language |
Joint Modeling of Accents and Acoustics for Multi-Accent Speech
Recognition | The performance of automatic speech recognition systems degrades with
increasing mismatch between the training and testing scenarios. Differences in
speaker accents are a significant source of such mismatch. The traditional
approach to deal with multiple accents involves pooling data from several
accents during training and building a single model in multi-task fashion,
where tasks correspond to individual accents. In this paper, we explore an
alternate model where we jointly learn an accent classifier and a multi-task
acoustic model. Experiments on the American English Wall Street Journal and
British English Cambridge corpora demonstrate that our joint model outperforms
the strong multi-task acoustic model baseline. We obtain a 5.94% relative
improvement in word error rate on British English, and 9.47% relative
improvement on American English. This illustrates that jointly modeling with
accent information improves acoustic model performance.
| 2018 | Computation and Language |
Learning Inductive Biases with Simple Neural Networks | People use rich prior knowledge about the world in order to efficiently learn
new concepts. These priors - also known as "inductive biases" - pertain to the
space of internal models considered by a learner, and they help the learner
make inferences that go beyond the observed data. A recent study found that
deep neural networks optimized for object recognition develop the shape bias
(Ritter et al., 2017), an inductive bias possessed by children that plays an
important role in early word learning. However, these networks use
unrealistically large quantities of training data, and the conditions required
for these biases to develop are not well understood. Moreover, it is unclear
how the learning dynamics of these networks relate to developmental processes
in childhood. We investigate the development and influence of the shape bias in
neural networks using controlled datasets of abstract patterns and synthetic
images, allowing us to systematically vary the quantity and form of the
experience provided to the learning algorithms. We find that simple neural
networks develop a shape bias after seeing as few as 3 examples of 4 object
categories. The development of these biases predicts the onset of vocabulary
acceleration in our networks, consistent with the developmental process in
children.
| 2018 | Computation and Language |
Biomedical term normalization of EHRs with UMLS | This paper presents a novel prototype for biomedical term normalization of
electronic health record excerpts with the Unified Medical Language System
(UMLS) Metathesaurus. Despite being multilingual and cross-lingual by design,
we first focus on processing clinical text in Spanish because there is no
existing tool for this language and for this specific purpose. The tool is
based on Apache Lucene to index the Metathesaurus and generate mapping
candidates from input text. It uses the IXA pipeline for basic language
processing and resolves ambiguities with the UKB toolkit. It has been evaluated
by measuring its agreement with MetaMap in two English-Spanish parallel
corpora. In addition, we present a web-based interface for the tool.
| 2018 | Computation and Language |
Efficient Large-Scale Multi-Modal Classification | While the incipient internet was largely text-based, the modern digital world
is becoming increasingly multi-modal. Here, we examine multi-modal
classification where one modality is discrete, e.g. text, and the other is
continuous, e.g. visual representations transferred from a convolutional neural
network. In particular, we focus on scenarios where we have to be able to
classify large quantities of data quickly. We investigate various methods for
performing multi-modal fusion and analyze their trade-offs in terms of
classification accuracy and computational efficiency. Our findings indicate
that the inclusion of continuous information improves performance over
text-only on a range of multi-modal classification tasks, even with simple
fusion methods. In addition, we experiment with discretizing the continuous
features in order to speed up and simplify the fusion process even further. Our
results show that fusion with discretized features outperforms text-only
classification, at a fraction of the computational cost of full multi-modal
fusion, with the additional benefit of improved interpretability.
| 2018 | Computation and Language |
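The discretized-fusion idea above can be illustrated with a small sketch: each dimension of a continuous feature vector (e.g. CNN image features) is bucketed and emitted as a token, so a bag-of-tokens text classifier can consume visual and textual evidence together. The bin range, bin count, and token naming are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def discretize_features(feature_vec, n_bins=10, low=-3.0, high=3.0):
    """Map each continuous dimension to a discrete token like 'dim7_bin3'."""
    edges = np.linspace(low, high, n_bins - 1)   # n_bins buckets in total
    return [f"dim{i}_bin{b}" for i, b in enumerate(np.digitize(feature_vec, edges))]

# Usage: simple fusion by concatenating text tokens with discretized visual tokens
tokens = "a photo of a cat".split() + discretize_features(np.random.randn(16))
```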
Praaline: Integrating Tools for Speech Corpus Research | This paper presents Praaline, an open-source software system for managing,
annotating, analysing and visualising speech corpora. Researchers working with
speech corpora are often faced with multiple tools and formats, and they need
to work with ever-increasing amounts of data in a collaborative way. Praaline
integrates and extends existing time-proven tools for spoken corpora analysis
(Praat, Sonic Visualiser and a bridge to the R statistical package) in a
modular system, facilitating automation and reuse. Users are exposed to an
integrated, user-friendly interface from which to access multiple tools. Corpus
metadata and annotations may be stored in a database, locally or remotely, and
users can define the metadata and annotation structure. Users may run a
customisable cascade of analysis steps, based on plug-ins and scripts, and
update the database with the results. The corpus database may be queried, to
produce aggregated data-sets. Praaline is extensible using Python or C++
plug-ins, while Praat and R scripts may be executed against the corpus data. A
series of visualisations, editors and plug-ins are provided. Praaline is free
software, released under the GPL license.
| 2014 | Computation and Language |
DisMo: A Morphosyntactic, Disfluency and Multi-Word Unit Annotator. An
Evaluation on a Corpus of French Spontaneous and Read Speech | We present DisMo, a multi-level annotator for spoken language corpora that
integrates part-of-speech tagging with basic disfluency detection and
annotation, and multi-word unit recognition. DisMo is a hybrid system that uses
a combination of lexical resources, rules, and statistical models based on
Conditional Random Fields (CRF). In this paper, we present the first public
version of DisMo for French. The system is trained and its performance
evaluated on a 57k-token corpus, including different varieties of French spoken
in three countries (Belgium, France and Switzerland). DisMo supports a
multi-level annotation scheme, in which the tokenisation to minimal word units
is complemented with multi-word unit groupings (each having associated POS
tags), as well as separate levels for annotating disfluencies and discourse
phenomena. We present the system's architecture, linguistic resources and its
hierarchical tag-set. Results show that DisMo achieves a precision of 95%
(finest tag-set) to 96.8% (coarse tag-set) in POS-tagging non-punctuated,
sound-aligned transcriptions of spoken French, while also offering substantial
possibilities for automated multi-level annotation.
| 2014 | Computation and Language |
WorldTree: A Corpus of Explanation Graphs for Elementary Science
Questions supporting Multi-Hop Inference | Developing methods of automated inference that are able to provide users with
compelling human-readable justifications for why the answer to a question is
correct is critical for domains such as science and medicine, where user trust
and detecting costly errors are limiting factors to adoption. One of the
central barriers to training question answering models on explainable inference
tasks is the lack of gold explanations to serve as training data. In this paper
we present a corpus of explanations for standardized science exams, a recent
challenge task for question answering. We manually construct a corpus of
detailed explanations for nearly all publicly available standardized elementary
science questions (approximately 1,680 3rd through 5th grade questions) and
represent these as "explanation graphs" -- sets of lexically overlapping
sentences that describe how to arrive at the correct answer to a question
through a combination of domain and world knowledge. We also provide an
explanation-centered tablestore, a collection of semi-structured tables that
contain the knowledge to construct these elementary science explanations.
Together, these two knowledge resources map out a substantial portion of the
knowledge required for answering and explaining elementary science exams, and
provide both structured and free-text training data for the explainable
inference task.
| 2018 | Computation and Language |
Zero-Resource Neural Machine Translation with Multi-Agent Communication
Game | While end-to-end neural machine translation (NMT) has achieved notable
success in the past years in translating a handful of resource-rich language
pairs, it still suffers from the data scarcity problem for low-resource
language pairs and domains. To tackle this problem, we propose an interactive
multimodal framework for zero-resource neural machine translation. Instead of
being passively exposed to large amounts of parallel corpora, our learners
(implemented as encoder-decoder architecture) engage in cooperative image
description games, and thus develop their own image captioning or neural
machine translation model from the need to communicate in order to succeed at
the game. Experimental results on the IAPR-TC12 and Multi30K datasets show that
the proposed learning mechanism significantly improves over the
state-of-the-art methods.
| 2018 | Computation and Language |
Augmenting Librispeech with French Translations: A Multimodal Corpus for
Direct Speech Translation Evaluation | Recent works in spoken language translation (SLT) have attempted to build
end-to-end speech-to-text translation without using source language
transcription during learning or decoding. However, while large quantities of
parallel texts (such as Europarl, OpenSubtitles) are available for training
machine translation systems, there are no large (100h) and open source parallel
corpora that include speech in a source language aligned to text in a target
language. This paper tries to fill this gap by augmenting an existing
(monolingual) corpus: LibriSpeech. This corpus, used for automatic speech
recognition, is derived from read audiobooks from the LibriVox project, and has
been carefully segmented and aligned. After gathering French e-books
corresponding to the English audio-books from LibriSpeech, we align speech
segments at the sentence level with their respective translations and obtain
236h of usable parallel data. This paper presents the details of the processing
as well as a manual evaluation conducted on a small subset of the corpus. This
evaluation shows that the automatic alignment scores are reasonably correlated
with the human judgments of the bilingual alignment quality. We believe that
this corpus (which is made available online) is useful for replicable
experiments in direct speech translation or more general spoken language
translation experiments.
| 2018 | Computation and Language |
Natural Language Inference over Interaction Space: ICLR 2018
Reproducibility Report | We have tried to reproduce the results of the paper "Natural Language
Inference over Interaction Space" submitted to ICLR 2018 conference as part of
the ICLR 2018 Reproducibility Challenge. Initially, we were not aware that the
code was available, so we started to implement the network from scratch. We
have evaluated our version of the model on Stanford NLI dataset and reached
86.38% accuracy on the test set, while the paper claims 88.0% accuracy. The
main difference, as we understand it, comes from the optimizers and the way
model selection is performed.
| 2018 | Computation and Language |
Recurrent Neural Network-Based Semantic Variational Autoencoder for
Sequence-to-Sequence Learning | Sequence-to-sequence (Seq2seq) models have played an important role in the
recent success of various natural language processing methods, such as machine
translation, text summarization, and speech recognition. However, current
Seq2seq models have trouble preserving global latent information from a long
sequence of words. Variational autoencoder (VAE) alleviates this problem by
learning a continuous semantic space of the input sentence. However, it does
not solve the problem completely. In this paper, we propose a new recurrent
neural network (RNN)-based Seq2seq model, RNN semantic variational autoencoder
(RNN--SVAE), to better capture the global latent information of a sequence of
words. To reflect the meaning of words in a sentence properly, without regard
to their position within the sentence, we construct a document information vector
using the attention information between the final state of the encoder and
every prior hidden state. Then, the mean and standard deviation of the
continuous semantic space are learned by using this vector to take advantage of
the variational method. By using the document information vector to find the
semantic space of the sentence, it becomes possible to better capture the
global latent feature of the sentence. Experimental results on three natural
language tasks (i.e., language modeling, missing word imputation, paraphrase
identification) confirm that the proposed RNN--SVAE yields higher performance
than two benchmark models.
| 2018 | Computation and Language |
Online Learning for Effort Reduction in Interactive Neural Machine
Translation | Neural machine translation systems require large amounts of training data and
resources. Even with this, the quality of the translations may be insufficient
for some users or domains. In such cases, the output of the system must be
revised by a human agent. This can be done in a post-editing stage or following
an interactive machine translation protocol.
We explore the incremental update of neural machine translation systems
during the post-editing or interactive translation processes. Such
modifications aim to incorporate the new knowledge, from the edited sentences,
into the translation system. Updates to the model are performed on-the-fly, as
sentences are corrected, via online learning techniques. In addition, we
implement a novel interactive, adaptive system, able to react to
single-character interactions. This system greatly reduces the human effort
required for obtaining high-quality translations.
In order to stress-test our proposals, we conduct exhaustive experiments varying
the amount and type of data available for training. Results show that online
learning effectively achieves the objective of reducing the human effort
required during the post-editing or the interactive machine translation stages.
Moreover, these adaptive systems also perform well in scenarios with scarce
resources. We show that a neural machine translation system can be rapidly
adapted to a specific domain, exclusively by means of online learning
techniques.
| 2019 | Computation and Language |
TextZoo, a New Benchmark for Reconsidering Text Classification | Text representation is a fundamental concern in Natural Language Processing,
especially in text classification. Recently, many neural network approaches
with delicate representation models (e.g. FASTTEXT, CNN, RNN and many hybrid
models with attention mechanisms) have claimed to achieve the state of the art
on specific text classification datasets. However, the field lacks a unified
benchmark to compare these models and to reveal the advantage of each
sub-component under various settings. We re-implement more than 20 popular text
representation models for classification on more than 10 datasets. In this
paper, we reconsider the text classification task from the perspective of
neural networks and draw several findings from the analysis of the above results.
| 2018 | Computation and Language |
Syntax and Semantics of Italian Poetry in the First Half of the 20th
Century | In this paper we study, analyse and comment on rhetorical figures present in
some of the most interesting poetry of the first half of the twentieth century.
These figures are first traced back to some famous poets of the past and then
compared to classical Latin prose. Linguistic theory is then called in to show
how they can be represented in syntactic structures and classified as
noncanonical structures, by positioning discontinuous or displaced linguistic
elements in Spec XP projections at various levels of constituency. Then we
introduce LFG (Lexical Functional Grammar) as the theory that allows us to
connect syntactic noncanonical structures with informational structure and
psycholinguistic theories for complexity evaluation. We end up with two
computational linguistics experiments and then evaluate the results. The first
one uses best online parsers of Italian to parse poetic structures; the second
one uses Getarun, the system created at Ca Foscari Computational Linguistics
Laboratory. As will be shown, the first approach is unable to cope with these
structures due to the use of only statistical probabilistic information. On the
contrary, the second one, being a symbolic rule based system, is by far
superior and allows also to complete both semantic an pragmatic analysis.
| 2,018 | Computation and Language |
Sample Efficient Deep Reinforcement Learning for Dialogue Systems with
Large Action Spaces | In spoken dialogue systems, we aim to deploy artificial intelligence to build
automated dialogue agents that can converse with humans. A part of this effort
is the policy optimisation task, which attempts to find a policy describing how
to respond to humans, in the form of a function taking the current state of the
dialogue and returning the response of the system. In this paper, we
investigate deep reinforcement learning approaches to solve this problem.
Particular attention is given to actor-critic methods, off-policy reinforcement
learning with experience replay, and various methods aimed at reducing the bias
and variance of estimators. When combined, these methods result in the
previously proposed ACER algorithm that gave competitive results in gaming
environments. These environments, however, are fully observable and have a
relatively small action set, so in this paper we examine the application of ACER
to dialogue policy optimisation. We show that this method beats the current
state-of-the-art in deep learning approaches for spoken dialogue systems. This
not only leads to a more sample efficient algorithm that can train faster, but
also allows us to apply the algorithm in more difficult environments than
before. We thus experiment with learning in a very large action space, which
has two orders of magnitude more actions than previously considered. We find
that ACER trains significantly faster than the current state-of-the-art.
| 2,018 | Computation and Language |
Understanding Recurrent Neural State Using Memory Signatures | We demonstrate a network visualization technique to analyze the recurrent
state inside the LSTMs/GRUs used commonly in language and acoustic models.
Interpreting intermediate state and network activations inside end-to-end
models remains an open challenge. Our method allows users to understand exactly
how much and what history is encoded inside recurrent state in grapheme
sequence models. Our procedure trains multiple decoders that predict prior
input history. Compiling results from these decoders, a user can obtain a
signature of the recurrent kernel that characterizes its memory behavior. We
demonstrate this method's usefulness in revealing information divergence in the
bases of recurrent factorized kernels, visualizing the character-level
differences between the memory of n-gram and recurrent language models, and
extracting knowledge of history encoded in the layers of grapheme-based
end-to-end ASR networks.
| 2,018 | Computation and Language |
Automatic Generation of Language-Independent Features for Cross-Lingual
Classification | Many applications require categorization of text documents using predefined
categories. The main approach to performing text categorization is learning
from labeled examples. For many tasks, it may be difficult to find examples in
one language but easy in others. The problem of learning from examples in one
or more languages and classifying (categorizing) in another is called
cross-lingual learning. In this work, we present a novel approach that solves
the general cross-lingual text categorization problem. Our method generates,
for each training document, a set of language-independent features. Using these
features for training yields a language-independent classifier. At the
classification stage, we generate language-independent features for the
unlabeled document, and apply the classifier on the new representation.
To build the feature generator, we utilize a hierarchical
language-independent ontology, where each concept has a set of support
documents for each language involved. In the preprocessing stage, we use the
support documents to build a set of language-independent feature generators,
one for each language. The collection of these generators is used to map any
document into the language-independent feature space.
Our methodology works on the most general cross-lingual text categorization
problems, being able to learn from any mix of languages and classify documents
in any other language. We also present a method for exploiting the hierarchical
structure of the ontology to create virtual supporting documents for languages
that do not have them. We tested our method, using Wikipedia as our ontology,
on the most commonly used test collections in cross-lingual text
categorization, and found that it outperforms existing methods.
| 2,018 | Computation and Language |
Making "fetch" happen: The influence of social and linguistic context on
nonstandard word growth and decline | In an online community, new words come and go: today's "haha" may be replaced
by tomorrow's "lol." Changes in online writing are usually studied as a social
process, with innovations diffusing through a network of individuals in a
speech community. But unlike other types of innovation, language change is
shaped and constrained by the system in which it takes part. To investigate the
links between social and structural factors in language change, we undertake a
large-scale analysis of nonstandard word growth in the online community Reddit.
We find that dissemination across many linguistic contexts is a sign of growth:
words that appear in more linguistic contexts grow faster and survive longer.
We also find that social dissemination likely plays a less important role in
explaining word growth and decline than previously hypothesized.
| 2,018 | Computation and Language |
End-to-End Automatic Speech Translation of Audiobooks | We investigate end-to-end speech-to-text translation on a corpus of
audiobooks specifically augmented for this task. Previous works investigated
the extreme case where source language transcription is available neither during
learning nor during decoding, but we also study a midway case where source language
transcription is available at training time only. In this case, a single model
is trained to decode source speech into target text in a single pass.
Experimental results show that it is possible to train compact and efficient
end-to-end speech translation models in this setup. We also distribute the
corpus and hope that our speech translation baseline on this corpus will be
challenged in the future.
| 2,018 | Computation and Language |
Evaluating Compositionality in Sentence Embeddings | An important challenge for human-like AI is compositional semantics. Recent
research has attempted to address this by using deep neural networks to learn
vector space embeddings of sentences, which then serve as input to other tasks.
We present a new dataset for one such task, `natural language inference' (NLI),
that cannot be solved using only word-level knowledge and requires some
compositionality. We find that the performance of state of the art sentence
embeddings (InferSent; Conneau et al., 2017) on our new dataset is poor. We
analyze the decision rules learned by InferSent and find that they are
consistent with simple heuristics that are ecologically valid in its training
dataset. Further, we find that augmenting training with our dataset improves
test performance on our dataset without loss of performance on the original
training dataset. This highlights the importance of structured datasets in
better understanding and improving AI systems.
| 2,018 | Computation and Language |
A Unified Implicit Dialog Framework for Conversational Search | We propose a unified Implicit Dialog framework for goal-oriented, information
seeking tasks of Conversational Search applications. It aims to enable dialog
interactions with domain data without relying on explicitly encoded rules,
instead utilizing the underlying data representation to build the components
required for dialog interaction, which we refer to as Implicit Dialog in this
work. The proposed framework consists of a pipeline of End-to-End trainable
modules. A centralized knowledge representation is used to semantically ground
multiple dialog modules. An associated set of tools is integrated with the
framework to gather end users' input for continuous improvement of the system.
The goal is to facilitate development of conversational systems by identifying
the components and the data that can be adapted and reused across many end-user
applications. We demonstrate our approach by creating conversational agents for
several independent domains.
| 2,018 | Computation and Language |
"How Was Your Weekend?" A Generative Model of Phatic Conversation | Unspoken social rules, such as those that govern choosing a proper discussion
topic and when to change discussion topics, guide conversational behaviors. We
propose a computational model of conversation that can follow or break such
rules, with participant agents that respond accordingly. Additionally, we
demonstrate an application of the model: the Experimental Social Tutor (EST), a
first step toward a social skills training tool that generates human-readable
conversation and a conversational guideline at each point in the dialogue.
Finally, we discuss the design and results of a pilot study evaluating the EST.
Results show that our model is capable of producing conversations that follow
social norms.
| 2,018 | Computation and Language |
Sentence Boundary Detection for French with Subword-Level Information
Vectors and Convolutional Neural Networks | In this work we tackle the problem of sentence boundary detection applied to
French as a binary classification task ("sentence boundary" or "not sentence
boundary"). We combine convolutional neural networks with subword-level
information vectors, which are word embedding representations learned from
Wikipedia that take advantage of word morphology, so each word is
represented as a bag of its character n-grams.
  We use a large written dataset (French Gigaword) instead of standard-size
transcriptions to train and evaluate the proposed architectures, with the
intention of later applying the trained models to real-life ASR
transcriptions.
  Three different architectures are tested and show similar results; overall
accuracy for all models surpasses 0.96. All three models have good F1 scores,
reaching values over 0.97 for the "not sentence boundary" class. However, the
"sentence boundary" class shows lower scores, with the F1 metric decreasing to
0.778 for one of the models.
  Using subword-level information vectors seems to be very effective, leading us
to conclude that the word morphology encoded in the embedding representations
behaves like pixels in an image, making convolutional neural network
architectures feasible for this task.
| 2,018 | Computation and Language |
Network Features Based Co-hyponymy Detection | Distinguishing lexical relations has been a long term pursuit in natural
language processing (NLP) domain. Recently, in order to detect lexical
relations like hypernymy, meronymy, co-hyponymy etc., distributional semantic
models are being used extensively in some form or the other. Even though a lot
of efforts have been made for detecting hypernymy relation, the problem of
co-hyponymy detection has rarely been investigated. In this paper, we propose
a novel supervised model in which various network measures are utilized to
identify the co-hyponymy relation with high accuracy, performing better than or
at par with state-of-the-art models.
| 2,018 | Computation and Language |
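To make the "network measures" idea above concrete, here is a hedged sketch that extracts a few simple graph features for a candidate word pair from a toy distributional-thesaurus graph with networkx; the graph and the feature set are illustrative assumptions, not the paper's exact measures.

```python
# Toy distributional thesaurus: edges connect distributionally similar words.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("cat", "dog"), ("cat", "pet"), ("dog", "pet"),
    ("dog", "wolf"), ("car", "truck"), ("car", "vehicle"),
])

def pair_features(g: nx.Graph, u: str, v: str) -> dict:
    """Simple network measures that could feed a supervised co-hyponymy classifier."""
    common = len(list(nx.common_neighbors(g, u, v)))
    jaccard = next(nx.jaccard_coefficient(g, [(u, v)]))[2]
    try:
        dist = nx.shortest_path_length(g, u, v)
    except nx.NetworkXNoPath:
        dist = -1  # disconnected pair
    return {"common_neighbors": common, "jaccard": jaccard, "shortest_path": dist}

print(pair_features(G, "cat", "dog"))
```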
Examining the Tip of the Iceberg: A Data Set for Idiom Translation | Neural Machine Translation (NMT) has been widely used in recent years with
significant improvements for many language pairs. Although state-of-the-art NMT
systems are generating progressively better translations, idiom translation
remains one of the open challenges in this field. Idioms, a category of
multiword expressions, are an interesting language phenomenon where the overall
meaning of the expression cannot be composed from the meanings of its parts. A
first important challenge is the lack of dedicated data sets for learning and
evaluating idiom translation. In this paper we address this problem by creating
the first large-scale data set for idiom translation. Our data set is
automatically extracted from a widely used German-English translation corpus
and includes, for each language direction, a targeted evaluation set where all
sentences contain idioms and a regular training corpus where sentences
including idioms are marked. We release this data set and use it to perform
preliminary NMT experiments as the first step towards better idiom translation.
| 2,018 | Computation and Language |
A Short Survey on Sense-Annotated Corpora | Large sense-annotated datasets are increasingly necessary for training deep
supervised systems in Word Sense Disambiguation. However, gathering
high-quality sense-annotated data for as many instances as possible is a
laborious and expensive task. This has led to the proliferation of automatic
and semi-automatic methods for overcoming the so-called knowledge-acquisition
bottleneck. In this short survey we present an overview of sense-annotated
corpora, annotated either manually or (semi-)automatically, that are currently
available for different languages and feature distinct lexical resources as
inventories of senses, i.e. WordNet, Wikipedia and BabelNet. Furthermore, we provide
the reader with general statistics of each dataset and an analysis of their
specific features.
| 2,020 | Computation and Language |
Distributional Term Set Expansion | This paper is a short empirical study of the performance of centrality and
classification based iterative term set expansion methods for distributional
semantic models. Iterative term set expansion is an interactive process using
distributional semantics models where a user labels terms as belonging to some
sought after term set, and a system uses this labeling to supply the user with
new, candidate, terms to label, trying to maximize the number of positive
examples found. While centrality based methods have a long history in term set
expansion, we compare them to classification methods based on the Simple
Margin method, an Active Learning approach to classification using Support
Vector Machines. Examining the performance of various centrality and
classification based methods for a variety of distributional models over five
different term sets, we can show that active learning based methods
consistently outperform centrality based methods.
| 2,018 | Computation and Language |
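The Simple Margin heuristic mentioned above can be illustrated in a few lines: query the unlabeled term whose representation lies closest to the SVM decision boundary. The embeddings and labels below are synthetic stand-ins, not the paper's data or distributional models.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(20, 50))      # stand-in term embeddings
y_labeled = rng.integers(0, 2, size=20)    # 1 = belongs to the sought term set
X_unlabeled = rng.normal(size=(200, 50))

clf = SVC(kernel="linear")
clf.fit(X_labeled, y_labeled)

# Simple Margin: pick the candidate closest to the separating hyperplane.
margins = np.abs(clf.decision_function(X_unlabeled))
query_idx = int(np.argmin(margins))
print("next term to show the annotator:", query_idx)
```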
Linguistic unit discovery from multi-modal inputs in unwritten
languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop | We summarize the accomplishments of a multi-disciplinary workshop exploring
the computational and scientific issues surrounding the discovery of linguistic
units (subwords and words) in a language without orthography. We study the
replacement of orthographic transcriptions by images and/or translated text in
a well-resourced language to help unsupervised discovery from raw speech.
| 2,018 | Computation and Language |
Classifying movie genres by analyzing text reviews | This paper proposes a method for classifying movie genres by only looking at
text reviews. The data used are from the Large Movie Review Dataset v1.0 and IMDb.
This paper compares a K-nearest neighbors (KNN) model and a multilayer
perceptron (MLP) that use tf-idf as input features. The paper also discusses
different evaluation metrics used when doing multi-label classification. For
the data used in this research, the KNN model performed best, with an
accuracy of 55.4% and a Hamming loss of 0.047.
| 2,018 | Computation and Language |
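For readers unfamiliar with the setup above, this is a hedged minimal sketch of tf-idf features feeding a multi-label KNN classifier, evaluated with Hamming loss; the four toy reviews and genre labels are invented for illustration, whereas the real experiments use the IMDb data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import hamming_loss
import numpy as np

reviews = [
    "a hilarious comedy with a romantic subplot",
    "terrifying horror full of suspense",
    "an action packed thriller with explosions",
    "a funny and light hearted romance",
]
# Multi-label indicator matrix: columns = [comedy, horror, action, romance]
labels = np.array([[1, 0, 0, 1],
                   [0, 1, 0, 0],
                   [0, 0, 1, 0],
                   [1, 0, 0, 1]])

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviews)

knn = KNeighborsClassifier(n_neighbors=1)   # KNN handles multi-label targets natively
knn.fit(X, labels)

pred = knn.predict(X)
print("Hamming loss:", hamming_loss(labels, pred))
```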
Deep contextualized word representations | We introduce a new type of deep contextualized word representation that
models both (1) complex characteristics of word use (e.g., syntax and
semantics), and (2) how these uses vary across linguistic contexts (i.e., to
model polysemy). Our word vectors are learned functions of the internal states
of a deep bidirectional language model (biLM), which is pre-trained on a large
text corpus. We show that these representations can be easily added to existing
models and significantly improve the state of the art across six challenging
NLP problems, including question answering, textual entailment and sentiment
analysis. We also present an analysis showing that exposing the deep internals
of the pre-trained network is crucial, allowing downstream models to mix
different types of semi-supervision signals.
| 2,018 | Computation and Language |
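One concrete piece of the recipe above is the task-specific weighted combination of biLM layers. The sketch below shows such a learned scalar mix in PyTorch, with random tensors standing in for real biLM activations; it is an illustration of the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Softmax-weighted sum of L layer representations, scaled by a learned gamma."""
    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layers: torch.Tensor) -> torch.Tensor:
        # layers: (num_layers, batch, seq_len, dim)
        s = torch.softmax(self.weights, dim=0).view(-1, 1, 1, 1)
        return self.gamma * (s * layers).sum(dim=0)

# Pretend the biLM produced 3 layers of activations for a batch of 2 sentences.
fake_bilm_layers = torch.randn(3, 2, 7, 1024)
mix = ScalarMix(num_layers=3)
contextual_word_vectors = mix(fake_bilm_layers)
print(contextual_word_vectors.shape)  # torch.Size([2, 7, 1024])
```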
Universal Neural Machine Translation for Extremely Low Resource
Languages | In this paper, we propose a new universal machine translation approach
focusing on languages with a limited amount of parallel data. Our proposed
approach utilizes a transfer-learning approach to share lexical and sentence
level representations across multiple source languages into one target
language. The lexical part is shared through a Universal Lexical Representation
to support multilingual word-level sharing. The sentence-level sharing is
represented by a model of experts from all source languages that share the
source encoders with all other languages. This enables the low-resource
language to utilize the lexical and sentence representations of the higher
resource languages. Our approach is able to achieve 23 BLEU on Romanian-English
WMT2016 using a tiny parallel corpus of 6k sentences, compared to the 18 BLEU
of a strong baseline system that uses multilingual training and
back-translation. Furthermore, we show that the proposed approach can achieve
almost 20 BLEU on the same dataset through fine-tuning a pre-trained
multi-lingual system in a zero-shot setting.
| 2,018 | Computation and Language |
Improving Retrieval Modeling Using Cross Convolution Networks And Multi
Frequency Word Embedding | To build a satisfying chatbot that has the ability to manage a
goal-oriented multi-turn dialogue, accurate modeling of human conversation is
crucial. In this paper we concentrate on the task of response selection for
multi-turn human-computer conversation with a given context. Previous
approaches show weakness in capturing information about rare keywords that appear
in the context, the correct response, or both, and struggle with long input
sequences. We propose Cross Convolution Network (CCN) and Multi Frequency word
embedding to address both problems. We train several models using the Ubuntu
Dialogue dataset which is the largest freely available multi-turn based
dialogue corpus. We further build an ensemble model by averaging predictions of
multiple models. We achieve a new state-of-the-art on this dataset with
considerable improvements compared to previous best results.
| 2,018 | Computation and Language |
Deep Learning Based Speech Beamforming | Multi-channel speech enhancement with ad-hoc sensors has been a challenging
task. Speech model guided beamforming algorithms are able to recover natural
sounding speech, but the speech models tend to be oversimplified or the
inference would otherwise be too complicated. On the other hand, deep learning
based enhancement approaches are able to learn complicated speech distributions
and perform efficient inference, but they are unable to deal with variable
number of input channels. Also, deep learning approaches introduce a lot of
errors, particularly in the presence of unseen noise types and settings. We
have therefore proposed an enhancement framework called DEEPBEAM, which
combines the two complementary classes of algorithms. DEEPBEAM introduces a
beamforming filter to produce natural sounding speech, but the filter
coefficients are determined with the help of a monaural speech enhancement
neural network. Experiments on synthetic and real-world data show that DEEPBEAM
is able to produce clean, dry and natural sounding speech, and is robust
against unseen noise.
| 2,018 | Computation and Language |
Open Information Extraction on Scientific Text: An Evaluation | Open Information Extraction (OIE) is the task of the unsupervised creation of
structured information from text. OIE is often used as a starting point for a
number of downstream tasks including knowledge base construction, relation
extraction, and question answering. While OIE methods are targeted at being
domain independent, they have been evaluated primarily on newspaper,
encyclopedic or general web text. In this article, we evaluate the performance
of OIE on scientific texts originating from 10 different disciplines. To do so,
we use two state-of-the-art OIE systems applying a crowd-sourcing approach. We
find that OIE systems perform significantly worse on scientific text than
encyclopedic text. We also provide an error analysis and suggest areas of work
to reduce errors. Our corpus of sentences and judgments is made available.
| 2,018 | Computation and Language |
DR-BiLSTM: Dependent Reading Bidirectional LSTM for Natural Language
Inference | We present a novel deep learning architecture to address the natural language
inference (NLI) task. Existing approaches mostly rely on simple reading
mechanisms for independent encoding of the premise and hypothesis. Instead, we
propose a novel dependent reading bidirectional LSTM network (DR-BiLSTM) to
efficiently model the relationship between a premise and a hypothesis during
encoding and inference. We also introduce a sophisticated ensemble strategy to
combine our proposed models, which noticeably improves final predictions.
Finally, we demonstrate how the results can be improved further with an
additional preprocessing step. Our evaluation shows that DR-BiLSTM obtains the
best single model and ensemble model results achieving the new state-of-the-art
scores on the Stanford NLI dataset.
| 2,018 | Computation and Language |
Tools and resources for Romanian text-to-speech and speech-to-text
applications | In this paper we introduce a set of resources and tools aimed at providing
support for natural language processing, text-to-speech synthesis and speech
recognition for Romanian. While the tools are general purpose and can be used
for any language (we successfully trained our system for more than 50 languages
and participated in the Universal Dependencies Shared Task), the resources are
only relevant for Romanian language processing.
| 2,018 | Computation and Language |
Calculating the similarity between words and sentences using a lexical
database and corpus statistics | Calculating the semantic similarity between sentences is a long-standing problem
in the area of natural language processing. The semantic analysis field has a
crucial role to play in the research related to the text analytics. The
semantic similarity differs as the domain of operation differs. In this paper,
we present a methodology which deals with this issue by incorporating semantic
similarity and corpus statistics. To calculate the semantic similarity between
words and sentences, the proposed method follows an edge-based approach using a
lexical database. The methodology can be applied in a variety of domains. The
methodology has been tested on both benchmark standards and a mean human
similarity dataset. When tested on these two datasets, it gives the highest
correlation values for both word and sentence similarity, outperforming other
similar models. For word similarity, we obtained a Pearson correlation
coefficient of 0.8753, and for sentence similarity, the correlation obtained is
0.8794.
| 2,018 | Computation and Language |
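As a hedged illustration of the edge-based, lexical-database component described above, the sketch below scores a word pair with WordNet path similarity via NLTK, taking the best score over candidate synsets; it omits the corpus-statistics part and is not the paper's exact formula. It assumes the WordNet data has been downloaded (nltk.download("wordnet")).

```python
from nltk.corpus import wordnet as wn

def word_similarity(w1: str, w2: str) -> float:
    """Best edge/path-based similarity over all synset pairs (0.0 if no path)."""
    best = 0.0
    for s1 in wn.synsets(w1):
        for s2 in wn.synsets(w2):
            sim = s1.path_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

print(word_similarity("car", "automobile"))  # shared synset -> 1.0
print(word_similarity("car", "banana"))      # distant concepts -> low score
```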
Event Nugget Detection with Forward-Backward Recurrent Neural Networks | Traditional event detection methods heavily rely on manually engineered rich
features. Recent deep learning approaches alleviate this problem by automatic
feature engineering. But such efforts, like traditional methods, have so far only
focused on single-token event mentions, whereas in practice events can also be
a phrase. We instead use forward-backward recurrent neural networks (FBRNNs) to
detect events that can be either words or phrases. To the best of our knowledge,
this is one of the first efforts to handle multi-word events and also the first
attempt to use RNNs for event detection. Experimental results demonstrate that
FBRNN is competitive with the state-of-the-art methods on the ACE 2005 and the
Rich ERE 2015 event detection tasks.
| 2,016 | Computation and Language |
Multinomial Adversarial Networks for Multi-Domain Text Classification | Many text classification tasks are known to be highly domain-dependent.
Unfortunately, the availability of training data can vary drastically across
domains. Worse still, for some domains there may not be any annotated data at
all. In this work, we propose a multinomial adversarial network (MAN) to tackle
the text classification problem in this real-world multidomain setting (MDTC).
We provide theoretical justifications for the MAN framework, proving that
different instances of MANs are essentially minimizers of various f-divergence
metrics (Ali and Silvey, 1966) among multiple probability distributions. MANs
are thus a theoretically sound generalization of traditional adversarial
networks that discriminate over two distributions. More specifically, for the
MDTC task, MAN learns features that are invariant across multiple domains by
resorting to its ability to reduce the divergence among the feature
distributions of each domain. We present experimental results showing that MANs
significantly outperform the prior art on the MDTC task. We also show that MANs
achieve state-of-the-art performance for domains with no labeled data.
| 2,018 | Computation and Language |
Explainable Prediction of Medical Codes from Clinical Text | Clinical notes are text documents that are created by clinicians for each
patient encounter. They are typically accompanied by medical codes, which
describe the diagnosis and treatment. Annotating these codes is labor intensive
and error prone; furthermore, the connection between the codes and the text is
not annotated, obscuring the reasons and details behind specific diagnoses and
treatments. We present an attentional convolutional network that predicts
medical codes from clinical text. Our method aggregates information across the
document using a convolutional neural network, and uses an attention mechanism
to select the most relevant segments for each of the thousands of possible
codes. The method is accurate, achieving precision@8 of 0.71 and a Micro-F1 of
0.54, which are both better than the prior state of the art. Furthermore,
through an interpretability evaluation by a physician, we show that the
attention mechanism identifies meaningful explanations for each code assignment.
| 2,018 | Computation and Language |
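The precision@8 figure reported above can be computed as follows; the scores and label matrix in this sketch are synthetic, not the paper's model outputs.

```python
import numpy as np

def precision_at_k(scores: np.ndarray, labels: np.ndarray, k: int = 8) -> float:
    """Mean fraction of the top-k predicted codes that are truly assigned."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = np.take_along_axis(labels, topk, axis=1)
    return float(hits.mean())

rng = np.random.default_rng(0)
scores = rng.random((4, 50))                       # predicted score per code
labels = (rng.random((4, 50)) < 0.1).astype(int)   # gold code assignments
print(precision_at_k(scores, labels, k=8))
```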
JU_KS@SAIL_CodeMixed-2017: Sentiment Analysis for Indian Code Mixed
Social Media Texts | This paper reports about our work in the NLP Tool Contest @ICON-2017, shared
task on Sentiment Analysis for Indian Languages (SAIL) (code mixed). To
implement our system, we have used a machine learning algorithm called
Multinomial Naive Bayes trained using n-gram and SentiWordNet features. We
have also used a small SentiWordNet for English and a small SentiWordNet for
Bengali, but we have not used any SentiWordNet for Hindi. We have
tested our system on the Hindi-English and Bengali-English code mixed social
media data sets released for the contest. The performance of our system is very
close to that of the best system that participated in the contest. For both Bengali-English and
Hindi-English runs, our system was ranked at the 3rd position out of all
submitted runs and awarded the 3rd prize in the contest.
| 2,017 | Computation and Language |
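A minimal sketch of the learning setup described above: word n-gram counts feeding a Multinomial Naive Bayes sentiment classifier. The code-mixed examples and labels are invented, and the SentiWordNet features are omitted for brevity.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "movie bahut accha tha loved it",
    "worst experience bilkul bekar service",
    "khana was tasty and staff friendly",
    "total waste of time bohot kharab",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigram and bigram counts
    MultinomialNB(),
)
model.fit(texts, labels)
print(model.predict(["accha khana loved the staff"]))
```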
Cross-topic Argument Mining from Heterogeneous Sources Using
Attention-based Neural Networks | Argument mining is a core technology for automating argument search in large
document collections. Despite its usefulness for this task, most current
approaches to argument mining are designed for use only with specific text
types and fall short when applied to heterogeneous texts. In this paper, we
propose a new sentential annotation scheme that is reliably applicable by crowd
workers to arbitrary Web texts. We source annotations for over 25,000 instances
covering eight controversial topics. The results of cross-topic experiments
show that our attention-based neural network generalizes best to unseen topics
and outperforms vanilla BiLSTM models by 6% in accuracy and 11% in F-score.
| 2,018 | Computation and Language |
Disentangling Aspect and Opinion Words in Target-based Sentiment
Analysis using Lifelong Learning | Given a target name, which can be a product aspect or entity, identifying its
aspect words and opinion words in a given corpus is a fine-grained task in
target-based sentiment analysis (TSA). This task is challenging, especially
when we have no labeled data and we want to perform it for any given domain. To
address it, we propose a general two-stage approach. Stage one extracts/groups
the target-related words (called t-words) for a given target. This is relatively
easy as we can apply an existing semantics-based learning technique. Stage two
separates the aspect and opinion words from the grouped t-words, which is
challenging because we often do not have enough word-level aspect and opinion
labels. In this work, we formulate this problem in a PU learning setting and
incorporate the idea of lifelong learning to solve it. Experimental results
show the effectiveness of our approach.
| 2,018 | Computation and Language |
Articulatory information and Multiview Features for Large Vocabulary
Continuous Speech Recognition | This paper explores the use of multi-view features and their discriminative
transforms in a convolutional deep neural network (CNN) architecture for a
continuous large vocabulary speech recognition task. Mel-filterbank energies
and perceptually motivated forced damped oscillator coefficient (DOC) features
are used after feature-space maximum-likelihood linear regression (fMLLR)
transforms, which are combined and fed as a multi-view feature to a single CNN
acoustic model. Use of multi-view feature representation demonstrated
significant reduction in word error rates (WERs) compared to the use of
individual features by themselves. In addition, when articulatory information
was used as an additional input to a fused deep neural network (DNN) and CNN
acoustic model, it was found to demonstrate further reduction in WER for the
Switchboard subset and the CallHome subset (containing partly non-native
accented speech) of the NIST 2000 conversational telephone speech test set,
reducing the error rate by 12% relative to the baseline in both cases. This
work shows that multi-view features in association with articulatory
information can improve speech recognition robustness to spontaneous and
non-native speech.
| 2,018 | Computation and Language |
Deep Generative Model for Joint Alignment and Word Representation | This work exploits translation data as a source of semantically relevant
learning signal for models of word representation. In particular, we exploit
equivalence through translation as a form of distributed context and jointly
learn how to embed and align with a deep generative model. Our EmbedAlign model
embeds words in their complete observed context and learns by marginalisation
of latent lexical alignments. Moreover, it embeds words as posterior probability
densities, rather than point estimates, which allows us to compare words in
context using a measure of overlap between distributions (e.g. KL divergence).
We investigate our model's performance on a range of lexical semantics tasks
achieving competitive results on several standard benchmarks including natural
language inference, paraphrasing, and text similarity.
| 2,018 | Computation and Language |
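The abstract above compares words embedded as probability densities using KL divergence. The sketch below evaluates the closed-form KL between two diagonal Gaussians; the means and variances are random placeholders for what a model like EmbedAlign would produce for two occurrences of a word in different contexts.

```python
import numpy as np

def kl_diag_gaussians(mu1, var1, mu2, var2):
    """KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ), closed form."""
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

rng = np.random.default_rng(0)
mu_a, var_a = rng.normal(size=100), np.full(100, 0.5)   # e.g. "bank" in a finance context
mu_b, var_b = rng.normal(size=100), np.full(100, 0.8)   # e.g. "bank" in a river context
print(kl_diag_gaussians(mu_a, var_a, mu_b, var_b))
```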
Learning beyond datasets: Knowledge Graph Augmented Neural Networks for
Natural language Processing | Machine Learning has been the quintessential solution for many AI problems,
but learning is still heavily dependent on the specific training data. Some
learning models can incorporate prior knowledge in the Bayesian setup,
but these learning models do not have the ability to access any organised
world knowledge on demand. In this work, we propose to enhance learning models
with world knowledge in the form of Knowledge Graph (KG) fact triples for
Natural Language Processing (NLP) tasks. Our aim is to develop a deep learning
model that can extract relevant prior support facts from knowledge graphs
depending on the task using attention mechanism. We introduce a
convolution-based model for learning representations of knowledge graph entity
and relation clusters in order to reduce the attention space. We show that the
proposed method is highly scalable to the amount of prior information that has
to be processed and can be applied to any generic NLP task. Using this method
we show significant improvement in performance for text classification with
News20, DBPedia datasets and natural language inference with Stanford Natural
Language Inference (SNLI) dataset. We also demonstrate that a deep learning
model can be trained well with a substantially smaller amount of labeled training
data, when it has access to organised world knowledge in the form of knowledge
graph.
| 2,018 | Computation and Language |
Instance-based Inductive Deep Transfer Learning by Cross-Dataset
Querying with Locality Sensitive Hashing | Supervised learning models are typically trained on a single dataset and the
performance of these models relies heavily on the size of the dataset, i.e.,
the amount of data available with ground truth. Learning algorithms try to
generalize solely based on the data presented to them during training.
In this work, we propose an inductive transfer learning method that can augment
learning models by infusing similar instances from different learning tasks in
the Natural Language Processing (NLP) domain. We propose to use instance
representations from a source dataset, \textit{without inheriting anything}
from the source learning model. Representations of the instances of
\textit{source} \& \textit{target} datasets are learned, retrieval of relevant
source instances is performed using a soft-attention mechanism and
\textit{locality sensitive hashing}, and the retrieved instances are then incorporated into the model during
training on the target dataset. Our approach simultaneously exploits the local
\textit{instance level information} as well as the macro statistical viewpoint
of the dataset. Using this approach we have shown significant improvements for
three major news classification datasets over the baseline. Experimental
evaluations also show that the proposed approach reduces dependency on labeled
data by a significant margin for comparable performance. With our proposed
cross-dataset learning procedure we show that one can achieve competitive or
better performance compared to learning from a single dataset.
| 2,018 | Computation and Language |
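To make the retrieval step above concrete, here is a hedged sketch of sign-random-projection locality sensitive hashing: source-dataset representations are bucketed by which side of a set of random hyperplanes they fall on, and a target instance looks up its bucket in constant time. The vectors are random stand-ins for learned instance representations.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
dim, n_bits = 64, 12
planes = rng.normal(size=(n_bits, dim))        # random hyperplanes

def lsh_key(vec: np.ndarray) -> str:
    """Hash a vector to a bit string by the side of each hyperplane it falls on."""
    return "".join("1" if d > 0 else "0" for d in planes @ vec)

source_reprs = rng.normal(size=(1000, dim))    # instances from the source dataset
buckets = defaultdict(list)
for i, v in enumerate(source_reprs):
    buckets[lsh_key(v)].append(i)

target_repr = rng.normal(size=dim)             # one target-dataset instance
candidates = buckets[lsh_key(target_repr)]     # approximate-neighbour candidates
print(f"{len(candidates)} candidate source instances retrieved")
```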
Structured-based Curriculum Learning for End-to-end English-Japanese
Speech Translation | Sequence-to-sequence attentional-based neural network architectures have been
shown to provide a powerful model for machine translation and speech
recognition. Recently, several works have attempted to extend the models for
end-to-end speech translation task. However, the usefulness of these models
was only investigated on language pairs with similar syntax and word order
(e.g., English-French or English-Spanish). In this work, we focus on end-to-end
speech translation tasks on syntactically distant language pairs (e.g.,
English-Japanese) that require distant word reordering.
To guide the encoder-decoder attentional model to learn this difficult
problem, we propose a structured-based curriculum learning strategy.
Unlike conventional curriculum learning that gradually emphasizes difficult
data examples, we formalize learning strategies from easier network structures
to more difficult network structures. Here, we start the training with
end-to-end encoder-decoder for speech recognition or text-based machine
translation task then gradually move to end-to-end speech translation task. The
experimental results show that the proposed approach provides significant
improvements in comparison with training without curriculum learning.
| 2,018 | Computation and Language |
Neural Voice Cloning with a Few Samples | Voice cloning is a highly desired feature for personalized speech interfaces.
Neural network based speech synthesis has been shown to generate high quality
speech for a large number of speakers. In this paper, we introduce a neural
voice cloning system that takes a few audio samples as input. We study two
approaches: speaker adaptation and speaker encoding. Speaker adaptation is
based on fine-tuning a multi-speaker generative model with a few cloning
samples. Speaker encoding is based on training a separate model to directly
infer a new speaker embedding from cloning audios and to be used with a
multi-speaker generative model. In terms of naturalness of the speech and its
similarity to the original speaker, both approaches can achieve good performance,
even with very few cloning audios. While speaker adaptation can achieve better
naturalness and similarity, the cloning time or required memory for the speaker
encoding approach is significantly less, making it favorable for low-resource
deployment.
| 2,018 | Computation and Language |
Authorship Attribution Using the Chaos Game Representation | The Chaos Game Representation, a method for creating images from nucleotide
sequences, is modified to make images from chunks of text documents. Machine
learning methods are then applied to train classifiers based on authorship.
Experiments are conducted on several benchmark data sets in English, including
the widely used Federalist Papers, and one in Portuguese. Validation results
for the trained classifiers are competitive with the best methods in prior
literature. The methodology is also successfully applied for text
categorization with encouraging results. One classifier method is moreover seen
to hold promise for the task of digital fingerprinting.
| 2,018 | Computation and Language |
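The Chaos Game Representation can be sketched in a few lines: map each character to a corner of the unit square, move halfway toward that corner at every step, and bin the visited points into an image-like grid. The four-corner character mapping below is a simplification invented for illustration; the paper's construction may differ.

```python
import numpy as np

CORNERS = {0: (0.0, 0.0), 1: (0.0, 1.0), 2: (1.0, 0.0), 3: (1.0, 1.0)}

def cgr_image(text: str, size: int = 32) -> np.ndarray:
    """Accumulate chaos-game points for a text chunk into a size x size grid."""
    grid = np.zeros((size, size))
    x, y = 0.5, 0.5
    for ch in text.lower():
        cx, cy = CORNERS[ord(ch) % 4]            # crude character-to-corner mapping
        x, y = (x + cx) / 2.0, (y + cy) / 2.0    # midpoint rule
        grid[min(int(y * size), size - 1), min(int(x * size), size - 1)] += 1
    return grid

img = cgr_image("It was the best of times, it was the worst of times.")
print(img.shape, int(img.sum()))
```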
Towards a Continuous Knowledge Learning Engine for Chatbots | Although chatbots have been very popular in recent years, they still have
some serious weaknesses which limit the scope of their applications. One major
weakness is that they cannot learn new knowledge during the conversation
process, i.e., their knowledge is fixed beforehand and cannot be expanded or
updated during conversation. In this paper, we propose to build a general
knowledge learning engine for chatbots to enable them to continuously and
interactively learn new knowledge during conversations. As time goes by, they
become more and more knowledgeable and better and better at learning and
conversation. We model the task as an open-world knowledge base completion
problem and propose a novel technique called lifelong interactive learning and
inference (LiLi) to solve it. LiLi works by imitating how humans acquire
knowledge and perform inference during an interactive conversation. Our
experimental results show LiLi is highly promising.
| 2,018 | Computation and Language |
Fluency Over Adequacy: A Pilot Study in Measuring User Trust in
Imperfect MT | Although measuring intrinsic quality has been a key factor in the advancement
of Machine Translation (MT), successfully deploying MT requires considering not
just intrinsic quality but also the user experience, including aspects such as
trust. This work introduces a method of studying how users modulate their trust
in an MT system after seeing errorful (disfluent or inadequate) output amidst
good (fluent and adequate) output. We conduct a survey to determine how users
respond to good translations compared to translations that are either adequate
but not fluent, or fluent but not adequate. In this pilot study, users
responded strongly to disfluent translations, but were, surprisingly, much less
concerned with adequacy.
| 2,018 | Computation and Language |
Bayesian Models for Unit Discovery on a Very Low Resource Language | Developing speech technologies for low-resource languages has become a very
active research field over the last decade. Among others, Bayesian models have
shown some promising results on artificial examples but still lack in situ
experiments. Our work applies state-of-the-art Bayesian models to unsupervised
Acoustic Unit Discovery (AUD) in a real low-resource language scenario. We also
show that Bayesian models can naturally integrate information from other
resourceful languages by means of an informative prior, leading to more consistent
discovered units. Finally, discovered acoustic units are used, either as the
1-best sequence or as a lattice, to perform word segmentation. Word
segmentation results show that this Bayesian approach clearly outperforms a
Segmental-DTW baseline on the same corpus.
| 2,018 | Computation and Language |
Global-scale phylogenetic linguistic inference from lexical resources | Automatic phylogenetic inference plays an increasingly important role in
computational historical linguistics. Most pertinent work is currently based on
expert cognate judgments. This limits the scope of this approach to a small
number of well-studied language families. We used machine learning techniques
to compile data suitable for phylogenetic inference from the ASJP database, a
collection of almost 7,000 phonetically transcribed word lists over 40
concepts, covering two thirds of the extant world-wide linguistic diversity.
First, we estimated Pointwise Mutual Information scores between sound classes
using weighted sequence alignment and general-purpose optimization. From this
we computed a dissimilarity matrix over all ASJP word lists. This matrix is
suitable for distance-based phylogenetic inference. Second, we applied cognate
clustering to the ASJP data, using supervised training of an SVM classifier on
expert cognacy judgments. Third, we defined two types of binary characters,
based on automatically inferred cognate classes and on sound-class occurrences.
Several tests are reported demonstrating the suitability of these characters
for character-based phylogenetic inference.
| 2,018 | Computation and Language |
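The first step above, estimating Pointwise Mutual Information between sound classes, can be illustrated with a toy computation over a handful of aligned sound pairs; the actual scores are estimated from weighted alignments over the full ASJP database.

```python
import math
from collections import Counter

aligned_pairs = [("p", "b"), ("p", "p"), ("t", "d"), ("t", "t"),
                 ("p", "b"), ("k", "g"), ("t", "d"), ("p", "p")]

pair_counts = Counter(aligned_pairs)
left_counts = Counter(a for a, _ in aligned_pairs)
right_counts = Counter(b for _, b in aligned_pairs)
n = len(aligned_pairs)

def pmi(a: str, b: str) -> float:
    """log p(a, b) / (p(a) p(b)) estimated from the aligned pairs."""
    p_ab = pair_counts[(a, b)] / n
    return math.log(p_ab / ((left_counts[a] / n) * (right_counts[b] / n)))

print("PMI(p, b) =", round(pmi("p", "b"), 3))
```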
Building a Word Segmenter for Sanskrit Overnight | There is an abundance of digitised texts available in Sanskrit. However, the
word segmentation task in such texts is challenging due to the issue of
'Sandhi'. In Sandhi, words in a sentence often fuse together to form a single
chunk of text, where the word delimiter vanishes and sounds at the word
boundaries undergo transformations, which is also reflected in the written
text. Here, we propose an approach that uses a deep sequence to sequence
(seq2seq) model that takes only the sandhied string as the input and predicts
the unsandhied string. The state of the art models are linguistically involved
and have external dependencies for the lexical and morphological analysis of
the input. Our model can be trained "overnight" and be used for production. In
spite of the knowledge-lean approach, our system performs better than the
current state of the art, with a 16.79% improvement over it.
| 2,018 | Computation and Language |
Can Network Embedding of Distributional Thesaurus be Combined with Word
Vectors for Better Representation? | Distributed representations of words learned from text have proved to be
successful in various natural language processing tasks in recent times. While
some methods represent words as vectors computed from text using predictive
model (Word2vec) or dense count based model (GloVe), others attempt to
represent these in a distributional thesaurus network structure where the
neighborhood of a word is a set of words having adequate context overlap.
Motivated by the recent surge of research in network embedding techniques
(DeepWalk, LINE, node2vec, etc.), we turn a distributional thesaurus network
into dense word vectors and investigate the usefulness of distributional
thesaurus embedding in improving overall word representation. This is the first
attempt where we show that combining the proposed word representation obtained
by distributional thesaurus embedding with the state-of-the-art word
representations helps in improving the performance by a significant margin when
evaluated on NLP tasks like word similarity and relatedness, synonym
detection, and analogy detection. Additionally, we show that even without using any
handcrafted lexical resources we can come up with representations having
comparable performance in the word similarity and relatedness tasks compared to
the representations where a lexical resource has been used.
| 2,018 | Computation and Language |
Sentiment Analysis on Speaker Specific Speech Data | Sentiment analysis has evolved over the past few decades; most of the work in it
has revolved around textual sentiment analysis with text mining techniques. But
audio sentiment analysis is still in a nascent stage in the research community.
In this proposed research, we perform sentiment analysis on speaker
discriminated speech transcripts to detect the emotions of the individual
speakers involved in the conversation. We analyzed different techniques to
perform speaker discrimination and sentiment analysis to find efficient
algorithms to perform this task.
| 2,018 | Computation and Language |
Improved TDNNs using Deep Kernels and Frequency Dependent Grid-RNNs | Time delay neural networks (TDNNs) are an effective acoustic model for large
vocabulary speech recognition. The strength of the model can be attributed to
its ability to effectively model long temporal contexts. However, current TDNN
models are relatively shallow, which limits the modelling capability. This
paper proposes a method of increasing the network depth by deepening the kernel
used in the TDNN temporal convolutions. The best performing kernel consists of
three fully connected layers with a residual (ResNet) connection from the
output of the first to the output of the third. The addition of
spectro-temporal processing as the input to the TDNN in the form of a
convolutional neural network (CNN) and a newly designed Grid-RNN was
investigated. The Grid-RNN strongly outperforms a CNN if different sets of
parameters for different frequency bands are used and can be further enhanced
by using a bi-directional Grid-RNN. Experiments using the multi-genre broadcast
(MGB3) English data (275h) show that deep kernel TDNNs reduce the word error
rate (WER) by 6% relative, and when combined with the frequency dependent
Grid-RNN gives a relative WER reduction of 9%.
| 2,018 | Computation and Language |
Before Name-calling: Dynamics and Triggers of Ad Hominem Fallacies in
Web Argumentation | Arguing without committing a fallacy is one of the main requirements of an
ideal debate. But even when debating rules are strictly enforced and fallacious
arguments punished, arguers often lapse into attacking the opponent by an ad
hominem argument. As existing research lacks solid empirical investigation of
the typology of ad hominem arguments as well as their potential causes, this
paper fills this gap by (1) performing several large-scale annotation studies,
(2) experimenting with various neural architectures and validating our working
hypotheses, such as controversy or reasonableness, and (3) providing linguistic
insights into triggers of ad hominem using explainable neural network
architectures.
| 2,022 | Computation and Language |
Tied Multitask Learning for Neural Speech Translation | We explore multitask models for neural translation of speech, augmenting them
in order to reflect two intuitive notions. First, we introduce a model where
the second task decoder receives information from the decoder of the first
task, since higher-level intermediate representations should provide useful
information. Second, we apply regularization that encourages transitivity and
invertibility. We show that the application of these notions on jointly trained
models improves performance on the tasks of low-resource speech transcription
and translation. It also leads to better performance when using attention
information for word discovery over unsegmented input.
| 2,018 | Computation and Language |
Stability of meanings versus rate of replacement of words: an
experimental test | The words of a language are randomly replaced in time by new ones, but it has
long been known that words corresponding to some items (meanings) are less
frequently replaced than others. Usually, the rate of replacement for a given
item is not directly observable, but it is inferred by the estimated stability
which, on the contrary, is observable. This idea goes back a long way in the
lexicostatistical literature, nevertheless nothing ensures that it gives the
correct answer. The family of Romance languages allows for a direct test of the
estimated stabilities against the replacement rates since the proto-language
(Latin) is known and the replacement rates can be explicitly computed. The
output of the test is threefold: first, we prove that the standard approach
which tries to infer the replacement rates through the estimated stabilities is
sound; second, we are able to rewrite the fundamental formula of
Glottochronology for a non universal replacement rate (a rate which depends on
the item); third, we give indisputable evidence that the stability ranking is
far from being the same for different families of languages. This last result
is also supported by comparison with the Malagasy family of dialects. As a side
result we also provide some evidence that Vulgar Latin and not Late Classical
Latin is at the root of modern Romance languages.
| 2,018 | Computation and Language |
Zero-Shot Question Generation from Knowledge Graphs for Unseen
Predicates and Entity Types | We present a neural model for question generation from knowledge base triples
in a "Zero-Shot" setup, that is generating questions for triples containing
predicates, subject types or object types that were not seen at training time.
Our model leverages triple occurrences in the natural language corpus in an
encoder-decoder architecture, paired with an original part-of-speech copy
action mechanism to generate questions. Benchmark and human evaluation show
that our model sets a new state-of-the-art for zero-shot QG.
| 2,018 | Computation and Language |
Interpreting DNN output layer activations: A strategy to cope with
unseen data in speech recognition | Unseen data can degrade performance of deep neural net acoustic models. To
cope with unseen data, adaptation techniques are deployed. For unlabeled unseen
data, one must generate some hypothesis given an existing model, which is used
as the label for model adaptation. However, assessing the goodness of the
hypothesis can be difficult, and an erroneous hypothesis can lead to poorly
trained models. In such cases, a strategy to select data having reliable
hypothesis can ensure better model adaptation. This work proposes a
data-selection strategy for DNN model adaptation, where DNN output layer
activations are used to ascertain the goodness of a generated hypothesis. In a
DNN acoustic model, the output layer activations are used to generate target
class probabilities. Under unseen data conditions, the difference between the
most probable target and the next most probable target is decreased compared to
the same for seen data, indicating that the model may be uncertain while
generating its hypothesis. This work proposes a strategy to assess a model's
performance by analyzing the output layer activations, using a distance
measure between the most likely target and the next most likely target; this
measure is then used to select data for unsupervised adaptation.
| 2,018 | Computation and Language |
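The selection criterion described above boils down to the gap between the top two output-layer posteriors. A hedged sketch with synthetic posteriors standing in for real DNN activations:

```python
import numpy as np

def select_confident(posteriors: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Indices of utterances whose top-1 vs top-2 posterior margin exceeds threshold."""
    sorted_p = np.sort(posteriors, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]   # most likely minus next most likely
    return np.where(margin > threshold)[0]

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 40))
posteriors = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
print("selected for adaptation:", select_confident(posteriors, threshold=0.1))
```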
Learning Word Vectors for 157 Languages | Distributed word representations, or word vectors, have recently been applied
to many tasks in natural language processing, leading to state-of-the-art
performance. A key ingredient to the successful application of these
representations is to train them on very large corpora, and use these
pre-trained models in downstream tasks. In this paper, we describe how we
trained such high quality word representations for 157 languages. We used two
sources of data to train these models: the free online encyclopedia Wikipedia
and data from the common crawl project. We also introduce three new word
analogy datasets to evaluate these word vectors, for French, Hindi and Polish.
Finally, we evaluate our pre-trained word vectors on 10 languages for which
evaluation datasets exist, showing very strong performance compared to
previous models.
| 2,018 | Computation and Language |
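The released vectors are trained with fastText's subword-aware skip-gram model. As a hedged illustration, the sketch below trains comparable vectors on a toy corpus with gensim's FastText implementation (parameter names assume gensim 4.x; the paper itself trains on Wikipedia and Common Crawl with the original fastText tool).

```python
from gensim.models import FastText

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["dogs", "and", "cats", "are", "common", "pets"],
    ["word", "vectors", "capture", "distributional", "similarity"],
]

model = FastText(
    sentences=sentences,
    vector_size=100,   # dimensionality of the word vectors
    window=5,
    min_count=1,
    sg=1,              # skip-gram with character n-gram subwords
    epochs=10,
)
print(model.wv.most_similar("cat", topn=3))
```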
Learning Hidden Markov Models from Pairwise Co-occurrences with
Application to Topic Modeling | We present a new algorithm for identifying the transition and emission
probabilities of a hidden Markov model (HMM) from the emitted data.
Expectation-maximization becomes computationally prohibitive for long
observation records, which are often required for identification. The new
algorithm is particularly suitable for cases where the available sample size is
large enough to accurately estimate second-order output probabilities, but not
higher-order ones. We show that if one is only able to obtain a reliable
estimate of the pairwise co-occurrence probabilities of the emissions, it is
still possible to uniquely identify the HMM if the emission probability is
\emph{sufficiently scattered}. We apply our method to hidden topic Markov
modeling, and demonstrate that we can learn topics with higher quality if
documents are modeled as observations of HMMs sharing the same emission (topic)
probability, compared to the simple but widely used bag-of-words model.
| 2,018 | Computation and Language |
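The key statistic in the abstract above is the matrix of pairwise co-occurrence probabilities of consecutive emissions. A toy sketch of estimating it from an observation sequence follows; the factorization step that then recovers the HMM parameters is not shown.

```python
import numpy as np

emissions = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]   # observed symbol indices
n_symbols = 3

counts = np.zeros((n_symbols, n_symbols))
for a, b in zip(emissions[:-1], emissions[1:]):
    counts[a, b] += 1

co_occurrence = counts / counts.sum()   # estimate of P(x_t = a, x_{t+1} = b)
print(co_occurrence)
```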
Distilling Knowledge Using Parallel Data for Far-field Speech
Recognition | In order to improve the performance for far-field speech recognition, this
paper proposes to distill knowledge from the close-talking model to the
far-field model using parallel data. The close-talking model is called the
teacher model. The far-field model is called the student model. The student
model is trained to imitate the output distributions of the teacher model. This
constraint can be realized by minimizing the Kullback-Leibler (KL) divergence
between the output distribution of the student model and the teacher model.
Experimental results on AMI corpus show that the best student model achieves up
to 4.7% absolute word error rate (WER) reduction when compared with the
conventionally-trained baseline models.
| 2,018 | Computation and Language |
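The training constraint described above is a KL divergence between the student's and the teacher's output distributions over parallel utterances. A minimal PyTorch sketch on random logits; the output dimension and softening temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

temperature = 2.0                       # assumed softening temperature
student_logits = torch.randn(8, 3000)   # far-field (student) outputs, e.g. senone logits
teacher_logits = torch.randn(8, 3000)   # close-talking (teacher) outputs on parallel data

# KL( teacher || student ) over the softened output distributions.
loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
)
print(float(loss))
```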
TAP-DLND 1.0 : A Corpus for Document Level Novelty Detection | Detecting novelty of an entire document is an Artificial Intelligence (AI)
frontier problem that has widespread NLP applications, such as extractive
document summarization, tracking development of news events, predicting impact
of scholarly articles, etc. Important though the problem is, we are unaware of
any benchmark document level data that correctly addresses the evaluation of
automatic novelty detection techniques in a classification framework. To bridge
this gap, we present here a resource for benchmarking the techniques for
document level novelty detection. We create the resource via event-specific
crawling of news documents across several domains in a periodic manner. We
release the annotated corpus with necessary statistics and show its use with a
developed system for the problem in concern.
| 2,018 | Computation and Language |