Titles | Abstracts | Years | Categories
---|---|---|---
Modeling Coverage for Neural Machine Translation | Attention mechanism has enhanced state-of-the-art Neural Machine Translation
(NMT) by jointly learning to align and translate. It tends to ignore past
alignment information, however, which often leads to over-translation and
under-translation. To address this problem, we propose coverage-based NMT in
this paper. We maintain a coverage vector to keep track of the attention
history. The coverage vector is fed to the attention model to help adjust
future attention, which lets the NMT system pay more attention to untranslated
source words. Experiments show that the proposed approach significantly
improves both translation quality and alignment quality over standard
attention-based NMT.
| 2016 | Computation and Language |
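The coverage idea described in the abstract above can be illustrated with a small, self-contained sketch. This is not the authors' implementation: the additive-attention form, the weight names (Wh, Ws, wc, v), and the toy random dimensions are assumptions made for illustration only.

```python
import numpy as np

def coverage_attention(enc_states, dec_state, coverage, Wh, Ws, wc, v):
    """One step of additive attention with a coverage term.

    enc_states: (src_len, hidden) encoder annotations
    dec_state:  (hidden,) current decoder state
    coverage:   (src_len,) attention accumulated over past steps
    """
    # score each source position; the coverage of a position pushes
    # attention away from already-attended source words
    scores = np.tanh(enc_states @ Wh + dec_state @ Ws + np.outer(coverage, wc)) @ v
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                  # attention weights for this step
    coverage = coverage + alpha           # update the attention history
    context = alpha @ enc_states          # context vector fed to the decoder
    return alpha, coverage, context

# toy example with placeholder dimensions
rng = np.random.default_rng(0)
src_len, hidden = 5, 8
enc = rng.normal(size=(src_len, hidden))
Wh, Ws = rng.normal(size=(hidden, hidden)), rng.normal(size=(hidden, hidden))
wc, v = rng.normal(size=hidden), rng.normal(size=hidden)
cov = np.zeros(src_len)
for _ in range(3):                        # three decoding steps
    alpha, cov, ctx = coverage_attention(enc, rng.normal(size=hidden), cov, Wh, Ws, wc, v)
    print(np.round(alpha, 3))
```

Feeding the accumulated coverage back into the score computation is what discourages the model from re-attending to source positions it has already translated.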
Graded Entailment for Compositional Distributional Semantics | The categorical compositional distributional model of natural language
provides a conceptually motivated procedure to compute the meaning of
sentences, given their grammatical structure and the meanings of their words. This
approach has outperformed other models in mainstream empirical language
processing tasks. However, until recently it has lacked the crucial feature of
lexical entailment -- as do other distributional models of meaning.
In this paper we solve the problem of entailment for categorical
compositional distributional semantics. Taking advantage of the abstract
categorical framework allows us to vary our choice of model. This enables the
introduction of a notion of entailment, exploiting ideas from the categorical
semantics of partial knowledge in quantum computation.
The new model of language uses density matrices, on which we introduce a
novel robust graded order capturing the entailment strength between concepts.
This graded measure emerges from a general framework for approximate
entailment, induced by any commutative monoid. Quantum logic embeds in our
graded order.
Our main theorem shows that entailment strength lifts compositionally to the
sentence level, giving a lower bound on sentence entailment. We describe the
essential properties of graded entailment such as continuity, and provide a
procedure for calculating entailment strength.
| 2016 | Computation and Language |
Improved Spoken Document Summarization with Coverage Modeling Techniques | Extractive summarization aims at selecting a set of indicative sentences from
a source document as a summary that can express the major theme of the
document. A general consensus on extractive summarization is that both
relevance and coverage are critical issues to address. The existing methods
designed to model coverage can be characterized by either reducing redundancy
or increasing diversity in the summary. Maximal marginal relevance (MMR) is a
widely-cited method since it takes both relevance and redundancy into account
when generating a summary for a given document. Beyond MMR, however, there is,
as far as we are aware, a dearth of research on reducing redundancy or increasing
diversity for the spoken document summarization task.
Motivated by these observations, two major contributions are presented in this
paper. First, in contrast to MMR, which considers coverage by reducing
redundancy, we propose two novel coverage-based methods, which directly
increase diversity. With the proposed methods, a set of representative
sentences, which not only are relevant to the given document but also cover
most of the important sub-themes of the document, can be selected
automatically. Second, we go a step further and plug several
document/sentence representation methods into the proposed framework to further
enhance the summarization performance. A series of empirical evaluations
demonstrate the effectiveness of our proposed methods.
| 2016 | Computation and Language |
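Since the abstract above positions its methods against Maximal marginal relevance, a generic MMR sketch may help as a reference point. The cosine similarity, the random "sentence" vectors, and the lambda value of 0.7 are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def mmr_summarize(sent_vecs, doc_vec, k, lam=0.7):
    """Greedy MMR: argmax lam*sim(s, doc) - (1-lam)*max_{chosen} sim(s, chosen)."""
    selected, remaining = [], list(range(len(sent_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(sent_vecs[i], doc_vec)
            redundancy = max((cosine(sent_vecs[i], sent_vecs[j]) for j in selected),
                             default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# toy usage: random sentence vectors, document centroid as the relevance query
rng = np.random.default_rng(1)
sents = rng.random((10, 50))
print(mmr_summarize(sents, sents.mean(axis=0), k=3))
```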
Semantic Word Clusters Using Signed Normalized Graph Cuts | Vector space representations of words capture many aspects of word
similarity, but such methods tend to produce vector spaces in which antonyms (as
well as synonyms) are close to each other. We present a new signed spectral
normalized graph cut algorithm, signed clustering, that overlays existing
thesauri upon distributionally derived vector representations of words, so that
antonym relationships between word pairs are represented by negative weights.
Our signed clustering algorithm produces clusters of words which simultaneously
capture distributional and synonym relations. We evaluate these clusters
against the SimLex-999 dataset (Hill et al., 2014) of human judgments of word
pair similarities, and also show the benefit of using our clusters to predict
the sentiment of a given text.
| 2016 | Computation and Language |
Hierarchical Latent Word Clustering | This paper presents a new Bayesian non-parametric model by extending the
usage of Hierarchical Dirichlet Allocation to extract tree structured word
clusters from text data. The inference algorithm of the model collects words in
a cluster if they share a similar distribution over documents. In our
experiments, we observed meaningful hierarchical structures on the NIPS corpus and
radiology reports collected from public repositories.
| 2016 | Computation and Language |
On Structured Sparsity of Phonological Posteriors for Linguistic Parsing | The speech signal conveys information on different time scales, from the short
(segmental) time scale, associated with phonological and phonetic information,
to the long (supra-segmental) time scale, associated with syllabic and prosodic
information. Linguistic and neurocognitive studies recognize the phonological
classes at the segmental level as the essential and invariant representations used
in speech temporal organization. In the context of speech processing, a deep
neural network (DNN) is an effective computational method to infer the
probability of individual phonological classes from a short segment of speech
signal. A vector of all phonological class probabilities is referred to as
phonological posterior. Only very few classes are present in a short-term
speech signal; hence, the phonological posterior is a sparse vector. Although
the phonological posteriors are estimated at the segmental level, we claim that
they convey supra-segmental information. Specifically, we demonstrate that
phonological posteriors are indicative of syllabic and prosodic events.
Building on findings from converging linguistic evidence on the gestural model
of Articulatory Phonology as well as the neural basis of speech perception, we
hypothesize that phonological posteriors convey properties of linguistic
classes at multiple time scales, and this information is embedded in their
support (index) of active coefficients. To verify this hypothesis, we obtain a
binary representation of phonological posteriors at the segmental level which
is referred to as first-order sparsity structure; the high-order structures are
obtained by the concatenation of first-order binary vectors. It is then
confirmed that the classification of supra-segmental linguistic events, the
problem known as linguistic parsing, can be achieved with high accuracy using
a simple binary pattern matching of first-order or high-order structures.
| 2016 | Computation and Language |
Syntax-Semantics Interaction Parsing Strategies. Inside SYNTAGMA | This paper discusses SYNTAGMA, a rule-based NLP system addressing the tricky
issues of syntactic ambiguity reduction and word sense disambiguation as well
as providing innovative and original solutions for constituent generation and
constraint management. To provide insight into how it operates, the
system's general architecture and components, as well as its lexical, syntactic
and semantic resources are described. After that, the paper addresses the
mechanism that performs selective parsing through an interaction between
syntactic and semantic information, leading the parser to a coherent and
accurate interpretation of the input text.
| 2016 | Computation and Language |
Exploiting Low-dimensional Structures to Enhance DNN Based Acoustic
Modeling in Speech Recognition | We propose to model the acoustic space of deep neural network (DNN)
class-conditional posterior probabilities as a union of low-dimensional
subspaces. To that end, the training posteriors are used for dictionary
learning and sparse coding. Sparse representation of the test posteriors using
this dictionary enables projection to the space of training data. Relying on
the fact that the intrinsic dimensions of the posterior subspaces are indeed
very small and the matrix of all posteriors belonging to a class has a very low
rank, we demonstrate how low-dimensional structures enable further enhancement
of the posteriors and rectify the spurious errors due to mismatch conditions.
The enhanced acoustic modeling method leads to improvements in a continuous
speech recognition task using a hybrid DNN-HMM (hidden Markov model) framework in
both clean and noisy conditions, where up to 15.4% relative reduction in word
error rate (WER) is achieved.
| 2017 | Computation and Language |
Speech vocoding for laboratory phonology | Using phonological speech vocoding, we propose a platform for exploring
relations between phonology and speech processing, and in broader terms, for
exploring relations between the abstract and physical structures of a speech
signal. Our goal is to take a step towards bridging phonology and speech
processing and to contribute to the program of Laboratory Phonology. We show
three application examples for laboratory phonology: compositional phonological
speech modelling, a comparison of phonological systems and an experimental
phonological parametric text-to-speech (TTS) system. The featural
representations of the following three phonological systems are considered in
this work: (i) Government Phonology (GP), (ii) the Sound Pattern of English
(SPE), and (iii) the extended SPE (eSPE). Comparing GP- and eSPE-based vocoded
speech, we conclude that the latter achieves slightly better results than the
former. However, GP - the most compact phonological speech representation -
performs comparably to the systems with a higher number of phonological
features. The parametric TTS based on phonological speech representation, and
trained from an unlabelled audiobook in an unsupervised manner, achieves
85% of the intelligibility of state-of-the-art parametric speech synthesis. We
envision that the presented approach paves the way for researchers in both
fields to form meaningful hypotheses that are explicitly testable using the
concepts developed and exemplified in this paper. On the one hand, laboratory
phonologists might test the applied concepts of their theoretical models, and
on the other hand, the speech processing community may utilize the concepts
developed for the theoretical phonological models for improvements of the
current state-of-the-art applications.
| 2017 | Computation and Language |
Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing | One of the limitations of semantic parsing approaches to open-domain question
answering is the lexicosyntactic gap between natural language questions and
knowledge base entries -- there are many ways to ask a question, all with the
same answer. In this paper we propose to bridge this gap by generating
paraphrases of the input question with the goal that at least one of them will
be correctly mapped to a knowledge-base query. We introduce a novel grammar
model for paraphrase generation that does not require any sentence-aligned
paraphrase corpus. Our key idea is to leverage the flexibility and scalability
of latent-variable probabilistic context-free grammars to sample paraphrases.
We do an extrinsic evaluation of our paraphrases by plugging them into a
semantic parser for Freebase. Our evaluation experiments on the WebQuestions
benchmark dataset show that the performance of the semantic parser
significantly improves over strong baselines.
| 2016 | Computation and Language |
Why Do Urban Legends Go Viral? | Urban legends are a genre of modern folklore, consisting of stories about
rare and exceptional events, just plausible enough to be believed, which tend
to propagate inexorably across communities. In our view, while urban legends
represent a form of "sticky" deceptive text, they are marked by a tension
between the credible and incredible. They should be credible like a news
article and incredible like a fairy tale to go viral. In particular we will
focus on the idea that urban legends should mimic the details of news (who,
where, when) to be credible, while they should be emotional and readable like a
fairy tale to be catchy and memorable. Using NLP tools we will provide a
quantitative analysis of these prototypical characteristics. We also lay out
some machine learning experiments showing that it is possible to recognize an
urban legend using just these simple features.
| 2016 | Computation and Language |
A Kernel Independence Test for Geographical Language Variation | Quantifying the degree of spatial dependence for linguistic variables is a
key task for analyzing dialectal variation. However, existing approaches have
important drawbacks. First, they are based on parametric models of dependence,
which limits their power in cases where the underlying parametric assumptions
are violated. Second, they are not applicable to all types of linguistic data:
some approaches apply only to frequencies, others to boolean indicators of
whether a linguistic variable is present. We present a new method for measuring
geographical language variation, which solves both of these problems. Our
approach builds on Reproducing Kernel Hilbert space (RKHS) representations for
nonparametric statistics, and takes the form of a test statistic that is
computed from pairs of individual geotagged observations without aggregation
into predefined geographical bins. We compare this test with prior work using
synthetic data as well as a diverse set of real datasets: a corpus of Dutch
tweets, a Dutch syntactic atlas, and a dataset of letters to the editor in
North American newspapers. Our proposed test is shown to support robust
inferences across a broad range of scenarios and types of data.
| 2016 | Computation and Language |
Character-Level Incremental Speech Recognition with Recurrent Neural
Networks | In real-time speech recognition applications, latency is an important
issue. We have developed a character-level incremental speech recognition (ISR)
system that responds quickly even while the user is still speaking, where the
hypotheses are gradually improved as the speech proceeds. The algorithm employs a
speech-to-character unidirectional recurrent neural network (RNN), which is
end-to-end trained with connectionist temporal classification (CTC), and an
RNN-based character-level language model (LM). The output values of the
CTC-trained RNN are character-level probabilities, which are processed by beam
search decoding. The RNN LM augments the decoding by providing long-term
dependency information. We propose tree-based online beam search with
additional depth-pruning, which enables the system to process infinitely long
input speech with low latency. This system not only responds quickly to speech
but also can transcribe out-of-vocabulary (OOV) words according to their pronunciation.
The proposed model achieves a word error rate (WER) of 8.90% on the Wall
Street Journal (WSJ) Nov'92 20K evaluation set when trained on the WSJ SI-284
training set.
| 2016 | Computation and Language |
Long Short-Term Memory-Networks for Machine Reading | In this paper we address the question of how to render sequence-level
networks better at handling structured input. We propose a machine reading
simulator which processes text incrementally from left to right and performs
shallow reasoning with memory and attention. The reader extends the Long
Short-Term Memory architecture with a memory network in place of a single
memory cell. This enables adaptive memory usage during recurrence with neural
attention, offering a way to weakly induce relations among tokens. The system
is initially designed to process a single sequence but we also demonstrate how
to integrate it with an encoder-decoder architecture. Experiments on language
modeling, sentiment analysis, and natural language inference show that our
model matches or outperforms the state of the art.
| 2016 | Computation and Language |
Sentiment Analysis of Twitter Data: A Survey of Techniques | With the advancement and growth of web technology, there is a huge volume
of data present on the web for internet users, and a lot of new data is generated
as well. The Internet has become a platform for online learning, exchanging ideas and
sharing opinions. Social networking sites like Twitter, Facebook, and Google+ are
rapidly gaining popularity as they allow people to share and express their
views about topics, have discussions with different communities, or post messages
across the world. There has been a lot of work in the field of sentiment analysis
of Twitter data. This survey focuses mainly on sentiment analysis of Twitter
data, which is helpful for analyzing information in tweets where opinions
are highly unstructured and heterogeneous, and are either positive, negative, or
neutral in some cases. In this paper, we provide a survey and a comparative
analysis of existing techniques for opinion mining, including machine learning and
lexicon-based approaches, together with evaluation metrics. We also review research
on Twitter data streams that applies machine learning algorithms such as Naive
Bayes, Maximum Entropy, and Support Vector Machines. General challenges and
applications of sentiment analysis on Twitter are also discussed in this paper.
| 2016 | Computation and Language |
LIA-RAG: a system based on graphs and divergence of probabilities
applied to Speech-To-Text Summarization | This paper introduces a new algorithm for automatic speech-to-text
summarization based on statistical divergences of probabilities and graphs. The
input is a noisy text transcript of speech conversations, and the output is a compact
text summary. Our results on the pilot task of the CCCS MultiLing 2015 French corpus
are very encouraging.
| 2016 | Computation and Language |
Recurrent Neural Network Postfilters for Statistical Parametric Speech
Synthesis | In the last two years, there have been numerous papers that have looked into
using Deep Neural Networks to replace the acoustic model in traditional
statistical parametric speech synthesis. However, far less attention has been
paid to approaches like DNN-based postfiltering where DNNs work in conjunction
with traditional acoustic models. In this paper, we investigate the use of
Recurrent Neural Networks as a potential postfilter for synthesis. We explore
the possibility of replacing existing postfilters, as well as highlight the
ease with which arbitrary new features can be added as input to the postfilter.
We also tried a novel approach of jointly training the Classification And
Regression Tree and the postfilter, rather than the traditional approach of
training them independently.
| 2016 | Computation and Language |
Co-Occurrence Patterns in the Voynich Manuscript | The Voynich Manuscript is a medieval book written in an unknown script. This
paper studies the distribution of similarly spelled words in the Voynich
Manuscript. It shows that the distribution of words within the manuscript is
not compatible with natural languages.
| 2016 | Computation and Language |
Zipf's law is a consequence of coherent language production | The task of text segmentation may be undertaken at many levels in text
analysis---paragraphs, sentences, words, or even letters. Here, we focus on a
relatively fine scale of segmentation, hypothesizing it to be in accord with a
stochastic model of language generation, as the smallest scale where
independent units of meaning are produced. Our goals in this letter include the
development of methods for the segmentation of these minimal independent units,
which produce feature-representations of texts that align with the independence
assumption of the bag-of-terms model, commonly used for prediction and
classification in computational text analysis. We also propose the measurement
of texts' association (with respect to realized segmentations) to the model of
language generation. We find (1) that our segmentations of phrases exhibit much
better associations to the generation model than words, and (2) that texts
which are well fit are generally topically homogeneous. Because our generative
model produces Zipf's law, our study further suggests that Zipf's law may be a
consequence of homogeneity in language production.
| 2016 | Computation and Language |
WASSUP? LOL : Characterizing Out-of-Vocabulary Words in Twitter | Language in social media is mostly driven by new words and spellings that are
constantly entering the lexicon, thereby polluting it and resulting in high
deviation from the formal written version. The primary entities of such
language are the out-of-vocabulary (OOV) words. In this paper, we study various
sociolinguistic properties of the OOV words and propose a classification model
to categorize them into at least six categories. We achieve 81.26% accuracy
with high precision and recall. We observe that the content features are the
most discriminative ones followed by lexical and context features.
| 2016 | Computation and Language |
Efficient Character-level Document Classification by Combining
Convolution and Recurrent Layers | Document classification tasks have primarily been tackled at the word level. Recent
research that works with character-level inputs shows several benefits over
word-level approaches such as natural incorporation of morphemes and better
handling of rare words. We propose a neural network architecture that utilizes
both convolution and recurrent layers to efficiently encode character inputs.
We validate the proposed model on eight large scale document classification
tasks and compare with character-level convolution-only models. It achieves
comparable performance with far fewer parameters.
| 2016 | Computation and Language |
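A minimal PyTorch sketch of the kind of hybrid architecture the abstract above describes: character embeddings, a convolutional layer with pooling to shorten the sequence, then a recurrent layer. The vocabulary size, layer widths, and kernel size are placeholder assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ConvRecClassifier(nn.Module):
    """Characters -> embedding -> convolution + pooling -> LSTM -> class scores."""
    def __init__(self, n_chars=128, emb=16, channels=64, hidden=64, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.conv = nn.Conv1d(emb, channels, kernel_size=5, padding=2)
        self.pool = nn.MaxPool1d(kernel_size=2)       # halves the sequence length
        self.rnn = nn.LSTM(channels, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, char_ids):                      # (batch, seq_len) int64 ids
        x = self.emb(char_ids).transpose(1, 2)        # (batch, emb, seq_len)
        x = self.pool(torch.relu(self.conv(x)))       # (batch, channels, seq_len/2)
        _, (h, _) = self.rnn(x.transpose(1, 2))       # LSTM over the shortened sequence
        return self.out(h[-1])                        # logits from the final hidden state

logits = ConvRecClassifier()(torch.randint(0, 128, (2, 100)))
print(logits.shape)   # torch.Size([2, 4])
```

The convolution plus pooling stage is what keeps the recurrent layer short and the parameter count low relative to convolution-only character models.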
An Iterative Deep Learning Framework for Unsupervised Discovery of
Speech Features and Linguistic Units with Applications on Spoken Term
Detection | In this work we aim to discover high quality speech features and linguistic
units directly from unlabeled speech data in a zero resource scenario. The
results are evaluated using the metrics and corpora proposed in the Zero
Resource Speech Challenge organized at Interspeech 2015. A Multi-layered
Acoustic Tokenizer (MAT) was proposed for automatic discovery of multiple sets
of acoustic tokens from the given corpus. Each acoustic token set is specified
by a set of hyperparameters that describe the model configuration. These sets
of acoustic tokens carry different characteristics of the given corpus and the
underlying language, and can thus be mutually reinforcing. The multiple sets of token
labels are then used as the targets of a Multi-target Deep Neural Network
(MDNN) trained on low-level acoustic features. Bottleneck features extracted
from the MDNN are then used as the feedback input to the MAT and the MDNN
itself in the next iteration. We call this iterative deep learning framework
the Multi-layered Acoustic Tokenizing Deep Neural Network (MAT-DNN), which
generates both high-quality speech features for Track 1 of the Challenge
and acoustic tokens for Track 2 of the Challenge. In addition, we performed
extra experiments on the same corpora on the application of query-by-example
spoken term detection. The experimental results showed that the iterative deep
learning framework of MAT-DNN improved the detection performance due to better
underlying speech features and acoustic tokens.
| 2016 | Computation and Language |
The Grail theorem prover: Type theory for syntax and semantics | As the name suggests, type-logical grammars are a grammar formalism based on
logic and type theory. From the perspective of grammar design, type-logical
grammars develop the syntactic and semantic aspects of linguistic phenomena
hand-in-hand, letting the desired semantics of an expression inform the
syntactic type and vice versa. Prototypical examples of the successful
application of type-logical grammars to the syntax-semantics interface include
coordination, quantifier scope and extraction. This chapter describes the Grail
theorem prover, a series of tools for designing and testing grammars in various
modern type-logical formalisms. All tools described in
this chapter are freely available.
| 2016 | Computation and Language |
"Draw My Topics": Find Desired Topics fast from large scale of Corpus | We develop the "Draw My Topics" toolkit, which provides a fast way to
incorporate social scientists' interest into standard topic modelling. Instead
of using a raw corpus with primitive processing as input, an algorithm based on
the Vector Space Model and Conditional Entropy is used to connect social
scientists' interests with the output of unsupervised topic models. Space for users'
adjustment on specific corpora of their interest is also accommodated. We
demonstrate the toolkit's use on the Diachronic People's Daily Corpus in
Chinese.
| 2016 | Computation and Language |
A Factorized Recurrent Neural Network based architecture for medium to
large vocabulary Language Modelling | Statistical language models are central to many applications that use
semantics. Recurrent Neural Networks (RNN) are known to produce state of the
art results for language modelling, outperforming their traditional n-gram
counterparts in many cases. To generate a probability distribution across a
vocabulary, these models require a softmax output layer that linearly increases
in size with the size of the vocabulary. Large vocabularies need a
commensurately large softmax layer and training them on typical laptops/PCs
requires significant time and machine resources. In this paper we present a new
technique for implementing RNN based large vocabulary language models that
substantially speeds up computation while optimally using the limited memory
resources. Our technique, while building on the notion of factorizing the
output layer by having multiple output layers, improves on the earlier work by
substantially optimizing the individual output layer size and also
eliminating the need for a multistep prediction process.
| 2016 | Computation and Language |
Many Languages, One Parser | We train one multilingual model for dependency parsing and use it to parse
sentences in several languages. The parsing model uses (i) multilingual word
clusters and embeddings; (ii) token-level language information; and (iii)
language-specific features (fine-grained POS tags). This input representation
enables the parser not only to parse effectively in multiple languages, but
also to generalize across languages based on linguistic universals and
typological similarities, making it more effective to learn from limited
annotations. Our parser's performance compares favorably to strong baselines in
a range of data scenarios, including when the target language has a large
treebank, a small treebank, or no treebank for training.
| 2016 | Computation and Language |
A Generalised Quantifier Theory of Natural Language in Categorical
Compositional Distributional Semantics with Bialgebras | Categorical compositional distributional semantics is a model of natural
language; it combines the statistical vector space models of words with the
compositional models of grammar. We formalise in this model the generalised
quantifier theory of natural language, due to Barwise and Cooper. The
underlying setting is a compact closed category with bialgebras. We start from
a generative grammar formalisation and develop an abstract categorical
compositional semantics for it, then instantiate the abstract setting to sets
and relations and to finite dimensional vector spaces and linear maps. We prove
the equivalence of the relational instantiation to the truth theoretic
semantics of generalised quantifiers. The vector space instantiation formalises
the statistical usages of words and enables us to, for the first time, reason
about quantified phrases and sentences compositionally in distributional
semantics.
| 2019 | Computation and Language |
Massively Multilingual Word Embeddings | We introduce new methods for estimating and evaluating embeddings of words in
more than fifty languages in a single shared embedding space. Our estimation
methods, multiCluster and multiCCA, use dictionaries and monolingual data; they
do not require parallel data. Our new evaluation method, multiQVEC-CCA, is
shown to correlate better than previous ones with two downstream tasks (text
categorization and parsing). We also describe a web portal for evaluation that
will facilitate further research in this area, along with open-source releases
of all our methods.
| 2016 | Computation and Language |
Fantastic 4 system for NIST 2015 Language Recognition Evaluation | This article describes the systems jointly submitted by the Institute for
Infocomm Research (I$^2$R), the Laboratoire d'Informatique de l'Universit\'e du Maine
(LIUM), Nanyang Technological University (NTU) and the University of Eastern
Finland (UEF) for the 2015 NIST Language Recognition Evaluation (LRE). The
submitted system is a fusion of nine sub-systems based on i-vectors extracted
from different types of features. Given the i-vectors, several classifiers are
adopted for the language detection task including support vector machines
(SVM), multi-class logistic regression (MCLR), Probabilistic Linear
Discriminant Analysis (PLDA) and Deep Neural Networks (DNN).
| 2016 | Computation and Language |
Utiliza\c{c}\~ao de Grafos e Matriz de Similaridade na Sumariza\c{c}\~ao
Autom\'atica de Documentos Baseada em Extra\c{c}\~ao de Frases | The Internet has increased the amount of information available. However, the
reading and understanding of this information are costly tasks. In this
scenario, the Natural Language Processing (NLP) applications enable very
important solutions, highlighting the Automatic Text Summarization (ATS), which
produces a summary from one or more source texts. Automatically summarizing one
or more texts, however, is a complex task because of the difficulties inherent
in the analysis and generation of such a summary. This master's thesis describes
the main techniques and methodologies (NLP and heuristics) to generate
summaries. We have also addressed and proposed some heuristics based on graphs
and a similarity matrix to measure the relevance of sentences and to generate
summaries by extracting sentences. We used multilingual (English,
French and Spanish), CSTNews (Brazilian Portuguese), RPM (French) and DECODA
(French) corpora to evaluate the developed systems. The results obtained were
quite interesting.
| 2016 | Computation and Language |
From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label
Classification | We propose sparsemax, a new activation function similar to the traditional
softmax, but able to output sparse probabilities. After deriving its
properties, we show how its Jacobian can be efficiently computed, enabling its
use in a network trained with backpropagation. Then, we propose a new smooth
and convex loss function which is the sparsemax analogue of the logistic loss.
We reveal an unexpected connection between this new loss and the Huber
classification loss. We obtain promising empirical results in multi-label
classification problems and in attention-based neural networks for natural
language inference. For the latter, we achieve a similar performance as the
traditional softmax, but with a selective, more compact, attention focus.
| 2016 | Computation and Language |
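The sparsemax transformation has a closed-form solution via Euclidean projection onto the probability simplex. The sketch below follows the standard sort-and-threshold construction; it is a generic NumPy illustration, not code from the paper.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of the score vector z onto the probability simplex.

    Unlike softmax, components whose scores fall below a data-dependent
    threshold tau receive exactly zero probability.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum          # positions kept in the support
    k_z = k[support][-1]                         # size of the support
    tau = (cumsum[k_z - 1] - 1) / k_z            # threshold
    return np.maximum(z - tau, 0.0)

print(sparsemax([0.1, 1.1, 0.2]))   # -> [0.  , 0.95, 0.05]
print(sparsemax([3.0, 0.0, 0.0]))   # -> [1., 0., 0.]  (fully sparse)
```

The thresholding is what yields the selective, compact attention distributions the abstract mentions.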
Mining Software Quality from Software Reviews: Research Trends and Open
Issues | Software review text fragments contain a considerable amount of valuable information
about users' experience, covering a wide range of properties, including software
quality. Opinion mining, or sentiment analysis, is concerned with analyzing
textual user judgments. The application of sentiment analysis on software
reviews can find a quantitative value that represents software quality.
Although many software quality methods have been proposed, they are considered
difficult to customize and many of them are limited. This article investigates
the application of opinion mining as an approach to extract software quality
properties. We found that the major issues in mining software reviews using
sentiment analysis are due to the software lifecycle and the diversity of users and
teams.
| 2016 | Computation and Language |
Swivel: Improving Embeddings by Noticing What's Missing | We present Submatrix-wise Vector Embedding Learner (Swivel), a method for
generating low-dimensional feature embeddings from a feature co-occurrence
matrix. Swivel performs approximate factorization of the point-wise mutual
information matrix via stochastic gradient descent. It uses a piecewise loss
with special handling for unobserved co-occurrences, and thus makes use of all
the information in the matrix. While this requires computation proportional to
the size of the entire matrix, we make use of vectorized multiplication to
process thousands of rows and columns at once to compute millions of predicted
values. Furthermore, we partition the matrix into shards in order to
parallelize the computation across many nodes. This approach results in more
accurate embeddings than can be achieved with methods that consider only
observed co-occurrences, and can scale to much larger corpora than can be
handled with sampling methods.
| 2016 | Computation and Language |
Exploring the Limits of Language Modeling | In this work we explore recent advances in Recurrent Neural Networks for
large scale Language Modeling, a task central to language understanding. We
extend current models to deal with two key challenges present in this task:
corpora and vocabulary sizes, and complex, long term structure of language. We
perform an exhaustive study on techniques such as character Convolutional
Neural Networks or Long Short-Term Memory, on the One Billion Word Benchmark.
Our best single model significantly improves state-of-the-art perplexity from
51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20),
while an ensemble of models sets a new record by improving perplexity from 41.0
down to 23.7. We also release these models for the NLP and ML community to
study and improve upon.
| 2016 | Computation and Language |
Simple Search Algorithms on Semantic Networks Learned from Language Use | Recent empirical and modeling research has focused on the semantic fluency
task because it is informative about semantic memory. An interesting interplay
arises between the richness of representations in semantic memory and the
complexity of algorithms required to process it. It has remained an open
question whether representations of words and their relations learned from
language use can enable a simple search algorithm to mimic the observed
behavior in the fluency task. Here we show that it is plausible to learn rich
representations from naturalistic data for which a very simple search algorithm
(a random walk) can replicate the human patterns. We suggest that explicitly
structuring knowledge about words into a semantic network plays a crucial role
in modeling human behavior in memory search and retrieval; moreover, this is
the case across a range of semantic information sources.
| 2016 | Computation and Language |
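To make the "very simple search algorithm" concrete, here is a generic sketch of a first-visit random walk over a word graph, the kind of process the abstract above argues can reproduce fluency-task behaviour. The toy graph and the uniform transition probabilities are illustrative assumptions.

```python
import random

def random_walk_fluency(graph, start, n_items, seed=0):
    """Walk the word graph and report each node on its first visit, mimicking
    the sequence of responses produced in a semantic fluency task."""
    rng = random.Random(seed)
    produced, seen, node = [start], {start}, start
    while len(produced) < n_items:
        node = rng.choice(graph[node])      # uniform step to a neighbouring word
        if node not in seen:                # only first visits are "named"
            seen.add(node)
            produced.append(node)
    return produced

# toy semantic network over animal words (adjacency lists)
toy_graph = {
    "dog": ["cat", "wolf", "horse"],
    "cat": ["dog", "lion", "mouse"],
    "wolf": ["dog", "lion"],
    "lion": ["cat", "wolf", "tiger"],
    "tiger": ["lion"],
    "mouse": ["cat"],
    "horse": ["dog"],
}
print(random_walk_fluency(toy_graph, "dog", 5))
```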
Automatic Sarcasm Detection: A Survey | Automatic sarcasm detection is the task of predicting sarcasm in text. This
is a crucial step for sentiment analysis, considering the prevalence and challenges
of sarcasm in sentiment-bearing text. Beginning with an approach that used
speech-based features, sarcasm detection has witnessed great interest from the
sentiment analysis community. This paper is the first known compilation of past
work in automatic sarcasm detection. We observe three milestones in the
research so far: semi-supervised pattern extraction to identify implicit
sentiment, use of hashtag-based supervision, and use of context beyond target
text. In this paper, we describe datasets, approaches, trends and issues in
sarcasm detection. We also discuss representative performance values, shared
tasks and pointers to future work, as given in prior works. In terms of
resources that could be useful for understanding the state of the art, the survey
presents several useful illustrations - most prominently, a table that
summarizes past papers along different dimensions such as features, annotation
techniques, data forms, etc.
| 2016 | Computation and Language |
Learning Distributed Representations of Sentences from Unlabelled Data | Unsupervised methods for learning distributed representations of words are
ubiquitous in today's NLP research, but far less is known about the best ways
to learn distributed phrase or sentence representations from unlabelled data.
This paper is a systematic comparison of models that learn such
representations. We find that the optimal approach depends critically on the
intended application. Deeper, more complex models are preferable for
representations to be used in supervised systems, but shallow log-linear models
work best for building representation spaces that can be decoded with simple
spatial distance metrics. We also propose two new unsupervised
representation-learning objectives designed to optimise the trade-off between
training time, domain portability and performance.
| 2016 | Computation and Language |
Knowledge Transfer with Medical Language Embeddings | Identifying relationships between concepts is a key aspect of scientific
knowledge synthesis. Finding these links often requires a researcher to
laboriously search through scientific papers and databases, as the size of
these resources grows ever larger. In this paper we describe how distributional
semantics can be used to unify structured knowledge graphs with unstructured
text to predict new relationships between medical concepts, using a
probabilistic generative model. Our approach is also designed to ameliorate
data sparsity and scarcity issues in the medical domain, which make language
modelling more challenging. Specifically, we integrate the medical relational
database (SemMedDB) with text from electronic health records (EHRs) to perform
knowledge graph completion. We further demonstrate the ability of our model to
predict relationships between tokens not appearing in the relational database.
| 2016 | Computation and Language |
Variations of the Similarity Function of TextRank for Automated
Summarization | This article presents new alternatives to the similarity function for the
TextRank algorithm for automatic summarization of texts. We describe the
generalities of the algorithm and the different functions we propose. Some of
these variants achieve a significant improvement using the same metrics and
dataset as the original publication.
| 2015 | Computation and Language |
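For reference, the original TextRank sentence similarity normalises word overlap by the log lengths of the two sentences; the sketch below shows that baseline next to one simple variant (cosine over word counts) of the kind the article explores. The whitespace tokenisation and the example sentences are assumptions for illustration, not the authors' code.

```python
import math
from collections import Counter

def textrank_similarity(s1, s2):
    """Original TextRank overlap similarity: |S1 & S2| / (log|S1| + log|S2|)."""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    overlap = len(w1 & w2)
    denom = math.log(len(w1)) + math.log(len(w2))
    return overlap / denom if denom > 0 else 0.0

def cosine_similarity(s1, s2):
    """One possible variant: cosine similarity over raw word counts."""
    c1, c2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(c1[w] * c2[w] for w in c1)
    norm = (math.sqrt(sum(v * v for v in c1.values()))
            * math.sqrt(sum(v * v for v in c2.values())))
    return dot / norm if norm else 0.0

a = "automatic summarization selects the most relevant sentences"
b = "the algorithm ranks sentences by relevance for the summary"
print(textrank_similarity(a, b), cosine_similarity(a, b))
```

Swapping the similarity function changes the edge weights of the sentence graph, and therefore the PageRank-style ranking that selects the summary sentences.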
Attentive Pooling Networks | In this work, we propose Attentive Pooling (AP), a two-way attention
mechanism for discriminative model training. In the context of pair-wise
ranking or classification with neural networks, AP enables the pooling layer to
be aware of the current input pair, in a way that information from the two
input items can directly influence the computation of each other's
representations. Along with such representations of the paired inputs, AP
jointly learns a similarity measure over projected segments (e.g. trigrams) of
the pair, and subsequently, derives the corresponding attention vector for each
input to guide the pooling. Our two-way attention mechanism is a general
framework independent of the underlying representation learning, and it has
been applied to both convolutional neural networks (CNNs) and recurrent neural
networks (RNNs) in our studies. The empirical results, from three very
different benchmark tasks of question answering/answer selection, demonstrate
that our proposed models outperform a variety of strong baselines and achieve
state-of-the-art performance in all the benchmarks.
| 2016 | Computation and Language |
TabMCQ: A Dataset of General Knowledge Tables and Multiple-choice
Questions | We describe two new related resources that facilitate modelling of general
knowledge reasoning in 4th grade science exams. The first is a collection of
curated facts in the form of tables, and the second is a large set of
crowd-sourced multiple-choice questions covering the facts in the tables.
Through the setup of the crowd-sourced annotation task we obtain implicit
alignment information between questions and tables. We envisage that the
resources will be useful not only to researchers working on question answering,
but also to people investigating a diverse range of other applications such as
information extraction, question parsing, answer type identification, and
lexical semantic modelling.
| 2016 | Computation and Language |
Signer-independent Fingerspelling Recognition with Deep Neural Network
Adaptation | We study the problem of recognition of fingerspelled letter sequences in
American Sign Language in a signer-independent setting. Fingerspelled sequences
are both challenging and important to recognize, as they are used for many
content words such as proper nouns and technical terms. Previous work has shown
that it is possible to achieve almost 90% accuracies on fingerspelling
recognition in a signer-dependent setting. However, the more realistic
signer-independent setting presents challenges due to significant variations
among signers, coupled with the dearth of available training data. We
investigate this problem with approaches inspired by automatic speech
recognition. We start with the best-performing approaches from prior work,
based on tandem models and segmental conditional random fields (SCRFs), with
features based on deep neural network (DNN) classifiers of letters and
phonological features. Using DNN adaptation, we find that it is possible to
bridge a large part of the gap between signer-dependent and signer-independent
performance. Using only about 115 transcribed words for adaptation from the
target signer, we obtain letter accuracies of up to 82.7% with frame-level
adaptation labels and 69.7% with only word labels.
| 2016 | Computation and Language |
Attention-Based Convolutional Neural Network for Machine Comprehension | Understanding open-domain text is one of the primary challenges in natural
language processing (NLP). Machine comprehension benchmarks evaluate the
system's ability to understand text based on the text content only. In this
work, we investigate machine comprehension on MCTest, a question answering (QA)
benchmark. Prior work is mainly based on feature engineering approaches. We
propose a neural network framework, named hierarchical attention-based
convolutional neural network (HABCNN), to address this task without any
manually designed features. Specifically, we explore HABCNN for this task by
two routes, one is through traditional joint modeling of passage, question and
answer, and the other is through textual entailment. HABCNN employs an attention
mechanism to detect key phrases, key sentences and key snippets that are
relevant to answering the question. Experiments show that HABCNN outperforms
prior deep learning approaches by a large margin.
| 2016 | Computation and Language |
Science Question Answering using Instructional Materials | We provide a solution for elementary science tests using instructional
materials. We posit that there is a hidden structure that explains the
correctness of an answer given the question and instructional materials and
present a unified max-margin framework that learns to find these hidden
structures (given a corpus of question-answer pairs and instructional
materials), and uses what it learns to answer novel elementary science
questions. Our evaluation shows that our framework outperforms several strong
baselines.
| 2016 | Computation and Language |
Exploiting Lists of Names for Named Entity Identification of Financial
Institutions from Unstructured Documents | There is a wealth of information about financial systems that is embedded in
document collections. In this paper, we focus on a specialized text extraction
task for this domain. The objective is to extract mentions of names of
financial institutions, or FI names, from financial prospectus documents, and
to identify the corresponding real world entities, e.g., by matching against a
corpus of such entities. The tasks are Named Entity Recognition (NER) and
Entity Resolution (ER); both are well studied in the literature. Our
contribution is to develop a rule-based approach that will exploit lists of FI
names for both tasks; our solution is labeled Dict-based NER and Rank-based ER.
Since the FI names are typically represented by a root, and a suffix that
modifies the root, we use these lists of FI names to create specialized root
and suffix dictionaries. To evaluate the effectiveness of our specialized
solution for extracting FI names, we compare Dict-based NER with a general
purpose rule-based NER solution, ORG NER. Our evaluation highlights the
benefits and limitations of specialized versus general purpose approaches, and
presents additional suggestions for tuning and customization for FI name
extraction. To our knowledge, our proposed solutions, Dict-based NER and
Rank-based ER, and the root and suffix dictionaries, are the first attempt to
exploit specialized knowledge, i.e., lists of FI names, for rule-based NER and
ER.
| 2016 | Computation and Language |
Authorship Attribution Using a Neural Network Language Model | In practice, training language models for individual authors is often
expensive because of limited data resources. In such cases, Neural Network
Language Models (NNLMs), generally outperform the traditional non-parametric
N-gram models. Here we investigate the performance of a feed-forward NNLM on an
authorship attribution problem, with moderate author set size and relatively
limited data. We also consider how the text topics impact performance. Compared
with a well-constructed N-gram baseline method with Kneser-Ney smoothing, the
proposed method achieves nearly 2.5% reduction in perplexity and increases
author classification accuracy by 3.43% on average, given as few as 5 test
sentences. The performance is very competitive with the state of the art in
terms of accuracy and demand on test data. The source code, preprocessed
datasets, a detailed description of the methodology and results are available
at https://github.com/zge/authorship-attribution.
| 2016 | Computation and Language |
Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label
Embedding | Current systems of fine-grained entity typing use distant supervision in
conjunction with existing knowledge bases to assign categories (type labels) to
entity mentions. However, the type labels so obtained from knowledge bases are
often noisy (i.e., incorrect for the entity mention's local context). We define
a new task, Label Noise Reduction in Entity Typing (LNR), to be the automatic
identification of correct type labels (type-paths) for training examples, given
the set of candidate type labels obtained by distant supervision with a given
type hierarchy. The unknown type labels for individual entity mentions and the
semantic similarity between entity types pose unique challenges for solving the
LNR task. We propose a general framework, called PLE, to jointly embed entity
mentions, text features and entity types into the same low-dimensional space
where, in that space, objects whose types are semantically close have similar
representations. Then we estimate the type-path for each training example in a
top-down manner using the learned embeddings. We formulate a global objective
for learning the embeddings from text corpora and knowledge bases, which adopts
a novel margin-based loss that is robust to noisy labels and faithfully models
type correlation derived from knowledge bases. Our experiments on three public
typing datasets demonstrate the effectiveness and robustness of PLE, with an
average of 25% improvement in accuracy compared to the next best method.
| 2016 | Computation and Language |
Cross-Language Domain Adaptation for Classifying Crisis-Related Short
Messages | Rapid crisis response requires real-time analysis of messages. After a
disaster happens, volunteers attempt to classify tweets to determine needs,
e.g., supplies, infrastructure damage, etc. Given labeled data, supervised
machine learning can help classify these messages. Scarcity of labeled data
causes poor performance in machine training. Can we reuse old tweets to train
classifiers? How can we choose labeled tweets for training? Specifically, we
study the usefulness of labeled data of past events. Do labeled tweets in
a different language help? We observe the performance of our classifiers trained
using different combinations of training sets obtained from past disasters. We
perform extensive experimentation on real crisis datasets and show that the
past labels are useful when both source and target events are of the same type
(e.g. both earthquakes). For similar languages (e.g., Italian and Spanish),
cross-language domain adaptation was useful; however, for different
languages (e.g., Italian and English), the performance decreased.
| 2016 | Computation and Language |
Overview of Annotation Creation: Processes & Tools | Creating linguistic annotations requires more than just a reliable annotation
scheme. Annotation can be a complex endeavour potentially involving many
people, stages, and tools. This chapter outlines the process of creating
end-to-end linguistic annotations, identifying specific tasks that researchers
often perform. Because tool support is so central to achieving high quality,
reusable annotations with low cost, the focus is on identifying capabilities
that are necessary or useful for annotation tools, as well as common problems
these tools present that reduce their utility. Although examples of specific
tools are provided in many cases, this chapter concentrates more on abstract
capabilities and problems because new tools appear continuously, while old
tools disappear into disuse or disrepair. The two core capabilities tools must
have are support for the chosen annotation scheme and the ability to work on
the language under study. Additional capabilities are organized into three
categories: those that are widely provided; those that are often useful but found
in only a few tools; and those that have as yet little or no available tool
support.
| 2016 | Computation and Language |
Corpus analysis without prior linguistic knowledge - unsupervised mining
of phrases and subphrase structure | When looking at the structure of natural language, "phrases" and "words" are
central notions. We consider the problem of identifying such "meaningful
subparts" of language of any length and underlying composition principles in a
completely corpus-based and language-independent way without using any kind of
prior linguistic knowledge. Unsupervised methods for identifying "phrases",
mining subphrase structure and finding words in a fully automated way are
described. This can be considered as a step towards automatically computing a
"general dictionary and grammar of the corpus". We hope that in the long run
variants of our approach turn out to be useful for other kinds of sequence data
as well, such as speech, genome sequences, or music annotation. Even if
we are not primarily interested in immediate applications, results obtained for
a variety of languages show that our methods are interesting for many practical
tasks in text mining, terminology extraction and lexicography, search engine
technology, and related fields.
| 2016 | Computation and Language |
The Interaction of Memory and Attention in Novel Word Generalization: A
Computational Investigation | People exhibit a tendency to generalize a novel noun to the basic-level in a
hierarchical taxonomy -- a cognitively salient category such as "dog" -- with
the degree of generalization depending on the number and type of exemplars.
Recently, a change in the presentation timing of exemplars has also been shown
to have an effect, surprisingly reversing the prior observed pattern of
basic-level generalization. We explore the precise mechanisms that could lead
to such behavior by extending a computational model of word learning and word
generalization to integrate cognitive processes of memory and attention. Our
results show that the interaction of forgetting and attention to novelty, as
well as sensitivity to both type and token frequencies of exemplars, enables
the model to replicate the empirical results from different presentation
timings. Our results reinforce the need to incorporate general cognitive
processes within word learning models to better understand the range of
observed behaviors in vocabulary acquisition.
| 2016 | Computation and Language |
Abstractive Text Summarization Using Sequence-to-Sequence RNNs and
Beyond | In this work, we model abstractive text summarization using Attentional
Encoder-Decoder Recurrent Neural Networks, and show that they achieve
state-of-the-art performance on two different corpora. We propose several novel
models that address critical problems in summarization that are not adequately
modeled by the basic architecture, such as modeling key-words, capturing the
hierarchy of sentence-to-word structure, and emitting words that are rare or
unseen at training time. Our work shows that many of our proposed models
contribute to further improvement in performance. We also propose a new dataset
consisting of multi-sentence summaries, and establish performance benchmarks
for further research.
| 2016 | Computation and Language |
On Training Bi-directional Neural Network Language Model with Noise
Contrastive Estimation | We propose to train a bi-directional neural network language model (NNLM) with
noise contrastive estimation (NCE). Experiments are conducted on a rescoring task
on the PTB data set. It is shown that the NCE-trained bi-directional NNLM
outperformed the one trained by conventional maximum likelihood training. But
still (regretfully), it did not outperform the baseline uni-directional NNLM.
| 2016 | Computation and Language |
Learning to SMILE(S) | This paper shows how one can directly apply natural language processing (NLP)
methods to classification problems in cheminformatics. The connection between these
seemingly separate fields is shown by considering the standard textual
representation of compounds, SMILES. The problem of activity prediction against
a target protein is considered, which is a crucial part of the computer-aided drug
design process. The conducted experiments show that in this way one can not only
outrank state-of-the-art results based on hand-crafted representations but also gain
direct structural insights into the way decisions are made.
| 2018 | Computation and Language |
Contextual LSTM (CLSTM) models for Large scale NLP tasks | Documents exhibit sequential structure at multiple levels of abstraction
(e.g., sentences, paragraphs, sections). These abstractions constitute a
natural hierarchy for representing the context in which to infer the meaning of
words and larger fragments of text. In this paper, we present CLSTM (Contextual
LSTM), an extension of the recurrent neural network LSTM (Long-Short Term
Memory) model, where we incorporate contextual features (e.g., topics) into the
model. We evaluate CLSTM on three specific NLP tasks: word prediction, next
sentence selection, and sentence topic prediction. Results from experiments run
on two corpora, English documents in Wikipedia and a subset of articles from a
recent snapshot of English Google News, indicate that using both words and
topics as features improves performance of the CLSTM models over baseline LSTM
models for these tasks. For example on the next sentence selection task, we get
relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the
Google News dataset. This clearly demonstrates the significant benefit of using
context appropriately in natural language (NL) tasks. This has implications for
a wide variety of NL applications like question answering, sentence completion,
paraphrase generation, and next utterance prediction in dialog systems.
| 2016 | Computation and Language |
Text Matching as Image Recognition | Matching two texts is a fundamental problem in many natural language
processing tasks. An effective way is to extract meaningful matching patterns
from words, phrases, and sentences to produce the matching score. Inspired by
the success of convolutional neural network in image recognition, where neurons
can capture many complicated patterns based on the extracted elementary visual
patterns such as oriented edges and corners, we propose to model text matching
as the problem of image recognition. Firstly, a matching matrix whose entries
represent the similarities between words is constructed and viewed as an image.
Then a convolutional neural network is utilized to capture rich matching
patterns in a layer-by-layer way. We show that by resembling the compositional
hierarchies of patterns in image recognition, our model can successfully
identify salient signals such as n-gram and n-term matchings. Experimental
results demonstrate its superiority against the baselines.
| 2,016 | Computation and Language |
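As a minimal illustration of the matching-matrix idea described in this abstract, the sketch below builds a word-by-word cosine-similarity matrix from toy, randomly initialised word vectors (placeholders, not the paper's trained embeddings); a CNN would then scan this matrix like an image for matching patterns.

```python
import numpy as np

def matching_matrix(sent_a, sent_b, emb):
    """Cosine-similarity matrix between the words of two sentences."""
    A = np.array([emb[w] for w in sent_a])           # (len_a, dim)
    B = np.array([emb[w] for w in sent_b])           # (len_b, dim)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T                                   # (len_a, len_b) "image"

# Toy, randomly initialised embeddings (stand-ins for trained vectors).
rng = np.random.default_rng(0)
vocab = ["down", "the", "ages", "noodles", "and", "dumplings"]
emb = {w: rng.normal(size=8) for w in vocab}
M = matching_matrix(["down", "the", "ages"], ["noodles", "and", "dumplings"], emb)
print(M.shape)  # (3, 3); a CNN would extract matching patterns from this matrix
```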
Semi-supervised Clustering for Short Text via Deep Representation
Learning | In this work, we propose a semi-supervised method for short text clustering,
where we represent texts as distributed vectors with neural networks, and use a
small amount of labeled data to specify our intention for clustering. We design
a novel objective to combine the representation learning process and the
k-means clustering process together, and optimize the objective with both
labeled data and unlabeled data iteratively until convergence through three
steps: (1) assign each short text to its nearest centroid based on its
representation from the current neural networks; (2) re-estimate the cluster
centroids based on cluster assignments from step (1); (3) update neural
networks according to the objective by keeping centroids and cluster
assignments fixed. Experimental results on four datasets show that our method
works significantly better than several other text clustering methods.
| 2,017 | Computation and Language |
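A minimal sketch of the alternating steps (1) and (2) from this abstract, with fixed vectors standing in for the neural text representations; step (3), the network update, is omitted here.

```python
import numpy as np

def assign_and_update(X, centroids):
    """Step (1): assign each text vector to its nearest centroid.
    Step (2): re-estimate the centroids from those assignments."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    new_centroids = np.array([
        X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
        for k in range(len(centroids))
    ])
    return labels, new_centroids

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 16))                  # stand-in for learned text vectors
centroids = X[rng.choice(100, size=4, replace=False)]
for _ in range(10):                             # alternate steps (1) and (2)
    labels, centroids = assign_and_update(X, centroids)
```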
Blind score normalization method for PLDA based speaker recognition | Probabilistic Linear Discriminant Analysis (PLDA) has become the state-of-the-art
method for modeling the $i$-vector space in the speaker recognition task. However,
performance degradation is observed if the amount of enrollment data differs from one
speaker to another. This paper presents a solution to this problem by
introducing a new PLDA score normalization technique. The normalization parameters
are derived in a blind way, so that, unlike the traditional \textit{ZT-norm}, no
extra development data is required. Moreover, the proposed method is shown to be
optimal in terms of the detection cost function. Experiments conducted on the NIST
SRE 2014 database demonstrate improved accuracy in a mixed enrollment number
condition.
| 2,016 | Computation and Language |
Empath: Understanding Topic Signals in Large-Scale Text | Human language is colored by a broad range of topics, but existing text
analysis tools only focus on a small number of them. We present Empath, a tool
that can generate and validate new lexical categories on demand from a small
set of seed terms (like "bleed" and "punch" to generate the category violence).
Empath draws connotations between words and phrases by deep learning a neural
embedding across more than 1.8 billion words of modern fiction. Given a small
set of seed words that characterize a category, Empath uses its neural
embedding to discover new related terms, then validates the category with a
crowd-powered filter. Empath also analyzes text across 200 built-in,
pre-validated categories we have generated from common topics in our web
dataset, like neglect, government, and social media. We show that Empath's
data-driven, human validated categories are highly correlated (r=0.906) with
similar categories in LIWC.
| 2,016 | Computation and Language |
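A minimal sketch of the seed-expansion step: rank vocabulary terms by cosine similarity to the mean vector of a few seed words. The embeddings below are random placeholders, not Empath's fiction-trained embedding, and the crowd-powered validation step is omitted.

```python
import numpy as np

def expand_category(seeds, emb, topn=5):
    """Rank vocabulary terms by cosine similarity to the mean seed vector."""
    vocab = list(emb)
    V = np.array([emb[w] for w in vocab])
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    centroid = np.mean([emb[s] for s in seeds], axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    order = np.argsort(-(V @ centroid))
    return [vocab[i] for i in order if vocab[i] not in seeds][:topn]

rng = np.random.default_rng(2)
words = ["bleed", "punch", "kick", "wound", "smile", "garden", "attack", "fight"]
emb = {w: rng.normal(size=32) for w in words}   # placeholder embedding
print(expand_category(["bleed", "punch"], emb, topn=3))
```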
Sentence Similarity Learning by Lexical Decomposition and Composition | Most conventional sentence similarity methods only focus on similar parts of
two input sentences, and simply ignore the dissimilar parts, which usually give
us some clues and semantic meanings about the sentences. In this work, we
propose a model to take into account both the similarities and dissimilarities
by decomposing and composing lexical semantics over sentences. The model
represents each word as a vector, and calculates a semantic matching vector for
each word based on all words in the other sentence. Then, each word vector is
decomposed into a similar component and a dissimilar component based on the
semantic matching vector. After this, a two-channel CNN model is employed to
capture features by composing the similar and dissimilar components. Finally, a
similarity score is estimated over the composed feature vectors. Experimental
results show that our model gets the state-of-the-art performance on the answer
sentence selection task, and achieves a comparable result on the paraphrase
identification task.
| 2,017 | Computation and Language |
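A minimal sketch of the decomposition step: match each word against the other sentence (here simply by its most similar word), then split the word vector into a component parallel to the match (similar) and the residual (dissimilar). This is one simple decomposition for illustration, not the paper's exact model or its CNN composition stage.

```python
import numpy as np

def decompose(X, Y):
    """X, Y: (n, d) and (m, d) word-vector matrices for the two sentences."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    match = Y[(Xn @ Yn.T).argmax(axis=1)]         # semantic matching vectors
    coef = (X * match).sum(axis=1, keepdims=True) / (match * match).sum(axis=1, keepdims=True)
    similar = coef * match                        # component parallel to the match
    dissimilar = X - similar                      # residual, dissimilar component
    return similar, dissimilar

rng = np.random.default_rng(3)
sim, dis = decompose(rng.normal(size=(5, 16)), rng.normal(size=(7, 16)))
print(sim.shape, dis.shape)                       # both (5, 16)
```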
Petrarch 2 : Petrarcher | PETRARCH 2 is the fourth generation of a series of Event-Data coders stemming
from research by Phillip Schrodt. Each iteration has brought new functionality
and usability, and this is no exception. Petrarch 2 takes much of the power of
the original Petrarch's dictionaries and redirects it into a faster and smarter
core logic. Earlier iterations handled sentences largely as a list of words,
incorporating some syntactic information here and there. Petrarch 2 now views
the sentence entirely on the syntactic level. It receives the syntactic parse
of a sentence from the Stanford CoreNLP software, and stores this data as a
tree structure of linked nodes, where each node is a Phrase object.
Prepositional, noun, and verb phrases each have their own version of this
Phrase class, which deals with the logic particular to those kinds of phrases.
Since this is an event coder, the core of the logic focuses around the verbs:
who is acting, who is being acted on, and what is happening. The theory behind
this new structure and its logic is founded in Generative Grammar, Information
Theory, and Lambda-Calculus Semantics.
| 2,016 | Computation and Language |
Domain Specific Author Attribution Based on Feedforward Neural Network
Language Models | Authorship attribution refers to the task of automatically determining the
author based on a given sample of text. It is a problem with a long history and
a wide range of applications. Building author profiles using language models
is one of the most successful methods to automate this task. New language
modeling methods based on neural networks alleviate the curse of dimensionality
and usually outperform conventional N-gram methods. However, there has not
been much research applying them to authorship attribution. In this paper, we
present a novel setup of a Neural Network Language Model (NNLM) and apply it to
a database of text samples from different authors. We investigate how the NNLM
performs on a task with moderate author set size and relatively limited
training and test data, and how the topics of the text samples affect the
accuracy. NNLM achieves nearly 2.5% reduction in perplexity, a measurement of
fitness of a trained language model to the test data. Given 5 random test
sentences, it also increases the author classification accuracy by 3.43% on
average, compared with the N-gram methods using SRILM tools. An open source
implementation of our methodology is freely available at
https://github.com/zge/authorship-attribution/.
| 2,016 | Computation and Language |
Multilingual Twitter Sentiment Classification: The Role of Human
Annotators | What are the limits of automated Twitter sentiment classification? We analyze
a large set of manually labeled tweets in different languages, use them as
training data, and construct automated classification models. It turns out that
the quality of classification models depends much more on the quality and size
of training data than on the type of the model trained. Experimental results
indicate that there is no statistically significant difference between the
performance of the top classification models. We quantify the quality of
training data by applying various annotator agreement measures, and identify
the weakest points of different datasets. We show that the model performance
approaches the inter-annotator agreement when the size of the training set is
sufficiently large. However, it is crucial to regularly monitor the self- and
inter-annotator agreements since this improves the training datasets and
consequently the model performance. Finally, we show that there is strong
evidence that humans perceive the sentiment classes (negative, neutral, and
positive) as ordered.
| 2,016 | Computation and Language |
Ultradense Word Embeddings by Orthogonal Transformation | Embeddings are generic representations that are useful for many NLP tasks. In
this paper, we introduce DENSIFIER, a method that learns an orthogonal
transformation of the embedding space that focuses the information relevant for
a task in an ultradense subspace of a dimensionality that is smaller by a
factor of 100 than the original space. We show that ultradense embeddings
generated by DENSIFIER reach state of the art on a lexicon creation task in
which words are annotated with three types of lexical information - sentiment,
concreteness and frequency. On the SemEval2015 10B sentiment analysis task we
show that no information is lost when the ultradense subspace is used, but
training is an order of magnitude more efficient due to the compactness of the
ultradense space.
| 2,022 | Computation and Language |
From quantum foundations via natural language meaning to a theory of
everything | In this paper we argue for a paradigmatic shift from `reductionism' to
`togetherness'. In particular, we show how interaction between systems in
quantum theory naturally carries over to modelling how word meanings interact
in natural language. Since meaning in natural language, depending on the
subject domain, encompasses discussions within any scientific discipline, we
obtain a template for theories such as social interaction, animal behaviour,
and many others.
| 2,016 | Computation and Language |
Toward Mention Detection Robustness with Recurrent Neural Networks | One of the key challenges in natural language processing (NLP) is to yield
good performance across application domains and languages. In this work, we
investigate the robustness of the mention detection systems, one of the
fundamental tasks in information extraction, via recurrent neural networks
(RNNs). The advantage of RNNs over the traditional approaches is their capacity
to capture long ranges of context and implicitly adapt the word embeddings,
trained on a large corpus, into a task-specific word representation, but still
preserve the original semantic generalization to be helpful across domains. Our
systematic evaluation for RNN architectures demonstrates that RNNs not only
outperform the best reported systems (up to 9\% relative error reduction) in
the general setting but also achieve the state-of-the-art performance in the
cross-domain setting for English. Regarding other languages, RNNs are
significantly better than the traditional methods on the similar task of named
entity recognition for Dutch (up to 22\% relative error reduction).
| 2,016 | Computation and Language |
Recurrent Neural Network Grammars | We introduce recurrent neural network grammars, probabilistic models of
sentences with explicit phrase structure. We explain efficient inference
procedures that allow application to both parsing and language modeling.
Experiments show that they provide better parsing in English than any single
previously published supervised generative model and better language modeling
than state-of-the-art sequential RNNs in English and Chinese.
| 2,016 | Computation and Language |
Automated Word Prediction in Bangla Language Using Stochastic Language
Models | Word completion and word prediction are two important phenomena in typing
that benefit users who type using a keyboard or other similar devices. They can
have a profound impact on the typing of people with disabilities. Our work addresses
word prediction for Bangla sentences using stochastic, i.e. N-gram, language models
such as unigram, bigram, trigram, deleted interpolation, and backoff models,
auto-completing a sentence by predicting the correct next word, which saves typing
time and keystrokes and also reduces misspellings. We use a large corpus of Bangla
text covering different word types to predict the correct word with as high accuracy
as possible. We have found promising results, and we hope that our work will serve
as a baseline for automated Bangla typing.
| 2,016 | Computation and Language |
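A minimal sketch of bigram prediction with backoff to unigram counts, a toy stand-in for the unigram/bigram/trigram, deleted-interpolation, and backoff models used in the paper; it works for any whitespace-tokenised corpus, and the romanised sentences below are illustrative placeholders for Bangla text.

```python
from collections import Counter, defaultdict

def train(sentences):
    uni, bi = Counter(), defaultdict(Counter)
    for sent in sentences:
        words = sent.split()
        uni.update(words)
        for a, b in zip(words, words[1:]):
            bi[a][b] += 1
    return uni, bi

def predict_next(prev, uni, bi, k=3):
    table = bi.get(prev)
    if table:                                   # bigram evidence available
        return [w for w, _ in table.most_common(k)]
    return [w for w, _ in uni.most_common(k)]   # back off to unigram frequency

# Toy romanised sentences standing in for a large Bangla corpus.
uni, bi = train(["ami bhat khai", "ami boi pori", "tumi bhat khao"])
print(predict_next("ami", uni, bi))
```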
QuotationFinder - Searching for Quotations and Allusions in Greek and
Latin Texts and Establishing the Degree to Which a Quotation or Allusion
Matches Its Source | The software programs generally used with the TLG (Thesaurus Linguae Graecae)
and the CLCLT (CETEDOC Library of Christian Latin Texts) CD-ROMs are not well
suited for finding quotations and allusions. QuotationFinder uses more
sophisticated criteria as it ranks search results based on how closely they
match the source text, listing search results with literal quotations first and
loose verbal parallels last.
| 2,017 | Computation and Language |
Identification of Parallel Passages Across a Large Hebrew/Aramaic Corpus | We propose a method for efficiently finding all parallel passages in a large
corpus, even if the passages are not quite identical due to rephrasing and
orthographic variation. The key ideas are the representation of each word in
the corpus by its two most infrequent letters, finding matched pairs of strings
of four or five words that differ by at most one word and then identifying
clusters of such matched pairs. Using this method, over 4600 parallel pairs of
passages were identified in the Babylonian Talmud, a Hebrew-Aramaic corpus of
over 1.8 million words, in just over 30 seconds. Empirical comparisons on
sample data indicate that the coverage obtained by our method is essentially
the same as that obtained using slow exhaustive methods.
| 2,018 | Computation and Language |
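A minimal sketch of the word-signature idea: represent each word by its two rarest letters (by corpus letter frequency) and compare short word runs by these signatures. The matched-pair clustering stage and the Hebrew/Aramaic data are omitted; the English verse below is just a placeholder.

```python
from collections import Counter

def word_signature(word, letter_freq):
    """Two rarest distinct letters of the word, by corpus letter frequency."""
    return "".join(sorted(set(word), key=lambda c: (letter_freq[c], c))[:2])

def near_match(run_a, run_b, letter_freq):
    """True if two equal-length word runs differ in at most one signature."""
    sa = [word_signature(w, letter_freq) for w in run_a]
    sb = [word_signature(w, letter_freq) for w in run_b]
    return len(sa) == len(sb) and sum(a != b for a, b in zip(sa, sb)) <= 1

corpus = "in the beginning god created the heaven and the earth".split()
freq = Counter("".join(corpus))
print(near_match(corpus[0:4], ["in", "the", "beginnings", "god"], freq))  # True
```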
Gibberish Semantics: How Good is Russian Twitter in Word Semantic
Similarity Task? | The most studied and most successful language models have been developed and
evaluated mainly for English and other closely related European languages, such as
French, German, etc. It is important to study the applicability of these models to
other languages. The use of vector space models for Russian was recently
studied for multiple corpora, such as Wikipedia, RuWac, and lib.ru, and these models
were evaluated on the word semantic similarity task. To our knowledge, Twitter
has not been considered as a corpus for this task; with this work we fill the gap.
Results for vectors trained on a Twitter corpus are comparable in accuracy to
other single-corpus models, although the best performance is currently
achieved by a combination of multiple corpora.
| 2,016 | Computation and Language |
Optimizing the Learning Order of Chinese Characters Using a Novel
Topological Sort Algorithm | We present a novel algorithm for optimizing the order in which Chinese
characters are learned, one that incorporates the benefits of learning them in
order of usage frequency and in order of their hierarchal structural
relationships. We show that our work outperforms previously published orders
and algorithms. Our algorithm is applicable to any scheduling task where nodes
have intrinsic differences in importance and must be visited in topological
order.
| 2,017 | Computation and Language |
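A minimal sketch of topological ordering with node priorities: structural components must precede the characters that contain them, and among the currently available nodes the most frequent is scheduled first. This greedy heap-based ordering is only an illustration of the scheduling idea, not the paper's algorithm, and the characters, components, and frequencies below are placeholders.

```python
import heapq
from collections import defaultdict

def learning_order(freq, components):
    """freq: char -> usage frequency; components: char -> list of its parts."""
    indeg = {c: 0 for c in freq}
    children = defaultdict(list)
    for char, parts in components.items():
        for p in parts:
            if p in freq:
                indeg[char] += 1
                children[p].append(char)
    heap = [(-freq[c], c) for c, d in indeg.items() if d == 0]
    heapq.heapify(heap)
    order = []
    while heap:
        _, c = heapq.heappop(heap)               # most frequent available node
        order.append(c)
        for nxt in children[c]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                heapq.heappush(heap, (-freq[nxt], nxt))
    return order

freq = {"木": 900, "人": 950, "林": 300, "休": 400, "森": 120}
components = {"林": ["木", "木"], "休": ["人", "木"], "森": ["木", "木", "木"]}
print(learning_order(freq, components))          # 人, 木, 休, 林, 森
```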
Bioinformatics and Classical Literary Study | This paper describes the Quantitative Criticism Lab, a collaborative
initiative between classicists, quantitative biologists, and computer
scientists to apply ideas and methods drawn from the sciences to the study of
literature. A core goal of the project is the use of computational biology,
natural language processing, and machine learning techniques to investigate
authorial style, intertextuality, and related phenomena of literary
significance. As a case study in our approach, here we review the use of
sequence alignment, a common technique in genomics and computational
linguistics, to detect intertextuality in Latin literature. Sequence alignment
is distinguished by its ability to find inexact verbal similarities, which
makes it ideal for identifying phonetic echoes in large corpora of Latin texts.
Although especially suited to Latin, sequence alignment in principle can be
extended to many other languages.
| 2,017 | Computation and Language |
Representation of linguistic form and function in recurrent neural
networks | We present novel methods for analyzing the activation patterns of RNNs from a
linguistic point of view and explore the types of linguistic structure they
learn. As a case study, we use a multi-task gated recurrent network
architecture consisting of two parallel pathways with shared word embeddings
trained on predicting the representations of the visual scene corresponding to
an input sentence, and predicting the next word in the same sentence. Based on
our proposed method to estimate the amount of contribution of individual tokens
in the input to the final prediction of the networks we show that the image
prediction pathway: a) is sensitive to the information structure of the
sentence b) pays selective attention to lexical categories and grammatical
functions that carry semantic information c) learns to treat the same input
token differently depending on its grammatical functions in the sentence. In
contrast the language model is comparatively more sensitive to words with a
syntactic function. Furthermore, we propose methods to explore the function
of individual hidden units in RNNs and show that the two pathways of the
architecture in our case study contain specialized units tuned to patterns
informative for the task, some of which can carry activations to later time
steps to encode long-term dependencies.
| 2,016 | Computation and Language |
Segmental Recurrent Neural Networks for End-to-end Speech Recognition | We study the segmental recurrent neural network for end-to-end acoustic
modelling. This model connects the segmental conditional random field (CRF)
with a recurrent neural network (RNN) used for feature extraction. Compared to
most previous CRF-based acoustic models, it does not rely on an external system
to provide features or segmentation boundaries. Instead, this model
marginalises out all the possible segmentations, and features are extracted
from the RNN trained together with the segmental CRF. In essence, this model is
self-contained and can be trained end-to-end. In this paper, we discuss
practical training and decoding issues as well as the method to speed up the
training in the context of speech recognition. We performed experiments on the
TIMIT dataset. We achieved a 17.3% phone error rate (PER) from first-pass
decoding, the best reported result using CRFs, despite the fact that we only
used a zeroth-order CRF and did not use any language model.
| 2,016 | Computation and Language |
Easy-First Dependency Parsing with Hierarchical Tree LSTMs | We suggest a compositional vector representation of parse trees that relies
on a recursive combination of recurrent-neural network encoders. To demonstrate
its effectiveness, we use the representation as the backbone of a greedy,
bottom-up dependency parser, achieving state-of-the-art accuracies for English
and Chinese, without relying on external word embeddings. The parser's
implementation is available for download at the first author's webpage.
| 2,016 | Computation and Language |
Improving Named Entity Recognition for Chinese Social Media with Word
Segmentation Representation Learning | Named entity recognition, and other information extraction tasks, frequently
use linguistic features such as part of speech tags or chunkings. For languages
where word boundaries are not readily identified in text, word segmentation is
a key first step to generating features for an NER system. While using word
boundary tags as features is helpful, the signals that aid in identifying
these boundaries may provide richer information for an NER system. New
state-of-the-art word segmentation systems use neural models to learn
representations for predicting word boundaries. We show that these same
representations, jointly trained with an NER system, yield significant
improvements in NER for Chinese social media. In our experiments, jointly
training NER and word segmentation with an LSTM-CRF model yields nearly 5%
absolute improvement over previously published results.
| 2,017 | Computation and Language |
Character-based Neural Machine Translation | Neural Machine Translation (MT) has reached state-of-the-art results.
However, one of the main challenges that neural MT still faces is dealing with
very large vocabularies and morphologically rich languages. In this paper, we
propose a neural MT system using character-based embeddings in combination with
convolutional and highway layers to replace the standard lookup-based word
representations. The resulting unlimited-vocabulary and affix-aware source word
embeddings are tested in a state-of-the-art neural MT based on an
attention-based bidirectional recurrent neural network. The proposed MT scheme
provides improved results even when the source language is not morphologically
rich. Improvements up to 3 BLEU points are obtained in the German-English WMT
task.
| 2,016 | Computation and Language |
Counter-fitting Word Vectors to Linguistic Constraints | In this work, we present a novel counter-fitting method which injects
antonymy and synonymy constraints into vector space representations in order to
improve the vectors' capability for judging semantic similarity. Applying this
method to publicly available pre-trained word vectors leads to a new state of
the art performance on the SimLex-999 dataset. We also show how the method can
be used to tailor the word vector space for the downstream task of dialogue
state tracking, resulting in robust improvements across different dialogue
domains.
| 2,016 | Computation and Language |
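A minimal sketch of the intuition behind counter-fitting: iteratively nudge synonym pairs closer together and antonym pairs apart. This simplified update omits the paper's vector-space-preservation term and exact objective; the word pairs and random vectors are placeholders.

```python
import numpy as np

def counter_fit_step(vecs, synonyms, antonyms, lr=0.05):
    """One update: pull synonym pairs together, push antonym pairs apart."""
    for a, b in synonyms:
        diff = vecs[a] - vecs[b]
        vecs[a] -= lr * diff
        vecs[b] += lr * diff
    for a, b in antonyms:
        diff = vecs[a] - vecs[b]
        diff = diff / (np.linalg.norm(diff) + 1e-8)
        vecs[a] += lr * diff
        vecs[b] -= lr * diff
    return vecs

rng = np.random.default_rng(4)
vecs = {w: rng.normal(size=16) for w in ["cheap", "inexpensive", "pricey"]}
for _ in range(50):
    vecs = counter_fit_step(vecs, [("cheap", "inexpensive")], [("cheap", "pricey")])
```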
Question Answering on Freebase via Relation Extraction and Textual
Evidence | Existing knowledge-based question answering systems often rely on small
annotated training data. While shallow methods like relation extraction are
robust to data scarcity, they are less expressive than the deep meaning
representation methods like semantic parsing, thereby failing at answering
questions involving multiple constraints. Here we alleviate this problem by
empowering a relation extraction method with additional evidence from
Wikipedia. We first present a neural network based relation extractor to
retrieve the candidate answers from Freebase, and then infer over Wikipedia to
validate these answers. Experiments on the WebQuestions question answering
dataset show that our method achieves an F_1 of 53.3%, a substantial
improvement over the state-of-the-art.
| 2,016 | Computation and Language |
MGNC-CNN: A Simple Approach to Exploiting Multiple Word Embeddings for
Sentence Classification | We introduce a novel, simple convolution neural network (CNN) architecture -
multi-group norm constraint CNN (MGNC-CNN) that capitalizes on multiple sets of
word embeddings for sentence classification. MGNC-CNN extracts features from
input embedding sets independently and then joins these at the penultimate
layer in the network to form a final feature vector. We then adopt a group
regularization strategy that differentially penalizes weights associated with
the subcomponents generated from the respective embedding sets. This model is
much simpler than comparable alternative architectures and requires
substantially less training time. Furthermore, it is flexible in that it does
not require input word embeddings to be of the same dimensionality. We show
that MGNC-CNN consistently outperforms baseline models.
| 2,016 | Computation and Language |
Right Ideals of a Ring and Sublanguages of Science | Among Zellig Harris's numerous contributions to linguistics his theory of the
sublanguages of science probably ranks among the most underrated. However, not
only has this theory led to some exhaustive and meaningful applications in the
study of the grammar of immunology language and its changes over time, but it
also illustrates the nature of mathematical relations between chunks or subsets
of a grammar and the language as a whole. This becomes most clear when dealing
with the connection between metalanguage and language, as well as when
reflecting on operators.
This paper tries to justify the claim that the sublanguages of science stand
in a particular algebraic relation to the rest of the language they are
embedded in, namely, that of right ideals in a ring.
| 2,016 | Computation and Language |
Multi-domain Neural Network Language Generation for Spoken Dialogue
Systems | Moving from limited-domain natural language generation (NLG) to open domain
is difficult because the number of semantic input combinations grows
exponentially with the number of domains. Therefore, it is important to
leverage existing resources and exploit similarities between domains to
facilitate domain adaptation. In this paper, we propose a procedure to train
multi-domain, Recurrent Neural Network-based (RNN) language generators via
multiple adaptation steps. In this procedure, a model is first trained on
counterfeited data synthesised from an out-of-domain dataset, and then fine
tuned on a small set of in-domain utterances with a discriminative objective
function. Corpus-based evaluation results show that the proposed procedure can
achieve competitive performance in terms of BLEU score and slot error rate
while significantly reducing the data needed to train generators in new, unseen
domains. In subjective testing, human judges confirm that the procedure greatly
improves generator performance when only a small amount of data is available in
the domain.
| 2,016 | Computation and Language |
Joint Learning Templates and Slots for Event Schema Induction | Automatic event schema induction (AESI) means to extract meta-event from raw
text, in other words, to find out what types (templates) of event may exist in
the raw text and what roles (slots) may exist in each event type. In this
paper, we propose a joint entity-driven model to learn templates and slots
simultaneously based on the constraints of templates and slots in the same
sentence. In addition, the entities' semantic information is also considered
for the inner connectivity of the entities. We borrow the normalized cut
criteria in image segmentation to divide the entities into more accurate
template clusters and slot clusters. The experiments show that our model
achieves better results than previous work.
| 2,016 | Computation and Language |
Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers.
| 2,016 | Computation and Language |
A Bayesian Model of Multilingual Unsupervised Semantic Role Induction | We propose a Bayesian model of unsupervised semantic role induction in
multiple languages, and use it to explore the usefulness of parallel corpora
for this task. Our joint Bayesian model consists of individual models for each
language plus additional latent variables that capture alignments between roles
across languages. Because it is a generative Bayesian model, we can do
evaluations in a variety of scenarios just by varying the inference procedure,
without changing the model, thereby comparing the scenarios directly. We
compare using only monolingual data, using a parallel corpus, using a parallel
corpus with annotations in the other language, and using small amounts of
annotation in the target language. We find that the biggest impact of adding a
parallel corpus to training is actually the increase in mono-lingual data, with
the alignments to another language resulting in small improvements, even with
labeled data for the other language.
| 2,016 | Computation and Language |
Parallel Texts in the Hebrew Bible, New Methods and Visualizations | In this article we develop an algorithm to detect parallel texts in the
Masoretic Text of the Hebrew Bible. The results are presented online and
chapters in the Hebrew Bible containing parallel passages can be inspected
synoptically. Differences between parallel passages are highlighted. In a
similar way the MT of Isaiah is presented synoptically with 1QIsaa. We also
investigate how one can investigate the degree of similarity between parallel
passages with the help of a case study of 2 Kings 19-25 and its parallels in
Isaiah, Jeremiah and 2 Chronicles.
| 2,016 | Computation and Language |
Text Understanding with the Attention Sum Reader Network | Several large cloze-style context-question-answer datasets have been
introduced recently: the CNN and Daily Mail news data and the Children's Book
Test. Thanks to the size of these datasets, the associated text comprehension
task is well suited for deep-learning techniques that currently seem to
outperform all alternative approaches. We present a new, simple model that uses
attention to directly pick the answer from the context as opposed to computing
the answer using a blended representation of words in the document as is usual
in similar models. This makes the model particularly suitable for
question-answering problems where the answer is a single word from the
document. Ensemble of our models sets new state of the art on all evaluated
datasets.
| 2,016 | Computation and Language |
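A minimal sketch of the pointer-sum step: softmax attention over document token positions, with the probability of each candidate word obtained by summing the attention it receives at all of its positions. The encoders are omitted; the token scores below are random placeholders for query-document dot products.

```python
import numpy as np
from collections import defaultdict

def attention_sum(doc_tokens, scores):
    """Softmax over token positions, then sum attention per word type."""
    att = np.exp(scores - scores.max())
    att = att / att.sum()
    prob = defaultdict(float)
    for tok, a in zip(doc_tokens, att):
        prob[tok] += a
    answer = max(prob, key=prob.get)
    return answer, dict(prob)

doc = ["mary", "went", "home", "and", "mary", "slept"]
scores = np.random.default_rng(5).normal(size=len(doc))  # stand-in for
answer, probs = attention_sum(doc, scores)                # query-document scores
print(answer, probs)
```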
Sentiment Analysis in Scholarly Book Reviews | So far, various studies have tackled sentiment analysis in several
domains such as restaurant and movie reviews. However, this problem has not been
studied in scholarly book reviews, which differ in terms of review style
and size. In this paper, we propose to combine different features presented
to supervised classifiers that extract the opinion target
expressions and detect their polarities in scholarly book reviews. We construct
a labeled corpus of French book reviews for training and evaluating our methods.
We also evaluate them on English restaurant reviews in order to
measure their robustness across domains and languages. The evaluation shows
that our methods are robust enough for English restaurant reviews and French
book reviews.
| 2,016 | Computation and Language |
Integrated Sequence Tagging for Medieval Latin Using Deep Representation
Learning | In this paper we consider two sequence tagging tasks for medieval Latin:
part-of-speech tagging and lemmatization. These are both basic, yet
foundational preprocessing steps in applications such as text re-use detection.
Nevertheless, they are generally complicated by the considerable orthographic
variation which is typical of medieval Latin. In Digital Classics, these tasks
are traditionally solved in a (i) cascaded and (ii) lexicon-dependent fashion.
For example, a lexicon is used to generate all the potential lemma-tag pairs
for a token, and next, a context-aware PoS-tagger is used to select the most
appropriate tag-lemma pair. Apart from the problems with out-of-lexicon items,
error percolation is a major downside of such approaches. In this paper we
explore the possibility to elegantly solve these tasks using a single,
integrated approach. For this, we make use of a layered neural network
architecture from the field of deep representation learning.
| 2,017 | Computation and Language |
Getting More Out Of Syntax with PropS | Semantic NLP applications often rely on dependency trees to recognize major
elements of the proposition structure of sentences. Yet, while much semantic
structure is indeed expressed by syntax, many phenomena are not easily read out
of dependency trees, often leading to further ad-hoc heuristic post-processing
or to information loss. To directly address the needs of semantic applications,
we present PropS -- an output representation designed to explicitly and
uniformly express much of the proposition structure which is implied from
syntax, and an associated tool for extracting it from dependency trees.
| 2,016 | Computation and Language |
Semi-Automatic Data Annotation, POS Tagging and Mildly Context-Sensitive
Disambiguation: the eXtended Revised AraMorph (XRAM) | An extended, revised form of Tim Buckwalter's Arabic lexical and
morphological resource AraMorph, eXtended Revised AraMorph (henceforth XRAM),
is presented which addresses a number of weaknesses and inconsistencies of the
original model by allowing a wider coverage of real-world Classical and
contemporary (both formal and informal) Arabic texts. Building upon previous
research, XRAM enhancements include (i) flag-selectable usage markers, (ii)
probabilistic mildly context-sensitive POS tagging, filtering, disambiguation
and ranking of alternative morphological analyses, (iii) semi-automatic
increment of lexical coverage through extraction of lexical and morphological
information from existing lexical resources. Testing of XRAM through a
front-end Python module showed a remarkable success level.
| 2,016 | Computation and Language |
A Latent Variable Recurrent Neural Network for Discourse Relation
Language Models | This paper presents a novel latent variable recurrent neural network
architecture for jointly modeling sequences of words and (possibly latent)
discourse relations between adjacent sentences. A recurrent neural network
generates individual words, thus reaping the benefits of
discriminatively-trained vector representations. The discourse relations are
represented with a latent variable, which can be predicted or marginalized,
depending on the task. The resulting model can therefore employ a training
objective that includes not only discourse relation classification, but also
word prediction. As a result, it outperforms state-of-the-art alternatives for
two tasks: implicit discourse relation classification in the Penn Discourse
Treebank, and dialog act classification in the Switchboard corpus. Furthermore,
by marginalizing over latent discourse relations at test time, we obtain a
discourse informed language model, which improves over a strong LSTM baseline.
| 2,016 | Computation and Language |
Extracting Arabic Relations from the Web | The goal of this research is to extract a large list or table from named
entities and relations in a specific domain. A small set of a handful of
instance relations is required as input from the user. The system exploits
summaries from Google search engine as a source text. These instances are used
to extract patterns. The output is a set of new entities and their relations.
The results from four experiments show that precision and recall vary
according to relation type. Precision ranges from 0.61 to 0.75 while recall
ranges from 0.71 to 0.83. The best result is obtained for (player, club)
relationship, 0.72 and 0.83 for precision and recall respectively.
| 2,016 | Computation and Language |
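A minimal sketch of the bootstrapping loop described here: use seed (entity, entity) pairs to extract the text between them as patterns, then apply the patterns to harvest new pairs. The snippets and seeds are toy English placeholders for the Arabic Google-summary text used in the paper.

```python
import re

def extract_patterns(snippets, seed_pairs):
    """Collect the text between each seed entity pair as a candidate pattern."""
    patterns = set()
    for a, b in seed_pairs:
        for s in snippets:
            m = re.search(re.escape(a) + r"(.{1,30}?)" + re.escape(b), s)
            if m:
                patterns.add(m.group(1))
    return patterns

def apply_patterns(snippets, patterns):
    """Match each pattern against the snippets to harvest new entity pairs."""
    pairs = set()
    for p in patterns:
        for s in snippets:
            for m in re.finditer(r"(\w+)" + re.escape(p) + r"(\w+)", s):
                pairs.add((m.group(1), m.group(2)))
    return pairs

snippets = ["Messi plays for Barcelona", "Salah plays for Liverpool"]
patterns = extract_patterns(snippets, [("Messi", "Barcelona")])
print(apply_patterns(snippets, patterns))   # {('Messi', 'Barcelona'), ('Salah', 'Liverpool')}
```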
Variational Autoencoders for Semi-supervised Text Classification | Although the semi-supervised variational autoencoder (SemiVAE) works in image
classification tasks, it fails in text classification tasks when a vanilla LSTM is used
as its decoder. From the perspective of reinforcement learning, it is verified
that the decoder's capability to distinguish between different categorical
labels is essential. Therefore, Semi-supervised Sequential Variational
Autoencoder (SSVAE) is proposed, which increases the capability by feeding
label into its decoder RNN at each time-step. Two specific decoder structures
are investigated and both of them are verified to be effective. Besides, in
order to reduce the computational complexity in training, a novel optimization
method is proposed, which estimates the gradient of the unlabeled objective
function by sampling, along with two variance reduction techniques.
Experimental results on Large Movie Review Dataset (IMDB) and AG's News corpus
show that the proposed approach significantly improves the classification
accuracy compared with pure-supervised classifiers, and achieves competitive
performance against previous advanced methods. State-of-the-art results can be
obtained by integrating other pretraining-based methods.
| 2,016 | Computation and Language |
Observing Trends in Automated Multilingual Media Analysis | Any large organisation, be it public or private, monitors the media for
information to keep abreast of developments in their field of interest, and
usually also to become aware of positive or negative opinions expressed towards
them. At least for the written media, computer programs have become very
efficient at helping the human analysts significantly in their monitoring task
by gathering media reports, analysing them, detecting trends and - in some
cases - even to issue early warnings or to make predictions of likely future
developments. We present here trend recognition-related functionality of the
Europe Media Monitor (EMM) system, which was developed by the European
Commission's Joint Research Centre (JRC) for public administrations in the
European Union (EU) and beyond. EMM performs large-scale media analysis in up
to seventy languages and recognises various types of trends, some of them
combining information from news articles written in different languages and
from social media posts. EMM also lets users explore the huge amount of
multilingual media data through interactive maps and graphs, allowing them to
examine the data from various viewpoints and according to multiple criteria. A
lot of EMM's functionality is accessible for free over the internet or via apps
for hand-held devices.
| 2,016 | Computation and Language |
The red one!: On learning to refer to things based on their
discriminative properties | As a first step towards agents learning to communicate about their visual
environment, we propose a system that, given visual representations of a
referent (cat) and a context (sofa), identifies their discriminative
attributes, i.e., properties that distinguish them (has_tail). Moreover,
despite the lack of direct supervision at the attribute level, the model learns
to assign plausible attributes to objects (sofa-has_cushion). Finally, we
present a preliminary experiment confirming the referential success of the
predicted discriminative attributes.
| 2,016 | Computation and Language |
Implicit Discourse Relation Classification via Multi-Task Neural
Networks | Without discourse connectives, classifying implicit discourse relations is a
challenging task and a bottleneck for building a practical discourse parser.
Previous research usually makes use of one kind of discourse framework such as
PDTB or RST to improve the classification performance on discourse relations.
Actually, under different discourse annotation frameworks, there exist multiple
corpora which have internal connections. To exploit the combination of
different discourse corpora, we design related discourse classification tasks
specific to a corpus, and propose a novel Convolutional Neural Network embedded
multi-task learning system to synthesize these tasks by learning both unique
and shared representations for each task. The experimental results on the PDTB
implicit discourse relation classification task demonstrate that our model
achieves significant gains over baseline systems.
| 2,016 | Computation and Language |
Unsupervised word segmentation and lexicon discovery using acoustic word
embeddings | In settings where only unlabelled speech data is available, speech technology
needs to be developed without transcriptions, pronunciation dictionaries, or
language modelling text. A similar problem is faced when modelling infant
language acquisition. In these cases, categorical linguistic structure needs to
be discovered directly from speech audio. We present a novel unsupervised
Bayesian model that segments unlabelled speech and clusters the segments into
hypothesized word groupings. The result is a complete unsupervised tokenization
of the input speech in terms of discovered word types. In our approach, a
potential word segment (of arbitrary length) is embedded in a fixed-dimensional
acoustic vector space. The model, implemented as a Gibbs sampler, then builds a
whole-word acoustic model in this space while jointly performing segmentation.
We report word error rates in a small-vocabulary connected digit recognition
task by mapping the unsupervised decoded output to ground truth transcriptions.
The model achieves around 20% error rate, outperforming a previous HMM-based
system by about 10% absolute. Moreover, in contrast to the baseline, our model
does not require a pre-specified vocabulary size.
| 2,016 | Computation and Language |
Lexical bundles in computational linguistics academic literature | In this study we analyzed a corpus of 8 million words of academic literature
from the field of computational linguistics. The lexical bundles from
this corpus are categorized based on their structures and functions.
| 2,016 | Computation and Language |