Titles
stringlengths 6
220
| Abstracts
stringlengths 37
3.26k
| Years
int64 1.99k
2.02k
| Categories
stringclasses 1
value |
---|---|---|---|
Language classification from bilingual word embedding graphs | We study the role of the second language in bilingual word embeddings in
monolingual semantic evaluation tasks. We find strongly and weakly positive
correlations between down-stream task performance and second language
similarity to the target language. Additionally, we show how bilingual word
embeddings can be employed for the task of semantic language classification and
that joint semantic spaces vary in meaningful ways across second languages. Our
results support the hypothesis that semantic language similarity is influenced
by both structural similarity as well as geography/contact.
| 2,016 | Computation and Language |
Joint Event Detection and Entity Resolution: a Virtuous Cycle | Clustering web documents has numerous applications, such as aggregating news
articles into meaningful events, detecting trends and hot topics on the Web,
preserving diversity in search results, etc. At the same time, the importance
of named entities and, in particular, the ability to recognize them and to
solve the associated co-reference resolution problem are widely recognized as
key enabling factors when mining, aggregating and comparing content on the Web.
Instead of considering these two problems separately, we propose in this
paper a method that tackles jointly the problem of clustering news articles
into events and cross-document co-reference resolution of named entities. The
co-occurrence of named entities in the same clusters is used as an additional
signal to decide whether two referents should be merged into one entity. These
refined entities can in turn be used as enhanced features to re-cluster the
documents and then be refined again, entering into a virtuous cycle that
improves simultaneously the performances of both tasks. We implemented a
prototype system and report results using the TDT5 collection of news articles,
demonstrating the potential of our approach.
| 2,016 | Computation and Language |
Imitation Learning with Recurrent Neural Networks | We present a novel view that unifies two frameworks that aim to solve
sequential prediction problems: learning to search (L2S) and recurrent neural
networks (RNN). We point out equivalences between elements of the two
frameworks. By complementing what is missing from one framework comparing to
the other, we introduce a more advanced imitation learning framework that, on
one hand, augments L2S s notion of search space and, on the other hand,
enhances RNNs training procedure to be more robust to compounding errors
arising from training on highly correlated examples.
| 2,016 | Computation and Language |
Discriminating between similar languages in Twitter using label
propagation | Identifying the language of social media messages is an important first step
in linguistic processing. Existing models for Twitter focus on content
analysis, which is successful for dissimilar language pairs. We propose a label
propagation approach that takes the social graph of tweet authors into account
as well as content to better tease apart similar languages. This results in
state-of-the-art shared task performance of $76.63\%$, $1.4\%$ higher than the
top system.
| 2,016 | Computation and Language |
A Supervised Authorship Attribution Framework for Bengali Language | Authorship Attribution is a long-standing problem in Natural Language
Processing. Several statistical and computational methods have been used to
find a solution to this problem. In this paper, we have proposed methods to
deal with the authorship attribution problem in Bengali.
| 2,016 | Computation and Language |
Trainable Frontend For Robust and Far-Field Keyword Spotting | Robust and far-field speech recognition is critical to enable true hands-free
communication. In far-field conditions, signals are attenuated due to distance.
To improve robustness to loudness variation, we introduce a novel frontend
called per-channel energy normalization (PCEN). The key ingredient of PCEN is
the use of an automatic gain control based dynamic compression to replace the
widely used static (such as log or root) compression. We evaluate PCEN on the
keyword spotting task. On our large rerecorded noisy and far-field eval sets,
we show that PCEN significantly improves recognition performance. Furthermore,
we model PCEN as neural network layers and optimize high-dimensional PCEN
parameters jointly with the keyword spotting acoustic model. The trained PCEN
frontend demonstrates significant further improvements without increasing model
complexity or inference-time cost.
| 2,016 | Computation and Language |
A New Bengali Readability Score | In this paper we have proposed methods to analyze the readability of Bengali
language texts. We have got some exceptionally good results out of the
experiments.
| 2,017 | Computation and Language |
Neural Contextual Conversation Learning with Labeled Question-Answering
Pairs | Neural conversational models tend to produce generic or safe responses in
different contexts, e.g., reply \textit{"Of course"} to narrative statements or
\textit{"I don't know"} to questions. In this paper, we propose an end-to-end
approach to avoid such problem in neural generative models. Additional memory
mechanisms have been introduced to standard sequence-to-sequence (seq2seq)
models, so that context can be considered while generating sentences. Three
seq2seq models, which memorize a fix-sized contextual vector from hidden input,
hidden input/output and a gated contextual attention structure respectively,
have been trained and tested on a dataset of labeled question-answering pairs
in Chinese. The model with contextual attention outperforms others including
the state-of-the-art seq2seq models on perplexity test. The novel contextual
model generates diverse and robust responses, and is able to carry out
conversations on a wide range of topics appropriately.
| 2,016 | Computation and Language |
An Adaptation of Topic Modeling to Sentences | Advances in topic modeling have yielded effective methods for characterizing
the latent semantics of textual data. However, applying standard topic modeling
approaches to sentence-level tasks introduces a number of challenges. In this
paper, we adapt the approach of latent-Dirichlet allocation to include an
additional layer for incorporating information about the sentence boundaries in
documents. We show that the addition of this minimal information of document
structure improves the perplexity results of a trained model.
| 2,016 | Computation and Language |
Incremental Learning for Fully Unsupervised Word Segmentation Using
Penalized Likelihood and Model Selection | We present a novel incremental learning approach for unsupervised word
segmentation that combines features from probabilistic modeling and model
selection. This includes super-additive penalties for addressing the cognitive
burden imposed by long word formation, and new model selection criteria based
on higher-order generative assumptions. Our approach is fully unsupervised; it
relies on a small number of parameters that permits flexible modeling and a
mechanism that automatically learns parameters from the data. Through
experimentation, we show that this intricate design has led to top-tier
performance in both phonemic and orthographic word segmentation.
| 2,016 | Computation and Language |
Compositional Sequence Labeling Models for Error Detection in Learner
Writing | In this paper, we present the first experiments using neural network models
for the task of error detection in learner writing. We perform a systematic
comparison of alternative compositional architectures and propose a framework
for error detection based on bidirectional LSTMs. Experiments on the CoNLL-14
shared task dataset show the model is able to outperform other participants on
detecting errors in learner writing. Finally, the model is integrated with a
publicly deployed self-assessment system, leading to performance comparable to
human annotators.
| 2,017 | Computation and Language |
Exploring phrase-compositionality in skip-gram models | In this paper, we introduce a variation of the skip-gram model which jointly
learns distributed word vector representations and their way of composing to
form phrase embeddings. In particular, we propose a learning procedure that
incorporates a phrase-compositionality function which can capture how we want
to compose phrases vectors from their component word vectors. Our experiments
show improvement in word and phrase similarity tasks as well as syntactic tasks
like dependency parsing using the proposed joint models.
| 2,016 | Computation and Language |
A Perspective on Sentiment Analysis | Sentiment Analysis (SA) is indeed a fascinating area of research which has
stolen the attention of researchers as it has many facets and more importantly
it promises economic stakes in the corporate and governance sector. SA has been
stemmed out of text analytics and established itself as a separate identity and
a domain of research. The wide ranging results of SA have proved to influence
the way some critical decisions are taken. Hence, it has become relevant in
thorough understanding of the different dimensions of the input, output and the
processes and approaches of SA.
| 2,014 | Computation and Language |
Dataset and Neural Recurrent Sequence Labeling Model for Open-Domain
Factoid Question Answering | While question answering (QA) with neural network, i.e. neural QA, has
achieved promising results in recent years, lacking of large scale real-word QA
dataset is still a challenge for developing and evaluating neural QA system. To
alleviate this problem, we propose a large scale human annotated real-world QA
dataset WebQA with more than 42k questions and 556k evidences. As existing
neural QA methods resolve QA either as sequence generation or
classification/ranking problem, they face challenges of expensive softmax
computation, unseen answers handling or separate candidate answer generation
component. In this work, we cast neural QA as a sequence labeling problem and
propose an end-to-end sequence labeling model, which overcomes all the above
challenges. Experimental results on WebQA show that our model outperforms the
baselines significantly with an F1 score of 74.69% with word-based input, and
the performance drops only 3.72 F1 points with more challenging character-based
input.
| 2,016 | Computation and Language |
Opinion Mining in Online Reviews About Distance Education Programs | The popularity of distance education programs is increasing at a fast pace.
En par with this development, online communication in fora, social media and
reviewing platforms between students is increasing as well. Exploiting this
information to support fellow students or institutions requires to extract the
relevant opinions in order to automatically generate reports providing an
overview of pros and cons of different distance education programs. We report
on an experiment involving distance education experts with the goal to develop
a dataset of reviews annotated with relevant categories and aspects in each
category discussed in the specific review together with an indication of the
sentiment.
Based on this experiment, we present an approach to extract general
categories and specific aspects under discussion in a review together with
their sentiment. We frame this task as a multi-label hierarchical text
classification problem and empirically investigate the performance of different
classification architectures to couple the prediction of a category with the
prediction of particular aspects in this category. We evaluate different
architectures and show that a hierarchical approach leads to superior results
in comparison to a flat model which makes decisions independently.
| 2,016 | Computation and Language |
La representaci\'on de la variaci\'on contextual mediante definiciones
terminol\'ogicas flexibles | In this doctoral thesis, we apply premises of cognitive linguistics to
terminological definitions and present a proposal called the flexible
terminological definition. This consists of a set of definitions of the same
concept made up of a general definition (in this case, one encompassing the
entire environmental domain) along with additional definitions describing the
concept from the perspective of the subdomains in which it is relevant. Since
context is a determining factor in the construction of the meaning of lexical
units (including terms), we assume that terminological definitions can, and
should, reflect the effects of context, even though definitions have
traditionally been treated as the expression of meaning void of any contextual
effect. The main objective of this thesis is to analyze the effects of
contextual variation on specialized environmental concepts with a view to their
representation in terminological definitions. Specifically, we focused on
contextual variation based on thematic restrictions. To accomplish the
objectives of this doctoral thesis, we conducted an empirical study consisting
of the analysis of a set of contextually variable concepts and the creation of
a flexible definition for two of them. As a result of the first part of our
empirical study, we divided our notion of domain-dependent contextual variation
into three different phenomena: modulation, perspectivization and
subconceptualization. These phenomena are additive in that all concepts
experience modulation, some concepts also undergo perspectivization, and
finally, a small number of concepts are additionally subjected to
subconceptualization. In the second part, we applied these notions to
terminological definitions and we presented we presented guidelines on how to
build flexible definitions, from the extraction of knowledge to the actual
writing of the definition.
| 2,016 | Computation and Language |
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word
Embeddings | The blind application of machine learning runs the risk of amplifying biases
present in data. Such a danger is facing us with word embedding, a popular
framework to represent text data as vectors which has been used in many machine
learning and natural language processing tasks. We show that even word
embeddings trained on Google News articles exhibit female/male gender
stereotypes to a disturbing extent. This raises concerns because their
widespread use, as we describe, often tends to amplify these biases.
Geometrically, gender bias is first shown to be captured by a direction in the
word embedding. Second, gender neutral words are shown to be linearly separable
from gender definition words in the word embedding. Using these properties, we
provide a methodology for modifying an embedding to remove gender stereotypes,
such as the association between between the words receptionist and female,
while maintaining desired associations such as between the words queen and
female. We define metrics to quantify both direct and indirect gender biases in
embeddings, and develop algorithms to "debias" the embedding. Using
crowd-worker evaluation as well as standard benchmarks, we empirically
demonstrate that our algorithms significantly reduce gender bias in embeddings
while preserving the its useful properties such as the ability to cluster
related concepts and to solve analogy tasks. The resulting embeddings can be
used in applications without amplifying gender bias.
| 2,016 | Computation and Language |
Novel Word Embedding and Translation-based Language Modeling for
Extractive Speech Summarization | Word embedding methods revolve around learning continuous distributed vector
representations of words with neural networks, which can capture semantic
and/or syntactic cues, and in turn be used to induce similarity measures among
words, sentences and documents in context. Celebrated methods can be
categorized as prediction-based and count-based methods according to the
training objectives and model architectures. Their pros and cons have been
extensively analyzed and evaluated in recent studies, but there is relatively
less work continuing the line of research to develop an enhanced learning
method that brings together the advantages of the two model families. In
addition, the interpretation of the learned word representations still remains
somewhat opaque. Motivated by the observations and considering the pressing
need, this paper presents a novel method for learning the word representations,
which not only inherits the advantages of classic word embedding methods but
also offers a clearer and more rigorous interpretation of the learned word
representations. Built upon the proposed word embedding method, we further
formulate a translation-based language modeling framework for the extractive
speech summarization task. A series of empirical evaluations demonstrate the
effectiveness of the proposed word representation learning and language
modeling techniques in extractive speech summarization.
| 2,016 | Computation and Language |
Syntax-based Attention Model for Natural Language Inference | Introducing attentional mechanism in neural network is a powerful concept,
and has achieved impressive results in many natural language processing tasks.
However, most of the existing models impose attentional distribution on a flat
topology, namely the entire input representation sequence. Clearly, any
well-formed sentence has its accompanying syntactic tree structure, which is a
much rich topology. Applying attention to such topology not only exploits the
underlying syntax, but also makes attention more interpretable. In this paper,
we explore this direction in the context of natural language inference. The
results demonstrate its efficacy. We also perform extensive qualitative
analysis, deriving insights and intuitions of why and how our model works.
| 2,016 | Computation and Language |
Automated Prediction of Temporal Relations | Background: There has been growing research interest in automated answering
of questions or generation of summary of free form text such as news article.
In order to implement this task, the computer should be able to identify the
sequence of events, duration of events, time at which event occurred and the
relationship type between event pairs, time pairs or event-time pairs. Specific
Problem: It is important to accurately identify the relationship type between
combinations of event and time before the temporal ordering of events can be
defined. The machine learning approach taken in Mani et. al (2006) provides an
accuracy of only 62.5 on the baseline data from TimeBank. The researchers used
maximum entropy classifier in their methodology. TimeML uses the TLINK
annotation to tag a relationship type between events and time. The time
complexity is quadratic when it comes to tagging documents with TLINK using
human annotation. This research proposes using decision tree and parsing to
improve the relationship type tagging. This research attempts to solve the gaps
in human annotation by automating the task of relationship type tagging in an
attempt to improve the accuracy of event and time relationship in annotated
documents. Scope information: The documents from the domain of news will be
used. The tagging will be performed within the same document and not across
documents. The relationship types will be identified only for a pair of event
and time and not a chain of events. The research focuses on documents tagged
using the TimeML specification which contains tags such as EVENT, TLINK, and
TIMEX. Each tag has attributes such as identifier, relation, POS, time etc.
| 2,016 | Computation and Language |
CFGs-2-NLU: Sequence-to-Sequence Learning for Mapping Utterances to
Semantics and Pragmatics | In this paper, we present a novel approach to natural language understanding
that utilizes context-free grammars (CFGs) in conjunction with
sequence-to-sequence (seq2seq) deep learning. Specifically, we take a CFG
authored to generate dialogue for our target application for NLU, a videogame,
and train a long short-term memory (LSTM) recurrent neural network (RNN) to map
the surface utterances that it produces to traces of the grammatical expansions
that yielded them. Critically, this CFG was authored using a tool we have
developed that supports arbitrary annotation of the nonterminal symbols in the
grammar. Because we already annotated the symbols in this grammar for the
semantic and pragmatic considerations that our game's dialogue manager operates
over, we can use the grammatical trace associated with any surface utterance to
infer such information. During gameplay, we translate player utterances into
grammatical traces (using our RNN), collect the mark-up attributed to the
symbols included in that trace, and pass this information to the dialogue
manager, which updates the conversation state accordingly. From an offline
evaluation task, we demonstrate that our trained RNN translates surface
utterances to grammatical traces with great accuracy. To our knowledge, this is
the first usage of seq2seq learning for conversational agents (our game's
characters) who explicitly reason over semantic and pragmatic considerations.
| 2,016 | Computation and Language |
Neural Sentence Ordering | Sentence ordering is a general and critical task for natural language
generation applications. Previous works have focused on improving its
performance in an external, downstream task, such as multi-document
summarization. Given its importance, we propose to study it as an isolated
task. We collect a large corpus of academic texts, and derive a data driven
approach to learn pairwise ordering of sentences, and validate the efficacy
with extensive experiments. Source codes and dataset of this paper will be made
publicly available.
| 2,016 | Computation and Language |
Authorship attribution via network motifs identification | Concepts and methods of complex networks can be used to analyse texts at
their different complexity levels. Examples of natural language processing
(NLP) tasks studied via topological analysis of networks are keyword
identification, automatic extractive summarization and authorship attribution.
Even though a myriad of network measurements have been applied to study the
authorship attribution problem, the use of motifs for text analysis has been
restricted to a few works. The goal of this paper is to apply the concept of
motifs, recurrent interconnection patterns, in the authorship attribution task.
The absolute frequencies of all thirteen directed motifs with three nodes were
extracted from the co-occurrence networks and used as classification features.
The effectiveness of these features was verified with four machine learning
methods. The results show that motifs are able to distinguish the writing style
of different authors. In our best scenario, 57.5% of the books were correctly
classified. The chance baseline for this problem is 12.5%. In addition, we have
found that function words play an important role in these recurrent patterns.
Taken together, our findings suggest that motifs should be further explored in
other related linguistic tasks.
| 2,016 | Computation and Language |
Latent Tree Language Model | In this paper we introduce Latent Tree Language Model (LTLM), a novel
approach to language modeling that encodes syntax and semantics of a given
sentence as a tree of word roles.
The learning phase iteratively updates the trees by moving nodes according to
Gibbs sampling. We introduce two algorithms to infer a tree for a given
sentence. The first one is based on Gibbs sampling. It is fast, but does not
guarantee to find the most probable tree. The second one is based on dynamic
programming. It is slower, but guarantees to find the most probable tree. We
provide comparison of both algorithms.
We combine LTLM with 4-gram Modified Kneser-Ney language model via linear
interpolation. Our experiments with English and Czech corpora show significant
perplexity reductions (up to 46% for English and 49% for Czech) compared with
standalone 4-gram Modified Kneser-Ney language model.
| 2,016 | Computation and Language |
Tweet2Vec: Learning Tweet Embeddings Using Character-level CNN-LSTM
Encoder-Decoder | We present Tweet2Vec, a novel method for generating general-purpose vector
representation of tweets. The model learns tweet embeddings using
character-level CNN-LSTM encoder-decoder. We trained our model on 3 million,
randomly selected English-language tweets. The model was evaluated using two
methods: tweet semantic similarity and tweet sentiment categorization,
outperforming the previous state-of-the-art in both tasks. The evaluations
demonstrate the power of the tweet embeddings generated by our model for
various tweet categorization tasks. The vector representations generated by our
model are generic, and hence can be applied to a variety of tasks. Though the
model presented in this paper is trained on English-language tweets, the method
presented can be used to learn tweet embeddings for different languages.
| 2,016 | Computation and Language |
Grounding Dynamic Spatial Relations for Embodied (Robot) Interaction | This paper presents a computational model of the processing of dynamic
spatial relations occurring in an embodied robotic interaction setup. A
complete system is introduced that allows autonomous robots to produce and
interpret dynamic spatial phrases (in English) given an environment of moving
objects. The model unites two separate research strands: computational
cognitive semantics and on commonsense spatial representation and reasoning.
The model for the first time demonstrates an integration of these different
strands.
| 2,016 | Computation and Language |
Grounded Lexicon Acquisition - Case Studies in Spatial Language | This paper discusses grounded acquisition experiments of increasing
complexity. Humanoid robots acquire English spatial lexicons from robot tutors.
We identify how various spatial language systems, such as projective, absolute
and proximal can be learned. The proposed learning mechanisms do not rely on
direct meaning transfer or direct access to world models of interlocutors.
Finally, we show how multiple systems can be acquired at the same time.
| 2,016 | Computation and Language |
Machine Learned Resume-Job Matching Solution | Job search through online matching engines nowadays are very prominent and
beneficial to both job seekers and employers. But the solutions of traditional
engines without understanding the semantic meanings of different resumes have
not kept pace with the incredible changes in machine learning techniques and
computing capability. These solutions are usually driven by manual rules and
predefined weights of keywords which lead to an inefficient and frustrating
search experience. To this end, we present a machine learned solution with rich
features and deep learning methods. Our solution includes three configurable
modules that can be plugged with little restrictions. Namely, unsupervised
feature extraction, base classifiers training and ensemble method learning. In
our solution, rather than using manual rules, machine learned methods to
automatically detect the semantic similarity of positions are proposed. Then
four competitive "shallow" estimators and "deep" estimators are selected.
Finally, ensemble methods to bag these estimators and aggregate their
individual predictions to form a final prediction are verified. Experimental
results of over 47 thousand resumes show that our solution can significantly
improve the predication precision current position, salary, educational
background and company scale.
| 2,016 | Computation and Language |
How scientific literature has been evolving over the time? A novel
statistical approach using tracking verbal-based methods | This paper provides a global vision of the scientific publications related
with the Systemic Lupus Erythematosus (SLE), taking as starting point abstracts
of articles. Through the time, abstracts have been evolving towards higher
complexity on used terminology, which makes necessary the use of sophisticated
statistical methods and answering questions including: how vocabulary is
evolving through the time? Which ones are most influential articles? And which
one are the articles that introduced new terms and vocabulary? To answer these,
we analyze a dataset composed by 506 abstracts and downloaded from 115
different journals and cover a 18 year-period.
| 2,014 | Computation and Language |
Synthetic Language Generation and Model Validation in BEAST2 | Generating synthetic languages aids in the testing and validation of future
computational linguistic models and methods. This thesis extends the BEAST2
phylogenetic framework to add linguistic sequence generation under multiple
models. The new plugin is then used to test the effects of the phenomena of
word borrowing on the inference process under two widely used phylolinguistic
models.
| 2,016 | Computation and Language |
Joint Embedding of Hierarchical Categories and Entities for Concept
Categorization and Dataless Classification | Due to the lack of structured knowledge applied in learning distributed
representation of cate- gories, existing work cannot incorporate category
hierarchies into entity information. We propose a framework that embeds
entities and categories into a semantic space by integrating structured
knowledge and taxonomy hierarchy from large knowledge bases. The framework
allows to com- pute meaningful semantic relatedness between entities and
categories. Our framework can han- dle both single-word concepts and
multiple-word concepts with superior performance on concept categorization and
yield state of the art results on dataless hierarchical classification.
| 2,016 | Computation and Language |
Modeling selectional restrictions in a relational type system | Selectional restrictions are semantic constraints on forming certain complex
types in natural language. The paper gives an overview of modeling selectional
restrictions in a relational type system with morphological and syntactic
types. We discuss some foundations of the system and ways of formalizing
selectional restrictions.
Keywords: type theory, selectional restrictions, syntax, morphology
| 2,016 | Computation and Language |
A Novel Bilingual Word Embedding Method for Lexical Translation Using
Bilingual Sense Clique | Most of the existing methods for bilingual word embedding only consider
shallow context or simple co-occurrence information. In this paper, we propose
a latent bilingual sense unit (Bilingual Sense Clique, BSC), which is derived
from a maximum complete sub-graph of pointwise mutual information based graph
over bilingual corpus. In this way, we treat source and target words equally
and a separated bilingual projection processing that have to be used in most
existing works is not necessary any more. Several dimension reduction methods
are evaluated to summarize the BSC-word relationship. The proposed method is
evaluated on bilingual lexicon translation tasks and empirical results show
that bilingual sense embedding methods outperform existing bilingual word
embedding methods.
| 2,018 | Computation and Language |
Connecting Phrase based Statistical Machine Translation Adaptation | Although more additional corpora are now available for Statistical Machine
Translation (SMT), only the ones which belong to the same or similar domains
with the original corpus can indeed enhance SMT performance directly. Most of
the existing adaptation methods focus on sentence selection. In comparison,
phrase is a smaller and more fine grained unit for data selection, therefore we
propose a straightforward and efficient connecting phrase based adaptation
method, which is applied to both bilingual phrase pair and monolingual n-gram
adaptation. The proposed method is evaluated on IWSLT/NIST data sets, and the
results show that phrase based SMT performance are significantly improved (up
to +1.6 in comparison with phrase based SMT baseline system and +0.9 in
comparison with existing methods).
| 2,016 | Computation and Language |
Cognitive Science in the era of Artificial Intelligence: A roadmap for
reverse-engineering the infant language-learner | During their first years of life, infants learn the language(s) of their
environment at an amazing speed despite large cross cultural variations in
amount and complexity of the available language input. Understanding this
simple fact still escapes current cognitive and linguistic theories. Recently,
spectacular progress in the engineering science, notably, machine learning and
wearable technology, offer the promise of revolutionizing the study of
cognitive development. Machine learning offers powerful learning algorithms
that can achieve human-like performance on many linguistic tasks. Wearable
sensors can capture vast amounts of data, which enable the reconstruction of
the sensory experience of infants in their natural environment. The project of
'reverse engineering' language development, i.e., of building an effective
system that mimics infant's achievements appears therefore to be within reach.
Here, we analyze the conditions under which such a project can contribute to
our scientific understanding of early language development. We argue that
instead of defining a sub-problem or simplifying the data, computational models
should address the full complexity of the learning situation, and take as input
the raw sensory signals available to infants. This implies that (1) accessible
but privacy-preserving repositories of home data be setup and widely shared,
and (2) models be evaluated at different linguistic levels through a benchmark
of psycholinguist tests that can be passed by machines and humans alike, (3)
linguistically and psychologically plausible learning architectures be scaled
up to real data using probabilistic/optimization principles from machine
learning. We discuss the feasibility of this approach and present preliminary
results.
| 2,018 | Computation and Language |
Cseq2seq: Cyclic Sequence-to-Sequence Learning | The vanilla sequence-to-sequence learning (seq2seq) reads and encodes a
source sequence into a fixed-length vector only once, suffering from its
insufficiency in modeling structural correspondence between the source and
target sequence. Instead of handling this insufficiency with a linearly
weighted attention mechanism, in this paper, we propose to use a recurrent
neural network (RNN) as an alternative (Cseq2seq-I). During decoding,
Cseq2seq-I cyclically feeds the previous decoding state back to the encoder as
the initial state of the RNN, and reencodes source representations to produce
context vectors. We surprisingly find that the introduced RNN succeeds in
dynamically detecting translationrelated source tokens according to the partial
target sequence. Based on this finding, we further hypothesize that the partial
target sequence can act as a feedback to improve the understanding of the
source sequence. To test this hypothesis, we propose cyclic
sequence-to-sequence learning (Cseq2seq-II) which differs from the seq2seq only
in the reintroduction of previous decoding state into the same encoder. We
further perform parameter sharing on Cseq2seq-II to reduce parameter redundancy
and enhance regularization. In particular, we share the weights of the encoder
and decoder, and two targetside word embeddings, making Cseq2seq-II equivalent
to a single conditional RNN model, with 31% parameters pruned but even better
performance. Cseq2seq-II not only preserves the simplicity of seq2seq but also
yields comparable and promising results on machine translation tasks.
Experiments on Chinese- English and English-German translation show that
Cseq2seq achieves significant and consistent improvements over seq2seq and is
as competitive as the attention-based seq2seq model.
| 2,018 | Computation and Language |
The DLVHEX System for Knowledge Representation: Recent Advances (System
Description) | The DLVHEX system implements the HEX-semantics, which integrates answer set
programming (ASP) with arbitrary external sources. Since its first release ten
years ago, significant advancements were achieved. Most importantly, the
exploitation of properties of external sources led to efficiency improvements
and flexibility enhancements of the language, and technical improvements on the
system side increased user's convenience. In this paper, we present the current
status of the system and point out the most important recent enhancements over
early versions. While existing literature focuses on theoretical aspects and
specific components, a bird's eye view of the overall system is missing. In
order to promote the system for real-world applications, we further present
applications which were already successfully realized on top of DLVHEX. This
paper is under consideration for acceptance in Theory and Practice of Logic
Programming.
| 2,016 | Computation and Language |
Authorship Verification - An Approach based on Random Forest | Authorship attribution, being an important problem in many areas in-cluding
information retrieval, computational linguistics, law and journalism etc., has
been identified as a subject of increasingly research interest in the re-cent
years. In case of Author Identification task in PAN at CLEF 2015, the main
focus was given on cross-genre and cross-topic author verification tasks. We
have used several word-based and style-based features to identify the
dif-ferences between the known and unknown problems of one given set and label
the unknown ones accordingly using a Random Forest based classifier.
| 2,016 | Computation and Language |
Supervised Attentions for Neural Machine Translation | In this paper, we improve the attention or alignment accuracy of neural
machine translation by utilizing the alignments of training sentence pairs. We
simply compute the distance between the machine attentions and the "true"
alignments, and minimize this cost in the training procedure. Our experiments
on large-scale Chinese-to-English task show that our model improves both
translation and alignment qualities significantly over the large-vocabulary
neural machine translation system, and even beats a state-of-the-art
traditional syntax-based system.
| 2,016 | Computation and Language |
Left-corner Methods for Syntactic Modeling with Universal Structural
Constraints | The primary goal in this thesis is to identify better syntactic constraint or
bias, that is language independent but also efficiently exploitable during
sentence processing. We focus on a particular syntactic construction called
center-embedding, which is well studied in psycholinguistics and noted to cause
particular difficulty for comprehension. Since people use language as a tool
for communication, one expects such complex constructions to be avoided for
communication efficiency. From a computational perspective, center-embedding is
closely relevant to a left-corner parsing algorithm, which can capture the
degree of center-embedding of a parse tree being constructed. This connection
suggests left-corner methods can be a tool to exploit the universal syntactic
constraint that people avoid generating center-embedded structures. We explore
such utilities of center-embedding as well as left-corner methods extensively
through several theoretical and empirical examinations.
Our primary task is unsupervised grammar induction. In this task, the input
to the algorithm is a collection of sentences, from which the model tries to
extract the salient patterns on them as a grammar. This is a particularly hard
problem although we expect the universal constraint may help in improving the
performance since it can effectively restrict the possible search space for the
model. We build the model by extending the left-corner parsing algorithm for
efficiently tabulating the search space except those involving center-embedding
up to a specific degree. We examine the effectiveness of our approach on many
treebanks, and demonstrate that often our constraint leads to better parsing
performance. We thus conclude that left-corner methods are particularly useful
for syntax-oriented systems, as it can exploit efficiently the inherent
universal constraints in languages.
| 2,016 | Computation and Language |
A Neural Knowledge Language Model | Current language models have a significant limitation in the ability to
encode and decode factual knowledge. This is mainly because they acquire such
knowledge from statistical co-occurrences although most of the knowledge words
are rarely observed. In this paper, we propose a Neural Knowledge Language
Model (NKLM) which combines symbolic knowledge provided by the knowledge graph
with the RNN language model. By predicting whether the word to generate has an
underlying fact or not, the model can generate such knowledge-related words by
copying from the description of the predicted fact. In experiments, we show
that the NKLM significantly improves the performance while generating a much
smaller number of unknown words.
| 2,017 | Computation and Language |
Keyphrase Extraction using Sequential Labeling | Keyphrases efficiently summarize a document's content and are used in various
document processing and retrieval tasks. Several unsupervised techniques and
classifiers exist for extracting keyphrases from text documents. Most of these
methods operate at a phrase-level and rely on part-of-speech (POS) filters for
candidate phrase generation. In addition, they do not directly handle
keyphrases of varying lengths. We overcome these modeling shortcomings by
addressing keyphrase extraction as a sequential labeling task in this paper. We
explore a basic set of features commonly used in NLP tasks as well as
predictions from various unsupervised methods to train our taggers. In addition
to a more natural modeling for the keyphrase extraction problem, we show that
tagging models yield significant performance benefits over existing
state-of-the-art extraction methods.
| 2,016 | Computation and Language |
Crowd-sourcing NLG Data: Pictures Elicit Better Data | Recent advances in corpus-based Natural Language Generation (NLG) hold the
promise of being easily portable across domains, but require costly training
data, consisting of meaning representations (MRs) paired with Natural Language
(NL) utterances. In this work, we propose a novel framework for crowdsourcing
high quality NLG training data, using automatic quality control measures and
evaluating different MRs with which to elicit data. We show that pictorial MRs
result in better NL data being collected than logic-based MRs: utterances
elicited by pictorial MRs are judged as significantly more natural, more
informative, and better phrased, with a significant increase in average quality
ratings (around 0.5 points on a 6-point scale), compared to using the logical
MRs. As the MR becomes more complex, the benefits of pictorial stimuli
increase. The collected data will be released as part of this submission.
| 2,016 | Computation and Language |
Learning Semantically Coherent and Reusable Kernels in Convolution
Neural Nets for Sentence Classification | The state-of-the-art CNN models give good performance on sentence
classification tasks. The purpose of this work is to empirically study
desirable properties such as semantic coherence, attention mechanism and
reusability of CNNs in these tasks. Semantically coherent kernels are
preferable as they are a lot more interpretable for explaining the decision of
the learned CNN model. We observe that the learned kernels do not have semantic
coherence. Motivated by this observation, we propose to learn kernels with
semantic coherence using clustering scheme combined with Word2Vec
representation and domain knowledge such as SentiWordNet. We suggest a
technique to visualize attention mechanism of CNNs for decision explanation
purpose. Reusable property enables kernels learned on one problem to be used in
another problem. This helps in efficient learning as only a few additional
domain specific filters may have to be learned. We demonstrate the efficacy of
our core ideas of learning semantically coherent kernels and leveraging
reusable kernels for efficient learning on several benchmark datasets.
Experimental results show the usefulness of our approach by achieving
performance close to the state-of-the-art methods but with semantic and
reusable properties.
| 2,016 | Computation and Language |
Labeling Topics with Images using Neural Networks | Topics generated by topic models are usually represented by lists of $t$
terms or alternatively using short phrases and images. The current
state-of-the-art work on labeling topics using images selects images by
re-ranking a small set of candidates for a given topic. In this paper, we
present a more generic method that can estimate the degree of association
between any arbitrary pair of an unseen topic and image using a deep neural
network. Our method has better runtime performance $O(n)$ compared to $O(n^2)$
for the current state-of-the-art method, and is also significantly more
accurate.
| 2,017 | Computation and Language |
Blind phoneme segmentation with temporal prediction errors | Phonemic segmentation of speech is a critical step of speech recognition
systems. We propose a novel unsupervised algorithm based on sequence prediction
models such as Markov chains and recurrent neural network. Our approach
consists in analyzing the error profile of a model trained to predict speech
features frame-by-frame. Specifically, we try to learn the dynamics of speech
in the MFCC space and hypothesize boundaries from local maxima in the
prediction error. We evaluate our system on the TIMIT dataset, with
improvements over similar methods.
| 2,017 | Computation and Language |
Structured prediction models for RNN based sequence labeling in clinical
text | Sequence labeling is a widely used method for named entity recognition and
information extraction from unstructured natural language data. In clinical
domain one major application of sequence labeling involves extraction of
medical entities such as medication, indication, and side-effects from
Electronic Health Record narratives. Sequence labeling in this domain, presents
its own set of challenges and objectives. In this work we experimented with
various CRF based structured learning models with Recurrent Neural Networks. We
extend the previously studied LSTM-CRF models with explicit modeling of
pairwise potentials. We also propose an approximate version of skip-chain CRF
inference with RNN potentials. We use these methodologies for structured
prediction in order to improve the exact phrase detection of various medical
entities.
| 2,016 | Computation and Language |
New word analogy corpus for exploring embeddings of Czech words | The word embedding methods have been proven to be very useful in many tasks
of NLP (Natural Language Processing). Much has been investigated about word
embeddings of English words and phrases, but only little attention has been
dedicated to other languages.
Our goal in this paper is to explore the behavior of state-of-the-art word
embedding methods on Czech, the language that is characterized by very rich
morphology. We introduce new corpus for word analogy task that inspects
syntactic, morphosyntactic and semantic properties of Czech words and phrases.
We experiment with Word2Vec and GloVe algorithms and discuss the results on
this corpus. The corpus is available for the research community.
| 2,016 | Computation and Language |
Semantic Representations of Word Senses and Concepts | Representing the semantics of linguistic items in a machine-interpretable
form has been a major goal of Natural Language Processing since its earliest
days. Among the range of different linguistic items, words have attracted the
most research attention. However, word representations have an important
limitation: they conflate different meanings of a word into a single vector.
Representations of word senses have the potential to overcome this inherent
limitation. Indeed, the representation of individual word senses and concepts
has recently gained in popularity with several experimental results showing
that a considerable performance improvement can be achieved across different
NLP applications upon moving from word level to the deeper sense and concept
levels. Another interesting point regarding the representation of concepts and
word senses is that these models can be seamlessly applied to other linguistic
items, such as words, phrases and sentences.
| 2,016 | Computation and Language |
SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity | Verbs play a critical role in the meaning of sentences, but these ubiquitous
words have received little attention in recent distributional semantics
research. We introduce SimVerb-3500, an evaluation resource that provides human
ratings for the similarity of 3,500 verb pairs. SimVerb-3500 covers all normed
verb types from the USF free-association database, providing at least three
examples for every VerbNet class. This broad coverage facilitates detailed
analyses of how syntactic and semantic phenomena together influence human
understanding of verb meaning. Further, with significantly larger development
and test sets than existing benchmarks, SimVerb-3500 enables more robust
evaluation of representation learning architectures and promotes the
development of methods tailored to verbs. We hope that SimVerb-3500 will enable
a richer understanding of the diversity and complexity of verb semantics and
guide the development of systems that can effectively represent and interpret
this meaning.
| 2,016 | Computation and Language |
Knowledge Distillation for Small-footprint Highway Networks | Deep learning has significantly advanced state-of-the-art of speech
recognition in the past few years. However, compared to conventional Gaussian
mixture acoustic models, neural network models are usually much larger, and are
therefore not very deployable in embedded devices. Previously, we investigated
a compact highway deep neural network (HDNN) for acoustic modelling, which is a
type of depth-gated feedforward neural network. We have shown that HDNN-based
acoustic models can achieve comparable recognition accuracy with much smaller
number of model parameters compared to plain deep neural network (DNN) acoustic
models. In this paper, we push the boundary further by leveraging on the
knowledge distillation technique that is also known as {\it teacher-student}
training, i.e., we train the compact HDNN model with the supervision of a high
accuracy cumbersome model. Furthermore, we also investigate sequence training
and adaptation in the context of teacher-student training. Our experiments were
performed on the AMI meeting speech recognition corpus. With this technique, we
significantly improved the recognition accuracy of the HDNN acoustic model with
less than 0.8 million parameters, and narrowed the gap between this model and
the plain DNN with 30 million parameters.
| 2,016 | Computation and Language |
Efficient Segmental Cascades for Speech Recognition | Discriminative segmental models offer a way to incorporate flexible feature
functions into speech recognition. However, their appeal has been limited by
their computational requirements, due to the large number of possible segments
to consider. Multi-pass cascades of segmental models introduce features of
increasing complexity in different passes, where in each pass a segmental model
rescores lattices produced by a previous (simpler) segmental model. In this
paper, we explore several ways of making segmental cascades efficient and
practical: reducing the feature set in the first pass, frame subsampling, and
various pruning approaches. In experiments on phonetic recognition, we find
that with a combination of such techniques, it is possible to maintain
competitive performance while greatly reducing decoding, pruning, and training
time.
| 2,016 | Computation and Language |
Proceedings of the 2016 Workshop on Semantic Spaces at the Intersection
of NLP, Physics and Cognitive Science | This volume contains the Proceedings of the 2016 Workshop on Semantic Spaces
at the Intersection of NLP, Physics and Cognitive Science (SLPCS 2016), which
was held on the 11th of June at the University of Strathclyde, Glasgow, and was
co-located with Quantum Physics and Logic (QPL 2016). Exploiting the common
ground provided by the concept of a vector space, the workshop brought together
researchers working at the intersection of Natural Language Processing (NLP),
cognitive science, and physics, offering them an appropriate forum for
presenting their uniquely motivated work and ideas. The interplay between these
three disciplines inspired theoretically motivated approaches to the
understanding of how word meanings interact with each other in sentences and
discourse, how diagrammatic reasoning depicts and simplifies this interaction,
how language models are determined by input from the world, and how word and
sentence meanings interact logically. This first edition of the workshop
consisted of three invited talks from distinguished speakers (Hans Briegel,
Peter G\"ardenfors, Dominic Widdows) and eight presentations of selected
contributed papers. Each submission was refereed by at least three members of
the Programme Committee, who delivered detailed and insightful comments and
suggestions.
| 2,016 | Computation and Language |
Morphological Priors for Probabilistic Neural Word Embeddings | Word embeddings allow natural language processing systems to share
statistical information across related words. These embeddings are typically
based on distributional statistics, making it difficult for them to generalize
to rare or unseen words. We propose to improve word embeddings by incorporating
morphological information, capturing shared sub-word features. Unlike previous
work that constructs word embeddings directly from morphemes, we combine
morphological and distributional information in a unified probabilistic
framework, in which the word embedding is a latent variable. The morphological
information provides a prior distribution on the latent word embeddings, which
in turn condition a likelihood function over an observed corpus. This approach
yields improvements on intrinsic word similarity evaluations, and also in the
downstream task of part-of-speech tagging.
| 2,016 | Computation and Language |
To Swap or Not to Swap? Exploiting Dependency Word Pairs for Reordering
in Statistical Machine Translation | Reordering poses a major challenge in machine translation (MT) between two
languages with significant differences in word order. In this paper, we present
a novel reordering approach utilizing sparse features based on dependency word
pairs. Each instance of these features captures whether two words, which are
related by a dependency link in the source sentence dependency parse tree,
follow the same order or are swapped in the translation output. Experiments on
Chinese-to-English translation show a statistically significant improvement of
1.21 BLEU point using our approach, compared to a state-of-the-art statistical
MT system that incorporates prior reordering approaches.
| 2,016 | Computation and Language |
Improving Quality of Hierarchical Clustering for Large Data Series | Brown clustering is a hard, hierarchical, bottom-up clustering of words in a
vocabulary. Words are assigned to clusters based on their usage pattern in a
given corpus. The resulting clusters and hierarchical structure can be used in
constructing class-based language models and for generating features to be used
in NLP tasks. Because of its high computational cost, the most-used version of
Brown clustering is a greedy algorithm that uses a window to restrict its
search space. Like other clustering algorithms, Brown clustering finds a
sub-optimal, but nonetheless effective, mapping of words to clusters. Because
of its ability to produce high-quality, human-understandable cluster, Brown
clustering has seen high uptake the NLP research community where it is used in
the preprocessing and feature generation steps.
Little research has been done towards improving the quality of Brown
clusters, despite the greedy and heuristic nature of the algorithm. The
approaches tried so far have focused on: studying the effect of the
initialisation in a similar algorithm; tuning the parameters used to define the
desired number of clusters and the behaviour of the algorithm; and including a
separate parameter to differentiate the window from the desired number of
clusters. However, some of these approaches have not yielded significant
improvements in cluster quality.
In this thesis, a close analysis of the Brown algorithm is provided,
revealing important under-specifications and weaknesses in the original
algorithm. These have serious effects on cluster quality and reproducibility of
research using Brown clustering. In the second part of the thesis, two
modifications are proposed. Finally, a thorough evaluation is performed,
considering both the optimization criterion of Brown clustering and the
performance of the resulting class-based language models.
| 2,016 | Computation and Language |
A Physical Metaphor to Study Semantic Drift | In accessibility tests for digital preservation, over time we experience
drifts of localized and labelled content in statistical models of evolving
semantics represented as a vector field. This articulates the need to detect,
measure, interpret and model outcomes of knowledge dynamics. To this end we
employ a high-performance machine learning algorithm for the training of
extremely large emergent self-organizing maps for exploratory data analysis.
The working hypothesis we present here is that the dynamics of semantic drifts
can be modeled on a relaxed version of Newtonian mechanics called social
mechanics. By using term distances as a measure of semantic relatedness vs.
their PageRank values indicating social importance and applied as variable
`term mass', gravitation as a metaphor to express changes in the semantic
content of a vector field lends a new perspective for experimentation. From
`term gravitation' over time, one can compute its generating potential whose
fluctuations manifest modifications in pairwise term similarity vs. social
importance, thereby updating Osgood's semantic differential. The dataset
examined is the public catalog metadata of Tate Galleries, London.
| 2,016 | Computation and Language |
Dual Density Operators and Natural Language Meaning | Density operators allow for representing ambiguity about a vector
representation, both in quantum theory and in distributional natural language
meaning. Formally equivalently, they allow for discarding part of the
description of a composite system, where we consider the discarded part to be
the context. We introduce dual density operators, which allow for two
independent notions of context. We demonstrate the use of dual density
operators within a grammatical-compositional distributional framework for
natural language meaning. We show that dual density operators can be used to
simultaneously represent: (i) ambiguity about word meanings (e.g. queen as a
person vs. queen as a band), and (ii) lexical entailment (e.g. tiger ->
mammal). We provide a proof-of-concept example.
| 2,016 | Computation and Language |
Words, Concepts, and the Geometry of Analogy | This paper presents a geometric approach to the problem of modelling the
relationship between words and concepts, focusing in particular on analogical
phenomena in language and cognition. Grounded in recent theories regarding
geometric conceptual spaces, we begin with an analysis of existing static
distributional semantic models and move on to an exploration of a dynamic
approach to using high dimensional spaces of word meaning to project subspaces
where analogies can potentially be solved in an online, contextualised way. The
crucial element of this analysis is the positioning of statistics in a
geometric environment replete with opportunities for interpretation.
| 2,016 | Computation and Language |
Quantifier Scope in Categorical Compositional Distributional Semantics | In previous work with J. Hedges, we formalised a generalised quantifiers
theory of natural language in categorical compositional distributional
semantics with the help of bialgebras. In this paper, we show how quantifier
scope ambiguity can be represented in that setting and how this representation
can be generalised to branching quantifiers.
| 2,016 | Computation and Language |
Entailment Relations on Distributions | In this paper we give an overview of partial orders on the space of
probability distributions that carry a notion of information content and serve
as a generalisation of the Bayesian order given in (Coecke and Martin, 2011).
We investigate what constraints are necessary in order to get a unique notion
of information content. These partial orders can be used to give an ordering on
words in vector space models of natural language meaning relating to the
contexts in which words are used, which is useful for a notion of entailment
and word disambiguation. The construction used also points towards a way to
create orderings on the space of density operators which allow a more
fine-grained study of entailment. The partial orders in this paper are directed
complete and form domains in the sense of domain theory.
| 2,016 | Computation and Language |
Quantum Algorithms for Compositional Natural Language Processing | We propose a new application of quantum computing to the field of natural
language processing. Ongoing work in this field attempts to incorporate
grammatical structure into algorithms that compute meaning. In (Coecke,
Sadrzadeh and Clark, 2010), the authors introduce such a model (the CSC model)
based on tensor product composition. While this algorithm has many advantages,
its implementation is hampered by the large classical computational resources
that it requires. In this work we show how computational shortcomings of the
CSC approach could be resolved using quantum computation (possibly in addition
to existing techniques for dimension reduction). We address the value of
quantum RAM (Giovannetti,2008) for this model and extend an algorithm from
Wiebe, Braun and Lloyd (2012) into a quantum algorithm to categorize sentences
in CSC. Our new algorithm demonstrates a quadratic speedup over classical
methods under certain conditions.
| 2,016 | Computation and Language |
Solving General Arithmetic Word Problems | This paper presents a novel approach to automatically solving arithmetic word
problems. This is the first algorithmic approach that can handle arithmetic
problems with multiple steps and operations, without depending on additional
annotations or predefined templates. We develop a theory for expression trees
that can be used to represent and evaluate the target arithmetic expressions;
we use it to uniquely decompose the target arithmetic problem to multiple
classification problems; we then compose an expression tree, combining these
with world knowledge through a constrained inference framework. Our classifiers
gain from the use of {\em quantity schemas} that supports better extraction of
features. Experimental results show that our method outperforms existing
systems, achieving state of the art performance on benchmark datasets of
arithmetic word problems.
| 2,016 | Computation and Language |
Word Segmentation on Micro-blog Texts with External Lexicon and
Heterogeneous Data | This paper describes our system designed for the NLPCC 2016 shared task on
word segmentation on micro-blog texts.
| 2,016 | Computation and Language |
UsingWord Embeddings for Query Translation for Hindi to English Cross
Language Information Retrieval | Cross-Language Information Retrieval (CLIR) has become an important problem
to solve in the recent years due to the growth of content in multiple languages
in the Web. One of the standard methods is to use query translation from source
to target language. In this paper, we propose an approach based on word
embeddings, a method that captures contextual clues for a particular word in
the source language and gives those words as translations that occur in a
similar context in the target language. Once we obtain the word embeddings of
the source and target language pairs, we learn a projection from source to
target word embeddings, making use of a dictionary with word translation
pairs.We then propose various methods of query translation and aggregation. The
advantage of this approach is that it does not require the corpora to be
aligned (which is difficult to obtain for resource-scarce languages), a
dictionary with word translation pairs is enough to train the word vectors for
translation. We experiment with Forum for Information Retrieval and Evaluation
(FIRE) 2008 and 2012 datasets for Hindi to English CLIR. The proposed word
embedding based approach outperforms the basic dictionary based approach by 70%
and when the word embeddings are combined with the dictionary, the hybrid
approach beats the baseline dictionary based method by 77%. It outperforms the
English monolingual baseline by 15%, when combined with the translations
obtained from Google Translate and Dictionary.
| 2,016 | Computation and Language |
Resolving Out-of-Vocabulary Words with Bilingual Embeddings in Machine
Translation | Out-of-vocabulary words account for a large proportion of errors in machine
translation systems, especially when the system is used on a different domain
than the one where it was trained. In order to alleviate the problem, we
propose to use a log-bilinear softmax-based model for vocabulary expansion,
such that given an out-of-vocabulary source word, the model generates a
probabilistic list of possible translations in the target language. Our model
uses only word embeddings trained on significantly large unlabelled monolingual
corpora and trains over a fairly small, word-to-word bilingual dictionary. We
input this probabilistic list into a standard phrase-based statistical machine
translation system and obtain consistent improvements in translation quality on
the English-Spanish language pair. Especially, we get an improvement of 3.9
BLEU points when tested over an out-of-domain test set.
| 2,016 | Computation and Language |
De-Conflated Semantic Representations | One major deficiency of most semantic representation techniques is that they
usually model a word type as a single point in the semantic space, hence
conflating all the meanings that the word can have. Addressing this issue by
learning distinct representations for individual meanings of words has been the
subject of several research studies in the past few years. However, the
generated sense representations are either not linked to any sense inventory or
are unreliable for infrequent word senses. We propose a technique that tackles
these problems by de-conflating the representations of words based on the deep
knowledge it derives from a semantic network. Our approach provides multiple
advantages in comparison to the past work, including its high coverage and the
ability to generate accurate representations even for infrequent word senses.
We carry out evaluations on six datasets across two semantic similarity tasks
and report state-of-the-art results on most of them.
| 2,016 | Computation and Language |
Text authorship identified using the dynamics of word co-occurrence
networks | The identification of authorship in disputed documents still requires human
expertise, which is now unfeasible for many tasks owing to the large volumes of
text and authors in practical applications. In this study, we introduce a
methodology based on the dynamics of word co-occurrence networks representing
written texts to classify a corpus of 80 texts by 8 authors. The texts were
divided into sections with equal number of linguistic tokens, from which time
series were created for 12 topological metrics. The series were proven to be
stationary (p-value>0.05), which permits to use distribution moments as
learning attributes. With an optimized supervised learning procedure using a
Radial Basis Function Network, 68 out of 80 texts were correctly classified,
i.e. a remarkable 85% author matching success rate. Therefore, fluctuations in
purely dynamic network metrics were found to characterize authorship, thus
opening the way for the description of texts in terms of small evolving
networks. Moreover, the approach introduced allows for comparison of texts with
diverse characteristics in a simple, fast fashion.
| 2,017 | Computation and Language |
Bridging the Gap: Incorporating a Semantic Similarity Measure for
Effectively Mapping PubMed Queries to Documents | The main approach of traditional information retrieval (IR) is to examine how
many words from a query appear in a document. A drawback of this approach,
however, is that it may fail to detect relevant documents where no or only few
words from a query are found. The semantic analysis methods such as LSA (latent
semantic analysis) and LDA (latent Dirichlet allocation) have been proposed to
address the issue, but their performance is not superior compared to common IR
approaches. Here we present a query-document similarity measure motivated by
the Word Mover's Distance. Unlike other similarity measures, the proposed
method relies on neural word embeddings to compute the distance between words.
This process helps identify related words when no direct matches are found
between a query and a document. Our method is efficient and straightforward to
implement. The experimental results on TREC Genomics data show that our
approach outperforms the BM25 ranking function by an average of 12% in mean
average precision. Furthermore, for a real-world dataset collected from the
PubMed search logs, we combine the semantic measure with BM25 using a learning
to rank method, which leads to improved ranking scores by up to 25%. This
experiment demonstrates that the proposed approach and BM25 nicely complement
each other and together produce superior performance.
| 2,017 | Computation and Language |
Boundary-based MWE segmentation with text partitioning | This work presents a fine-grained, text-chunking algorithm designed for the
task of multiword expressions (MWEs) segmentation. As a lexical class, MWEs
include a wide variety of idioms, whose automatic identification are a
necessity for the handling of colloquial language. This algorithm's core
novelty is its use of non-word tokens, i.e., boundaries, in a bottom-up
strategy. Leveraging boundaries refines token-level information, forging
high-level performance from relatively basic data. The generality of this
model's feature space allows for its application across languages and domains.
Experiments spanning 19 different languages exhibit a broadly-applicable,
state-of-the-art model. Evaluation against recent shared-task data places text
partitioning as the overall, best performing MWE segmentation algorithm,
covering all MWE classes and multiple English domains (including user-generated
text). This performance, coupled with a non-combinatorial, fast-running design,
produces an ideal combination for implementations at scale, which are
facilitated through the release of open-source software.
| 2,017 | Computation and Language |
Bi-directional Attention with Agreement for Dependency Parsing | We develop a novel bi-directional attention model for dependency parsing,
which learns to agree on headword predictions from the forward and backward
parsing directions. The parsing procedure for each direction is formulated as
sequentially querying the memory component that stores continuous headword
embeddings. The proposed parser makes use of {\it soft} headword embeddings,
allowing the model to implicitly capture high-order parsing history without
dramatically increasing the computational complexity. We conduct experiments on
English, Chinese, and 12 other languages from the CoNLL 2006 shared task,
showing that the proposed model achieves state-of-the-art unlabeled attachment
scores on 6 languages.
| 2,016 | Computation and Language |
Desiderata for Vector-Space Word Representations | A plethora of vector-space representations for words is currently available,
which is growing. These consist of fixed-length vectors containing real values,
which represent a word. The result is a representation upon which the power of
many conventional information processing and data mining techniques can be
brought to bear, as long as the representations are designed with some
forethought and fit certain constraints. This paper details desiderata for the
design of vector space representations of words.
| 2,016 | Computation and Language |
Encoder-decoder with Focus-mechanism for Sequence Labelling Based Spoken
Language Understanding | This paper investigates the framework of encoder-decoder with attention for
sequence labelling based spoken language understanding. We introduce
Bidirectional Long Short Term Memory - Long Short Term Memory networks
(BLSTM-LSTM) as the encoder-decoder model to fully utilize the power of deep
learning. In the sequence labelling task, the input and output sequences are
aligned word by word, while the attention mechanism cannot provide the exact
alignment. To address this limitation, we propose a novel focus mechanism for
encoder-decoder framework. Experiments on the standard ATIS dataset showed that
BLSTM-LSTM with focus mechanism defined the new state-of-the-art by
outperforming standard BLSTM and attention based encoder-decoder. Further
experiments also show that the proposed model is more robust to speech
recognition errors.
| 2,017 | Computation and Language |
HyperLex: A Large-Scale Evaluation of Graded Lexical Entailment | We introduce HyperLex - a dataset and evaluation resource that quantifies the
extent of of the semantic category membership, that is, type-of relation also
known as hyponymy-hypernymy or lexical entailment (LE) relation between 2,616
concept pairs. Cognitive psychology research has established that typicality
and category/class membership are computed in human semantic memory as a
gradual rather than binary relation. Nevertheless, most NLP research, and
existing large-scale invetories of concept category membership (WordNet,
DBPedia, etc.) treat category membership and LE as binary. To address this, we
asked hundreds of native English speakers to indicate typicality and strength
of category membership between a diverse range of concept pairs on a
crowdsourcing platform. Our results confirm that category membership and LE are
indeed more gradual than binary. We then compare these human judgements with
the predictions of automatic systems, which reveals a huge gap between human
performance and state-of-the-art LE, distributional and representation learning
models, and substantial differences between the models themselves. We discuss a
pathway for improving semantic models to overcome this discrepancy, and
indicate future application areas for improved graded LE systems.
| 2,017 | Computation and Language |
OCR of historical printings with an application to building diachronic
corpora: A case study using the RIDGES herbal corpus | This article describes the results of a case study that applies Neural
Network-based Optical Character Recognition (OCR) to scanned images of books
printed between 1487 and 1870 by training the OCR engine OCRopus
[@breuel2013high] on the RIDGES herbal text corpus [@OdebrechtEtAlSubmitted].
Training specific OCR models was possible because the necessary *ground truth*
is available as error-corrected diplomatic transcriptions. The OCR results have
been evaluated for accuracy against the ground truth of unseen test sets.
Character and word accuracies (percentage of correctly recognized items) for
the resulting machine-readable texts of individual documents range from 94% to
more than 99% (character level) and from 76% to 97% (word level). This includes
the earliest printed books, which were thought to be inaccessible by OCR
methods until recently. Furthermore, OCR models trained on one part of the
corpus consisting of books with different printing dates and different typesets
*(mixed models)* have been tested for their predictive power on the books from
the other part containing yet other fonts, mostly yielding character accuracies
well above 90%. It therefore seems possible to construct generalized models
trained on a range of fonts that can be applied to a wide variety of historical
printings still giving good results. A moderate postcorrection effort of some
pages will then enable the training of individual models with even better
accuracies. Using this method, diachronic corpora including early printings can
be constructed much faster and cheaper than by manual transcription. The OCR
methods reported here open up the possibility of transforming our printed
textual cultural heritage into electronic text by largely automatic means,
which is a prerequisite for the mass conversion of scanned books.
| 2,017 | Computation and Language |
Robsut Wrod Reocginiton via semi-Character Recurrent Neural Network | Language processing mechanism by humans is generally more robust than
computers. The Cmabrigde Uinervtisy (Cambridge University) effect from the
psycholinguistics literature has demonstrated such a robust word processing
mechanism, where jumbled words (e.g. Cmabrigde / Cambridge) are recognized with
little cost. On the other hand, computational models for word recognition (e.g.
spelling checkers) perform poorly on data with such noise. Inspired by the
findings from the Cmabrigde Uinervtisy effect, we propose a word recognition
model based on a semi-character level recurrent neural network (scRNN). In our
experiments, we demonstrate that scRNN has significantly more robust
performance in word spelling correction (i.e. word recognition) compared to
existing spelling checkers and character-based convolutional neural network.
Furthermore, we demonstrate that the model is cognitively plausible by
replicating a psycholinguistics experiment about human reading difficulty using
our model.
| 2,017 | Computation and Language |
Multi-task Domain Adaptation for Sequence Tagging | Many domain adaptation approaches rely on learning cross domain shared
representations to transfer the knowledge learned in one domain to other
domains. Traditional domain adaptation only considers adapting for one task. In
this paper, we explore multi-task representation learning under the domain
adaptation scenario. We propose a neural network framework that supports domain
adaptation for multiple tasks simultaneously, and learns shared representations
that better generalize for domain adaptation. We apply the proposed framework
to domain adaptation for sequence tagging problems considering two tasks:
Chinese word segmentation and named entity recognition. Experiments show that
multi-task domain adaptation works better than disjoint domain adaptation for
each task, and achieves the state-of-the-art results for both tasks in the
social media domain.
| 2,017 | Computation and Language |
Canonical Correlation Inference for Mapping Abstract Scenes to Text | We describe a technique for structured prediction, based on canonical
correlation analysis. Our learning algorithm finds two projections for the
input and the output spaces that aim at projecting a given input and its
correct output into points close to each other. We demonstrate our technique on
a language-vision problem, namely the problem of giving a textual description
to an "abstract scene".
| 2,017 | Computation and Language |
The Language of Generalization | Language provides simple ways of communicating generalizable knowledge to
each other (e.g., "Birds fly", "John hikes", "Fire makes smoke"). Though found
in every language and emerging early in development, the language of
generalization is philosophically puzzling and has resisted precise
formalization. Here, we propose the first formal account of generalizations
conveyed with language that makes quantitative predictions about human
understanding. We test our model in three diverse domains: generalizations
about categories (generic language), events (habitual language), and causes
(causal language). The model explains the gradience in human endorsement
through the interplay between a simple truth-conditional semantic theory and
diverse beliefs about properties, formalized in a probabilistic model of
language understanding. This work opens the door to understanding precisely how
abstract knowledge is learned from language.
| 2,018 | Computation and Language |
Temporal Attention Model for Neural Machine Translation | Attention-based Neural Machine Translation (NMT) models suffer from attention
deficiency issues as has been observed in recent research. We propose a novel
mechanism to address some of these limitations and improve the NMT attention.
Specifically, our approach memorizes the alignments temporally (within each
sentence) and modulates the attention with the accumulated temporal memory, as
the decoder generates the candidate translation. We compare our approach
against the baseline NMT model and two other related approaches that address
this issue either explicitly or implicitly. Large-scale experiments on two
language pairs show that our approach achieves better and robust gains over the
baseline and related NMT approaches. Our model further outperforms strong SMT
baselines in some settings even without using ensembles.
| 2,016 | Computation and Language |
Towards cross-lingual distributed representations without parallel text
trained with adversarial autoencoders | Current approaches to learning vector representations of text that are
compatible between different languages usually require some amount of parallel
text, aligned at word, sentence or at least document level. We hypothesize
however, that different natural languages share enough semantic structure that
it should be possible, in principle, to learn compatible vector representations
just by analyzing the monolingual distribution of words.
In order to evaluate this hypothesis, we propose a scheme to map word vectors
trained on a source language to vectors semantically compatible with word
vectors trained on a target language using an adversarial autoencoder.
We present preliminary qualitative results and discuss possible future
developments of this technique, such as applications to cross-lingual sentence
representations.
| 2,016 | Computation and Language |
Neural Generation of Regular Expressions from Natural Language with
Minimal Domain Knowledge | This paper explores the task of translating natural language queries into
regular expressions which embody their meaning. In contrast to prior work, the
proposed neural model does not utilize domain-specific crafting, learning to
translate directly from a parallel corpus. To fully explore the potential of
neural models, we propose a methodology for collecting a large corpus of
regular expression, natural language pairs. Our resulting model achieves a
performance gain of 19.6% over previous state-of-the-art models.
| 2,016 | Computation and Language |
Hierarchical Character-Word Models for Language Identification | Social media messages' brevity and unconventional spelling pose a challenge
to language identification. We introduce a hierarchical model that learns
character and contextualized word-level representations for language
identification. Our method performs well against strong base- lines, and can
also reveal code-switching.
| 2,016 | Computation and Language |
An assessment of orthographic similarity measures for several African
languages | Natural Language Interfaces and tools such as spellcheckers and Web search in
one's own language are known to be useful in ICT-mediated communication. Most
languages in Southern Africa are under-resourced, however. Therefore, it would
be very useful if both the generic and the few language-specific NLP tools
could be reused or easily adapted across languages. This depends on the notion,
and extent, of similarity between the languages. We assess this from the angle
of orthography and corpora. Twelve versions of the Universal Declaration of
Human Rights (UDHR) are examined, showing clusters of languages, and which are
thus more or less amenable to cross-language adaptation of NLP tools, which do
not match with Guthrie zones. To examine the generalisability of these results,
we zoom in on isiZulu both quantitatively and qualitatively with four other
corpora and texts in different genres. The results show that the UDHR is a
typical text document orthographically. The results also provide insight into
usability of typical measures such as lexical diversity and genre, and that the
same statistic may mean different things in different documents. While NLTK for
Python could be used for basic analyses of text, it, and similar NLP tools,
will need considerable customization.
| 2,016 | Computation and Language |
Sex, drugs, and violence | Automatically detecting inappropriate content can be a difficult NLP task,
requiring understanding context and innuendo, not just identifying specific
keywords. Due to the large quantity of online user-generated content, automatic
detection is becoming increasingly necessary. We take a largely unsupervised
approach using a large corpus of narratives from a community-based
self-publishing website and a small segment of crowd-sourced annotations. We
explore topic modelling using latent Dirichlet allocation (and a variation),
and use these to regress appropriateness ratings, effectively automating rating
for suitability. The results suggest that certain topics inferred may be useful
in detecting latent inappropriateness -- yielding recall up to 96% and low
regression errors.
| 2,016 | Computation and Language |
WikiReading: A Novel Large-scale Language Understanding Task over
Wikipedia | We present WikiReading, a large-scale natural language understanding task and
publicly-available dataset with 18 million instances. The task is to predict
textual values from the structured knowledge base Wikidata by reading the text
of the corresponding Wikipedia articles. The task contains a rich variety of
challenging classification and extraction sub-tasks, making it well-suited for
end-to-end models such as deep neural networks (DNNs). We compare various
state-of-the-art DNN-based architectures for document classification,
information extraction, and question answering. We find that models supporting
a rich answer space, such as word or character sequences, perform best. Our
best-performing model, a word-level sequence to sequence model with a mechanism
to copy out-of-vocabulary words, obtains an accuracy of 71.8%.
| 2,016 | Computation and Language |
The statistical trade-off between word order and word structure -
large-scale evidence for the principle of least effort | Languages employ different strategies to transmit structural and grammatical
information. While, for example, grammatical dependency relationships in
sentences are mainly conveyed by the ordering of the words for languages like
Mandarin Chinese, or Vietnamese, the word ordering is much less restricted for
languages such as Inupiatun or Quechua, as those languages (also) use the
internal structure of words (e.g. inflectional morphology) to mark grammatical
relationships in a sentence. Based on a quantitative analysis of more than
1,500 unique translations of different books of the Bible in more than 1,100
different languages that are spoken as a native language by approximately 6
billion people (more than 80% of the world population), we present large-scale
evidence for a statistical trade-off between the amount of information conveyed
by the ordering of words and the amount of information conveyed by internal
word structure: languages that rely more strongly on word order information
tend to rely less on word structure information and vice versa. In addition, we
find that - despite differences in the way information is expressed - there is
also evidence for a trade-off between different books of the biblical canon
that recurs with little variation across languages: the more informative the
word order of the book, the less informative its word structure and vice versa.
We argue that this might suggest that, on the one hand, languages encode
information in very different (but efficient) ways. On the other hand,
content-related and stylistic features are statistically encoded in very
similar ways.
| 2,017 | Computation and Language |
Extracting Biological Pathway Models From NLP Event Representations | This paper describes an an open-source software system for the automatic
conversion of NLP event representations to system biology structured data
interchange formats such as SBML and BioPAX. It is part of a larger effort to
make results of the NLP community available for system biology pathway
modelers.
| 2,015 | Computation and Language |
Measuring the State of the Art of Automated Pathway Curation Using Graph
Algorithms - A Case Study of the mTOR Pathway | This paper evaluates the difference between human pathway curation and
current NLP systems. We propose graph analysis methods for quantifying the gap
between human curated pathway maps and the output of state-of-the-art automatic
NLP systems. Evaluation is performed on the popular mTOR pathway. Based on
analyzing where current systems perform well and where they fail, we identify
possible avenues for progress.
| 2,016 | Computation and Language |
Redefining part-of-speech classes with distributional semantic models | This paper studies how word embeddings trained on the British National Corpus
interact with part of speech boundaries. Our work targets the Universal PoS tag
set, which is currently actively being used for annotation of a range of
languages. We experiment with training classifiers for predicting PoS tags for
words based on their embeddings. The results show that the information about
PoS affiliation contained in the distributional vectors allows us to discover
groups of words with distributional patterns that differ from other words of
the same part of speech.
This data often reveals hidden inconsistencies of the annotation process or
guidelines. At the same time, it supports the notion of `soft' or `graded' part
of speech affiliations. Finally, we show that information about PoS is
distributed among dozens of vector components, not limited to only one or two
features.
| 2,016 | Computation and Language |
Rapid Classification of Crisis-Related Data on Social Networks using
Convolutional Neural Networks | The role of social media, in particular microblogging platforms such as
Twitter, as a conduit for actionable and tactical information during disasters
is increasingly acknowledged. However, time-critical analysis of big crisis
data on social media streams brings challenges to machine learning techniques,
especially the ones that use supervised learning. The Scarcity of labeled data,
particularly in the early hours of a crisis, delays the machine learning
process. The current state-of-the-art classification methods require a
significant amount of labeled data specific to a particular event for training
plus a lot of feature engineering to achieve best results. In this work, we
introduce neural network based classification methods for binary and
multi-class tweet classification task. We show that neural network based models
do not require any feature engineering and perform better than state-of-the-art
methods. In the early hours of a disaster when no labeled data is available,
our proposed method makes the best use of the out-of-event data and achieves
good results.
| 2,016 | Computation and Language |
Determining Health Utilities through Data Mining of Social Media | 'Health utilities' measure patient preferences for perfect health compared to
specific unhealthy states, such as asthma, a fractured hip, or colon cancer.
When integrated over time, these estimations are called quality adjusted life
years (QALYs). Until now, characterizing health utilities (HUs) required
detailed patient interviews or written surveys. While reliable and specific,
this data remained costly due to efforts to locate, enlist and coordinate
participants. Thus the scope, context and temporality of diseases examined has
remained limited.
Now that more than a billion people use social media, we propose a novel
strategy: use natural language processing to analyze public online
conversations for signals of the severity of medical conditions and correlate
these to known HUs using machine learning. In this work, we filter a dataset
that originally contained 2 billion tweets for relevant content on 60 diseases.
Using this data, our algorithm successfully distinguished mild from severe
diseases, which had previously been categorized only by traditional techniques.
This represents progress towards two related applications: first, predicting
HUs where such information is nonexistent; and second, (where rich HU data
already exists) estimating temporal or geographic patterns of disease severity
through data mining.
| 2,016 | Computation and Language |
An Analysis of Lemmatization on Topic Models of Morphologically Rich
Language | Topic models are typically represented by top-$m$ word lists for human
interpretation. The corpus is often pre-processed with lemmatization (or
stemming) so that those representations are not undermined by a proliferation
of words with similar meanings, but there is little public work on the effects
of that pre-processing. Recent work studied the effect of stemming on topic
models of English texts and found no supporting evidence for the practice. We
study the effect of lemmatization on topic models of Russian Wikipedia
articles, finding in one configuration that it significantly improves
interpretability according to a word intrusion metric. We conclude that
lemmatization may benefit topic models on morphologically rich languages, but
that further investigation is needed.
| 2,019 | Computation and Language |
Viewpoint and Topic Modeling of Current Events | There are multiple sides to every story, and while statistical topic models
have been highly successful at topically summarizing the stories in corpora of
text documents, they do not explicitly address the issue of learning the
different sides, the viewpoints, expressed in the documents. In this paper, we
show how these viewpoints can be learned completely unsupervised and
represented in a human interpretable form. We use a novel approach of applying
CorrLDA2 for this purpose, which learns topic-viewpoint relations that can be
used to form groups of topics, where each group represents a viewpoint. A
corpus of documents about the Israeli-Palestinian conflict is then used to
demonstrate how a Palestinian and an Israeli viewpoint can be learned. By
leveraging the magnitudes and signs of the feature weights of a linear SVM, we
introduce a principled method to evaluate associations between topics and
viewpoints. With this, we demonstrate, both quantitatively and qualitatively,
that the learned topic groups are contextually coherent, and form consistently
correct topic-viewpoint associations.
| 2,016 | Computation and Language |
Numerically Grounded Language Models for Semantic Error Correction | Semantic error detection and correction is an important task for applications
such as fact checking, speech-to-text or grammatical error correction. Current
approaches generally focus on relatively shallow semantics and do not account
for numeric quantities. Our approach uses language models grounded in numbers
within the text. Such groundings are easily achieved for recurrent neural
language model architectures, which can be further conditioned on incomplete
background knowledge bases. Our evaluation on clinical reports shows that
numerical grounding improves perplexity by 33% and F1 for semantic error
correction by 5 points when compared to ungrounded approaches. Conditioning on
a knowledge base yields further improvements.
| 2,016 | Computation and Language |
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction
Tasks | There is a lot of research interest in encoding variable length sentences
into fixed length vectors, in a way that preserves the sentence meanings. Two
common methods include representations based on averaging word vectors, and
representations based on the hidden states of recurrent neural networks such as
LSTMs. The sentence vectors are used as features for subsequent machine
learning tasks or for pre-training in the context of deep learning. However,
not much is known about the properties that are encoded in these sentence
representations and about the language information they capture. We propose a
framework that facilitates better understanding of the encoded representations.
We define prediction tasks around isolated aspects of sentence structure
(namely sentence length, word content, and word order), and score
representations by the ability to train a classifier to solve each prediction
task when using the representation as input. We demonstrate the potential
contribution of the approach by analyzing different sentence representation
mechanisms. The analysis sheds light on the relative strengths of different
sentence embedding methods with respect to these low level prediction tasks,
and on the effect of the encoded vector's dimensionality on the resulting
representations.
| 2,017 | Computation and Language |
Natural Language Processing using Hadoop and KOSHIK | Natural language processing, as a data analytics related technology, is used
widely in many research areas such as artificial intelligence, human language
processing, and translation. At present, due to explosive growth of data, there
are many challenges for natural language processing. Hadoop is one of the
platforms that can process the large amount of data required for natural
language processing. KOSHIK is one of the natural language processing
architectures, and utilizes Hadoop and contains language processing components
such as Stanford CoreNLP and OpenNLP. This study describes how to build a
KOSHIK platform with the relevant tools, and provides the steps to analyze wiki
data. Finally, it evaluates and discusses the advantages and disadvantages of
the KOSHIK architecture, and gives recommendations on improving the processing
performance.
| 2,016 | Computation and Language |
Fast, Small and Exact: Infinite-order Language Modelling with Compressed
Suffix Trees | Efficient methods for storing and querying are critical for scaling
high-order n-gram language models to large corpora. We propose a language model
based on compressed suffix trees, a representation that is highly compact and
can be easily held in memory, while supporting queries needed in computing
language model probabilities on-the-fly. We present several optimisations which
improve query runtimes up to 2500x, despite only incurring a modest increase in
construction time and memory usage. For large corpora and high Markov orders,
our method is highly competitive with the state-of-the-art KenLM package. It
imposes much lower memory requirements, often by orders of magnitude, and has
runtimes that are either similar (for training) or comparable (for querying).
| 2,016 | Computation and Language |
Authorship clustering using multi-headed recurrent neural networks | A recurrent neural network that has been trained to separately model the
language of several documents by unknown authors is used to measure similarity
between the documents. It is able to find clues of common authorship even when
the documents are very short and about disparate topics. While it is easy to
make statistically significant predictions regarding authorship, it is
difficult to group documents into definite clusters with high accuracy.
| 2,016 | Computation and Language |
Neural versus Phrase-Based Machine Translation Quality: a Case Study | Within the field of Statistical Machine Translation (SMT), the neural
approach (NMT) has recently emerged as the first technology able to challenge
the long-standing dominance of phrase-based approaches (PBMT). In particular,
at the IWSLT 2015 evaluation campaign, NMT outperformed well established
state-of-the-art PBMT systems on English-German, a language pair known to be
particularly hard because of morphology and syntactic differences. To
understand in what respects NMT provides better translation quality than PBMT,
we perform a detailed analysis of neural versus phrase-based SMT outputs,
leveraging high quality post-edits performed by professional translators on the
IWSLT data. For the first time, our analysis provides useful insights on what
linguistic phenomena are best modeled by neural models -- such as the
reordering of verbs -- while pointing out other aspects that remain to be
improved.
| 2,016 | Computation and Language |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.