Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64, 1.99k-2.02k) | Categories (1 class)
---|---|---|---|
Spoken Language Translation for Polish | Spoken language translation (SLT) is becoming more important in the
increasingly globalized world, both from a social and economic point of view.
It is one of the major challenges for automatic speech recognition (ASR) and
machine translation (MT), driving intense research activities in these areas.
While past research in SLT, due to technology limitations, dealt mostly with
speech recorded under controlled conditions, today's major challenge is the
translation of spoken language as it can be found in real life. Considered
application scenarios range from portable translators for tourists, lectures
and presentations translation, to broadcast news and shows with live
captioning. We would like to present PJIIT's experience in SLT gained from
the EU-Bridge 7th Framework project and the U-Star consortium activities for
the Polish/English language pair. The presented research concentrates on ASR
adaptation for Polish (state-of-the-art acoustic models: DBN-BLSTM training,
Kaldi: LDA+MLLT+SAT+MMI), language modeling for ASR & MT (text normalization,
RNN-based LMs, n-gram model domain interpolation) and statistical translation
techniques (hierarchical models, factored translation models, automatic casing
and punctuation, comparable and bilingual corpora preparation). While results
for the well-defined domains (phrases for travelers, parliament speeches,
medical documentation, movie subtitling) are very encouraging, less defined
domains (presentations, lectures) still form a challenge. Our progress in the
IWSLT TED task (MT only) will be presented, as well as current progress in
Polish ASR.
| 2015 | Computation and Language |
Natural Language Understanding with Distributed Representation | This is a lecture note for the course DS-GA 3001 "Natural Language
Understanding with Distributed Representation" at the Center for Data Science,
New York University, in Fall 2015. As the name of the course suggests, this
lecture note introduces readers to a neural-network-based approach to natural
language understanding/processing. In order to make it as self-contained as
possible, I spend much time describing the basics of machine learning and neural
networks, and only afterwards introduce how they are used for natural language.
On the language front, I focus almost solely on language modelling
and machine translation, the two topics which I personally find most fascinating and
most fundamental to natural language understanding.
| 2015 | Computation and Language |
Towards Universal Paraphrastic Sentence Embeddings | We consider the problem of learning general-purpose, paraphrastic sentence
embeddings based on supervision from the Paraphrase Database (Ganitkevitch et
al., 2013). We compare six compositional architectures, evaluating them on
annotated textual similarity datasets drawn both from the same distribution as
the training data and from a wide range of other domains. We find that the most
complex architectures, such as long short-term memory (LSTM) recurrent neural
networks, perform best on the in-domain data. However, in out-of-domain
scenarios, simple architectures such as word averaging vastly outperform LSTMs.
Our simplest averaging model is even competitive with systems tuned for the
particular tasks while also being extremely efficient and easy to use.
In order to better understand how these architectures compare, we conduct
further experiments on three supervised NLP tasks: sentence similarity,
entailment, and sentiment classification. We again find that the word averaging
models perform well for sentence similarity and entailment, outperforming
LSTMs. However, on sentiment classification, we find that the LSTM performs
very strongly, even recording new state-of-the-art performance on the Stanford
Sentiment Treebank.
We then demonstrate how to combine our pretrained sentence embeddings with
these supervised tasks, using them both as a prior and as a black box feature
extractor. This leads to performance rivaling the state of the art on the SICK
similarity and entailment tasks. We release all of our resources to the
research community with the hope that they can serve as the new baseline for
further work on universal sentence embeddings.
| 2016 | Computation and Language |
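As a rough illustration of the word-averaging model described in the abstract above, the sketch below embeds a sentence as the mean of its word vectors and scores a pair of sentences with cosine similarity. The embedding table and tokenizer here are hypothetical placeholders, not the authors' released resources.

```python
import numpy as np

# Hypothetical 300-dimensional embedding table; in practice these vectors would
# be loaded from pretrained, paraphrase-tuned word embeddings.
rng = np.random.default_rng(0)
EMBEDDINGS = {w: rng.standard_normal(300) for w in ["a", "dog", "barks", "loudly"]}
DIM = 300

def average_embedding(sentence: str) -> np.ndarray:
    """Embed a sentence as the mean of its in-vocabulary word vectors."""
    vectors = [EMBEDDINGS[w] for w in sentence.lower().split() if w in EMBEDDINGS]
    if not vectors:                      # all words out of vocabulary
        return np.zeros(DIM)
    return np.mean(vectors, axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity, used to score sentence pairs for textual similarity."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

# Usage: similarity between two sentences under the averaging model.
print(cosine(average_embedding("a dog barks"), average_embedding("a dog barks loudly")))
```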
Named Entity Recognition with Bidirectional LSTM-CNNs | Named entity recognition is a challenging task that has traditionally
required large amounts of knowledge in the form of feature engineering and
lexicons to achieve high performance. In this paper, we present a novel neural
network architecture that automatically detects word- and character-level
features using a hybrid bidirectional LSTM and CNN architecture, eliminating
the need for most feature engineering. We also propose a novel method of
encoding partial lexicon matches in neural networks and compare it to existing
approaches. Extensive evaluation shows that, given only tokenized text and
publicly available word embeddings, our system is competitive on the CoNLL-2003
dataset and surpasses the previously reported state of the art performance on
the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed
from publicly-available sources, we establish new state of the art performance
with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing
systems that employ heavy feature engineering, proprietary lexicons, and rich
entity linking information.
| 2016 | Computation and Language |
The Mechanism of Additive Composition | Additive composition (Foltz et al., 1998; Landauer and Dumais, 1997; Mitchell
and Lapata, 2010) is a widely used method for computing meanings of phrases,
which takes the average of vector representations of the constituent words. In
this article, we prove an upper bound for the bias of additive composition,
which is the first theoretical analysis of compositional frameworks from a
machine learning point of view. The bound is written in terms of collocation
strength; we prove that the more exclusively two successive words tend to occur
together, the more accurately one can guarantee their additive composition as an
approximation to the natural phrase vector. Our proof relies on properties of
natural language data that are empirically verified, and can be theoretically
derived from an assumption that the data is generated from a Hierarchical
Pitman-Yor Process. The theory endorses additive composition as a reasonable
operation for calculating meanings of phrases, and suggests ways to improve
additive compositionality, including: transforming entries of distributional
word vectors by a function that meets a specific condition, constructing a
novel type of vector representations to make additive composition sensitive to
word order, and utilizing singular value decomposition to train word vectors.
| 2017 | Computation and Language |
OntoSeg: a Novel Approach to Text Segmentation using Ontological
Similarity | Text segmentation (TS) aims at dividing long text into coherent segments
which reflect the subtopic structure of the text. It is beneficial to many
natural language processing tasks, such as Information Retrieval (IR) and
document summarisation. Current approaches to text segmentation are similar in
that they all use word-frequency metrics to measure the similarity between two
regions of text, so that a document is segmented based on the lexical cohesion
between its words. Various NLP tasks are now moving towards the semantic web
and ontologies, such as ontology-based IR systems, to capture the
conceptualizations associated with user needs and contents. Text segmentation
based on lexical cohesion between words is hence not sufficient anymore for
such tasks. This paper proposes OntoSeg, a novel approach to text segmentation
based on the ontological similarity between text blocks. The proposed method
uses ontological similarity to explore conceptual relations between text
segments and a Hierarchical Agglomerative Clustering (HAC) algorithm to
represent the text as a tree-like hierarchy that is conceptually structured.
The rich structure of the created tree further allows the segmentation of text
in a linear fashion at various levels of granularity. The proposed method was
evaluated on a well-known dataset, and the results show that using ontological
similarity in text segmentation is very promising. We also enhance the proposed
method by combining ontological similarity with lexical similarity, and the
results show an enhancement of the segmentation quality.
| 2015 | Computation and Language |
Category Enhanced Word Embedding | Distributed word representations have been demonstrated to be effective in
capturing semantic and syntactic regularities. Unsupervised representation
learning from large unlabeled corpora can learn similar representations for
those words that present similar co-occurrence statistics. Besides local
occurrence statistics, global topical information is also important knowledge
that may help discriminate a word from another. In this paper, we incorporate
category information of documents in the learning of word representations and
learn the proposed models in a document-wise manner. Our models outperform
several state-of-the-art models in word analogy and word similarity tasks.
Moreover, we evaluate the learned word vectors on sentiment analysis and text
classification tasks, which shows the superiority of our learned word vectors.
We also learn high-quality category embeddings that reflect topical meanings.
| 2015 | Computation and Language |
A C-LSTM Neural Network for Text Classification | Neural network models have been demonstrated to be capable of achieving
remarkable performance in sentence and document modeling. Convolutional neural
network (CNN) and recurrent neural network (RNN) are two mainstream
architectures for such modeling tasks, which adopt totally different ways of
understanding natural languages. In this work, we combine the strengths of both
architectures and propose a novel and unified model called C-LSTM for sentence
representation and text classification. C-LSTM utilizes a CNN to extract a
sequence of higher-level phrase representations, which are then fed into a long
short-term memory recurrent neural network (LSTM) to obtain the sentence
representation. C-LSTM is able to capture both local features of phrases and
global, temporal sentence semantics. We evaluate the proposed
architecture on sentiment classification and question classification tasks. The
experimental results show that the C-LSTM outperforms both CNN and LSTM and can
achieve excellent performance on these tasks.
| 2015 | Computation and Language |
Bootstrapping Ternary Relation Extractors | Binary relation extraction methods have been widely studied in recent years.
However, few methods have been developed for higher n-ary relation extraction.
One limiting factor is the effort required to generate training data. For
binary relations, one only has to provide a few dozen pairs of entities per
relation, as training data. For ternary relations (n=3), each training instance
is a triplet of entities, placing a greater cognitive load on people. For
example, many people know that Google acquired Youtube but not the dollar
amount or the date of the acquisition, and many people know that Hillary Clinton
is married to Bill Clinton but not the location or date of their wedding. This
makes higher n-ary training data generation a time-consuming exercise in
searching the Web. We present a resource for training ternary relation
extractors. This was generated using a minimally supervised yet effective
approach. We present statistics on the size and the quality of the dataset.
| 2019 | Computation and Language |
Machine Learning Sentiment Prediction based on Hybrid Document
Representation | Automated sentiment analysis and opinion mining is a complex process
concerning the extraction of useful subjective information from text. The
explosion of user generated content on the Web, especially the fact that
millions of users, on a daily basis, express their opinions on products and
services to blogs, wikis, social networks, message boards, etc., render the
reliable, automated export of sentiments and opinions from unstructured text
crucial for several commercial applications. In this paper, we present a novel
hybrid vectorization approach for textual resources that combines a weighted
variant of the popular Word2Vec representation (based on Term Frequency-Inverse
Document Frequency) with a Bag-of-Words representation and
a vector of lexicon-based sentiment values. The proposed text representation
approach is assessed through the application of several machine learning
classification algorithms on a dataset that is used extensively in literature
for sentiment detection. The classification accuracy derived through the
proposed hybrid vectorization approach is higher than when its individual
components are used for text representation, and comparable with
state-of-the-art sentiment detection methodologies.
| 2015 | Computation and Language |
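A hedged sketch of the kind of hybrid document vector described in the abstract above: a tf-idf-weighted average of word embeddings concatenated with a bag-of-words vector and a lexicon-based sentiment feature. The toy corpus, embedding table, and lexicon are illustrative assumptions, not the authors' data.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["the movie was great", "the plot was terrible"]                # toy corpus
word_vecs = {w: np.random.randn(50) for d in corpus for w in d.split()}  # placeholder embeddings
sentiment_lexicon = {"great": 1.0, "terrible": -1.0}                     # placeholder lexicon

tfidf = TfidfVectorizer().fit(corpus)
bow = CountVectorizer().fit(corpus)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def hybrid_vector(doc: str) -> np.ndarray:
    words = doc.split()
    # 1) tf-idf-weighted average of word embeddings
    weighted = [idf.get(w, 1.0) * word_vecs[w] for w in words if w in word_vecs]
    w2v_part = np.mean(weighted, axis=0) if weighted else np.zeros(50)
    # 2) bag-of-words counts
    bow_part = bow.transform([doc]).toarray()[0]
    # 3) single lexicon-based sentiment feature (mean word polarity)
    senti_part = np.array([np.mean([sentiment_lexicon.get(w, 0.0) for w in words])])
    return np.concatenate([w2v_part, bow_part, senti_part])

print(hybrid_vector("the movie was great").shape)   # feature vector fed to a classifier
```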
Aspect-based Opinion Summarization with Convolutional Neural Networks | This paper considers Aspect-based Opinion Summarization (AOS) of reviews on
particular products. To enable real applications, an AOS system needs to
address two core subtasks, aspect extraction and sentiment classification. Most
existing approaches to aspect extraction, which use linguistic analysis or
topic modeling, are general across different products but not precise enough or
suitable for particular products. Instead we take a less general but more
precise scheme, directly mapping each review sentence into pre-defined aspects.
To tackle aspect mapping and sentiment classification, we propose two
Convolutional Neural Network (CNN) based methods, cascaded CNN and multitask
CNN. Cascaded CNN contains two levels of convolutional networks. Multiple CNNs
at level 1 deal with the aspect mapping task, and a single CNN at level 2 deals
with sentiment classification. Multitask CNN also contains multiple aspect CNNs
and a sentiment CNN, but different networks share the same word embeddings.
Experimental results indicate that both cascaded and multitask CNNs outperform
SVM-based methods by large margins. Multitask CNN generally performs better
than cascaded CNN.
| 2015 | Computation and Language |
Modeling Dynamic Relationships Between Characters in Literary Novels | Studying characters plays a vital role in computationally representing and
interpreting narratives. Unlike previous work, which has focused on inferring
character roles, we focus on the problem of modeling their relationships.
Rather than assuming a fixed relationship for a character pair, we hypothesize
that relationships are dynamic and temporally evolve with the progress of the
narrative, and formulate the problem of relationship modeling as a structured
prediction problem. We propose a semi-supervised framework to learn
relationship sequences from fully as well as partially labeled data. We present
a Markovian model capable of accumulating historical beliefs about the
relationship and status changes. We use a set of rich linguistic and
semantically motivated features that incorporate world knowledge to investigate
the textual content of the narrative. We empirically demonstrate that such a
framework outperforms competitive baselines.
| 2015 | Computation and Language |
Enhancements in statistical spoken language translation by
de-normalization of ASR results | Spoken language translation (SLT) has become very important in an
increasingly globalized world. Machine translation (MT) for automatic speech
recognition (ASR) systems is a major challenge of great interest. This research
investigates automatic sentence segmentation of speech, which is important
for enriching speech recognition output and for aiding downstream language
processing. This article focuses on the automatic sentence segmentation of
speech and improving MT results. We explore the problem of identifying sentence
boundaries in the transcriptions produced by automatic speech recognition
systems in the Polish language. We also experiment with reverse normalization
of the recognized speech samples.
| 2016 | Computation and Language |
Multilingual Language Processing From Bytes | We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads
text as bytes and outputs span annotations of the form [start, length, label]
where start positions, lengths, and labels are separate entries in our
vocabulary. Because we operate directly on unicode bytes rather than
language-specific words or characters, we can analyze text in many languages
with a single model. Due to the small vocabulary size, these multilingual
models are very compact, but produce results similar to or better than the
state-of-the-art in Part-of-Speech tagging and Named Entity Recognition that
use only the provided training datasets (no external data sources). Our models
are learning "from scratch" in that they do not rely on any elements of the
standard pipeline in Natural Language Processing (including tokenization), and
thus can run in standalone fashion on raw text.
| 2016 | Computation and Language |
Inferring Interpersonal Relations in Narrative Summaries | Characterizing relationships between people is fundamental for the
understanding of narratives. In this work, we address the problem of inferring
the polarity of relationships between people in narrative summaries. We
formulate the problem as a joint structured prediction for each narrative, and
present a model that combines evidence from linguistic and semantic features,
as well as features based on the structure of the social community in the text.
We also provide a clustering-based approach that can exploit regularities in
narrative types, e.g., learning an affinity for love triangles in romantic
stories. On a dataset of movie summaries from Wikipedia, our structured models
provide more than a 30% error-reduction over a competitive baseline that
considers pairs of characters in isolation.
| 2015 | Computation and Language |
Augmenting Phrase Table by Employing Lexicons for Pivot-based SMT | A pivot language is employed as a way to solve the data sparseness problem in
machine translation, especially when the data for a particular language pair
does not exist. The combination of source-to-pivot and pivot-to-target
translation models can induce a new translation model through the pivot
language. However, the errors in two models may compound as noise, and still,
the combined model may suffer from a serious phrase sparsity problem. In this
paper, we directly employ the word lexical model in IBM models as an additional
resource to augment the pivot phrase table. In addition, we also propose a phrase
table pruning method which takes into account both source and target
phrasal coverage. Experimental result shows that our pruning method
significantly outperforms the conventional one, which only considers source
side phrasal coverage. Furthermore, by including the entries in the lexicon
model, the phrase coverage increased, and we achieved improved results in
Chinese-to-Japanese translation using English as pivot language.
| 2015 | Computation and Language |
LSTM Neural Reordering Feature for Statistical Machine Translation | Artificial neural networks are powerful models, which have been widely
applied to many aspects of machine translation, such as language modeling and
translation modeling. Though notable improvements have been made in these
areas, the reordering problem still remains a challenge in statistical machine
translation. In this paper, we present a novel neural reordering model that
directly models word pairs and alignment. By utilizing LSTM recurrent neural
networks, much longer context could be learned for reordering prediction.
Experimental results on NIST OpenMT12 Arabic-English and Chinese-English
1000-best rescoring task show that our LSTM neural reordering feature is robust
and achieves significant improvements over various baseline systems.
| 2017 | Computation and Language |
Benchmarking sentiment analysis methods for large-scale texts: A case
for using continuum-scored words and word shift graphs | The emergence and global adoption of social media has rendered possible the
real-time estimation of population-scale sentiment, bearing profound
implications for our understanding of human behavior. Given the growing
assortment of sentiment measuring instruments, comparisons between them are
evidently required. Here, we perform detailed tests of 6 dictionary-based
methods applied to 4 different corpora, and briefly examine a further 20
methods. We show that a dictionary-based method will only perform both reliably
and meaningfully if (1) the dictionary covers a sufficiently large
portion of a given text's lexicon when weighted by word usage frequency; and
(2) words are scored on a continuous scale.
| 2016 | Computation and Language |
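To make the two reliability conditions above concrete, a frequency-weighted coverage check and a continuum-score average can be computed as in this small sketch; the toy dictionary and scale are hypothetical stand-ins for an actual sentiment lexicon.

```python
from collections import Counter

def weighted_coverage(tokens, lexicon):
    """Fraction of token occurrences (i.e., weighted by usage frequency)
    that are covered by the sentiment dictionary."""
    counts = Counter(tokens)
    covered = sum(c for w, c in counts.items() if w in lexicon)
    return covered / max(sum(counts.values()), 1)

def mean_sentiment(tokens, lexicon):
    """Average of continuum-valued word scores over covered tokens."""
    scores = [lexicon[w] for w in tokens if w in lexicon]
    return sum(scores) / len(scores) if scores else None

# Toy lexicon scoring words on a 1-9 happiness scale.
happiness = {"love": 8.4, "happy": 8.2, "war": 2.0, "the": 5.0}
tokens = "the war and the love we remember".split()
print(weighted_coverage(tokens, happiness), mean_sentiment(tokens, happiness))
```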
Probabilistic Latent Semantic Analysis (PLSA) for Indonesian-Language Text
Document Classification | One of the tasks involved in managing documents is finding the substantial
information inside them. Topic modeling is a technique that has been developed to
produce document representation in form of keywords. The keywords will be used
in the indexing process and document retrieval as needed by users. In this
research, we will discuss specifically about Probabilistic Latent Semantic
Analysis (PLSA). It covers the PLSA mechanism, which involves Expectation
Maximization (EM) as the training algorithm, how to conduct testing, and how to
obtain the accuracy results.
| 2015 | Computation and Language |
Automatic Classification of Argument Components in Text Documents in the Form
of Argumentative Essays | By automatically recognizing argument components, essay writers can inspect
the texts they have written. This assists the essay scoring
process objectively and precisely, because the essay grader is able to see how well
the argument components are constructed. Some researchers have tried to do
argument detection and classification along with its implementation in some
domains. The common approach is by doing feature extraction to the text.
Generally, the features are structural, lexical, syntactic, indicator, and
contextual. In this research, we add a new feature to the existing features. It
adopts the keyword list of Knott and Dale (1993). The experimental results show that the
argument classification achieves 72.45% accuracy. Moreover, we still get the
same accuracy without the keyword list. This suggests that the keyword list
does not contribute significantly to the features. All features are still weak at
classifying major claims versus claims, so we need other features that are useful for
differentiating those two kinds of argument components.
| 2015 | Computation and Language |
Annotating Character Relationships in Literary Texts | We present a dataset of manually annotated relationships between characters
in literary texts, in order to support the training and evaluation of automatic
methods for relation type prediction in this domain (Makazhanov et al., 2014;
Kokkinakis, 2013) and the broader computational analysis of literary character
(Elson et al., 2010; Bamman et al., 2014; Vala et al., 2015; Flekova and
Gurevych, 2015). In this work, we solicit annotations from workers on Amazon
Mechanical Turk for 109 texts ranging from Homer's _Iliad_ to Joyce's _Ulysses_
on four dimensions of interest: for a given pair of characters, we collect
judgments as to the coarse-grained category (professional, social, familial),
fine-grained category (friend, lover, parent, rival, employer), and affinity
(positive, negative, neutral) that describes their primary relationship in a
text. We do not assume that this relationship is static; we also collect
judgments as to whether it changes at any point in the course of the text.
| 2015 | Computation and Language |
Effective LSTMs for Target-Dependent Sentiment Classification | Target-dependent sentiment classification remains a challenge: modeling the
semantic relatedness of a target with its context words in a sentence.
Different context words have different influences on determining the sentiment
polarity of a sentence towards the target. Therefore, it is desirable to
integrate the connections between target word and context words when building a
learning system. In this paper, we develop two target-dependent long short-term
memory (LSTM) models, where target information is automatically taken into
account. We evaluate our methods on a benchmark dataset from Twitter. Empirical
results show that modeling sentence representation with standard LSTM does not
perform well. Incorporating target information into LSTM can significantly
boost the classification accuracy. The target-dependent LSTM models achieve
state-of-the-art performance without using a syntactic parser or external
sentiment lexicons.
| 2016 | Computation and Language |
Building Memory with Concept Learning Capabilities from Large-scale
Knowledge Base | We present a new perspective on neural knowledge base (KB) embeddings, from
which we build a framework that can model symbolic knowledge in the KB together
with its learning process. We show that this framework effectively regularizes
a previous neural KB embedding model for superior performance in reasoning tasks,
while having the capability of dealing with unseen entities, that is, of
learning their embeddings from natural language descriptions, much like
how humans learn semantic concepts.
| 2015 | Computation and Language |
Predicting the top and bottom ranks of billboard songs using Machine
Learning | The music industry is a $130 billion industry. Predicting whether a song
catches the pulse of the audience impacts the industry. In this paper we
analyze language inside the lyrics of the songs using several computational
linguistic algorithms and predict whether a song would make it to the top or
bottom of the Billboard rankings based on the language features. We trained and
tested an SVM classifier with a radial kernel function on the linguistic
features. Results indicate that we can classify whether a song belongs to the top
or bottom of the Billboard charts with a precision of 0.76.
| 2015 | Computation and Language |
Neural Generative Question Answering | This paper presents an end-to-end neural network model, named Neural
Generative Question Answering (GENQA), that can generate answers to simple
factoid questions, based on the facts in a knowledge-base. More specifically,
the model is built on the encoder-decoder framework for sequence-to-sequence
learning, while equipped with the ability to query the knowledge-base, and is
trained on a corpus of question-answer pairs, with their associated triples in
the knowledge-base. An empirical study shows the proposed model can effectively
deal with the variations of questions and answers, and generate correct and
natural answers by referring to the facts in the knowledge-base. The experiment
on question answering demonstrates that the proposed model can outperform an
embedding-based QA model as well as a neural dialogue model trained on the same
data.
| 2016 | Computation and Language |
Topic segmentation via community detection in complex networks | Many real systems have been modelled in terms of network concepts, and
written texts are a particular example of information networks. In recent
years, the use of network methods to analyze language has allowed the discovery
of several interesting findings, including the proposition of novel models to
explain the emergence of fundamental universal patterns. While syntactical
networks, one of the most prevalent networked models of written texts, display
both scale-free and small-world properties, such representation fails in
capturing other textual features, such as the organization in topics or
subjects. In this context, we propose a novel network representation whose main
purpose is to capture the semantical relationships of words in a simple way. To
do so, we link all words co-occurring in the same semantic context, which is
defined in a threefold way. We show that the proposed representation favours
the emergence of communities of semantically related words, and this feature
may be used to identify relevant topics. The proposed methodology to detect
topics was applied to segment selected Wikipedia articles. We have found that,
in general, our methods outperform traditional bag-of-words representations,
which suggests that a high-level textual representation may be useful to study
semantical features of texts.
| 2016 | Computation and Language |
What Makes it Difficult to Understand a Scientific Literature? | In the artificial intelligence area, one of the ultimate goals is to make
computers understand human language and offer assistance. In order to achieve
this ideal, computer science researchers have put forward many models
and algorithms attempting to enable machines to analyze and process human
natural language on different levels of semantics. Although recent progress in
this field offers much hope, we still have to ask whether current research can
provide assistance that people really desire in reading and comprehension. To
this end, we conducted a reading comprehension test on two scientific papers
which are written in different styles. We use the semantic link models to
analyze the obstacles to understanding that people face in the process of
reading and figure out what makes it difficult for humans to understand
scientific literature. Through such analysis, we summarized some
characteristics and problems which are reflected by people with different
levels of knowledge on the comprehension of difficult science and technology
literature, which can be modeled in a semantic link network. We believe that
these characteristics and problems will help us re-examine existing machine
models and will be helpful in designing new ones.
| 2015 | Computation and Language |
Extracting Biomolecular Interactions Using Semantic Parsing of
Biomedical Text | We advance the state of the art in biomolecular interaction extraction with
three contributions: (i) We show that deep, Abstract Meaning Representations
(AMR) significantly improve the accuracy of a biomolecular interaction
extraction system when compared to a baseline that relies solely on surface-
and syntax-based features; (ii) In contrast with previous approaches that infer
relations on a sentence-by-sentence basis, we expand our framework to enable
consistent predictions over sets of sentences (documents); (iii) We further
modify and expand a graph kernel learning framework to enable concurrent
exploitation of automatically induced AMR (semantic) and dependency structure
(syntactic) representations. Our experiments show that our approach yields
interaction extraction systems that are more robust in environments where there
is a significant mismatch between training and test conditions.
| 2015 | Computation and Language |
PJAIT Systems for the IWSLT 2015 Evaluation Campaign Enhanced by
Comparable Corpora | In this paper, we attempt to improve Statistical Machine Translation (SMT)
systems on a very diverse set of language pairs (in both directions): Czech -
English, Vietnamese - English, French - English and German - English. To
accomplish this, we performed translation model training, created adaptations
of training settings for each language pair, and obtained comparable corpora
for our SMT systems. Innovative tools and data adaptation techniques were
employed. The TED parallel text corpora for the IWSLT 2015 evaluation campaign
were used to train language models, and to develop, tune, and test the system.
In addition, we prepared Wikipedia-based comparable corpora for use with our
SMT system. This data was specified as permissible for the IWSLT 2015
evaluation. We explored the use of domain adaptation techniques, symmetrized
word alignment models, the unsupervised transliteration models and the KenLM
language modeling tool. To evaluate the effects of different preparations on
translation results, we conducted experiments and used the BLEU, NIST and TER
metrics. Our results indicate that our approach produced a positive impact on
SMT quality.
| 2015 | Computation and Language |
Unsupervised comparable corpora preparation and exploration for
bi-lingual translation equivalents | The multilingual nature of the world makes translation a crucial requirement
today. Parallel dictionaries constructed by humans are a widely-available
resource, but they are limited and do not provide enough coverage for good
quality translation purposes, due to out-of-vocabulary words and neologisms.
This motivates the use of statistical translation systems, which are
unfortunately dependent on the quantity and quality of training data. Such
systems have a very limited availability especially for some languages and very
narrow text domains. In this research we present our improvements to current
comparable corpora mining methodologies by re-implementation of the comparison
algorithms (using the Needleman-Wunsch algorithm), introduction of a tuning script,
and computation time improvement through GPU acceleration. Experiments are carried
out on bilingual data extracted from Wikipedia, in various domains. For
Wikipedia itself, additional cross-lingual comparison heuristics were
introduced. The modifications made a positive impact on the quality and
quantity of mined data and on the translation quality.
| 2015 | Computation and Language |
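For reference, the Needleman-Wunsch scoring mentioned in the abstract above is a standard global-alignment dynamic program; the sketch below applies it to two sequences of sentences with a placeholder word-overlap similarity. The real pipeline's similarity metric, tuning script, and GPU acceleration are not reproduced here.

```python
def needleman_wunsch(seq_a, seq_b, sim, gap=-1.0):
    """Global alignment score between two sequences via dynamic programming."""
    n, m = len(seq_a), len(seq_b)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j - 1] + sim(seq_a[i - 1], seq_b[j - 1]),  # align the two items
                           dp[i - 1][j] + gap,                                   # gap in seq_b
                           dp[i][j - 1] + gap)                                   # gap in seq_a
    return dp[n][m]

def overlap(s1, s2):
    """Placeholder similarity: Jaccard overlap of the word sets of two sentences."""
    a, b = set(s1.split()), set(s2.split())
    return len(a & b) / max(len(a | b), 1)

print(needleman_wunsch(["the cat sat", "it slept"],
                       ["a cat sat down", "then it slept"], overlap))
```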
Generating News Headlines with Recurrent Neural Networks | We describe an application of an encoder-decoder recurrent neural network
with LSTM units and attention to generating headlines from the text of news
articles. We find that the model is quite effective at concisely paraphrasing
news articles. Furthermore, we study how the neural network decides which input
words to pay attention to, and specifically we identify the function of the
different neurons in a simplified attention mechanism. Interestingly, our
simplified attention mechanism performs better than the more complex attention
mechanism on a held out set of articles.
| 2015 | Computation and Language |
Want Answers? A Reddit Inspired Study on How to Pose Questions | Questions form an integral part of our everyday communication, both offline
and online. Getting responses to our questions from others is fundamental to
satisfying our information need and in extending our knowledge boundaries. A
question may be represented using various factors such as social, syntactic,
semantic, etc. We hypothesize that these factors contribute with varying
degrees towards getting responses from others for a given question. We perform
a thorough empirical study to measure effects of these factors using a novel
question and answer dataset from the website Reddit.com. To the best of our
knowledge, this is the first such analysis of its kind on this important topic.
We also use a sparse nonnegative matrix factorization technique to
automatically induce interpretable semantic factors from the question dataset.
We also document various response prediction patterns that we observe during our
analysis of the data. For instance, we found that preference-probing questions
are scantily answered. Our method is robust to capture such latent response
factors. We hope to make our code and datasets publicly available upon
publication of the paper.
| 2015 | Computation and Language |
SentiBench - a benchmark comparison of state-of-the-practice sentiment
analysis methods | In the last few years thousands of scientific papers have investigated
sentiment analysis, several startups that measure opinions on real data have
emerged and a number of innovative products related to this theme have been
developed. There are multiple methods for measuring sentiments, including
lexical-based and supervised machine learning methods. Despite the vast
interest on the theme and wide popularity of some methods, it is unclear which
one is better for identifying the polarity (i.e., positive or negative) of a
message. Accordingly, there is a strong need to conduct a thorough
apples-to-apples comparison of sentiment analysis methods, \textit{as they are
used in practice}, across multiple datasets originated from different data
sources. Such a comparison is key for understanding the potential limitations,
advantages, and disadvantages of popular methods. This article aims at filling
this gap by presenting a benchmark comparison of twenty-four popular sentiment
analysis methods (which we call the state-of-the-practice methods). Our
evaluation is based on a benchmark of eighteen labeled datasets, covering
messages posted on social networks, movie and product reviews, as well as
opinions and comments in news articles. Our results highlight the extent to
which the prediction performance of these methods varies considerably across
datasets. Aiming at boosting the development of this research area, we open the
methods' codes and datasets used in this article, deploying them in a benchmark
system, which provides an open API for accessing and comparing sentence-level
sentiment analysis methods.
| 2016 | Computation and Language |
THCHS-30: A Free Chinese Speech Corpus | Speech data is crucially important for speech recognition research. There are
quite a few speech databases that can be purchased at prices that are reasonable
for most research institutes. However, for young people who just start research
activities or those who just gain initial interest in this direction, the cost
for data is still an annoying barrier. We support the `free data' movement in
speech recognition: research institutes (particularly supported by public
funds) publish their data freely so that new researchers can obtain sufficient
data to kick off their careers. In this paper, we follow this trend and release a
free Chinese speech database, THCHS-30, that can be used to build a full-fledged
Chinese speech recognition system. We report the baseline system established
with this database, including the performance under highly noisy conditions.
| 2015 | Computation and Language |
Jointly Modeling Topics and Intents with Global Order Structure | Modeling document structure is of great importance for discourse analysis and
related applications. The goal of this research is to capture the document
intent structure by modeling documents as a mixture of topic words and
rhetorical words. While the topics are relatively unchanged through one
document, the rhetorical functions of sentences usually change following
certain orders in discourse. We propose GMM-LDA, a topic modeling based
Bayesian unsupervised model, to analyze the document intent structure
in combination with order information. Our model is flexible in that it has the ability
to combine annotations and do supervised learning. Additionally, entropic
regularization can be introduced to model the significant divergence between
topics and intents. We perform experiments in both unsupervised and supervised
settings, and the results show the superiority of our model over several
state-of-the-art baselines.
| 2015 | Computation and Language |
Minimum Risk Training for Neural Machine Translation | We propose minimum risk training for end-to-end neural machine translation.
Unlike conventional maximum likelihood estimation, minimum risk training is
capable of optimizing model parameters directly with respect to arbitrary
evaluation metrics, which are not necessarily differentiable. Experiments show
that our approach achieves significant improvements over maximum likelihood
estimation on a state-of-the-art neural machine translation system across
various language pairs. Transparent to architectures, our approach can be
applied to more neural networks and potentially benefit more NLP tasks.
| 2016 | Computation and Language |
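In its usual formulation (the notation below is generic rather than transcribed from the paper), minimum risk training minimizes the expected value of a loss such as 1 - BLEU under the model distribution, which makes the objective sensitive to the evaluation metric even when the metric itself is not differentiable:

```latex
% Expected risk over S training pairs, with loss \Delta (e.g. 1 - sentence-level BLEU):
R(\theta) = \sum_{s=1}^{S} \mathbb{E}_{y \mid x^{(s)};\, \theta}\!\left[ \Delta\!\left(y, y^{(s)}\right) \right]
          = \sum_{s=1}^{S} \sum_{y \in \mathcal{Y}(x^{(s)})} P\!\left(y \mid x^{(s)}; \theta\right) \Delta\!\left(y, y^{(s)}\right)
% In practice \mathcal{Y}(x^{(s)}) is approximated by a sampled subset of translations,
% over which the model probabilities are renormalized before taking gradients.
```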
Deep Speech 2: End-to-End Speech Recognition in English and Mandarin | We show that an end-to-end deep learning approach can be used to recognize
either English or Mandarin Chinese speech--two vastly different languages.
Because it replaces entire pipelines of hand-engineered components with neural
networks, end-to-end learning allows us to handle a diverse variety of speech
including noisy environments, accents and different languages. Key to our
approach is our application of HPC techniques, resulting in a 7x speedup over
our previous system. Because of this efficiency, experiments that previously
took weeks now run in days. This enables us to iterate more quickly to identify
superior architectures and algorithms. As a result, in several cases, our
system is competitive with the transcription of human workers when benchmarked
on standard datasets. Finally, using a technique called Batch Dispatch with
GPUs in the data center, we show that our system can be inexpensively deployed
in an online setting, delivering low latency when serving users at scale.
| 2015 | Computation and Language |
Mined Semantic Analysis: A New Concept Space Model for Semantic
Representation of Textual Data | Mined Semantic Analysis (MSA) is a novel concept space model which employs
unsupervised learning to generate semantic representations of text. MSA
represents textual structures (terms, phrases, documents) as a Bag of Concepts
(BoC) where concepts are derived from concept rich encyclopedic corpora.
Traditional concept space models exploit only target corpus content to
construct the concept space. MSA, alternatively, uncovers implicit relations
between concepts by mining for their associations (e.g., mining Wikipedia's
"See also" link graph). We evaluate MSA's performance on benchmark datasets for
measuring semantic relatedness of words and sentences. Empirical results show
competitive performance of MSA compared to prior state-of-the-art methods.
Additionally, we introduce the first analytical study to examine statistical
significance of results reported by different semantic relatedness methods. Our
study shows that the nuances of results across top-performing methods could be
statistically insignificant. The study positions MSA as one of the state-of-the-art
methods for measuring semantic relatedness, besides the inherent
interpretability and simplicity of the generated semantic representation.
| 2018 | Computation and Language |
Words are not Equal: Graded Weighting Model for building Composite
Document Vectors | Despite the success of distributional semantics, composing phrases from word
vectors remains an important challenge. Several methods have been tried for
benchmark tasks such as sentiment classification, including word vector
averaging, matrix-vector approaches based on parsing, and on-the-fly learning
of paragraph vectors. Most models usually omit stop words from the composition.
Instead of such a yes-no decision, we consider several graded schemes where
words are weighted according to their discriminatory relevance with respect to
their use in the document (e.g., idf). Some of these methods (particularly
tf-idf) are seen to result in a significant improvement in performance over
prior state of the art. Further, combining such approaches into an ensemble
based on alternate classifiers such as the RNN model, results in a 1.6%
performance improvement on the standard IMDB movie review dataset, and a 7.01%
improvement on Amazon product reviews. Since these are language-free models and
can be obtained in an unsupervised manner, they are also of interest for
under-resourced languages such as Hindi, as well as many other languages. We
demonstrate the language-free aspect by showing a gain of 12% for two review
datasets over earlier results, and also release a new larger dataset for future
testing (Singh,2015).
| 2015 | Computation and Language |
A Hidden Markov Model Based System for Entity Extraction from Social
Media English Text at FIRE 2015 | This paper presents the experiments carried out by us at Jadavpur University
as part of our participation in the FIRE 2015 task: Entity Extraction from Social
Media Text - Indian Languages (ESM-IL). The tool that we have developed for the
task is based on a Trigram Hidden Markov Model that utilizes information like
gazetteer list, POS tag and some other word level features to enhance the
observation probabilities of the known tokens as well as unknown tokens. We
submitted runs for English only. A statistical HMM (Hidden Markov Model) based
model has been used to implement our system. The system has been trained and
tested on the datasets released for the FIRE 2015 task: Entity Extraction from
Social Media Text - Indian Languages (ESM-IL). Our system is the best performer
for the English language, and it obtains precision, recall and F-measure values of 61.96,
39.46 and 48.21 respectively.
| 2015 | Computation and Language |
Stack Exchange Tagger | The goal of our project is to develop an accurate tagger for questions posted
on Stack Exchange. Our problem is an instance of the more general problem of
developing accurate classifiers for large scale text datasets. We are tackling
the multilabel classification problem where each item (in this case, question)
can belong to multiple classes (in this case, tags). We are predicting the tags
(or keywords) for a particular Stack Exchange post given only the question text
and the title of the post. In the process, we compare the performance of
Support Vector Classification (SVC) for different kernel functions, loss
functions, etc. We found that linear SVC with the Crammer-Singer technique produces the best
results.
| 2015 | Computation and Language |
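A minimal sketch of the multilabel tagging setup described above using scikit-learn; the toy posts and tags are hypothetical, and a one-vs-rest wrapper around a linear SVM stands in here for the Crammer-Singer formulation the authors report using.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Toy posts (title + body text) and their tag sets.
posts = ["how do I reverse a list in python",
         "segmentation fault when freeing a pointer in c",
         "python list comprehension performance"]
tags = [["python", "list"], ["c", "pointers"], ["python", "performance"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(tags)            # binary indicator matrix, one column per tag

# One linear SVM per tag over tf-idf features of the question text.
clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
clf.fit(posts, y)

pred = clf.predict(["why is my python list so slow"])
print(mlb.inverse_transform(pred))     # predicted tag set for the new post
```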
Small-footprint Deep Neural Networks with Highway Connections for Speech
Recognition | For speech recognition, deep neural networks (DNNs) have significantly
improved the recognition accuracy on most benchmark datasets and application
domains. However, compared to conventional Gaussian mixture models,
DNN-based acoustic models usually have a much larger number of model parameters,
making it challenging for their applications in resource constrained platforms,
e.g., mobile devices. In this paper, we study the application of the recently
proposed highway network to train small-footprint DNNs, which are {\it thinner}
and {\it deeper}, and have a significantly smaller number of model parameters
compared to conventional DNNs. We investigated this approach on the AMI meeting
speech transcription corpus which has around 70 hours of audio data. The
highway neural networks consistently outperformed their plain DNN counterparts,
and the number of model parameters can be reduced significantly without
sacrificing the recognition accuracy.
| 2017 | Computation and Language |
Sentence Entailment in Compositional Distributional Semantics | Distributional semantic models provide vector representations for words by
gathering co-occurrence frequencies from corpora of text. Compositional
distributional models extend these from words to phrases and sentences. In
categorical compositional distributional semantics, phrase and sentence
representations are functions of their grammatical structure and
representations of the words therein. In this setting, grammatical structures
are formalised by morphisms of a compact closed category and meanings of words
are formalised by objects of the same category. These can be instantiated in
the form of vectors or density matrices. This paper concerns the applications
of this model to phrase and sentence level entailment. We argue that
entropy-based distances of vectors and density matrices provide a good
candidate to measure word-level entailment, show the advantage of density
matrices over vectors for word level entailments, and prove that these
distances extend compositionally from words to phrases and sentences. We
exemplify our theoretical constructions on real data and a toy entailment
dataset and provide preliminary experimental evidence.
| 2018 | Computation and Language |
Agreement-based Joint Training for Bidirectional Attention-based Neural
Machine Translation | The attentional mechanism has proven to be effective in improving end-to-end
neural machine translation. However, due to the intricate structural divergence
between natural languages, unidirectional attention-based models might only
capture partial aspects of attentional regularities. We propose agreement-based
joint training for bidirectional attention-based end-to-end neural machine
translation. Instead of training source-to-target and target-to-source
translation models independently, our approach encourages the two complementary
models to agree on word alignment matrices on the same training data.
Experiments on Chinese-English and English-French translation tasks show that
agreement-based joint training significantly improves both alignment and
translation quality over independent training.
| 2016 | Computation and Language |
Strategies for Training Large Vocabulary Neural Language Models | Training neural network language models over large vocabularies is still
computationally very costly compared to count-based models such as Kneser-Ney.
At the same time, neural language models are gaining popularity for many
applications such as speech recognition and machine translation whose success
depends on scalability. We present a systematic comparison of strategies to
represent and train large vocabularies, including softmax, hierarchical
softmax, target sampling, noise contrastive estimation and self normalization.
We further extend self normalization to be a proper estimator of likelihood and
introduce an efficient variant of softmax. We evaluate each method on three
popular benchmarks, examining performance on rare words, the speed/accuracy
trade-off and complementarity to Kneser-Ney.
| 2015 | Computation and Language |
Morpho-syntactic Lexicon Generation Using Graph-based Semi-supervised
Learning | Morpho-syntactic lexicons provide information about the morphological and
syntactic roles of words in a language. Such lexicons are not available for all
languages and even when available, their coverage can be limited. We present a
graph-based semi-supervised learning method that uses the morphological,
syntactic and semantic relations between words to automatically construct wide
coverage lexicons from small seed sets. Our method is language-independent, and
we show that we can expand a 1000 word seed lexicon to more than 100 times its
size with high quality for 11 languages. In addition, the automatically created
lexicons provide features that improve performance in two downstream tasks:
morphological tagging and dependency parsing.
| 2016 | Computation and Language |
ABCNN: Attention-Based Convolutional Neural Network for Modeling
Sentence Pairs | How to model a pair of sentences is a critical issue in many NLP tasks such
as answer selection (AS), paraphrase identification (PI) and textual entailment
(TE). Most prior work (i) deals with one individual task by fine-tuning a
specific system; (ii) models each sentence's representation separately, rarely
considering the impact of the other sentence; or (iii) relies fully on manually
designed, task-specific linguistic features. This work presents a general
Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of
sentences. We make three contributions. (i) ABCNN can be applied to a wide
variety of tasks that require modeling of sentence pairs. (ii) We propose three
attention schemes that integrate mutual influence between sentences into CNN;
thus, the representation of each sentence takes into consideration its
counterpart. These interdependent sentence pair representations are more
powerful than isolated sentence representations. (iii) ABCNN achieves
state-of-the-art performance on AS, PI and TE tasks.
| 2018 | Computation and Language |
Kauffman's adjacent possible in word order evolution | Word order evolution has been hypothesized to be constrained by a word order
permutation ring: transitions involving orders that are closer in the
permutation ring are more likely. The hypothesis can be seen as a particular
case of Kauffman's adjacent possible in word order evolution. Here we consider
the problem of the association of the six possible orders of S, V and O to
yield a couple of primary alternating orders as a window to word order
evolution. We evaluate the suitability of various competing hypotheses to
predict one member of the couple from the other with the help of information
theoretic model selection. Our ensemble of models includes a six-way model that
is based on the word order permutation ring (Kauffman's adjacent possible) and
another model based on the dual two-way view of standard typology, which reduces word
order to basic order preferences (e.g., a preference for SV over VS and
another for SO over OS). Our analysis indicates that the permutation ring
yields the best model when favoring parsimony strongly, providing support for
Kauffman's general view and a six-way typology.
| 2016 | Computation and Language |
Towards automating the generation of derivative nouns in Sanskrit by
simulating Panini | About 1115 rules in Astadhyayi from A.4.1.76 to A.5.4.160 deal with
generation of derivative nouns, making it one of the largest topical sections
in Astadhyayi, called the Taddhita section owing to the head rule A.4.1.76.
This section is a systematic arrangement of rules that enumerates various
affixes that are used in the derivation under specific semantic relations. We
propose a system that automates the process of generation of derivative nouns
as per the rules in Astadhyayi. The proposed system follows a completely
object-oriented approach that models each rule as a class of its own and then groups
them as rule groups. The rule groups are decided on the basis of selective
grouping of rules by virtue of anuvrtti. The grouping of rules results in an
inheritance network of rules which is a directed acyclic graph. Every rule
group has a head rule and the head rule notifies all the direct member rules of
the group about the environment which contains all the details about data
entities participating in the derivation process. The system implements this
mechanism using multilevel inheritance and observer design patterns. The system
focuses not only on generation of the desired final form, but also on the
correctness of sequence of rules applied to make sure that the derivation has
taken place in strict adherence to Astadhyayi. The proposed system's design
allows incorporating various conflict resolution methods mentioned in
authentic texts and hence the effectiveness of those rules can be validated
with the results from the system. We also present cases where we have checked
the applicability of the system with the rules which are not specifically
applicable to derivation of derivative nouns, in order to see the effectiveness
of the proposed schema as a generic system for modeling Astadhyayi.
| 2015 | Computation and Language |
Semi-supervised Question Retrieval with Gated Convolutions | Question answering forums are rapidly growing in size with no effective
automated ability to refer to and reuse answers already available for previous
posted questions. In this paper, we develop a methodology for finding
semantically related questions. The task is difficult since 1) key pieces of
information are often buried in extraneous details in the question body and 2)
available annotations on similar questions are scarce and fragmented. We design
a recurrent and convolutional model (gated convolution) to effectively map
questions to their semantic representations. The models are pre-trained within
an encoder-decoder framework (from body to title) on the basis of the entire
raw corpus, and fine-tuned discriminatively from limited annotations. Our
evaluation demonstrates that our model yields substantial gains over a standard
IR baseline and various neural network architectures (including CNNs, LSTMs and
GRUs).
| 2016 | Computation and Language |
A Survey of Available Corpora for Building Data-Driven Dialogue Systems | During the past decade, several areas of speech and language understanding
have witnessed substantial breakthroughs from the use of data-driven models. In
the area of dialogue systems, the trend is less obvious, and most practical
systems are still built through significant engineering and expert knowledge.
Nevertheless, several recent results suggest that data-driven approaches are
feasible and quite promising. To facilitate research in this area, we have
carried out a wide survey of publicly available datasets suitable for
data-driven learning of dialogue systems. We discuss important characteristics
of these datasets, how they can be used to learn diverse dialogue strategies,
and their other potential uses. We also examine methods for transfer learning
between datasets and the use of external knowledge. Finally, we discuss
appropriate choice of evaluation metrics for the learning objective.
| 2,017 | Computation and Language |
A Planning based Framework for Essay Generation | Generating an article automatically with computer program is a challenging
task in artificial intelligence and natural language processing. In this paper,
we target essay generation, which takes as input a topic word and
generates an organized article under the theme of the topic. We follow the idea
of text planning \cite{Reiter1997} and develop an essay generation framework.
The framework consists of three components, including topic understanding,
sentence extraction and sentence reordering. For each component, we studied
several statistical algorithms and empirically compared them in terms
of qualitative or quantitative analysis. Although we run experiments on a Chinese
corpus, the method is language independent and can be easily adapted to other
languages. We lay out the remaining challenges and suggest avenues for future
research.
| 2,016 | Computation and Language |
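To make the planning pipeline of the essay-generation abstract above concrete, here is a toy sketch of its three components (topic understanding, sentence extraction, sentence reordering) using simple word-overlap scores. The corpus, the related-word lexicon and the scoring heuristics are stand-ins, not the statistical algorithms compared in the paper.

```python
# Toy sketch of a three-stage planning pipeline: topic understanding,
# sentence extraction and sentence reordering. All data and scoring
# functions below are illustrative stand-ins.
from collections import Counter

def topic_understanding(topic, related_lexicon):
    """Expand the input topic word with related words (hypothetical lexicon)."""
    return {topic} | set(related_lexicon.get(topic, []))

def extract_sentences(topic_words, corpus, k=3):
    """Score corpus sentences by word overlap with the expanded topic."""
    def score(sent):
        return len(set(sent.lower().split()) & topic_words)
    return sorted(corpus, key=score, reverse=True)[:k]

def reorder(sentences):
    """Greedy ordering: start from the top sentence, then repeatedly append
    the remaining sentence with the largest word overlap with the last one."""
    def overlap(a, b):
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        return sum((ca & cb).values())
    ordered, remaining = [sentences[0]], list(sentences[1:])
    while remaining:
        nxt = max(remaining, key=lambda s: overlap(ordered[-1], s))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

if __name__ == "__main__":
    corpus = [
        "Spring brings new life to the fields .",
        "Farmers sow seeds in spring .",
        "Stock markets closed higher today .",
        "The spring rain feeds the seeds in the fields .",
    ]
    lexicon = {"spring": ["rain", "seeds", "fields"]}
    topic_words = topic_understanding("spring", lexicon)
    essay = reorder(extract_sentences(topic_words, corpus))
    print(" ".join(essay))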
Morphological Inflection Generation Using Character Sequence to Sequence
Learning | Morphological inflection generation is the task of generating the inflected
form of a given lemma corresponding to a particular linguistic transformation.
We model the problem of inflection generation as a character sequence to
sequence learning problem and present a variant of the neural encoder-decoder
model for solving it. Our model is language independent and can be trained in
both supervised and semi-supervised settings. We evaluate our system on seven
datasets of morphologically rich languages and achieve either better or
comparable results to existing state-of-the-art models of inflection
generation.
| 2,016 | Computation and Language |
Backward and Forward Language Modeling for Constrained Sentence
Generation | Recent language models, especially those based on recurrent neural networks
(RNNs), make it possible to generate natural language from a learned
probability. Language generation has wide applications including machine
translation, summarization, question answering, conversation systems, etc.
Existing methods typically learn a joint probability of words conditioned on
additional information, which is (either statically or dynamically) fed to the
RNN's hidden layer. In many applications, we may want to impose hard
constraints on the generated text, i.e., that a particular word must appear in the
sentence. Unfortunately, existing approaches cannot solve this problem. In
this paper, we propose a novel backward and forward language model. Given a
specific word, we use RNNs to generate previous words and future words, either
simultaneously or asynchronously, resulting in two model variants. In this way,
the given word could appear at any position in the sentence. Experimental
results show that the generated texts are comparable to sequential LMs in
quality.
| 2,016 | Computation and Language |
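The backward/forward scheme in the preceding abstract can be illustrated with a toy generator: starting from the word that must appear, a backward model emits the preceding words and a forward model the following ones (the asynchronous variant). The bigram tables below are stand-ins for trained RNN language models.

```python
# Sketch of asynchronous backward/forward generation around a constraint word.
# The toy bigram "models" stand in for trained backward and forward RNN LMs.
import random

forward_bigrams = {
    "<s>": ["the", "a"], "the": ["cat", "dog"], "cat": ["sat", "slept"],
    "sat": ["quietly", "</s>"], "quietly": ["</s>"], "slept": ["</s>"],
    "dog": ["sat"], "a": ["cat"],
}
backward_bigrams = {                       # predicts the *previous* word
    "cat": ["the", "a"], "dog": ["the"], "the": ["<s>"], "a": ["<s>"],
}

def generate(start, table, stop, max_len=10):
    seq, word = [], start
    while word != stop and len(seq) < max_len:
        word = random.choice(table.get(word, [stop]))
        if word != stop:
            seq.append(word)
    return seq

def constrained_sentence(constraint_word):
    left = generate(constraint_word, backward_bigrams, "<s>")[::-1]
    right = generate(constraint_word, forward_bigrams, "</s>")
    return left + [constraint_word] + right

if __name__ == "__main__":
    random.seed(0)
    # The constraint word "cat" can end up at any position in the sentence.
    print(" ".join(constrained_sentence("cat")))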
The 2015 Sheffield System for Transcription of Multi-Genre Broadcast
Media | We describe the University of Sheffield system for participation in the 2015
Multi-Genre Broadcast (MGB) challenge task of transcribing multi-genre
broadcast shows. Transcription was one of four tasks proposed in the MGB
challenge, with the aim of advancing the state of the art of automatic speech
recognition, speaker diarisation and automatic alignment of subtitles for
broadcast media. Four topics are investigated in this work: Data selection
techniques for training with unreliable data, automatic speech segmentation of
broadcast media shows, acoustic modelling and adaptation in highly variable
environments, and language modelling of multi-genre shows. The final system
operates in multiple passes, using an initial unadapted decoding stage to
refine segmentation, followed by three adapted passes: a hybrid DNN pass with
input features normalised by speaker-based cepstral normalisation, another
hybrid stage with input features normalised by speaker feature-MLLR
transformations, and finally a bottleneck-based tandem stage with noise and
speaker factorisation. The combination of these three system outputs provides a
final error rate of 27.5% on the official development set, consisting of 47
multi-genre shows.
| 2,016 | Computation and Language |
The Improvement of Negative Sentences Translation in English-to-Korean
Machine Translation | This paper describes the algorithm for translating English negative sentences
into Korean in English-Korean Machine Translation (EKMT). The proposed
algorithm is based on a comparative study of English and Korean negative
sentences. Earlier translation software cannot translate English negative
sentences into accurate Korean equivalents. We established a new algorithm for
the negative sentence translation and evaluated it.
| 2,015 | Computation and Language |
Learning Document Embeddings by Predicting N-grams for Sentiment
Classification of Long Movie Reviews | Despite the loss of semantic information, bag-of-ngram based methods still
achieve state-of-the-art results for tasks such as sentiment classification of
long movie reviews. Many document embedding methods have been proposed to
capture semantics, but they still cannot outperform bag-of-ngram based methods
on this task. In this paper, we modify the architecture of the recently
proposed Paragraph Vector, allowing it to learn document vectors by predicting
not only words, but n-gram features as well. Our model is able to capture both
semantics and word order in documents while keeping the expressive power of
learned vectors. Experimental results on the IMDB movie review dataset show that
our model outperforms previous deep learning models and bag-of-ngram based
models due to the above advantages. More robust results are also obtained when
our model is combined with other models. The source code of our model will
also be published together with this paper.
| 2,016 | Computation and Language |
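A minimal sketch of the idea in the abstract above (learning a document vector that predicts n-gram features as well as words), using plain numpy and negative sampling. The dimensions, learning rate and bigram features are illustrative; the actual model extends Paragraph Vector rather than training a single isolated document vector.

```python
# Minimal sketch: learn a document vector by predicting the document's word and
# n-gram features with negative sampling (a simplified Paragraph-Vector-style
# update). Hyperparameters and the bigram features are illustrative choices.
import numpy as np

def ngrams(tokens, n=2):
    return ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def train_doc_vector(tokens, dim=16, epochs=50, lr=0.05, negatives=3, seed=0):
    rng = np.random.default_rng(seed)
    feats = tokens + ngrams(tokens)                  # predict words AND bigrams
    vocab = sorted(set(feats))
    index = {f: i for i, f in enumerate(vocab)}
    F = rng.normal(scale=0.1, size=(len(vocab), dim))    # feature embeddings
    d = rng.normal(scale=0.1, size=dim)                  # document vector

    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for f in feats:
            pos = index[f]
            neg = rng.integers(0, len(vocab), size=negatives)
            for j, label in [(pos, 1.0)] + [(k, 0.0) for k in neg]:
                fj = F[j].copy()
                g = sigmoid(d @ fj) - label              # logistic-loss gradient
                F[j] -= lr * g * d
                d    -= lr * g * fj
    return d

if __name__ == "__main__":
    doc = "the movie was surprisingly good and the acting was great".split()
    vec = train_doc_vector(doc)
    print(vec.shape, float(np.linalg.norm(vec)))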
Communicating with sentences: A multi-word naming game model | The naming game simulates the process of naming an object by a single word, in
which a population of communicating agents can reach global consensus
asymptotically through iteratively pair-wise conversations. We propose an
extension of the single-word model to a multi-word naming game (MWNG),
simulating the case of describing a complex object by a sentence (multiple
words). Words are defined in categories, and then organized as sentences by
combining them from different categories. We refer to a formatted combination
of several words as a pattern. In such an MWNG, a pair-wise
conversation requires the hearer to reach agreement with the speaker on both
every single word in the sentence and the sentence pattern, so as to guarantee
the correct meaning of the utterance; otherwise, the interaction fails to reach
consensus. We validate the model on three typical topologies as the underlying
communication network, and employ both conventional and manually designed
patterns in performing the MWNG.
| 2,018 | Computation and Language |
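The multi-word naming game described above can be simulated in a few lines: agents hold a word inventory per category plus a pattern inventory, and an interaction succeeds only when the hearer knows every word and the pattern. The population size, categories, vocabularies and the fully connected topology below are arbitrary choices for illustration.

```python
# Toy simulation of a multi-word naming game (MWNG) on a fully connected
# population: a "sentence" is one word per category arranged by a pattern, and
# an interaction succeeds only if the hearer knows every word AND the pattern.
import random

CATEGORIES = ["colour", "shape", "size"]
WORDS = {c: [f"{c}{i}" for i in range(3)] for c in CATEGORIES}
PATTERNS = ["colour-shape-size", "size-colour-shape"]

class Agent:
    def __init__(self):
        self.words = {c: set() for c in CATEGORIES}
        self.patterns = set()

    def speak(self):
        # Invent items the agent does not yet have, naming-game style.
        for c in CATEGORIES:
            if not self.words[c]:
                self.words[c].add(random.choice(WORDS[c]))
        if not self.patterns:
            self.patterns.add(random.choice(PATTERNS))
        sentence = {c: random.choice(sorted(self.words[c])) for c in CATEGORIES}
        return sentence, random.choice(sorted(self.patterns))

    def hear(self, sentence, pattern):
        known = all(w in self.words[c] for c, w in sentence.items())
        if known and pattern in self.patterns:
            # Success: collapse to the used words and pattern.
            self.words = {c: {w} for c, w in sentence.items()}
            self.patterns = {pattern}
            return True
        # Failure: the hearer adds the unknown items to its inventories.
        for c, w in sentence.items():
            self.words[c].add(w)
        self.patterns.add(pattern)
        return False

def simulate(n_agents=20, n_games=5000, seed=1):
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    successes = 0
    for _ in range(n_games):
        speaker, hearer = random.sample(agents, 2)
        sentence, pattern = speaker.speak()
        if hearer.hear(sentence, pattern):
            speaker.words = {c: {w} for c, w in sentence.items()}
            speaker.patterns = {pattern}
            successes += 1
    return successes / n_games

if __name__ == "__main__":
    print("success rate:", simulate())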
Natural Language Inference by Tree-Based Convolution and Heuristic
Matching | In this paper, we propose the TBCNN-pair model to recognize entailment and
contradiction between two sentences. In our model, a tree-based convolutional
neural network (TBCNN) captures sentence-level semantics; then heuristic
matching layers like concatenation, element-wise product/difference combine the
information in individual sentences. Experimental results show that our model
outperforms existing sentence encoding-based approaches by a large margin.
| 2,016 | Computation and Language |
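The heuristic matching layer mentioned in the TBCNN-pair abstract is easy to sketch: given two fixed-size sentence vectors, it concatenates them with their element-wise product and absolute difference. The bag-of-vectors encoder below is a stand-in for the tree-based convolutional encoder.

```python
# The heuristic matching layer, given two fixed-size sentence vectors h1 and h2
# (here produced by a stand-in bag-of-vectors encoder rather than a tree-based
# convolutional network).
import numpy as np

def heuristic_match(h1, h2):
    """Concatenation, element-wise product and element-wise difference."""
    return np.concatenate([h1, h2, h1 * h2, np.abs(h1 - h2)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = {w: rng.normal(size=8) for w in
           "a man is playing guitar the person plays an instrument".split()}
    encode = lambda s: np.mean([emb[w] for w in s.split()], axis=0)

    h1 = encode("a man is playing guitar")
    h2 = encode("the person plays an instrument")
    features = heuristic_match(h1, h2)   # fed to a classifier in the full model
    print(features.shape)                # (32,) = 4 x 8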
Learning Natural Language Inference with LSTM | Natural language inference (NLI) is a fundamentally important task in natural
language processing that has many applications. The recently released Stanford
Natural Language Inference (SNLI) corpus has made it possible to develop and
evaluate learning-centered methods such as deep neural networks for natural
language inference (NLI). In this paper, we propose a special long short-term
memory (LSTM) architecture for NLI. Our model builds on top of a recently
proposed neural attention model for NLI but is based on a significantly
different idea. Instead of deriving sentence embeddings for the premise and the
hypothesis to be used for classification, our solution uses a match-LSTM to
perform word-by-word matching of the hypothesis with the premise. This LSTM is
able to place more emphasis on important word-level matching results. In
particular, we observe that this LSTM remembers important mismatches that are
critical for predicting the contradiction or the neutral relationship label. On
the SNLI corpus, our model achieves an accuracy of 86.1%, outperforming the
state of the art.
| 2,016 | Computation and Language |
Online Keyword Spotting with a Character-Level Recurrent Neural Network | In this paper, we propose a context-aware keyword spotting model employing a
character-level recurrent neural network (RNN) for spoken term detection in
continuous speech. The RNN is end-to-end trained with connectionist temporal
classification (CTC) to generate the probabilities of character and
word-boundary labels. There is no need for the phonetic transcription, senone
modeling, or system dictionary in training and testing. Also, keywords can
easily be added and modified by editing the text based keyword list without
retraining the RNN. Moreover, the unidirectional RNN processes infinitely
long input audio streams without pre-segmentation, and keywords are detected
with low-latency before the utterance is finished. Experimental results show
that the proposed keyword spotter significantly outperforms the deep neural
network (DNN) and hidden Markov model (HMM) based keyword-filler model even
with less computation.
| 2,015 | Computation and Language |
Sentiment/Subjectivity Analysis Survey for Languages other than English | Subjective and sentiment analysis have gained considerable attention
recently. Most of the resources and systems built so far are done for English.
The need for designing systems for other languages is increasing. This paper
surveys different ways used for building systems for subjective and sentiment
analysis for languages other than English. Three different approaches are used
to build such systems. The first (and the best) is to build language-specific
systems. The second involves reusing or transferring sentiment resources from
English to the target language. The third relies on language-independent
methods. The paper
presents a separate section devoted to Arabic sentiment analysis.
| 2,016 | Computation and Language |
Contrastive Entropy: A new evaluation metric for unnormalized language
models | Perplexity (per word) is the most widely used metric for evaluating language
models. Despite this, there has been no dearth of criticism for this metric.
Most of these criticisms center around lack of correlation with extrinsic
metrics like word error rate (WER), dependence upon shared vocabulary for model
comparison and unsuitability for unnormalized language model evaluation. In
this paper, we address the last problem and propose a new discriminative
entropy based intrinsic metric that works for both traditional word level
models and unnormalized language models like sentence level models. We also
propose a discriminatively trained sentence level interpretation of recurrent
neural network based language model (RNN) as an example of unnormalized
sentence level model. We demonstrate that for word level models, contrastive
entropy shows a strong correlation with perplexity. We also observe that when
trained at lower distortion levels, sentence level RNN considerably outperforms
traditional RNNs on this new metric.
| 2,016 | Computation and Language |
Mutual Information and Diverse Decoding Improve Neural Machine
Translation | Sequence-to-sequence neural translation models learn semantic and syntactic
relations between sentence pairs by optimizing the likelihood of the target
given the source, i.e., $p(y|x)$, an objective that ignores other potentially
useful sources of information. We introduce an alternative objective function
for neural MT that maximizes the mutual information between the source and
target sentences, modeling the bi-directional dependency of sources and
targets. We implement the model with a simple re-ranking method, and also
introduce a decoding algorithm that increases diversity in the N-best list
produced by the first pass. Applied to the WMT German/English and
French/English tasks, the proposed model offers a consistent performance boost
on both standard LSTM and attention-based neural MT architectures.
| 2,016 | Computation and Language |
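The re-ranking step described above can be sketched as scoring each N-best candidate with a combination of the forward score log p(y|x) and the reverse score log p(x|y). The toy scores and the interpolation weight below are placeholders; in practice both scores come from trained NMT models.

```python
# Sketch of the re-ranking step: combine the forward score log p(y|x) with the
# reverse score log p(x|y) to approximate a mutual-information objective.
# The scores below are placeholders for two trained NMT models; lam is a
# tuning choice.
def mmi_rerank(nbest, forward_score, reverse_score, source, lam=0.5):
    """nbest: list of candidate translations for `source`."""
    def score(candidate):
        return forward_score(source, candidate) + lam * reverse_score(candidate, source)
    return max(nbest, key=score)

if __name__ == "__main__":
    # Toy stand-in log-probabilities for illustration only.
    fwd = {"das ist gut": {"this is good": -1.0, "that is good": -1.2, "it is ok": -0.9}}
    rev = {"this is good": -1.1, "that is good": -0.8, "it is ok": -2.5}
    forward_score = lambda src, cand: fwd[src][cand]
    reverse_score = lambda cand, src: rev[cand]
    print(mmi_rerank(list(fwd["das ist gut"]), forward_score, reverse_score,
                     "das ist gut"))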
Distant IE by Bootstrapping Using Lists and Document Structure | Distant labeling for information extraction (IE) suffers from noisy training
data. We describe a way of reducing the noise associated with distant IE by
identifying coupling constraints between potential instance labels. As one
example of coupling, items in a list are likely to have the same label. A
second example of coupling comes from analysis of document structure: in some
corpora, sections can be identified such that items in the same section are
likely to have the same label. Such sections do not exist in all corpora, but
we show that augmenting a large corpus with coupling constraints from even a
small, well-structured corpus can improve performance substantially, doubling
F1 on one task.
| 2,016 | Computation and Language |
Multi-Source Neural Translation | We build a multi-source machine translation model and train it to maximize
the probability of a target English string given French and German sources.
Using the neural encoder-decoder framework, we explore several combination
methods and report up to +4.8 Bleu increases on top of a very strong
attention-based neural translation model.
| 2,016 | Computation and Language |
End-to-End Relation Extraction using LSTMs on Sequences and Tree
Structures | We present a novel end-to-end neural model to extract entities and relations
between them. Our recurrent neural network based model captures both word
sequence and dependency tree substructure information by stacking bidirectional
tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows
our model to jointly represent both entities and relations with shared
parameters in a single model. We further encourage detection of entities during
training and use of entity information in relation extraction via entity
pretraining and scheduled sampling. Our model improves over the
state-of-the-art feature-based model on end-to-end relation extraction,
achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and
ACE2004, respectively. We also show that our LSTM-RNN based model compares
favorably to the state-of-the-art CNN based model (in F1-score) on nominal
relation classification (SemEval-2010 Task 8). Finally, we present an extensive
ablation analysis of several model components.
| 2,016 | Computation and Language |
The Role of Context Types and Dimensionality in Learning Word Embeddings | We provide the first extensive evaluation of how using different types of
context to learn skip-gram word embeddings affects performance on a wide range
of intrinsic and extrinsic NLP tasks. Our results suggest that while intrinsic
tasks tend to exhibit a clear preference for particular types of contexts and
higher dimensionality, more careful tuning is required for finding the optimal
settings for most of the extrinsic tasks that we considered. Furthermore, for
these extrinsic tasks, we find that once the benefit from increasing the
embedding dimensionality is mostly exhausted, simple concatenation of word
embeddings, learned with different context types, can yield further performance
gains. As an additional contribution, we propose a new variant of the skip-gram
model that learns word embeddings from weighted contexts of substitute words.
| 2,017 | Computation and Language |
Multi-Way, Multilingual Neural Machine Translation with a Shared
Attention Mechanism | We propose multi-way, multilingual neural machine translation. The proposed
approach enables a single neural translation model to translate between
multiple languages, with a number of parameters that grows only linearly with
the number of languages. This is made possible by having a single attention
mechanism that is shared across all language pairs. We train the proposed
multi-way, multilingual model on ten language pairs from WMT'15 simultaneously
and observe clear performance improvements over models trained on only one
language pair. In particular, we observe that the proposed model significantly
improves the translation quality of low-resource language pairs.
| 2,016 | Computation and Language |
Incorporating Structural Alignment Biases into an Attentional Neural
Translation Model | Neural encoder-decoder models of machine translation have achieved impressive
results, rivalling traditional translation models. However, their modelling
formulation is overly simplistic, and omits several key inductive biases built
into traditional models. In this paper we extend the attentional neural
translation model to include structural biases from word based alignment
models, including positional bias, Markov conditioning, fertility and agreement
over translation directions. We show improvements over a baseline attentional
model and standard phrase-based model over several language pairs, evaluating
on difficult languages in a low resource setting.
| 2,016 | Computation and Language |
Part-of-Speech Tagging for Code-mixed Indian Social Media Text at ICON
2015 | This paper discusses the experiments carried out by us at Jadavpur University
as part of the participation in ICON 2015 task: POS Tagging for Code-mixed
Indian Social Media Text. The tool that we have developed for the task is based
on a Trigram Hidden Markov Model that utilizes information from a dictionary as
well as some other word level features to enhance the observation probabilities
of the known tokens as well as unknown tokens. We submitted runs for
Bengali-English, Hindi-English and Tamil-English Language pairs. Our system has
been trained and tested on the datasets released for ICON 2015 shared task: POS
Tagging For Code-mixed Indian Social Media Text. In constrained mode, our
system obtains an average overall accuracy (averaged over all three language
pairs) of 75.60%, which is very close to the two other participating systems (76.79%
for IIITH and 75.79% for AMRITA_CEN) that ranked higher than our system. In
unconstrained mode, our system obtains an average overall accuracy of 70.65%, which
is also close to the system (72.85% for AMRITA_CEN) that obtains the highest
average overall accuracy.
| 2,016 | Computation and Language |
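The decoder behind an HMM tagger like the one above is the Viterbi algorithm; the sketch below shows the first-order (bigram) recursion with made-up probabilities, whereas the system described uses a trigram HMM (states over tag pairs) enriched with dictionary and word-level features.

```python
# Viterbi decoding for an HMM tagger. For brevity this is the first-order
# (bigram) recursion; a trigram model uses tag-pair states instead.
# Transition/emission probabilities here are made up for illustration.
import math

def viterbi(words, tags, start_p, trans_p, emit_p, unk=1e-6):
    V = [{t: (math.log(start_p.get(t, unk)) +
              math.log(emit_p[t].get(words[0], unk)), [t]) for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            best_prev = max(tags, key=lambda p: V[-1][p][0] +
                            math.log(trans_p[p].get(t, unk)))
            score = (V[-1][best_prev][0] +
                     math.log(trans_p[best_prev].get(t, unk)) +
                     math.log(emit_p[t].get(w, unk)))
            row[t] = (score, V[-1][best_prev][1] + [t])
        V.append(row)
    return max(V[-1].values())[1]

if __name__ == "__main__":
    tags = ["EN", "HI"]                       # e.g. language/POS labels
    start = {"EN": 0.6, "HI": 0.4}
    trans = {"EN": {"EN": 0.7, "HI": 0.3}, "HI": {"EN": 0.3, "HI": 0.7}}
    emit = {"EN": {"good": 0.5, "movie": 0.4},
            "HI": {"bahut": 0.6, "accha": 0.3}}
    print(viterbi("movie bahut accha".split(), tags, start, trans, emit))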
Recurrent Memory Networks for Language Modeling | Recurrent Neural Networks (RNN) have obtained excellent results in many
natural language processing (NLP) tasks. However, understanding and
interpreting the source of this success remains a challenge. In this paper, we
propose Recurrent Memory Network (RMN), a novel RNN architecture, that not only
amplifies the power of RNN but also facilitates our understanding of its
internal functioning and allows us to discover underlying patterns in data. We
demonstrate the power of RMN on language modeling and sentence completion
tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM)
networks on three large German, Italian, and English datasets. Additionally, we
perform in-depth analysis of various linguistic dimensions that RMN captures.
On Sentence Completion Challenge, for which it is essential to capture sentence
coherence, our RMN obtains 69.2% accuracy, surpassing the previous
state-of-the-art by a large margin.
| 2,016 | Computation and Language |
Language to Logical Form with Neural Attention | Semantic parsing aims at mapping natural language to machine interpretable
meaning representations. Traditional approaches rely on high-quality lexicons,
manually-built templates, and linguistic features which are either domain- or
representation-specific. In this paper we present a general method based on an
attention-enhanced encoder-decoder model. We encode input utterances into
vector representations, and generate their logical forms by conditioning the
output sequences or trees on the encoding vectors. Experimental results on four
datasets show that our approach performs competitively without using
hand-engineered features and is easy to adapt across domains and meaning
representations.
| 2,016 | Computation and Language |
Joint Learning of the Embedding of Words and Entities for Named Entity
Disambiguation | Named Entity Disambiguation (NED) refers to the task of resolving multiple
named entity mentions in a document to their correct references in a knowledge
base (KB) (e.g., Wikipedia). In this paper, we propose a novel embedding method
specifically designed for NED. The proposed method jointly maps words and
entities into the same continuous vector space. We extend the skip-gram model
by using two models. The KB graph model learns the relatedness of entities
using the link structure of the KB, whereas the anchor context model aims to
align vectors such that similar words and entities occur close to one another
in the vector space by leveraging KB anchors and their context words. By
combining contexts based on the proposed embedding with standard NED features,
we achieved state-of-the-art accuracy of 93.1% on the standard CoNLL dataset
and 85.2% on the TAC 2010 dataset.
| 2,016 | Computation and Language |
Leveraging Sentence-level Information with Encoder LSTM for Semantic
Slot Filling | Recurrent Neural Network (RNN) and one of its specific architectures, Long
Short-Term Memory (LSTM), have been widely used for sequence labeling. In this
paper, we first enhance LSTM-based sequence labeling to explicitly model label
dependencies. Then we propose another enhancement to incorporate the global
information spanning over the whole input sequence. The latter proposed method,
encoder-labeler LSTM, first encodes the whole input sequence into a fixed
length vector with the encoder LSTM, and then uses this encoded vector as the
initial state of another LSTM for sequence labeling. Combining these methods,
we can predict the label sequence while considering both label dependencies and
information from the whole input sequence. In experiments on a slot filling task,
which is an essential component of natural language understanding, using
the standard ATIS corpus, we achieved a state-of-the-art F1-score of 95.66%.
| 2,016 | Computation and Language |
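A compact PyTorch sketch of the encoder-labeler idea from the abstract above: an encoder LSTM reads the whole utterance and its final state initialises the labeler LSTM that tags each token. Layer sizes and the random inputs are illustrative, and the explicit label-dependency modelling is omitted.

```python
# Sketch of the encoder-labeler idea: an encoder LSTM reads the whole utterance
# and its final state initialises the labeler LSTM that emits one slot label
# per token. Sizes and the random data are illustrative only.
import torch
import torch.nn as nn

class EncoderLabeler(nn.Module):
    def __init__(self, vocab_size, n_labels, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.labeler = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_labels)

    def forward(self, tokens):
        x = self.embed(tokens)                 # (batch, seq, emb)
        _, (h, c) = self.encoder(x)            # encode the whole sequence
        y, _ = self.labeler(x, (h, c))         # encoded state as initial state
        return self.out(y)                     # (batch, seq, n_labels)

if __name__ == "__main__":
    model = EncoderLabeler(vocab_size=100, n_labels=10)
    tokens = torch.randint(0, 100, (2, 7))     # batch of 2 utterances
    logits = model(tokens)
    print(logits.shape)                        # torch.Size([2, 7, 10])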
Learning to Compose Neural Networks for Question Answering | We describe a question answering model that applies to both images and
structured knowledge bases. The model uses natural language strings to
automatically assemble neural networks from a collection of composable modules.
Parameters for these modules are learned jointly with network-assembly
parameters via reinforcement learning, with only (world, question, answer)
triples as supervision. Our approach, which we term a dynamic neural model
network, achieves state-of-the-art results on benchmark datasets in both visual
and structured domains.
| 2,016 | Computation and Language |
Research Project: Text Engineering Tool for Ontological Scientometry | The number of scientific papers grows exponentially in many disciplines. The
share of papers available online grows as well. At the same time, the period of
time before a paper loses its chance of ever being cited shortens. The decay of
the citing rate shows similarity to ultradiffusional processes as for other
online contents in social networks. The distribution of papers per author shows
similarity to the distribution of posts per user in social networks. The rate
of uncited papers among those available online grows, while some papers 'go
viral' in terms of being cited. In summary, the practice of scientific
publishing moves towards the domain of social networks. The goal of this
project is to create a text engineering tool, which can semi-automatically
categorize a paper according to its type of contribution and extract
relationships between them into an ontological database. Semi-automatic
categorization means that the mistakes made by automatic pre-categorization and
relationship-extraction will be corrected through a wikipedia-like front-end by
volunteers from general public. This tool should not only help researchers and
the general public to find relevant supplementary material and peers faster,
but also provide more information for research funding agencies.
| 2,016 | Computation and Language |
Empirical Gaussian priors for cross-lingual transfer learning | Sequence model learning algorithms typically maximize log-likelihood minus
the norm of the model (or minimize Hamming loss + norm). In cross-lingual
part-of-speech (POS) tagging, our target language training data consists of
sequences of sentences with word-by-word labels projected from translations in
$k$ languages for which we have labeled data, via word alignments. Our training
data is therefore very noisy, and if Rademacher complexity is high, learning
algorithms are prone to overfit. Norm-based regularization assumes a constant
width and zero mean prior. We instead propose to use the $k$ source language
models to estimate the parameters of a Gaussian prior for learning new POS
taggers. This leads to significantly better performance in multi-source
transfer set-ups. We also present a drop-out version that injects (empirical)
Gaussian noise during online learning. Finally, we note that using empirical
Gaussian priors leads to much lower Rademacher complexity, and is superior to
optimally weighted model interpolation.
| 2,016 | Computation and Language |
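A sketch of the empirical Gaussian prior described above: per-parameter means and variances are estimated from the k source-language models, and a new tagger is penalised for deviating from that mean rather than from zero. The random "source models" below stand in for trained POS taggers.

```python
# Sketch of an empirical Gaussian prior: estimate a per-parameter mean and
# variance from k source models, then penalise a new model's weights for
# deviating from that mean (instead of the usual zero-mean L2 penalty).
import numpy as np

def empirical_gaussian_prior(source_weights):
    """source_weights: array of shape (k, n_params) from k source models."""
    mu = source_weights.mean(axis=0)
    sigma2 = source_weights.var(axis=0) + 1e-8     # avoid division by zero
    return mu, sigma2

def prior_penalty(w, mu, sigma2):
    # Negative log of the Gaussian prior, up to an additive constant.
    return 0.5 * np.sum((w - mu) ** 2 / sigma2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k, n_params = 5, 1000
    source = rng.normal(loc=0.3, scale=0.2, size=(k, n_params))
    mu, sigma2 = empirical_gaussian_prior(source)

    w_near = mu + 0.01 * rng.normal(size=n_params)
    w_zero = np.zeros(n_params)                    # what a zero-mean prior prefers
    print("penalty near the prior mean:", round(prior_penalty(w_near, mu, sigma2), 2))
    print("penalty at zero            :", round(prior_penalty(w_zero, mu, sigma2), 2))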
Argumentation Mining in User-Generated Web Discourse | The goal of argumentation mining, an evolving research field in computational
linguistics, is to design methods capable of analyzing people's argumentation.
In this article, we go beyond the state of the art in several ways. (i) We deal
with actual Web data and take up the challenges given by the variety of
registers, multiple domains, and unrestricted noisy user-generated Web
discourse. (ii) We bridge the gap between normative argumentation theories and
argumentation phenomena encountered in actual data by adapting an argumentation
model tested in an extensive annotation study. (iii) We create a new gold
standard corpus (90k tokens in 340 documents) and experiment with several
machine learning methods to identify argument components. We offer the data,
source codes, and annotation guidelines to the community under free licenses.
Our findings show that argumentation mining in user-generated Web discourse is
a feasible but challenging task.
| 2,017 | Computation and Language |
The Effects of Age, Gender and Region on Non-standard Linguistic
Variation in Online Social Networks | We present a corpus-based analysis of the effects of age, gender and region
of origin on the production of both "netspeak" or "chatspeak" features and
regional speech features in Flemish Dutch posts that were collected from a
Belgian online social network platform. The present study shows that combining
quantitative and qualitative approaches is essential for understanding
non-standard linguistic variation in a CMC corpus. It also presents a
methodology that enables the systematic study of this variation by including
all non-standard words in the corpus. The analyses resulted in a convincing
illustration of the Adolescent Peak Principle. In addition, our approach
revealed an intriguing correlation between the use of regional speech features
and chatspeak features.
| 2,016 | Computation and Language |
Trans-gram, Fast Cross-lingual Word-embeddings | We introduce Trans-gram, a simple and computationally-efficient method to
simultaneously learn and align word embeddings for a variety of languages, using
only monolingual data and a smaller set of sentence-aligned data. We use our
new method to compute aligned word embeddings for twenty-one languages using
English as a pivot language. We show that some linguistic features are aligned
across languages for which we do not have aligned data, even though those
properties do not exist in the pivot language. We also achieve state-of-the-art
results on standard cross-lingual text classification and word translation
tasks.
| 2,016 | Computation and Language |
Investigating gated recurrent neural networks for speech synthesis | Recently, recurrent neural networks (RNNs) as powerful sequence models have
re-emerged as a potential acoustic model for statistical parametric speech
synthesis (SPSS). The long short-term memory (LSTM) architecture is
particularly attractive because it addresses the vanishing gradient problem in
standard RNNs, making them easier to train. Although recent studies have
demonstrated that LSTMs can achieve significantly better performance on SPSS
than deep feed-forward neural networks, little is known about why. Here we
attempt to answer two questions: a) why do LSTMs work well as a sequence model
for SPSS; b) which component (e.g., input gate, output gate, forget gate) is
most important. We present a visual analysis alongside a series of experiments,
resulting in a proposal for a simplified architecture. The simplified
architecture has significantly fewer parameters than an LSTM, thus reducing
generation complexity considerably without degrading quality.
| 2,016 | Computation and Language |
Evaluating the Performance of a Speech Recognition based System | Speech based solutions have taken center stage with growth in the services
industry where there is a need to cater to a very large number of people from
all strata of the society. While natural language speech interfaces are the
talk in the research community, yet in practice, menu based speech solutions
thrive. Typically in a menu based speech solution the user is required to
respond by speaking from a closed set of words when prompted by the system. A
sequence of human speech response to the IVR prompts results in the completion
of a transaction. A transaction is deemed successful if the speech solution can
correctly recognize all the spoken utterances of the user whenever prompted by
the system. The usual mechanism to evaluate the performance of a speech
solution is to test the system extensively by putting it to actual use by
people and then evaluating the performance by analyzing the logs for
successful transactions. This kind of evaluation could lead to dissatisfied
test users especially if the performance of the system were to result in a poor
transaction completion rate. To negate this the Wizard of Oz approach is
adopted during evaluation of a speech system. Overall, this kind of evaluation
is an expensive proposition in terms of both time and cost. In this paper, we
propose a method to evaluate the performance of a speech solution without
actually putting it to use by people. We first describe the methodology and then
show experimentally that this can be used to identify the performance
bottlenecks of the speech solution even before the system is actually used, thus
saving evaluation time and expenses.
| 2,011 | Computation and Language |
Environmental Noise Embeddings for Robust Speech Recognition | We propose a novel deep neural network architecture for speech recognition
that explicitly employs knowledge of the background environmental noise within
a deep neural network acoustic model. A deep neural network is used to predict
the acoustic environment in which the system is being used. The discriminative
embedding generated at the bottleneck layer of this network is then
concatenated with traditional acoustic features as input to a deep neural
network acoustic model. Through a series of experiments on Resource Management,
CHiME-3 task, and Aurora4, we show that the proposed approach significantly
improves speech recognition accuracy in noisy and highly reverberant
environments, outperforming multi-condition training, noise-aware training,
i-vector framework, and multi-task learning on both in-domain noise and unseen
noise.
| 2,016 | Computation and Language |
Comparison and Adaptation of Automatic Evaluation Metrics for Quality
Assessment of Re-Speaking | Re-speaking is a mechanism for obtaining high quality subtitles for use in
live broadcast and other public events. Because it relies on humans performing
the actual re-speaking, the task of estimating the quality of the results is
non-trivial. Most organisations rely on humans to perform the actual quality
assessment, but purely automatic methods have been developed for other similar
problems, like Machine Translation. This paper will try to compare several of
these methods: BLEU, EBLEU, NIST, METEOR, METEOR-PL, TER and RIBES. These will
then be matched to the human-derived NER metric, commonly used in re-speaking.
| 2,016 | Computation and Language |
Learning Hidden Unit Contributions for Unsupervised Acoustic Model
Adaptation | This work presents a broad study on the adaptation of neural network acoustic
models by means of learning hidden unit contributions (LHUC) -- a method that
linearly re-combines hidden units in a speaker- or environment-dependent manner
using small amounts of unsupervised adaptation data. We also extend LHUC to a
speaker adaptive training (SAT) framework that leads to a more adaptable DNN
acoustic model, working both in a speaker-dependent and a speaker-independent
manner, without the requirements to maintain auxiliary speaker-dependent
feature extractors or to introduce significant speaker-dependent changes to the
DNN structure. Through a series of experiments on four different speech
recognition benchmarks (TED talks, Switchboard, AMI meetings, and Aurora4)
comprising 270 test speakers, we show that LHUC in both its test-only and SAT
variants results in consistent word error rate reductions ranging from 5% to
23% relative depending on the task and the degree of mismatch between training
and test data. In addition, we have investigated the effect of the amount of
adaptation data per speaker, the quality of unsupervised adaptation targets,
the complementarity to other adaptation techniques, one-shot adaptation, and an
extension to adapting DNNs trained in a sequence discriminative manner.
| 2,016 | Computation and Language |
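LHUC itself is simple to sketch: each hidden unit's output is rescaled by a speaker-dependent amplitude and only those amplitudes are estimated from adaptation data. The a = 2*sigmoid(r) parameterisation is a commonly used form and assumed here; the toy network and the finite-difference "adaptation" loop are illustrations, not the DNN training used in the paper.

```python
# Sketch of LHUC at test time: each hidden unit's output is rescaled by a
# speaker-dependent amplitude a = 2*sigmoid(r) (an assumed, commonly used form),
# and only the small vector r is estimated on adaptation data.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

W = rng.normal(scale=0.5, size=(20, 10))    # speaker-independent hidden layer
V = rng.normal(scale=0.5, size=(3, 20))     # speaker-independent output layer

def forward(x, r):
    h = np.tanh(W @ x)
    h = 2.0 * sigmoid(r) * h                # LHUC: re-scale hidden unit outputs
    return V @ h

def adapt(xs, ys, steps=200, lr=0.005):
    """Estimate r on a handful of adaptation examples (crude numerical gradient)."""
    r = np.zeros(20)                        # amplitudes equal 1 before adaptation
    loss = lambda r: sum(np.sum((forward(x, r) - y) ** 2) for x, y in zip(xs, ys))
    for _ in range(steps):
        grad = np.zeros_like(r)
        for i in range(len(r)):
            e = np.zeros_like(r); e[i] = 1e-4
            grad[i] = (loss(r + e) - loss(r - e)) / 2e-4
        r -= lr * grad
    return r

if __name__ == "__main__":
    xs = [rng.normal(size=10) for _ in range(5)]    # toy adaptation data
    ys = [rng.normal(size=3) for _ in range(5)]
    r = adapt(xs, ys)
    print("adapted amplitudes:", np.round(2.0 * sigmoid(r), 2)[:5])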
The scarcity of crossing dependencies: a direct outcome of a specific
constraint? | The structure of a sentence can be represented as a network where vertices
are words and edges indicate syntactic dependencies. Interestingly, crossing
syntactic dependencies have been observed to be infrequent in human languages.
This leads to the question of whether the scarcity of crossings in languages
arises from an independent and specific constraint on crossings. We provide
statistical evidence suggesting that this is not the case, as the proportion of
dependency crossings of sentences from a wide range of languages can be
accurately estimated by a simple predictor based on a null hypothesis on the
local probability that two dependencies cross given their lengths. The relative
error of this predictor never exceeds 5% on average, whereas the error of a
baseline predictor assuming a random ordering of the words of a sentence is at
least 6 times greater. Our results suggest that the low frequency of crossings
in natural languages is neither originated by hidden knowledge of language nor
by the undesirability of crossings per se, but as a mere side effect of the
principle of dependency length minimization.
| 2,017 | Computation and Language |
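The quantity under study above, the number of crossing dependencies in a sentence, can be computed directly from the set of dependency edges. The example trees below are made up; the paper's predictor additionally models the probability of a crossing given the two dependency lengths.

```python
# Counting crossing dependencies: edges are (head, dependent) pairs over word
# positions; two edges cross iff exactly one endpoint of one edge lies strictly
# between the endpoints of the other. The example trees are made up.
from itertools import combinations

def crossings(edges):
    def cross(e1, e2):
        (a, b), (c, d) = sorted(e1), sorted(e2)
        return (a < c < b < d) or (c < a < d < b)
    return sum(cross(e1, e2) for e1, e2 in combinations(edges, 2))

if __name__ == "__main__":
    # Word positions 1..5; position 0 is the artificial root.
    projective = [(2, 1), (0, 2), (2, 4), (4, 3), (4, 5)]
    non_projective = [(2, 1), (0, 2), (2, 5), (5, 3), (1, 4)]
    print(crossings(projective))       # 0
    print(crossings(non_projective))   # > 0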
Predicting the Effectiveness of Self-Training: Application to Sentiment
Classification | The goal of this paper is to investigate the connection between the
performance gain that can be obtained by self-training and the similarity
between the corpora used in this approach. Self-training is a semi-supervised
technique designed to increase the performance of machine learning algorithms
by automatically classifying instances of a task and adding these as additional
training material to the same classifier. In the context of language processing
tasks, this training material is mostly an (annotated) corpus. Unfortunately
self-training does not always lead to a performance increase and whether it
will is largely unpredictable. We show that the similarity between corpora can
be used to identify those setups for which self-training can be beneficial. We
consider this research as a step in the process of developing a classifier that
is able to adapt itself to each new test corpus that it is presented with.
| 2,016 | Computation and Language |
Political Speech Generation | In this report we present a system that can generate political speeches for a
desired political party. Furthermore, the system allows to specify whether a
speech should hold a supportive or opposing opinion. The system relies on a
combination of several state-of-the-art NLP methods which are discussed in this
report. These include n-grams, Justeson & Katz POS tag filter, recurrent neural
networks, and latent Dirichlet allocation. Sequences of words are generated
based on probabilities obtained from two underlying models: A language model
takes care of the grammatical correctness while a topic model aims for textual
consistency. Both models were trained on the Convote dataset which contains
transcripts from US congressional floor debates. Furthermore, we present a
manual and an automated approach to evaluate the quality of generated speeches.
In an experimental evaluation, generated speeches have shown very high quality
in terms of grammatical correctness and sentence transitions.
| 2,016 | Computation and Language |
Implicit Distortion and Fertility Models for Attention-based
Encoder-Decoder NMT Model | Neural machine translation has shown very promising results lately. Most NMT
models follow the encoder-decoder framework. To make encoder-decoder models
more flexible, attention mechanism was introduced to machine translation and
also other tasks like speech recognition and image captioning. We observe that
the quality of translation by attention-based encoder-decoder can be
significantly damaged when the alignment is incorrect. We attribute these
problems to the lack of distortion and fertility models. Aiming to resolve
these problems, we propose new variations of attention-based encoder-decoder
and compare them with other models on machine translation. Our proposed method
achieved an improvement of 2 BLEU points over the original attention-based
encoder-decoder.
| 2,016 | Computation and Language |
EvoGrader: an online formative assessment tool for automatically
evaluating written evolutionary explanations | EvoGrader is a free, online, on-demand formative assessment service designed
for use in undergraduate biology classrooms. EvoGrader's web portal is powered
by Amazon's Elastic Cloud and run with LightSIDE Lab's open-source
machine-learning tools. The EvoGrader web portal allows biology instructors to
upload a response file (.csv) containing unlimited numbers of evolutionary
explanations written in response to 86 different ACORNS (Assessing COntextual
Reasoning about Natural Selection) instrument items. The system automatically
analyzes the responses and provides detailed information about the scientific
and naive concepts contained within each student's response, as well as overall
student (and sample) reasoning model types. Graphs and visual models provided
by EvoGrader summarize class-level responses; downloadable files of raw scores
(in .csv format) are also provided for more detailed analyses. Although the
computational machinery that EvoGrader employs is complex, using the system is
easy. Users only need to know how to use spreadsheets to organize student
responses, upload files to the web, and use a web browser. A series of
experiments using new samples of 2,200 written evolutionary explanations
demonstrate that EvoGrader scores are comparable to those of trained human
raters, although EvoGrader scoring takes 99% less time and is free. EvoGrader
will be of interest to biology instructors teaching large classes who seek to
emphasize scientific practices such as generating scientific explanations, and
to teach crosscutting ideas such as evolution and natural selection. The
software architecture of EvoGrader is described as it may serve as a template
for developing machine-learning portals for other core concepts within biology
and across other disciplines.
| 2,014 | Computation and Language |
Smoothing parameter estimation framework for IBM word alignment models | IBM models are very important word alignment models in Machine Translation.
Following the Maximum Likelihood Estimation principle to estimate their
parameters, the models will easily overfit the training data when the data are
sparse. While smoothing is a very popular solution in language modeling, there
still lacks studies on smoothing for word alignment. In this paper, we propose
a framework which generalizes the notable work Moore [2004] of applying
additive smoothing to word alignment models. The framework allows developers to
customize the smoothing amount for each word pair. The added amount will be
scaled appropriately by a common factor which reflects how much the framework
trusts the adding strategy according to the performance on data. We also
carefully examine various performance criteria and propose a smoothed version
of the error count, which generally gives the best result.
| 2,016 | Computation and Language |
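A sketch of additive smoothing inside IBM Model 1, in the spirit of the Moore (2004) approach that the abstract generalizes: a constant delta is added to every expected word-pair count before renormalisation. The tiny bitext and delta value are illustrative; the proposed framework further customises the added amount per word pair and scales it by a common trust factor.

```python
# IBM Model 1 EM with additive (add-delta) smoothing of the expected counts.
# The tiny corpus and delta value are illustrative choices.
from collections import defaultdict

def ibm1(bitext, iterations=10, delta=0.1):
    f_vocab = {f for fs, _ in bitext for f in fs}
    e_vocab = {e for _, es in bitext for e in es}
    t = defaultdict(lambda: 1.0 / len(f_vocab))        # t(f|e), uniform init
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for fs, es in bitext:
            for f in fs:                                # E-step: expected counts
                z = sum(t[(f, e)] for e in es)
                for e in es:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        for e in e_vocab:                               # M-step with smoothing
            denom = total[e] + delta * len(f_vocab)
            for f in f_vocab:
                t[(f, e)] = (count[(f, e)] + delta) / denom
    return t

if __name__ == "__main__":
    bitext = [("das haus".split(), "the house".split()),
              ("das buch".split(), "the book".split()),
              ("ein buch".split(), "a book".split())]
    t = ibm1(bitext)
    print(round(t[("das", "the")], 3), round(t[("haus", "house")], 3))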
Improved Relation Classification by Deep Recurrent Neural Networks with
Data Augmentation | Nowadays, neural networks play an important role in the task of relation
classification. By designing different neural architectures, researchers have
improved the performance to a large extent in comparison with traditional
methods. However, existing neural networks for relation classification are
usually of shallow architectures (e.g., one-layer convolutional neural networks
or recurrent networks). They may fail to explore the potential representation
space in different abstraction levels. In this paper, we propose deep recurrent
neural networks (DRNNs) for relation classification to tackle this challenge.
Further, we propose a data augmentation method by leveraging the directionality
of relations. We evaluate our DRNNs on the SemEval-2010 Task 8, and achieve an
F1-score of 86.1%, outperforming previous state-of-the-art recorded results.
| 2,016 | Computation and Language |
Linear Algebraic Structure of Word Senses, with Applications to Polysemy | Word embeddings are ubiquitous in NLP and information retrieval, but it is
unclear what they represent when the word is polysemous. Here it is shown that
multiple word senses reside in linear superposition within the word embedding
and simple sparse coding can recover vectors that approximately capture the
senses. The success of our approach, which applies to several embedding
methods, is mathematically explained using a variant of the random walk on
discourses model (Arora et al., 2016). A novel aspect of our technique is that
each extracted word sense is accompanied by one of about 2000 "discourse atoms"
that gives a succinct description of which other words co-occur with that word
sense. Discourse atoms can be of independent interest, and make the method
potentially more useful. Empirical tests are used to verify and support the
theory.
| 2,018 | Computation and Language |
Towards Turkish ASR: Anatomy of a rule-based Turkish g2p | This paper describes the architecture and implementation of a rule-based
grapheme to phoneme converter for Turkish. The system accepts surface form as
input, outputs the SAMPA mapping of all parallel pronunciations according to
the morphological analysis together with stress positions. The system has been
implemented in Python.
| 2,016 | Computation and Language |
Automatic Description Generation from Images: A Survey of Models,
Datasets, and Evaluation Measures | Automatic description generation from natural images is a challenging problem
that has recently received a large amount of interest from the computer vision
and natural language processing communities. In this survey, we classify the
existing approaches based on how they conceptualize this problem, viz., models
that cast description as either generation problem or as a retrieval problem
over a visual or multimodal representational space. We provide a detailed
review of existing models, highlighting their advantages and disadvantages.
Moreover, we give an overview of the benchmark image datasets and the
evaluation measures that have been developed to assess the quality of
machine-generated image descriptions. Finally we extrapolate future directions
in the area of automatic image description generation.
| 2,017 | Computation and Language |
Multimodal Pivots for Image Caption Translation | We present an approach to improve statistical machine translation of image
descriptions by multimodal pivots defined in visual space. The key idea is to
perform image retrieval over a database of images that are captioned in the
target language, and use the captions of the most similar images for
crosslingual reranking of translation outputs. Our approach does not depend on
the availability of large amounts of in-domain parallel data, but only relies
on available large datasets of monolingually captioned images, and on
state-of-the-art convolutional neural networks to compute image similarities.
Our experimental evaluation shows improvements of 1 BLEU point over strong
baselines.
| 2,021 | Computation and Language |
Detecting and Extracting Events from Text Documents | Events of various kinds are mentioned and discussed in text documents,
whether they are books, news articles, blogs or microblog feeds. The paper
starts by giving an overview of how events are treated in linguistics and
philosophy. We follow this discussion by surveying how events and associated
information are handled computationally. In particular, we look at how
textual documents can be mined to extract events and ancillary information.
These days, it is mostly through the application of various machine learning
techniques. We also discuss applications of event detection and extraction
systems, particularly in summarization, in the medical domain and in the
context of Twitter posts. We end the paper with a discussion of challenges and
future directions.
| 2,016 | Computation and Language |
Bandit Structured Prediction for Learning from Partial Feedback in
Statistical Machine Translation | We present an approach to structured prediction from bandit feedback, called
Bandit Structured Prediction, where only the value of a task loss function at a
single predicted point, instead of a correct structure, is observed in
learning. We present an application to discriminative reranking in Statistical
Machine Translation (SMT) where the learning algorithm only has access to a
1-BLEU loss evaluation of a predicted translation instead of obtaining a gold
standard reference translation. In our experiment bandit feedback is obtained
by evaluating BLEU on reference translations without revealing them to the
algorithm. This can be thought of as a simulation of interactive machine
translation where an SMT system is personalized by a user who provides single
point feedback to predicted translations. Our experiments show that our
approach improves translation quality and is comparable to approaches that
employ more informative feedback in learning.
| 2,016 | Computation and Language |
Nonparametric Bayesian Storyline Detection from Microtexts | News events and social media are composed of evolving storylines, which
capture public attention for a limited period of time. Identifying storylines
requires integrating temporal and linguistic information, and prior work takes
a largely heuristic approach. We present a novel online non-parametric Bayesian
framework for storyline detection, using the distance-dependent Chinese
Restaurant Process (dd-CRP). To ensure efficient linear-time inference, we
employ a fixed-lag Gibbs sampling procedure, which is novel for the dd-CRP. We
evaluate on the TREC Twitter Timeline Generation (TTG), obtaining encouraging
results: despite using a weak baseline retrieval model, the dd-CRP story
clustering method is competitive with the best entries in the 2014 TTG task.
| 2,016 | Computation and Language |
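The dd-CRP prior used above can be sketched as a link-sampling process: each post links to an earlier post with probability proportional to a decay function of their time gap (or to itself with weight alpha), and storylines are the connected components of the links. The decay rate, alpha and timestamps below are illustrative, and the textual likelihood term of the full model is omitted.

```python
# Sketch of a distance-dependent CRP prior for storyline clustering: each post
# links to an earlier post with probability proportional to a time-decay
# function (or to itself with weight alpha); storylines are the connected
# components of the link graph. Decay rate, alpha and timestamps are arbitrary.
import math
import random

def sample_links(timestamps, alpha=1.0, decay=0.5, seed=0):
    random.seed(seed)
    links = []
    for i, t_i in enumerate(timestamps):
        candidates = list(range(i)) + [i]          # earlier posts or a self-link
        weights = [math.exp(-decay * (t_i - timestamps[j])) for j in range(i)]
        weights.append(alpha)
        links.append(random.choices(candidates, weights=weights)[0])
    return links

def clusters(links):
    def root(i):
        while links[i] != i:                       # links always point backwards
            i = links[i]
        return i
    groups = {}
    for i in range(len(links)):
        groups.setdefault(root(i), []).append(i)
    return list(groups.values())

if __name__ == "__main__":
    timestamps = [0.0, 0.2, 0.5, 5.0, 5.1, 9.0]    # bursts of posting activity
    links = sample_links(timestamps)
    print("links:", links)
    print("storylines:", clusters(links))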