Titles | Abstracts | Years | Categories
---|---|---|---|
Morphological Analysis of the Bishnupriya Manipuri Language using Finite
State Transducers | In this work we present a morphological analysis of the Bishnupriya Manipuri
language, an Indo-Aryan language spoken in north-eastern India. As of now,
there is no computational work available for the language. Finite state
morphology is one of the successful approaches applied to a wide variety of
languages over the years. We therefore adapt the finite state approach to
analyse the morphology of the Bishnupriya Manipuri language.
| 2014 | Computation and Language |
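A minimal sketch of the finite-state idea behind the entry above, reduced to suffix stripping against a lexicon. The suffixes, tags and lexicon are invented for illustration; a real analyzer for Bishnupriya Manipuri would encode the language's actual morphotactics.

```python
# Toy finite-state-style morphological analyzer: strip a known suffix,
# check that the remaining stem is in the lexicon, and emit (stem, tag).
# Suffix table and lexicon are hypothetical placeholders.
SUFFIXES = {"s": "+PL", "ed": "+PAST", "ing": "+PROG"}

def analyze(word, lexicon):
    """Return all (stem, tag) analyses licensed by the lexicon."""
    analyses = []
    if word in lexicon:
        analyses.append((word, "+BARE"))
    for suffix, tag in SUFFIXES.items():
        if word.endswith(suffix) and word[: -len(suffix)] in lexicon:
            analyses.append((word[: -len(suffix)], tag))
    return analyses

print(analyze("walking", {"walk", "talk"}))  # [('walk', '+PROG')]
```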
Lexicon Infused Phrase Embeddings for Named Entity Resolution | Most state-of-the-art approaches for named-entity recognition (NER) use
semi-supervised information in the form of word clusters and lexicons.
Recently, neural network-based language models have been explored, as they
generate, as a byproduct, highly informative vector representations for words,
known as word embeddings. In this paper we present two contributions: a new form of learning
word embeddings that can leverage information from relevant lexicons to improve
the representations, and the first system to use neural word embeddings to
achieve state-of-the-art results on named-entity recognition in both CoNLL and
OntoNotes NER. Our system achieves an F1 score of 90.90 on the test set for
CoNLL 2003---significantly better than any previous system trained on public
data, and matching a system employing massive private industrial query-log
data.
| 2014 | Computation and Language |
A Structural Query System for Han Characters | The IDSgrep structural query system for Han character dictionaries is
presented. This system includes a data model and syntax for describing the
spatial structure of Han characters using Extended Ideographic Description
Sequences (EIDSes) based on the Unicode IDS syntax; a language for querying
EIDS databases, designed to suit the needs of font developers and foreign
language learners; a bit vector index inspired by Bloom filters for faster
query operations; a freely available implementation; and format translation
from popular third-party IDS and XML character databases. Experimental results
are included, with a comparison to other software used for similar
applications.
| 2015 | Computation and Language |
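The bit-vector index the IDSgrep entry above mentions can be illustrated as a Bloom-filter-style superset test over hashed structural features. This is our own reading with invented component data, not IDSgrep's actual EIDS matching.

```python
# Bit-vector prefilter in the spirit of a Bloom filter: each dictionary
# entry gets a bitmask over hashed structural features; a query can only
# match entries whose mask contains all of the query's bits, so most
# entries are rejected without a full structural comparison.
def mask(features, bits=64):
    m = 0
    for f in features:
        m |= 1 << (hash(f) % bits)
    return m

# Toy database: character -> list of component subcharacters (invented).
db = {"木": ["木"], "林": ["木", "木"], "明": ["日", "月"]}
masks = {ch: mask(parts) for ch, parts in db.items()}

query = ["月"]
qm = mask(query)
candidates = [ch for ch, m in masks.items() if m & qm == qm]
print(candidates)  # ['明'] -- superset filter; exact matching runs afterwards
```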
Reconstructing Native Language Typology from Foreign Language Usage | Linguists and psychologists have long been studying cross-linguistic
transfer, the influence of native language properties on linguistic performance
in a foreign language. In this work we provide empirical evidence for this
process in the form of a strong correlation between language similarities
derived from structural features in English as a Second Language (ESL) texts and
equivalent similarities obtained from the typological features of the native
languages. We leverage this finding to recover native language typological
similarity structure directly from ESL text, and perform prediction of
typological features in an unsupervised fashion with respect to the target
languages. Our method achieves 72.2% accuracy on the typology prediction task,
a result that is highly competitive with equivalent methods that rely on
typological resources.
| 2014 | Computation and Language |
An Account of Opinion Implicatures | While previous sentiment analysis research has concentrated on the
interpretation of explicitly stated opinions and attitudes, this work initiates
the computational study of a type of opinion implicature (i.e.,
opinion-oriented inference) in text. This paper describes a rule-based
framework for representing and analyzing opinion implicatures which we hope
will contribute to deeper automatic interpretation of subjective language. In
the course of understanding implicatures, the system recognizes implicit
sentiments (and beliefs) toward various events and entities in the sentence,
often attributed to different sources (holders) and of mixed polarities; thus,
it produces a richer interpretation than is typical in opinion analysis.
| 2014 | Computation and Language |
A Deep Architecture for Semantic Parsing | Many successful approaches to semantic parsing build on top of the syntactic
analysis of text, and make use of distributional representations or statistical
models to match parses to ontology-specific queries. This paper presents a
novel deep learning architecture which provides a semantic parsing system
through the union of two neural models of language semantics. It allows for the
generation of ontology-specific queries from natural language statements and
questions without the need for parsing, which makes it especially suitable to
grammatically malformed or syntactically atypical text, such as tweets, as well
as permitting the development of semantic parsers for resource-poor languages.
| 2014 | Computation and Language |
Concise comparative summaries (CCS) of large text corpora with a human
experiment | In this paper we propose a general framework for topic-specific summarization
of large text corpora and illustrate how it can be used for the analysis of
news databases. Our framework, concise comparative summarization (CCS), is
built on sparse classification methods. CCS is a lightweight and flexible tool
that offers a compromise between simple word frequency based methods currently
in wide use and more heavyweight, model-intensive methods such as latent
Dirichlet allocation (LDA). We argue that sparse methods have much to offer for
text analysis and hope CCS opens the door for a new branch of research in this
important field. For a particular topic of interest (e.g., China or energy),
CCS automatically labels documents as being either on- or off-topic (usually
via keyword search), and then uses sparse classification methods to predict
these labels from the high-dimensional counts of all the other words and
phrases in the documents. The resulting small set of predictive phrases is
then harvested as the summary. To validate our tool, we designed and conducted
a human survey, using news articles from the New York Times international
section, to compare the different summarizers with human understanding. We
demonstrate our approach with two case studies, a media
analysis of the framing of "Egypt" in the New York Times throughout the Arab
Spring and an informal comparison of the New York Times' and Wall Street
Journal's coverage of "energy." Overall, we find that the Lasso with $L^2$
normalization can be effectively and usefully used to summarize large corpora,
regardless of document size.
| 2014 | Computation and Language |
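The CCS recipe in the entry above reduces to three steps: label documents on/off-topic by keyword, fit a sparse classifier over phrase counts, and harvest the nonzero-weight phrases. A toy sketch with scikit-learn; L1-penalized logistic regression stands in for the paper's Lasso, and the corpus and topic word are placeholders.

```python
# CCS in miniature: keyword-derived on/off-topic labels, then a sparse
# (L1-penalized) classifier over phrase counts; phrases with positive
# nonzero weight become the summary.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["energy prices rose sharply as energy demand grew",
        "the match ended in a draw after extra time",
        "solar energy subsidies expanded under the new budget",
        "voters went to the polls in record numbers"]
topic = "energy"
labels = [int(topic in d) for d in docs]          # on/off-topic by keyword

# Exclude the topic word itself so it cannot trivially predict the label.
vec = CountVectorizer(ngram_range=(1, 2), stop_words=[topic])
X = vec.fit_transform(docs)

clf = LogisticRegression(penalty="l1", solver="liblinear")
clf.fit(X, labels)
summary = [t for t, w in zip(vec.get_feature_names_out(), clf.coef_[0]) if w > 0]
print(summary)   # the harvested comparative summary phrases
```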
Exemplar Dynamics Models of the Stability of Phonological Categories | We develop a model for the stability and maintenance of phonological
categories. Examples of phonological categories are vowel sounds such as "i"
and "e". We model such categories as consisting of collections of labeled
exemplars that language users store in their memory. Each exemplar is a
detailed memory of an instance of the linguistic entity in question. Starting
from an exemplar-level model we derive integro-differential equations for the
long-term evolution of the density of exemplars in different portions of
phonetic space. Using these latter equations we investigate under what
conditions two phonological categories merge or not. Our main conclusion is
that for the preservation of distinct phonological categories, it is necessary
that anomalous speech tokens of a given category are discarded, and not merely
stored in memory as an exemplar of another category.
| 2014 | Computation and Language |
Contextual Semantic Parsing using Crowdsourced Spatial Descriptions | We describe a contextual parser for the Robot Commands Treebank, a new
crowdsourced resource. In contrast to previous semantic parsers that select the
most-probable parse, we consider the different problem of parsing using
additional situational context to disambiguate between different readings of a
sentence. We show that multiple semantic analyses can be searched using dynamic
programming via interaction with a spatial planner, to guide the parsing
process. We are able to parse sentences in near-linear time by ruling out
analyses early on that are incompatible with spatial context. We report a 34%
upper bound on accuracy, as our planner correctly processes spatial context for
3,394 out of 10,000 sentences. However, our parser achieves a 96.53%
exact-match score for parsing within the subset of sentences recognized by the
planner, compared to 82.14% for a non-contextual parser.
| 2014 | Computation and Language |
Extracting Family Relationship Networks from Novels | We present an approach to the extraction of family relations from literary
narrative, which incorporates a technique for utterance attribution proposed
recently by Elson and McKeown (2010). In our work this technique is used in
combination with the detection of vocatives - the explicit forms of address
used by the characters in a novel. We take advantage of the fact that certain
vocatives indicate family relations between speakers. The extracted relations
are then propagated using a set of rules. We report the results of the
application of our method to Jane Austen's Pride and Prejudice.
| 2014 | Computation and Language |
Automated Attribution and Intertextual Analysis | In this work, we employ quantitative methods from the realm of statistics and
machine learning to develop novel methodologies for author attribution and
textual analysis. In particular, we develop techniques and software suitable
for applications to Classical study, and we illustrate the efficacy of our
approach in several interesting open questions in the field. We apply our
numerical analysis techniques to questions of authorship attribution in the
case of the Greek tragedian Euripides, to instances of intertextuality and
influence in the poetry of the Roman statesman Seneca the Younger, and to cases
of "interpolated" text with respect to the histories of Livy.
| 2014 | Computation and Language |
"Translation can't change a name": Using Multilingual Data for Named
Entity Recognition | Named Entities (NEs) are often written with no orthographic changes across
different languages that share a common alphabet. We show that this can be
leveraged so as to improve named entity recognition (NER) by using unsupervised
word clusters from secondary languages as features in state-of-the-art
discriminative NER systems. We observe significant increases in performance,
finding that person and location identification is particularly improved, and
that phylogenetically close languages provide more valuable features than more
distant languages.
| 2014 | Computation and Language |
Learning Bilingual Word Representations by Marginalizing Alignments | We present a probabilistic model that simultaneously learns alignments and
distributed representations for bilingual data. By marginalizing over word
alignments the model captures a larger semantic context than prior work relying
on hard alignments. The advantage of this approach is demonstrated in a
cross-lingual classification task, where we outperform the prior published
state of the art.
| 2014 | Computation and Language |
Automatic Method Of Domain Ontology Construction based on
Characteristics of Corpora POS-Analysis | It is now widely recognized that ontologies are one of the fundamental
cornerstones of knowledge-based systems. What is lacking, however, is an
accepted strategy for how to build an ontology: what kinds of resources and
techniques are indispensable to optimize, on the one hand, the expense and
time and, on the other hand, the breadth, completeness and robustness of the
ontology. The paper offers a semi-automatic ontology construction method for
text corpora in the domain of radiological protection. The method consists of
the following steps: 1) text annotation with part-of-speech tags; 2)
identification of the significant linguistic structures and formation of
templates; 3) search for text fragments corresponding to these templates; 4)
basic ontology instantiation.
| 2014 | Computation and Language |
Latent semantics of action verbs reflect phonetic parameters of
intensity and emotional content | Conjuring up our thoughts, language reflects statistical patterns of word
co-occurrences which in turn come to describe how we perceive the world.
Whether counting how frequently nouns and verbs combine in Google search
queries, or extracting eigenvectors from term document matrices made up of
Wikipedia lines and Shakespeare plots, the resulting latent semantics capture
not only the associative links which form concepts, but also spatial dimensions
embedded within the surface structure of language. As both the shape and
movements of objects have been found to be associated with phonetic contrasts
already in toddlers, this study explores whether articulatory and acoustic
parameters may likewise differentiate the latent semantics of action verbs.
Selecting 3 x 20 emotion, face, and hand related verbs known to activate
premotor areas in the brain, their mutual cosine similarities were computed
using latent semantic analysis (LSA), and the resulting adjacency matrices were
compared based on two different large-scale text corpora: HAWIK and TASA.
Applying hierarchical clustering to identify common structures across the two
text corpora, the verbs largely divide into combined mouth and hand movements
versus emotional expressions. Transforming the verbs into their constituent
phonemes, the clustered small and large size movements appear differentiated by
front versus back vowels corresponding to increasing levels of arousal, whereas
the clustered emotional verbs seem characterized by sequences of close versus
open jaw produced phonemes, generating up- or downward shifts in formant
frequencies that may influence their perceived valence. This suggests that the
latent semantics of action verbs reflect parameters of intensity and emotional
polarity that appear correlated with the articulatory contrasts and acoustic
characteristics of phonemes.
| 2015 | Computation and Language |
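A hedged sketch of the LSA step the entry above relies on: reduce a term-document matrix with truncated SVD and compare word vectors by cosine similarity. The toy corpus below stands in for HAWIK and TASA.

```python
# LSA-style latent semantics: SVD over a term-document matrix, then
# cosine similarity between word vectors (toy data, illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = ["she smiled and laughed", "he grabbed and lifted the box",
        "they frowned and cried", "she lifted and carried the bag"]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)                            # documents x terms
U = TruncatedSVD(n_components=2).fit_transform(X.T)    # terms x k

vocab = vec.get_feature_names_out().tolist()

def sim(w1, w2):
    """Cosine similarity of two words in the latent space."""
    return cosine_similarity(U[vocab.index(w1)].reshape(1, -1),
                             U[vocab.index(w2)].reshape(1, -1))[0, 0]

print(sim("laughed", "cried"), sim("laughed", "lifted"))
```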
D-Bees: A Novel Method Inspired by Bee Colony Optimization for Solving
Word Sense Disambiguation | Word sense disambiguation (WSD) is a problem in the field of computational
linguistics defined as finding the intended sense of a word (or a set of words)
when it is activated within a certain context. WSD was recently addressed as a
combinatorial optimization problem in which the goal is to find a sequence of
senses that maximizes the semantic relatedness among the target words. In this
article, a novel algorithm for solving the WSD problem called D-Bees is
proposed, which is inspired by bee colony optimization (BCO), where artificial
bee agents collaborate to solve the problem. The D-Bees algorithm is evaluated
on a standard dataset (the SemEval 2007 coarse-grained English all-words task
corpus) and is compared to simulated annealing, genetic algorithms, and two
ant colony optimization (ACO) techniques. We observe that the BCO and ACO
approaches are on par.
| 2014 | Computation and Language |
A Corpus of Sentence-level Revisions in Academic Writing: A Step towards
Understanding Statement Strength in Communication | The strength with which a statement is made can have a significant impact on
the audience. For example, international relations can be strained by how the
media in one country describes an event in another; and papers can be rejected
because they overstate or understate their findings. It is thus important to
understand the effects of statement strength. A first step is to be able to
distinguish between strong and weak statements. However, even this problem is
understudied, partly due to a lack of data. Since strength is inherently
relative, revisions of texts that make claims are a natural source of data on
strength differences. In this paper, we introduce a corpus of sentence-level
revisions from academic writing. We also describe insights gained from our
annotation efforts for this task.
| 2014 | Computation and Language |
DepecheMood: a Lexicon for Emotion Analysis from Crowd-Annotated News | While many lexica annotated with word polarity are available for sentiment
analysis, very few tackle the harder task of emotion analysis and are usually
quite limited in coverage. In this paper, we present a novel approach for
extracting - in a totally automated way - a high-coverage and high-precision
lexicon of roughly 37 thousand terms annotated with emotion scores, called
DepecheMood. Our approach exploits in an original way 'crowd-sourced' affective
annotation implicitly provided by readers of news articles from rappler.com. By
providing new state-of-the-art performances in unsupervised settings for
regression and classification tasks, even using a na\"{\i}ve approach, our
experiments show the beneficial impact of harvesting social media data for
affective lexicon building.
| 2014 | Computation and Language |
Initial Comparison of Linguistic Networks Measures for Parallel Texts | This paper presents preliminary results of Croatian syllable networks
analysis. Syllable network is a network in which nodes are syllables and links
between them are constructed according to their connections within words. In
this paper we analyze networks of syllables generated from texts collected from
the Croatian Wikipedia and Blogs. As a main tool we use complex network
analysis methods which provide mechanisms that can reveal new patterns in a
language structure. We aim to show that syllable networks have much higher
clustering coefficient in comparison to Erd\H{o}s-R\'enyi random networks. The
results indicate that Croatian syllable networks exhibit certain properties of
small-world networks. Furthermore, we compared Croatian syllable networks
with Portuguese and Chinese syllable networks and we showed that they have
similar properties.
| 2014 | Computation and Language |
An Expert System for Automatic Reading of A Text Written in Standard
Arabic | In this work we present our expert system for automatic reading, or speech
synthesis, of text written in Standard Arabic. Our work is carried out in two
main stages: the creation of the sound database, and the transformation of the
written text into speech (Text-To-Speech, TTS). This transformation is done
firstly by a Phonetic Orthographical Transcription (POT) of any written
Standard Arabic text, with the aim of transforming it into its corresponding
phonetic sequence, and secondly by the generation of the voice signal which
corresponds to the transcribed string. We lay out the design of the system, as
well as the results obtained compared to other works on TTS for Standard
Arabic.
| 2014 | Computation and Language |
Coordinate System Selection for Minimum Error Rate Training in
Statistical Machine Translation | Minimum error rate training (MERT) is a widely used training procedure for
statistical machine translation. A general problem of this approach is that
the search easily converges to a local optimum and the acquired weight set is
not in accord with the real distribution of feature functions. This paper
introduces coordinate system selection (RSS) into the search algorithm for
MERT. Contrary to previous approaches, in which every dimension corresponds to
only one independent feature function, we create several coordinate systems by
moving one of the dimensions to a new direction. The basic idea is simple but
critical: the training procedure of MERT should be based on a coordinate
system formed by search directions rather than directly on feature functions.
Experiments show that by selecting coordinate systems based on tuning set
results, better results can be obtained without any other language knowledge.
| 2014 | Computation and Language |
Comparison of the language networks from literature and blogs | In this paper we present the comparison of the linguistic networks from
literature and blog texts. The linguistic networks are constructed from texts
as directed and weighted co-occurrence networks of words. Words are nodes and
links are established between two nodes if they are directly co-occurring
within the sentence. The comparison of the network structure is performed at
the global (network) level in terms of average node degree, average shortest
path length, diameter, clustering coefficient, density and number of
components. Furthermore, we perform analysis at the local (node) level by
comparing the rank plots of in- and out-degree, strength and selectivity. The
selectivity-based results point out that there are differences between the
structure of the networks constructed from literature and blogs.
| 2014 | Computation and Language |
A Study of Entanglement in a Categorical Framework of Natural Language | In both quantum mechanics and corpus linguistics based on vector spaces, the
notion of entanglement provides a means for the various subsystems to
communicate with each other. In this paper we examine a number of
implementations of the categorical framework of Coecke, Sadrzadeh and Clark
(2010) for natural language, from an entanglement perspective. Specifically,
our goal is to better understand in what way the level of entanglement of the
relational tensors (or the lack of it) affects the compositional structures in
practical situations. Our findings reveal that a number of proposals for verb
construction lead to almost separable tensors, a fact that considerably
simplifies the interactions between the words. We examine the ramifications of
this fact, and we show that the use of Frobenius algebras mitigates the
potential problems to a great extent. Finally, we briefly examine a machine
learning method that creates verb tensors exhibiting a sufficient level of
entanglement.
| 2014 | Computation and Language |
Phonetic based SoundEx & ShapeEx algorithm for Sindhi Spell Checker
System | This paper presents a novel combinational phonetic algorithm for the Sindhi
language, to be used in developing a Sindhi spell checker, which has not been
developed prior to this work. The compound textual forms and glyphs of the
Sindhi language present a substantial challenge for developing a Sindhi spell
checker system and generating suggestion lists for misspelled words. In order
to implement such a system, phonetic-based Sindhi language rules and patterns
must be taken into account to increase accuracy and efficiency. The proposed
system is developed as a blend of a phonetic-based SoundEx algorithm and a
ShapeEx algorithm for pattern or glyph matching, generating accurate and
efficient suggestion lists for incorrect or misspelled Sindhi words. A table
of phonetically similar-sounding Sindhi characters for the SoundEx algorithm
is also generated, along with another table containing similar glyph- or
shape-based character groups for the ShapeEx algorithm. Both tables are the
first ever attempt at such categorization and representation for the Sindhi
language.
| 2014 | Computation and Language |
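For illustration, here is the classic English SoundEx grouping that the entry above adapts with its own Sindhi character tables; variant spellings that share phonetic groups collide on the same code. The groups below are the standard English ones, not the paper's Sindhi tables.

```python
# Classic SoundEx: map phonetically similar consonants to shared digits,
# drop adjacent duplicates, and pad to a fixed-length code.
GROUPS = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
          **dict.fromkeys("dt", "3"), "l": "4",
          **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(word):
    word = word.lower()
    code = word[0].upper()
    last = GROUPS.get(word[0], "")
    for ch in word[1:]:
        digit = GROUPS.get(ch, "")   # vowels map to "" and reset the run
        if digit and digit != last:
            code += digit
        last = digit
    return (code + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))  # R163 R163 -- same bucket
```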
How to Ask for a Favor: A Case Study on the Success of Altruistic
Requests | Requests are at the core of many social media systems such as question &
answer sites and online philanthropy communities. While the success of such
requests is critical to the success of the community, the factors that lead
community members to satisfy a request are largely unknown. Success of a
request depends on factors like who is asking, how they are asking, when they
are asking, and most critically what is being requested, ranging from small
favors to substantial monetary donations. We present a case study of altruistic
requests in an online community where all requests ask for the very same
contribution and do not offer anything tangible in return, allowing us to
disentangle what is requested from textual and social factors. Drawing from
social psychology literature, we extract high-level social features from text
that operationalize social relations between recipient and donor and
demonstrate that these extracted relations are predictive of success. More
specifically, we find that clearly communicating need through the narrative is
essential and that linguistic indications of gratitude, evidentiality, and
generalized reciprocity, as well as high status of the asker further increase
the likelihood of success. Building on this understanding, we develop a model
that can predict the success of unseen requests, significantly improving over
several baselines. We link these findings to research in psychology on helping
behavior, providing a basis for further analysis of success in social media
systems.
| 2014 | Computation and Language |
Temporal Analysis of Language through Neural Language Models | We provide a method for automatically detecting change in language across
time through a chronologically trained neural language model. We train the
model on the Google Books Ngram corpus to obtain word vector representations
specific to each year, and identify words that have changed significantly from
1900 to 2009. The model identifies words such as "cell" and "gay" as having
changed during that time period. The model simultaneously identifies the
specific years during which such words underwent change.
| 2014 | Computation and Language |
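A sketch of the change-detection step described above, assuming one embedding per word per year is already available; random vectors stand in for the output of the chronologically trained model.

```python
# Detecting semantic change: cosine distance between a word's
# year-specific embeddings (random toy vectors used for illustration).
import numpy as np

rng = np.random.default_rng(0)
WORDS = ["gay", "cell", "the"]
vectors = {year: {w: rng.normal(size=50) for w in WORDS}
           for year in (1900, 2009)}

def drift(word):
    """Cosine distance between the 1900 and 2009 vectors of a word."""
    a, b = vectors[1900][word], vectors[2009][word]
    return 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(sorted(WORDS, key=drift, reverse=True))  # ranked by 1900->2009 drift
```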
Credibility Adjusted Term Frequency: A Supervised Term Weighting Scheme
for Sentiment Analysis and Text Classification | We provide a simple but novel supervised weighting scheme for adjusting term
frequency in tf-idf for sentiment analysis and text classification. We compare
our method to baseline weighting schemes and find that it outperforms them on
multiple benchmarks. The method is robust and works well on both snippets and
longer documents.
| 2014 | Computation and Language |
INAUT, a Controlled Language for the French Coast Pilot Books
Instructions nautiques | We describe INAUT, a controlled natural language dedicated to collaborative
update of a knowledge base on maritime navigation and to automatic generation
of coast pilot books (Instructions nautiques) of the French National
Hydrographic and Oceanographic Service (SHOM). INAUT is based on the French language
and abundantly uses georeferenced entities. After describing the structure of
the overall system, giving details on the language and on its generation, and
discussing the three major applications of INAUT (document production,
interaction with ENCs and collaborative updates of the knowledge base), we
conclude with future extensions and open problems.
| 2014 | Computation and Language |
Complex Networks Measures for Differentiation between Normal and
Shuffled Croatian Texts | This paper studies the properties of the Croatian texts via complex networks.
We present network properties of normal and shuffled Croatian texts for
different shuffling principles: on the sentence level and on the text level. In
both experiments we preserved the vocabulary size, word and sentence frequency
distributions. Additionally, in the first shuffling approach we preserved the
sentence structure of the text and the number of words per sentence. The
obtained results show that degree rank distributions exhibit no substantial
deviation in shuffled networks, and strength rank distributions are preserved
due to the same word frequencies. Therefore, the standard approach to studying
the structure of linguistic co-occurrence networks shows no clear difference
between the topologies of normal and shuffled texts. Finally, we show that the
in- and out-selectivity values from shuffled texts are consistently below the
selectivity values calculated from normal texts. Our results corroborate that
the node
selectivity measure can capture structural differences between original and
shuffled Croatian texts.
| 2014 | Computation and Language |
M\'ethodes pour la repr\'esentation informatis\'ee de donn\'ees
lexicales / Methoden der Speicherung lexikalischer Daten | In recent years, new developments in the area of lexicography have altered
not only the management, processing and publishing of lexicographical data, but
also created new types of products such as electronic dictionaries and
thesauri. These expand the range of possible uses of lexical data and support
users with more flexibility, for instance in assisting human translation. In
this article, we give a short and easy-to-understand introduction to the
problematic nature of the storage, display and interpretation of lexical data.
We then describe the main methods and specifications used to build and
represent lexical data. This paper is targeted at the following groups of
people: linguists, lexicographers, IT specialists, computer linguists and all
others who wish to learn more about the modelling, representation and
visualization of lexical knowledge. This paper is written in two languages:
French and German.
| 2014 | Computation and Language |
Distributed Representations of Sentences and Documents | Many machine learning algorithms require the input to be represented as a
fixed-length feature vector. When it comes to texts, one of the most common
fixed-length features is bag-of-words. Despite their popularity, bag-of-words
features have two major weaknesses: they lose the ordering of the words and
they also ignore semantics of the words. For example, "powerful," "strong" and
"Paris" are equally distant. In this paper, we propose Paragraph Vector, an
unsupervised algorithm that learns fixed-length feature representations from
variable-length pieces of texts, such as sentences, paragraphs, and documents.
Our algorithm represents each document by a dense vector which is trained to
predict words in the document. Its construction gives our algorithm the
potential to overcome the weaknesses of bag-of-words models. Empirical results
show that Paragraph Vectors outperform bag-of-words models as well as other
techniques for text representations. Finally, we achieve new state-of-the-art
results on several text classification and sentiment analysis tasks.
| 2014 | Computation and Language |
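Paragraph Vector as described above is available in gensim as Doc2Vec; a minimal usage sketch with a toy corpus and illustrative hyperparameters follows.

```python
# Paragraph Vector: learn a fixed-length vector per document by training
# it to help predict the document's words (gensim's Doc2Vec).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

texts = [["powerful", "strong", "engine"],
         ["paris", "is", "a", "city"],
         ["strong", "powerful", "motor"]]
corpus = [TaggedDocument(words, [i]) for i, words in enumerate(texts)]

model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)
print(model.dv.most_similar(0))  # doc 0 should rank doc 2 highly
```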
A preliminary study of Croatian Language Syllable Networks | This paper presents preliminary results of Croatian syllable networks
analysis. Syllable network is a network in which nodes are syllables and links
between them are constructed according to their connections within words. In
this paper we analyze networks of syllables generated from texts collected from
the Croatian Wikipedia and Blogs. As a main tool we use complex network
analysis methods which provide mechanisms that can reveal new patterns in a
language structure. We aim to show that syllable networks have much higher
clustering coefficient in comparison to Erd\H{o}s-R\'enyi random networks. The
results indicate that Croatian syllable networks exhibit certain properties of
small-world networks. Furthermore, we compared Croatian syllable networks
with Portuguese and Chinese syllable networks and we showed that they have
similar properties.
| 2013 | Computation and Language |
Les math\'ematiques de la langue : l'approche formelle de Montague | We present a natural language modelling method which relies strongly on
mathematics. This method, called "Formal Semantics," was initiated by the
American linguist Richard M. Montague in the 1970s. It uses mathematical
tools such as formal languages and grammars, first-order logic, type theory
and $\lambda$-calculus. Our goal is to have the reader discover both
Montagovian formal semantics and the mathematical tools that he used in his
method.
| 2014 | Computation and Language |
Compositional Morphology for Word Representations and Language Modelling | This paper presents a scalable method for integrating compositional
morphological representations into a vector-based probabilistic language model.
Our approach is evaluated in the context of log-bilinear language models,
rendered suitably efficient for implementation inside a machine translation
decoder by factoring the vocabulary. We perform both intrinsic and extrinsic
evaluations, presenting results on a range of languages which demonstrate that
our model learns morphological representations that both perform well on word
similarity tasks and lead to substantial reductions in perplexity. When used
for translation into morphologically rich languages with large vocabularies,
our models obtain improvements of up to 1.2 BLEU points relative to a baseline
system using back-off n-gram models.
| 2014 | Computation and Language |
Thematically Reinforced Explicit Semantic Analysis | We present an extended, thematically reinforced version of Gabrilovich and
Markovitch's Explicit Semantic Analysis (ESA), where we obtain thematic
information through the category structure of Wikipedia. For this we first
define a notion of categorical tfidf which measures the relevance of terms in
categories. Using this measure as a weight we calculate a maximal spanning tree
of the Wikipedia corpus considered as a directed graph of pages and categories.
This tree provides us with a unique path of "most related categories" between
each page and the top of the hierarchy. We reinforce tfidf of words in a page
by aggregating it with categorical tfidfs of the nodes of these paths, and
define a thematically reinforced ESA semantic relatedness measure which is more
robust than standard ESA and less sensitive to noise caused by out-of-context
words. We apply our method to the French Wikipedia corpus, evaluate it through
a text classification on a 37.5 MB corpus of 20 French newsgroups and obtain a
precision increase of 9-10% compared with standard ESA.
| 2013 | Computation and Language |
That's sick dude!: Automatic identification of word sense change across
different timescales | In this paper, we propose an unsupervised method to identify noun sense
changes based on rigorous analysis of time-varying text data available in the
form of millions of digitized books. We construct distributional thesauri based
networks from data at different time points and cluster each of them separately
to obtain word-centric sense clusters corresponding to the different time
points. Subsequently, we compare these sense clusters of two different time
points to find if (i) there is birth of a new sense or (ii) if an older sense
has got split into more than one sense or (iii) if a newer sense has been
formed from the joining of older senses or (iv) if a particular sense has died.
We conduct a thorough evaluation of the proposed methodology both manually as
well as through comparison with WordNet. Manual evaluation indicates that the
algorithm could correctly identify 60.4% birth cases from a set of 48 randomly
picked samples and 57% split/join cases from a set of 21 randomly picked
samples. Remarkably, in 44% of cases the birth of a novel sense is attested by
WordNet, while split and join are confirmed by WordNet in 46% and 43% of
cases, respectively. Our approach can be applied to lexicography, as well as
for applications like word sense disambiguation or semantic search.
| 2014 | Computation and Language |
Preliminary Report on the Structure of Croatian Linguistic Co-occurrence
Networks | In this article, we investigate the structure of Croatian linguistic
co-occurrence networks. We examine the change of network structure properties
by systematically varying the co-occurrence window sizes, the corpus sizes and
removing stopwords. In a co-occurrence window of size $n$ we establish a link
between the current word and the $n-1$ subsequent words. The results point out
that an increase of the co-occurrence window size is followed by a decrease in
diameter, a shortening of the average path length and, expectedly, an increase
in the average clustering coefficient. The same can be noticed for the removal
of stopwords. Finally, since the size of texts is reflected in the network
properties, our results suggest that the corpus influence can be reduced by
increasing the co-occurrence window size.
| 2014 | Computation and Language |
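A minimal sketch of the window-based construction described above, using networkx: within a co-occurrence window of size n, the current word is linked to the n-1 subsequent words.

```python
# Co-occurrence network: in a window of size n, link the current word
# to the n-1 words that follow it (directed, weighted by frequency).
import networkx as nx

def cooccurrence_network(tokens, n=2):
    g = nx.DiGraph()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + n]:          # the n-1 subsequent words
            weight = g[w][v]["weight"] + 1 if g.has_edge(w, v) else 1
            g.add_edge(w, v, weight=weight)
    return g

g = cooccurrence_network("the cat sat on the mat".split(), n=3)
print(g.number_of_nodes(), g.number_of_edges())
```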
Modelling Data Dispersion Degree in Automatic Robust Estimation for
Multivariate Gaussian Mixture Models with an Application to Noisy Speech
Processing | The trimming scheme with a prefixed cutoff portion is known as a method of
improving the robustness of statistical models such as multivariate Gaussian
mixture models (MG- MMs) in small scale tests by alleviating the impacts of
outliers. However, when this method is applied to real- world data, such as
noisy speech processing, it is hard to know the optimal cut-off portion to
remove the outliers and sometimes removes useful data samples as well. In this
paper, we propose a new method based on measuring the dispersion degree (DD) of
the training data to avoid this problem, so as to realise automatic robust
estimation for MGMMs. The DD model is studied by using two different measures.
For each one, we theoretically prove that the DD of the data samples in a
context of MGMMs approximately obeys a specific (chi or chi-square)
distribution. The proposed method is evaluated on a real-world application with
a moderately-sized speaker recognition task. Experiments show that the proposed
method can significantly improve the robustness of the conventional training
method of GMMs for speaker recognition.
| 2014 | Computation and Language |
Narrowing the Modeling Gap: A Cluster-Ranking Approach to Coreference
Resolution | Traditional learning-based coreference resolvers operate by training the
mention-pair model for determining whether two mentions are coreferent or not.
Though conceptually simple and easy to understand, the mention-pair model is
linguistically rather unappealing and lags far behind the heuristic-based
coreference models proposed in the pre-statistical NLP era in terms of
sophistication. Two independent lines of recent research have attempted to
improve the mention-pair model, one by acquiring the mention-ranking model to
rank preceding mentions for a given anaphor, and the other by training the
entity-mention model to determine whether a preceding cluster is coreferent
with a given mention. We propose a cluster-ranking approach to coreference
resolution, which combines the strengths of the mention-ranking model and the
entity-mention model, and is therefore theoretically more appealing than both
of these models. In addition, we seek to improve cluster rankers via two
extensions: (1) lexicalization and (2) incorporating knowledge of anaphoricity
by jointly modeling anaphoricity determination and coreference resolution.
Experimental results on the ACE data sets demonstrate the superior performance
of cluster rankers to competing approaches as well as the effectiveness of our
two extensions.
| 2011 | Computation and Language |
A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference
in Natural Language Processing | Dual decomposition, and more generally Lagrangian relaxation, is a classical
method for combinatorial optimization; it has recently been applied to several
inference problems in natural language processing (NLP). This tutorial gives an
overview of the technique. We describe example algorithms, describe formal
guarantees for the method, and describe practical issues in implementing the
algorithms. While our examples are predominantly drawn from the NLP literature,
the material should be of general relevance to inference problems in machine
learning. A central theme of this tutorial is that Lagrangian relaxation is
naturally applied in conjunction with a broad class of combinatorial
algorithms, allowing inference in models that go significantly beyond previous
work on Lagrangian relaxation for inference in graphical models.
| 2012 | Computation and Language |
New Perspectives in Sinographic Language Processing Through the Use of
Character Structure | Chinese characters have a complex and hierarchical graphical structure
carrying both semantic and phonetic information. We use this structure to
enhance the text model and obtain better results in standard NLP operations.
First of all, to tackle the problem of graphical variation we define
allographic classes of characters. Next, the relation of inclusion of a
subcharacter in a character provides us with a directed graph of allographic
classes. We provide this graph with two weights: semanticity (semantic relation
between subcharacter and character) and phoneticity (phonetic relation) and
calculate "most semantic subcharacter paths" for each character. Finally,
adding the information contained in these paths to unigrams we claim to
increase the efficiency of text mining methods. We evaluate our method on a
text classification task on two corpora (Chinese and Japanese) of a total of 18
million characters and get an improvement of 3% on an already high baseline of
89.6% precision, obtained by a linear SVM classifier. Other possible
applications and perspectives of the system are discussed.
| 2013 | Computation and Language |
Machine Translation Model based on Non-parallel Corpus and
Semi-supervised Transductive Learning | Although the parallel corpus has an irreplaceable role in machine
translation, its scale and coverage still fall short of actual needs.
Non-parallel corpus resources on the web have an inestimable potential value in
machine translation and other natural language processing tasks. This article
proposes a semi-supervised transductive learning method for expanding the
training corpus in statistical machine translation system by extracting
parallel sentences from the non-parallel corpus. This method only requires a
small amount of labeled corpus and a large unlabeled corpus to build a
high-performance classifier, especially when there is a shortage of labeled
corpus. The experimental results show that by combining the non-parallel corpus
alignment and the semi-supervised transductive learning method, we can more
effectively use their respective strengths to improve the performance of
machine translation system.
| 2014 | Computation and Language |
Mot\`aMot project: conversion of a French-Khmer published dictionary for
building a multilingual lexical system | Economic issues related to the information processing techniques are very
important. The development of such technologies is a major asset for developing
countries like Cambodia and Laos, and emerging ones like Vietnam, Malaysia and
Thailand. The MotAMot project aims to computerize an under-resourced language:
Khmer, spoken mainly in Cambodia. The main goal of the project is the
development of a multilingual lexical system targeted for Khmer. The
macrostructure is a pivot one with each word sense of each language linked to a
pivot axie. The microstructure comes from a simplification of the explanatory
and combinatory dictionary. The lexical system has been initialized with data
coming mainly from the conversion of the French-Khmer bilingual dictionary of
Denis Richer from Word to XML format. The French part was completed with
pronunciation and parts-of-speech coming from the FeM French-English-Malay
dictionary. The Khmer headwords noted in IPA in the Richer dictionary were
converted to Khmer writing with OpenFST, a finite state transducer tool. The
resulting resource is available online for lookup, editing, download and remote
programming via a REST API on a Jibiki platform.
| 2014 | Computation and Language |
Computerization of African languages-French dictionaries | This paper relates work done during the DiLAF project. It consists in
converting 5 bilingual African language-French dictionaries originally in Word
format into XML following the LMF model. The languages processed are Bambara,
Hausa, Kanuri, Tamajaq and Songhai-zarma, still considered as under-resourced
languages concerning Natural Language Processing tools. Once converted, the
dictionaries are available online on the Jibiki platform for lookup and
modification. The DiLAF project is first presented. A description of each
dictionary follows. Then, the conversion methodology from .doc format to XML
files is presented. A specific point on the usage of Unicode follows. Then,
each step of the conversion into XML and LMF is detailed. The last part
presents the Jibiki lexical resources management platform used for the project.
| 2014 | Computation and Language |
Building of Networks of Natural Hierarchies of Terms Based on Analysis
of Texts Corpora | A technique for building networks of hierarchies of terms based on the
analysis of chosen text corpora is offered. The technique is based on the
methodology of horizontal visibility graphs. A language network, formed on the
basis of arXiv electronic preprints on information retrieval topics, is
constructed and investigated.
| 2014 | Computation and Language |
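The horizontal visibility graph construction the entry above builds on, in a few lines: two points of a series are linked if every value strictly between them is lower than both. Toy series for illustration.

```python
# Horizontal visibility graph: nodes i, j are linked iff all values
# strictly between them are smaller than both series[i] and series[j].
def hvg_edges(series):
    edges = []
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):   # empty range => adjacent
                edges.append((i, j))
    return edges

print(hvg_edges([3, 1, 2, 4]))  # [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
```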
Evaluating the fully automatic multi-language translation of the Swiss
avalanche bulletin | The Swiss avalanche bulletin is produced twice a day in four languages. Due
to the lack of time available for manual translation, a fully automated
translation system is employed, based on a catalogue of predefined phrases and
predetermined rules of how these phrases can be combined to produce sentences.
The system is able to automatically translate such sentences from German into
the target languages French, Italian and English without subsequent
proofreading or correction. Our catalogue of phrases is limited to a small
sublanguage. The reduction of daily translation costs is expected to offset the
initial development costs within a few years. After being operational for two
winter seasons, we assess here the quality of the produced texts based on an
evaluation where participants rate real danger descriptions from both origins,
the catalogue of phrases versus the manually written and translated texts. With
a mean recognition rate of 55%, users can hardly distinguish between the two
types of texts, and give similar ratings with respect to their language
quality. Overall, the output from the catalogue system can be considered
virtually equivalent to a text written by avalanche forecasters and then
manually translated by professional translators. Furthermore, forecasters
declared that all relevant situations were captured by the system with
sufficient accuracy and within the limited time available.
| 2014 | Computation and Language |
Generating Natural Language Descriptions from OWL Ontologies: the
NaturalOWL System | We present NaturalOWL, a natural language generation system that produces
texts describing individuals or classes of OWL ontologies. Unlike simpler OWL
verbalizers, which typically express a single axiom at a time in controlled,
often not entirely fluent natural language primarily for the benefit of domain
experts, we aim to generate fluent and coherent multi-sentence texts for
end-users. With a system like NaturalOWL, one can publish information in OWL on
the Web, along with automatically produced corresponding texts in multiple
languages, making the information accessible not only to computer programs and
domain experts, but also to end-users. We discuss the processing stages of
NaturalOWL, the optional domain-dependent linguistic resources that the system
can use at each stage, and why they are useful. We also present trials showing
that when the domain-dependent linguistic resources are available, NaturalOWL
produces significantly better texts compared to a simpler verbalizer, and that
the resources can be created with relatively light effort.
| 2013 | Computation and Language |
Cross-Language Personal Name Mapping | Name matching between multiple natural languages is an important step in
cross-enterprise integration applications and data mining. It is difficult to
decide whether or not two syntactic values (names) from two heterogeneous data
sources are alternative designations of the same semantic entity (person). This
process becomes more difficult with the Arabic language due to several factors,
including spelling and pronunciation variation, dialects and special vowel and
consonant distinction and other linguistic characteristics. This paper proposes
a new framework for name matching between the Arabic language and other
languages. The framework uses a dictionary based on a new proposed version of
the Soundex algorithm to encapsulate the recognition of special features of
Arabic names. The framework proposes a new proximity matching algorithm to suit
the high importance of order sensitivity in Arabic name matching. New
performance evaluation metrics are proposed as well. The framework is
implemented and verified empirically in several case studies demonstrating
substantial improvements compared to other well-known techniques found in
literature.
| 2013 | Computation and Language |
Optimality Theory as a Framework for Lexical Acquisition | This paper re-investigates a lexical acquisition system initially developed
for French. We show that, interestingly, the architecture of the system
reproduces and implements the main components of Optimality Theory. However, we
formulate the hypothesis that some of its limitations are mainly due to a poor
representation of the constraints used. Finally, we show how a better
representation of the constraints used would yield better results.
| 2014 | Computation and Language |
An HMM Based Named Entity Recognition System for Indian Languages: The
JU System at ICON 2013 | This paper reports about our work in the ICON 2013 NLP TOOLS CONTEST on Named
Entity Recognition. We submitted runs for Bengali, English, Hindi, Marathi,
Punjabi, Tamil and Telugu. A statistical HMM (Hidden Markov Models) based model
has been used to implement our system. The system has been trained and tested
on the NLP TOOLS CONTEST: ICON 2013 datasets. Our system obtains F-measures of
0.8599, 0.7704, 0.7520, 0.4289, 0.5455, 0.4466, and 0.4003 for Bengali,
English, Hindi, Marathi, Punjabi, Tamil and Telugu respectively.
| 2014 | Computation and Language |
Training a Multilingual Sportscaster: Using Perceptual Context to Learn
Language | We present a novel framework for learning to interpret and generate language
using only perceptual context as supervision. We demonstrate its capabilities
by developing a system that learns to sportscast simulated robot soccer games
in both English and Korean without any language-specific prior knowledge.
Training employs only ambiguous supervision consisting of a stream of
descriptive textual comments and a sequence of events extracted from the
simulation trace. The system simultaneously establishes correspondences between
individual comments and the events that they describe while building a
translation model that supports both parsing and generation. We also present a
novel algorithm for learning which events are worth describing. Human
evaluations of the generated commentaries indicate they are of reasonable
quality and in some cases even on par with those produced by humans for our
limited domain.
| 2010 | Computation and Language |
Using Local Alignments for Relation Recognition | This paper discusses the problem of marrying structural similarity with
semantic relatedness for Information Extraction from text. Aiming at accurate
recognition of relations, we introduce local alignment kernels and explore
various possibilities of using them for this task. We give a definition of a
local alignment (LA) kernel based on the Smith-Waterman score as a sequence
similarity measure and proceed with a range of possibilities for computing
similarity between elements of sequences. We show how distributional similarity
measures obtained from unlabeled data can be incorporated into the learning
task as semantic knowledge. Our experiments suggest that the LA kernel yields
promising results on various biomedical corpora outperforming two baselines by
a large margin. Additional series of experiments have been conducted on the
data sets of seven general relation types, where the performance of the LA
kernel is comparable to the current state-of-the-art results.
| 2010 | Computation and Language |
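A sketch of the Smith-Waterman local alignment score underlying the LA kernel above, over token sequences with a toy match/mismatch/gap scoring; the kernel described in the entry would replace the exact-match score with a distributional word similarity.

```python
# Smith-Waterman local alignment score over token sequences.
# sim() is a placeholder; the LA kernel substitutes a distributional
# word-similarity measure learned from unlabeled data.
def smith_waterman(a, b, gap=-1):
    def sim(x, y):
        return 2 if x == y else -1
    h = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            h[i][j] = max(0,
                          h[i - 1][j - 1] + sim(a[i - 1], b[j - 1]),
                          h[i - 1][j] + gap,    # gap in b
                          h[i][j - 1] + gap)    # gap in a
            best = max(best, h[i][j])
    return best

print(smith_waterman("protein binds receptor".split(),
                     "the protein binds to a receptor".split()))
```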
Semantic Composition and Decomposition: From Recognition to Generation | Semantic composition is the task of understanding the meaning of text by
composing the meanings of the individual words in the text. Semantic
decomposition is the task of understanding the meaning of an individual word by
decomposing it into various aspects (factors, constituents, components) that
are latent in the meaning of the word. We take a distributional approach to
semantics, in which a word is represented by a context vector. Much recent work
has considered the problem of recognizing compositions and decompositions, but
we tackle the more difficult generation problem. For simplicity, we focus on
noun-modifier bigrams and noun unigrams. A test for semantic composition is,
given context vectors for the noun and modifier in a noun-modifier bigram ("red
salmon"), generate a noun unigram that is synonymous with the given bigram
("sockeye"). A test for semantic decomposition is, given a context vector for a
noun unigram ("snifter"), generate a noun-modifier bigram that is synonymous
with the given unigram ("brandy glass"). With a vocabulary of about 73,000
unigrams from WordNet, there are 73,000 candidate unigram compositions for a
bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a
unigram. We generate ranked lists of potential solutions in two passes. A fast
unsupervised learning algorithm generates an initial list of candidates and
then a slower supervised learning algorithm refines the list. We evaluate the
candidate solutions by comparing them to WordNet synonym sets. For
decomposition (unigram to bigram), the top 100 most highly ranked bigrams
include a WordNet synonym of the given unigram 50.7% of the time. For
composition (bigram to unigram), the top 100 most highly ranked unigrams
include a WordNet synonym of the given bigram 77.8% of the time.
| 2014 | Computation and Language |
Comparing and Combining Sentiment Analysis Methods | Many messages express opinions about events, products and services,
political views, or even their author's emotional state and mood.
analysis has been used in several applications including analysis of the
repercussions of events in social networks, analysis of opinions about products
and services, and simply to better understand aspects of social communication
in Online Social Networks (OSNs). There are multiple methods for measuring
sentiments, including lexical-based approaches and supervised machine learning
methods. Despite the wide use and popularity of some methods, it is unclear
which method is better for identifying the polarity (i.e., positive or
negative) of a message as the current literature does not provide a method of
comparison among existing methods. Such a comparison is crucial for
understanding the potential limitations, advantages, and disadvantages of
popular methods in analyzing the content of OSNs messages. Our study aims at
filling this gap by presenting comparisons of eight popular sentiment analysis
methods in terms of coverage (i.e., the fraction of messages whose sentiment is
identified) and agreement (i.e., the fraction of identified sentiments that are
in tune with ground truth). We develop a new method that combines existing
approaches, providing the best coverage results and competitive agreement. We
also present a free Web service called iFeel, which provides an open API for
accessing and comparing results across different sentiment methods for a given
text.
| 2014 | Computation and Language |
Bridging the gap between Legal Practitioners and Knowledge Engineers
using semi-formal KR | The use of Structured English as a computation independent knowledge
representation format for non-technical users in business rules representation
has been proposed in OMGs Semantics and Business Vocabulary Representation
(SBVR). In the legal domain we face a similar problem. Formal representation
languages, such as OASIS LegalRuleML and legal ontologies (LKIF, legal OWL2
ontologies etc.) support the technical knowledge engineer and the automated
reasoning. But they can hardly be used directly by legal domain experts
who do not have a computer science background. In this paper we adapt the SBVR
Structured English approach for the legal domain and implement a
proof-of-concept, called KR4IPLaw, which enables legal domain experts to
represent their knowledge in Structured English in a computation-independent
and hence, for them, more usable way. The benefit of this approach is that the
underlying pre-defined semantics of the Structured English approach makes
transformations into formal languages such as OASIS LegalRuleML and OWL2
ontologies possible. We exemplify our approach in the domain of patent law.
| 2,014 | Computation and Language |
Learning Phrase Representations using RNN Encoder-Decoder for
Statistical Machine Translation | In this paper, we propose a novel neural network model called RNN
Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN
encodes a sequence of symbols into a fixed-length vector representation, and
the other decodes the representation into another sequence of symbols. The
encoder and decoder of the proposed model are jointly trained to maximize the
conditional probability of a target sequence given a source sequence. The
performance of a statistical machine translation system is empirically found to
improve by using the conditional probabilities of phrase pairs computed by the
RNN Encoder-Decoder as an additional feature in the existing log-linear model.
Qualitatively, we show that the proposed model learns a semantically and
syntactically meaningful representation of linguistic phrases.
| 2,014 | Computation and Language |
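A minimal sketch of the encoder-decoder idea in PyTorch follows. It conditions the decoder only through its initial hidden state and uses off-the-shelf GRU layers, whereas the paper proposes a new gated hidden unit and feeds the summary vector into every decoder step; the sizes and toy batch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """One RNN compresses the source sequence into a fixed-length vector;
    another RNN, conditioned on that vector, produces the target sequence."""
    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src, tgt_in):
        _, summary = self.encoder(self.src_emb(src))   # fixed-length summary
        dec_states, _ = self.decoder(self.tgt_emb(tgt_in), summary)
        return self.out(dec_states)                    # per-step target logits

# Toy batch: train to maximize P(target | source) with cross-entropy.
model = EncoderDecoder(src_vocab=100, tgt_vocab=90)
src = torch.randint(0, 100, (2, 7))
tgt_in, tgt_out = torch.randint(0, 90, (2, 5)), torch.randint(0, 90, (2, 5))
loss = nn.CrossEntropyLoss()(model(src, tgt_in).reshape(-1, 90), tgt_out.reshape(-1))
loss.backward()
```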
A Semantic Approach to Summarization | Sentence extraction based summarization methods have some limitations, as they
do not go into the semantics of the document. They also lack the capability of
sentence generation, which is intuitive to humans. Here we present a novel
method to summarize text documents taking the process to semantic levels with
the use of WordNet and other resources, and using a technique for sentence
generation. We use semantic role labeling to get a semantic representation of
the text and segmentation to form clusters of related pieces of text. Picking
out the centroids and generating sentences completes the
task. We evaluate our system against human composed summaries and also present
an evaluation done by humans to measure the quality attributes of our
summaries.
| 2,014 | Computation and Language |
A Geometric Method to Obtain the Generation Probability of a Sentence | "How to generate a sentence" is among the most critical and difficult problems
in natural language processing. In this paper, we present a
new approach to explain the generation process of a sentence from the
perspective of mathematics. Our method is based on the premise that in our
brain a sentence is a part of a word network which is formed by many word
nodes. Experiments show that the probability of the entire sentence can be
obtained from the probabilities of single words and the probabilities of the
co-occurrence of word pairs, which indicates that humans use a synthesis method
to generate a sentence.
| 2,014 | Computation and Language |
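The abstract states that the sentence probability is synthesized from single-word probabilities and word-pair co-occurrence probabilities, but it does not give the exact formula. One plausible chain-rule instantiation, offered purely as an assumption:

```python
from collections import Counter

def sentence_probability(sentence, word_counts, pair_counts, total):
    """Combine unigram probabilities with pairwise co-occurrence via
    chain-rule conditionals P(w_i | w_{i-1}) = P(w_{i-1}, w_i) / P(w_{i-1}).
    The paper's actual synthesis formula is not given in the abstract;
    this combination rule is an assumption."""
    words = sentence.split()
    prob = word_counts[words[0]] / total
    for prev, cur in zip(words, words[1:]):
        joint = pair_counts[(prev, cur)] / total
        marginal = word_counts[prev] / total
        prob *= joint / marginal if marginal else 0.0
    return prob

corpus = "the cat sat on the mat".split()
word_counts = Counter(corpus)
pair_counts = Counter(zip(corpus, corpus[1:]))
print(sentence_probability("the cat sat", word_counts, pair_counts, len(corpus)))
```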
The Best Templates Match Technique For Example Based Machine Translation | It has been shown that large-scale, realistic Knowledge-Based Machine
Translation applications require the acquisition of vast knowledge about
language and about the world. This knowledge is encoded in computational
grammars, lexicons and domain models. Another approach, which avoids the need
for collecting and analyzing massive knowledge, is the Example-Based approach,
which is the topic of this paper. We show throughout the paper that using the
Example-Based approach in its native form is not suitable for translating into Arabic. Therefore
a modification to the basic approach is presented to improve the accuracy of
the translation process. The basic idea of the new approach is to improve the
technique by which template-based approaches select the appropriate templates.
| 2,014 | Computation and Language |
Basis Identification for Automatic Creation of Pronunciation Lexicon for
Proper Names | Development of a proper names pronunciation lexicon is usually a manual
effort which cannot be avoided. Grapheme to phoneme (G2P) conversion modules,
in literature, are usually rule based and work best for non-proper names in a
particular language. Proper names are foreign to a G2P module. We follow an
optimization approach to enable automatic construction of proper names
pronunciation lexicon. The idea is to construct a small orthogonal set of words
(basis) which can span the set of names in a given database. We propose two
algorithms for the construction of this basis. The transcription lexicon of all
the proper names in a database can be produced by the manual transcription of
only the small set of basis words. We first construct a cost function and show
that the minimization of the cost function results in a basis. We derive
conditions for convergence of this cost function and validate them
experimentally on a very large proper name database. Experiments show that the
transcription can be achieved by transcribing only a small number of basis
words. The algorithms proposed are generic and independent of language;
however, performance is better if the proper names have the same origin,
namely, the same language or geographical region.
| 2,014 | Computation and Language |
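The abstract does not spell out the cost function, so the sketch below substitutes a greedy, set-cover-style selection over grapheme chunks as a hedged stand-in: each chosen basis name covers as many not-yet-covered chunks as possible, and manually transcribing the basis then provides pronunciations for the chunks spanning all names. The bigram chunking and the example names are assumptions.

```python
def greedy_basis(names, chunk_size=2):
    """Greedily pick names that cover the most not-yet-covered grapheme
    chunks, a stand-in for the paper's cost-function minimization."""
    def chunks(word):
        return {word[i:i + chunk_size] for i in range(len(word) - chunk_size + 1)}

    universe = set().union(*(chunks(n) for n in names))
    covered, basis = set(), []
    while covered != universe:
        best = max(names, key=lambda n: len(chunks(n) - covered))
        gain = chunks(best) - covered
        if not gain:
            break
        basis.append(best)
        covered |= gain
    return basis

print(greedy_basis(["ramesh", "mahesh", "suresh", "rajesh"]))
```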
Recursive Neural Networks Can Learn Logical Semantics | Tree-structured recursive neural networks (TreeRNNs) for sentence meaning
have been successful for many applications, but it remains an open question
whether the fixed-length representations that they learn can support tasks as
demanding as logical deduction. We pursue this question by evaluating whether
two such models---plain TreeRNNs and tree-structured neural tensor networks
(TreeRNTNs)---can correctly learn to identify logical relationships such as
entailment and contradiction using these representations. In our first set of
experiments, we generate artificial data from a logical grammar and use it to
evaluate the models' ability to learn to handle basic relational reasoning,
recursive structures, and quantification. We then evaluate the models on the
more natural SICK challenge data. Both models perform competitively on the SICK
data and generalize well in all three experiments on simulated data, suggesting
that they can learn suitable representations for logical inference in natural
language.
| 2,015 | Computation and Language |
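The composition step at the heart of a plain TreeRNN fits in a few lines: a parent vector is a nonlinearity applied to a learned linear map of the concatenated child vectors. The sketch below uses NumPy with random parameters; the dimensionality and toy vocabulary are assumptions, and the TreeRNTN variant would add a bilinear tensor term.

```python
import numpy as np

def tree_rnn(node, W, b, emb):
    """Recursively compose a binary tree into a vector. `node` is either a
    word (leaf) or a (left, right) pair; `emb` maps words to vectors."""
    if isinstance(node, str):                       # leaf: look up embedding
        return emb[node]
    left, right = (tree_rnn(c, W, b, emb) for c in node)
    return np.tanh(W @ np.concatenate([left, right]) + b)

d = 10
rng = np.random.default_rng(1)
emb = {w: rng.standard_normal(d) for w in ["all", "dogs", "bark"]}
W, b = rng.standard_normal((d, 2 * d)), np.zeros(d)
sentence_vec = tree_rnn((("all", "dogs"), "bark"), W, b, emb)
# Downstream, pairs of such sentence vectors feed a classifier over
# relations such as entailment and contradiction.
print(sentence_vec.shape)
```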
Toward verbalizing ontologies in isiZulu | IsiZulu is one of the eleven official languages of South Africa and roughly
half the population can speak it. It is the first (home) language for over 10
million people in South Africa. Only a few computational resources exist for
isiZulu and its related Nguni languages, yet the imperative for tool
development exists. We focus on natural language generation, and the grammar
options and preferences in particular, which will inform verbalization of
knowledge representation languages and could contribute to machine translation.
The verbalization pattern specification shows that the grammar rules are
elaborate and there are several options, of which one may be preferred. We
devised verbalization patterns for subsumption, basic disjointness, existential
and universal quantification, and conjunction. This was evaluated in a survey
among linguists and non-linguists. Some differences between linguists and
non-linguists can be observed, with the former much more in agreement, and
preferences depend on the overall structure of the sentence, such as singular
for subsumption and plural in other cases.
| 2,014 | Computation and Language |
Automatic Extraction of Protein Interaction in Literature | Protein-protein interaction extraction is the key precondition for the
construction of protein knowledge networks, and it is very important for
research in biomedicine. This paper extracts directional protein-protein
interactions from biological text using an SVM-based method. The method was
evaluated on the LLL05 corpus with good results. The results show that
dependency features are important for protein-protein interaction extraction
and that features related to the interaction word are effective for judging
the interaction direction. Finally, we analyze the effects of different
features and plan the next steps.
| 2,014 | Computation and Language |
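A minimal shape of such an SVM-based extraction step, with hand-written feature dictionaries standing in for real dependency-path and interaction-word features from parsed sentences (the feature names and labels are purely illustrative):

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Hypothetical feature dicts for candidate protein pairs: dependency-path
# features plus features around the interaction word, which the abstract
# highlights. Labels encode the interaction direction or its absence.
X = [
    {"dep_path": "nsubj-interacts-obj", "int_word": "activates", "order": "A<B"},
    {"dep_path": "nmod-of", "int_word": "none", "order": "A<B"},
    {"dep_path": "nsubj-interacts-obj", "int_word": "inhibits", "order": "B<A"},
]
y = ["A->B", "none", "B->A"]

clf = make_pipeline(DictVectorizer(), LinearSVC())
clf.fit(X, y)
print(clf.predict([{"dep_path": "nsubj-interacts-obj",
                    "int_word": "activates", "order": "A<B"}]))
```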
Learning Word Representations with Hierarchical Sparse Coding | We propose a new method for learning word representations using hierarchical
regularization in sparse coding inspired by the linguistic study of word
meanings. We show an efficient learning algorithm based on stochastic proximal
methods that is significantly faster than previous approaches, making it
possible to perform hierarchical sparse coding on a corpus of billions of word
tokens. Experiments on various benchmark tasks---word similarity ranking,
analogies, sentence completion, and sentiment analysis---demonstrate that the
method outperforms or is competitive with state-of-the-art methods. Our word
representations are available at
\url{http://www.ark.cs.cmu.edu/dyogatam/wordvecs/}.
| 2,014 | Computation and Language |
How Easy is it to Learn a Controlled Natural Language for Building a
Knowledge Base? | Recent developments in controlled natural language editors for knowledge
engineering (KE) have given rise to expectations that they will make KE tasks
more accessible and perhaps even enable non-engineers to build knowledge bases.
This exploratory research focussed on novices and experts in knowledge
engineering during their attempts to learn a controlled natural language (CNL)
known as OWL Simplified English and use it to build a small knowledge base.
Participants' behaviours during the task were observed through eye-tracking and
screen recordings. This was an attempt at a more ambitious user study than in
previous research because we used a naturally occurring text as the source of
domain knowledge, and left participants without guidance on which information to
select, or how to encode it. We have identified a number of skills
(competencies) required for this difficult task and key problems that authors
face.
| 2,014 | Computation and Language |
Controlled Natural Language Generation from a Multilingual
FrameNet-based Grammar | This paper presents a currently bilingual but potentially multilingual
FrameNet-based grammar library implemented in Grammatical Framework. The
contribution of this paper is two-fold. First, it offers a methodological
approach to automatically generate the grammar based on semantico-syntactic
valence patterns extracted from FrameNet-annotated corpora. Second, it provides
a proof of concept for two use cases illustrating how the acquired multilingual
grammar can be exploited in different CNL applications in the domains of arts
and tourism.
| 2,014 | Computation and Language |
FrameNet CNL: a Knowledge Representation and Information Extraction
Language | The paper presents a FrameNet-based information extraction and knowledge
representation framework, called FrameNet-CNL. The framework is used on natural
language documents and represents the extracted knowledge in a tailor-made
Frame-ontology from which unambiguous FrameNet-CNL paraphrase text can be
generated automatically in multiple languages. This approach brings together
the fields of information extraction and CNL, because a source text can be
considered belonging to FrameNet-CNL, if information extraction parser produces
the correct knowledge representation as a result. We describe a
state-of-the-art information extraction parser used by a national news agency
and speculate that FrameNet-CNL eventually could shape the natural language
subset used for writing the newswire articles.
| 2,014 | Computation and Language |
A Brief State of the Art for Ontology Authoring | One of the main challenges for building the Semantic web is Ontology
Authoring. Controlled Natural Languages (CNLs) offer a user-friendly means for
non-experts to author ontologies. This paper provides a snapshot of the
state-of-the-art for the core CNLs for ontology authoring and reviews their
respective evaluations.
| 2,014 | Computation and Language |
A Clustering Analysis of Tweet Length and its Relation to Sentiment | Sentiment analysis of Twitter data is performed. This paper makes the
following contributions: (1) an innovative method for deriving
sentiment score dictionaries using an existing sentiment dictionary as seed
words is explored, and (2) an analysis of clustered tweet sentiment scores
based on tweet length is performed.
| 2,015 | Computation and Language |
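Contribution (2) can be reproduced in miniature with k-means over (length, sentiment score) pairs, as sketched below; the toy points stand in for scores that would come from the seeded sentiment dictionary.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (tweet length in characters, dictionary-derived sentiment
# score) pairs; real scores would come from the seeded sentiment dictionary.
data = np.array([[30, 0.8], [32, 0.7], [120, -0.5], [125, -0.6], [70, 0.1]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
for center in km.cluster_centers_:
    print(f"mean length {center[0]:.0f} chars -> mean sentiment {center[1]:+.2f}")
```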
Are Style Guides Controlled Languages? The Case of Koenig & Bauer AG | Controlled natural languages for industrial application are often regarded as
a response to the challenges of translation and multilingual communication.
This paper presents a quite different approach taken by Koenig & Bauer AG,
where the main goal was the improvement of the authoring process for technical
documentation. Most importantly, this paper explores the notion of a controlled
language and demonstrates how style guides can emerge from non-linguistic
considerations. Moreover, it shows the transition from loose language
recommendations into precise and prescriptive rules and investigates whether
such rules can be regarded as a full-fledged controlled language.
| 2,014 | Computation and Language |
Question Answering with Subgraph Embeddings | This paper presents a system which learns to answer questions on a broad
range of topics from a knowledge base using few hand-crafted features. Our
model learns low-dimensional embeddings of words and knowledge base
constituents; these representations are used to score natural language
questions against candidate answers. Training our system using pairs of
questions and structured representations of their answers, and pairs of
question paraphrases, yields competitive results on a recent benchmark from
the literature.
| 2,014 | Computation and Language |
Mining of product reviews at aspect level | Today's world is a world of the Internet: almost everything, from a simple
mobile phone recharge to the biggest business deals, can be done with the help
of this technology. People spend much of their time surfing the Web, which has
become a new source of entertainment, education, communication, shopping and
more. Users not only use these websites but also give their feedback and
suggestions, which are useful for other users. In this way a large amount of
user reviews is collected on the Web and needs to be explored, analysed and
organized for better decision making. Opinion Mining or Sentiment Analysis is
a Natural Language Processing and Information Extraction task that identifies
users' views or opinions expressed in the form of positive, negative or
neutral comments and quotes underlying the text. Aspect based opinion mining
is one level of Opinion Mining that determines the aspects of the given
reviews and classifies the review for each feature. In this paper an aspect
based opinion mining system is proposed to classify reviews as positive,
negative or neutral for each feature. Negation is also handled in the proposed
system. Experimental results using product reviews show the effectiveness of
the system.
| 2,014 | Computation and Language |
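A toy rendering of the proposed pipeline, including the negation handling mentioned above; the word lists, the nearest-aspect attachment heuristic, and the single-token negation window are simplifying assumptions rather than the paper's actual system:

```python
POSITIVE = {"good", "great", "excellent", "sharp"}
NEGATIVE = {"bad", "poor", "blurry", "weak"}
NEGATIONS = {"not", "never", "no"}
ASPECTS = {"battery", "screen", "camera", "price"}

def aspect_sentiment(review):
    """Attach each opinion word to the nearest aspect term seen so far and
    flip its polarity if a negation word immediately precedes it."""
    tokens = review.lower().split()
    scores, current = {}, None
    for i, tok in enumerate(tokens):
        if tok in ASPECTS:
            current = tok
        elif current and (tok in POSITIVE or tok in NEGATIVE):
            polarity = 1 if tok in POSITIVE else -1
            if i > 0 and tokens[i - 1] in NEGATIONS:
                polarity = -polarity          # negation handling
            scores[current] = scores.get(current, 0) + polarity
    return {a: "positive" if s > 0 else "negative" if s < 0 else "neutral"
            for a, s in scores.items()}

print(aspect_sentiment("The screen is sharp but the battery is not good"))
```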
Modelling, Visualising and Summarising Documents with a Single
Convolutional Neural Network | Capturing the compositional process which maps the meaning of words to that
of documents is a central challenge for researchers in Natural Language
Processing and Information Retrieval. We introduce a model that is able to
represent the meaning of documents by embedding them in a low dimensional
vector space, while preserving distinctions of word and sentence order crucial
for capturing nuanced semantics. Our model is based on an extended Dynamic
Convolutional Neural Network, which learns convolution filters at both the
sentence and document level, hierarchically learning to capture and compose low
level lexical features into high level semantic concepts. We demonstrate the
effectiveness of this model on a range of document modelling tasks, achieving
strong results with no feature engineering and with a more compact model.
Inspired by recent advances in visualising deep convolutional networks for
computer vision, we present a novel visualisation technique for our document
networks which not only provides insight into their learning process, but also
can be interpreted to produce a compelling automatic summarisation system for
texts.
| 2,014 | Computation and Language |
Translation Of Telugu-Marathi and Vice-Versa using Rule Based Machine
Translation | In today's digital world, automated Machine Translation from one language to
another has come a long way, with many success stories. Whereas Babel Fish
supports a good number of foreign languages but only Hindi among Indian
languages, Google Translate handles about 10 Indian languages. Though most
Automated Machine Translation Systems perform well, handling Indian languages
requires particular care with local proverbs and idioms. Most Machine
Translation systems follow the direct translation approach when translating
between Indian languages. Our research at the KMIT R&D Lab found that handling
local proverbs/idioms has not been given enough attention by earlier research.
This paper focuses on two widely spoken Indian languages, Marathi and Telugu,
and translation between them. Handling the proverbs and idioms of both
languages has been given special care, and the research outcome shows a
significant achievement in this direction.
| 2,014 | Computation and Language |
Handling non-compositionality in multilingual CNLs | In this paper, we describe methods for handling multilingual
non-compositional constructions in the framework of GF. We specifically look at
methods to detect and extract non-compositional phrases from parallel texts and
propose methods to handle such constructions in GF grammars. We expect that the
methods to handle non-compositional constructions will enrich CNLs by providing
more flexibility in the design of controlled languages. We look at two specific
use cases of non-compositional constructions: a general-purpose method to
detect and extract multilingual multiword expressions and a procedure to
identify nominal compounds in German. We evaluate our procedure for multiword
expressions by performing a qualitative analysis of the results. For the
experiments on nominal compounds, we incorporate the detected compounds in a
full SMT pipeline and evaluate the impact of our method on the machine
translation process.
| 2,014 | Computation and Language |
Towards an Error Correction Memory to Enhance Technical Texts Authoring
in LELIE | In this paper, we investigate and experiment with the notion of error correction
memory applied to error correction in technical texts. The main purpose is to
induce relatively generic correction patterns associated with more contextual
correction recommendations, based on previously memorized and analyzed
corrections. The notion of error correction memory is developed within the
framework of the LELIE project and illustrated on the case of fuzzy lexical
items, which is a major problem in technical texts.
| 2,014 | Computation and Language |
Embedded Controlled Languages | Inspired by embedded programming languages, an embedded CNL (controlled
natural language) is a proper fragment of an entire natural language (its host
language), but it has a parser that recognizes the entire host language. This
makes it possible to process out-of-CNL input and give useful feedback to
users, instead of just reporting syntax errors. This extended abstract explains
the main concepts of embedded CNL implementation in GF (Grammatical Framework),
with examples from machine translation and some other ongoing work.
| 2,014 | Computation and Language |
Mapping the Economic Crisis: Some Preliminary Investigations | In this paper we describe our contribution to the PoliInformatics 2014
Challenge on the 2007-2008 financial crisis. We propose a state of the art
technique to extract information from texts and provide different
representations, giving first a static overview of the domain and then a
dynamic representation of its main evolutions. We show that this strategy
provides a practical solution to some recent theories in social sciences that
are facing a lack of methods and tools to automatically extract information
from natural language texts.
| 2,014 | Computation and Language |
Authorship Attribution through Function Word Adjacency Networks | A method for authorship attribution based on function word adjacency networks
(WANs) is introduced. Function words are parts of speech that express
grammatical relationships between other words but do not carry lexical meaning
on their own. In the WANs in this paper, nodes are function words and directed
edges stand in for the likelihood of finding the sink word in the ordered
vicinity of the source word. WANs of different authors can be interpreted as
transition probabilities of a Markov chain and are therefore compared in terms
of their relative entropies. Optimal selection of WAN parameters is studied and
attribution accuracy is benchmarked across a diverse pool of authors and
varying text lengths. This analysis shows that, since function words are
independent of content, their use tends to be specific to an author and that
the relational data captured by function WANs is a good summary of stylometric
fingerprints. Attribution accuracy is observed to exceed that achieved by
methods relying on word frequencies alone. Further, combining WANs with
methods that rely on word frequencies alone results in greater attribution
accuracy, indicating that the two sources of information encode different
aspects of authorial style.
| 2,015 | Computation and Language |
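The two main ingredients, a WAN as a Markov transition matrix over function words and author comparison by relative entropy, can be sketched as follows. The window size, smoothing constant, and row-averaged KL divergence are illustrative choices; the paper studies optimal parameter selection explicitly.

```python
import numpy as np

def wan_transition_matrix(tokens, function_words, window=3, smooth=1e-6):
    """Count how often each function word appears within `window` tokens
    after another, then row-normalize into Markov transition probabilities
    (smoothing keeps rows well-defined)."""
    idx = {w: i for i, w in enumerate(function_words)}
    counts = np.full((len(function_words),) * 2, smooth)
    for i, tok in enumerate(tokens):
        if tok in idx:
            for nxt in tokens[i + 1:i + 1 + window]:
                if nxt in idx:
                    counts[idx[tok], idx[nxt]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def relative_entropy(P, Q):
    """Row-averaged KL divergence between two authors' transition matrices."""
    return float(np.mean(np.sum(P * np.log(P / Q), axis=1)))

fw = ["the", "of", "and", "to", "in"]
a = wan_transition_matrix("the cat of the house and the dog".split(), fw)
b = wan_transition_matrix("of the and of to in the end".split(), fw)
print(relative_entropy(a, b))  # lower divergence suggests the same author
```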
The Frobenius anatomy of word meanings II: possessive relative pronouns | Within the categorical compositional distributional model of meaning, we
provide semantic interpretations for the subject and object roles of the
possessive relative pronoun `whose'. This is done in terms of Frobenius
algebras over compact closed categories. These algebras and their diagrammatic
language expose how meanings of words in relative clauses interact with each
other. We show how our interpretation is related to Montague-style semantics
and provide a truth-theoretic interpretation. We also show how vector spaces
provide a concrete interpretation and provide preliminary corpus-based
experimental evidence. In a prequel to this paper, we used similar methods and
dealt with the case of subject and object relative pronouns.
| 2,014 | Computation and Language |
Typed Hilbert Epsilon Operators and the Semantics of Determiner Phrases
(Invited Lecture) | The semantics of determiner phrases, be they definite descriptions,
indefinite descriptions or quantified noun phrases, is often assumed to be a
fully solved question: common nouns are properties, and determiners are
generalised quantifiers that apply to two predicates: the property
corresponding to the common noun and the one corresponding to the verb phrase.
We first present a criticism of this standard view. Firstly, the semantics of
determiners does not follow the syntactical structure of the sentence.
Secondly, the standard interpretation of the indefinite article cannot account
for nominal sentences. Thirdly, the standard view misses the linguistic asymmetry
between the two properties of a generalised quantifier. In the sequel, we
propose a treatment of determiners and quantifiers as Hilbert terms in a richly
typed system that we initially developed for lexical semantics, using a many
sorted logic for semantical representations. We present this semantical
framework called the Montagovian generative lexicon and show how these terms
better match the syntactical structure and avoid the aforementioned problems of
the standard approach. Hilbert terms rather differ from choice functions in
that there is one polymorphic operator and not one operator per formula. They
also open an intriguing connection between the logic for meaning assembly, the
typed lambda calculus handling compositionality and the many-sorted logic for
semantical representations. Furthermore, epsilon terms naturally introduce
type judgements and confirm the claim that type judgements are a form of
presupposition.
| 2,016 | Computation and Language |
What is India speaking: The "Hinglish" invasion | While language competition models of diachronic language shift are
increasingly sophisticated, drawing on sociolinguistic components like variable
language prestige, distance from language centers and intermediate bilingual
transitionary populations, in one significant way they fall short. They fail to
consider contact-based outcomes resulting in mixed language practices, e.g.
outcome scenarios such as creoles or unmarked code switching as an emergent
communicative norm. On these lines something very interesting is uncovered in
India, where traditionally there have been monolingual Hindi speakers and
Hindi/English bilinguals, but virtually no monolingual English speakers. While
the Indian census data reports a sharp increase in the proportion of
Hindi/English bilinguals, we argue that the number of Hindi/English bilinguals
in India is inaccurate, given a new class of urban individuals speaking a mixed
lect of Hindi and English, popularly known as "Hinglish". Based on
predator-prey dynamics, sociolinguistic theories, salient local ecological factors and
the rural-urban divide in India, we propose a new mathematical model of
interacting monolingual Hindi speakers, Hindi/English bilinguals and Hinglish
speakers. The model yields globally asymptotic stable states of coexistence, as
well as bilingual extinction. To validate our model, sociolinguistic data from
different Indian classes are contrasted with census reports: We see that
purported urban Hindi/English bilinguals are unable to maintain fluent Hindi
speech and instead produce Hinglish, whereas rural speakers evidence
monolingual Hindi. Thus we present evidence for the first time where an
unrecognized mixed lect involving English but not "English" has possibly taken
over a sizeable fraction of a large global population.
| 2,016 | Computation and Language |
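The abstract does not give the model's equations, so the following is only a hedged sketch of a three-population contact model in the same spirit: contact with bilinguals converts monolingual Hindi speakers, and contact with Hinglish speakers shifts bilinguals toward Hinglish. The rate constants and initial fractions are invented for the example.

```python
from scipy.integrate import solve_ivp

def language_shift(t, y, a=0.04, b=0.05):
    """Population fractions: monolingual Hindi (h), Hindi/English
    bilinguals (bi), Hinglish speakers (g). Rates a and b are illustrative
    assumptions, not the paper's fitted parameters."""
    h, bi, g = y
    dh = -a * h * bi              # contact converts monolingual Hindi speakers
    dbi = a * h * bi - b * bi * g  # bilinguals gained from h, lost to Hinglish
    dg = b * bi * g               # Hinglish grows via contact with bilinguals
    return [dh, dbi, dg]

sol = solve_ivp(language_shift, (0, 400), [0.6, 0.35, 0.05], t_eval=[0, 200, 400])
for t, (h, bi, g) in zip(sol.t, sol.y.T):
    print(f"t={t:4.0f}  Hindi={h:.2f}  bilingual={bi:.2f}  Hinglish={g:.2f}")
```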
Zipf's law holds for phrases, not words | With Zipf's law being originally and most famously observed for word
frequency, it is surprisingly limited in its applicability to human language,
holding over no more than three to four orders of magnitude before hitting a
clear break in scaling. Here, building on the simple observation that phrases
of one or more words comprise the most coherent units of meaning in language,
we show empirically that Zipf's law for phrases extends over as many as nine
orders of rank magnitude. In doing so, we develop a principled and scalable
statistical mechanical method of random text partitioning, which opens up a
rich frontier of rigorous text analysis via a rank ordering of mixed length
phrases.
| 2,015 | Computation and Language |
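Random text partitioning can be illustrated with a deliberately naive variant: break the token stream at random points and count the resulting mixed-length phrases, whose log-log rank-frequency curve can then be inspected for Zipf scaling. The uniform break probability is an assumption; the paper develops a principled statistical mechanical treatment.

```python
import random
import re
from collections import Counter

def random_partition_phrases(text, seed=0, p_break=0.5):
    """Walk the token stream and insert a phrase boundary after each word
    with probability p_break, yielding mixed-length phrases."""
    rng = random.Random(seed)
    tokens = re.findall(r"[a-z']+", text.lower())
    phrases, current = Counter(), []
    for tok in tokens:
        current.append(tok)
        if rng.random() < p_break:
            phrases[" ".join(current)] += 1
            current = []
    if current:
        phrases[" ".join(current)] += 1
    return phrases

text = "the quick brown fox jumps over the lazy dog " * 50
for rank, (phrase, freq) in enumerate(random_partition_phrases(text).most_common(5), 1):
    print(rank, repr(phrase), freq)  # plot log(freq) vs log(rank) to test Zipf
```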
A survey on phrase structure learning methods for text classification | Text classification is a task of automatic classification of text into one of
the predefined categories. The problem of text classification has been widely
studied in different communities like natural language processing, data mining
and information retrieval. Text classification is an important constituent in
many information management tasks like topic identification, spam filtering,
email routing, language identification, genre classification, readability
assessment etc. The performance of text classification improves notably when
phrase patterns are used. The use of phrase patterns helps in capturing
non-local behaviours and thus helps in the improvement of text classification
task. Phrase structure extraction is the first step to continue with the phrase
pattern identification. In this survey, detailed study of phrase structure
learning methods have been carried out. This will enable future work in several
NLP tasks, which uses syntactic information from phrase structure like grammar
checkers, question answering, information extraction, machine translation, text
classification. The paper also provides different levels of classification and
detailed comparison of the phrase structure learning methods.
| 2,014 | Computation and Language |
A CNL for Contract-Oriented Diagrams | We present a first step towards a framework for defining and manipulating
normative documents or contracts described as Contract-Oriented (C-O) Diagrams.
These diagrams provide a visual representation for such texts, giving the
possibility to express a signatory's obligations, permissions and prohibitions,
with or without timing constraints, as well as the penalties resulting from the
non-fulfilment of a contract. This work presents a CNL for verbalising C-O
Diagrams, a web-based tool allowing editing in this CNL, and another for
visualising and manipulating the diagrams interactively. We then show how these
proof-of-concept tools can be used by applying them to a small example.
| 2,014 | Computation and Language |
Improved Frame Level Features and SVM Supervectors Approach for the
Recognition of Emotional States from Speech: Application to categorical and
dimensional states | The purpose of a speech emotion recognition system is to classify a speaker's
utterances into different emotional states such as disgust, boredom, sadness,
neutral and happiness. Speech features that are commonly used in speech emotion
recognition rely on global utterance-level prosodic features. In our work, we
evaluate the impact of frame-level feature extraction. The speech samples are
from the Berlin emotional database and the features extracted from these
utterances are energy, different variants of mel frequency cepstral
coefficients, and velocity and acceleration features.
| 2,013 | Computation and Language |
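The frame-level features named above can be assembled with a standard audio library. The sketch below uses librosa on a synthetic tone standing in for a Berlin database utterance, so the signal and all parameters are illustrative only.

```python
import numpy as np
import librosa

# Per-frame energy, MFCCs, and their velocity (delta) and acceleration
# (delta-delta) coefficients; a synthetic 1 s tone stands in for real speech.
sr = 16000
y = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # (13, n_frames)
velocity = librosa.feature.delta(mfcc)                  # first derivative
acceleration = librosa.feature.delta(mfcc, order=2)     # second derivative
energy = librosa.feature.rms(y=y)                       # per-frame energy

frame_features = np.vstack([energy, mfcc, velocity, acceleration])
print(frame_features.shape)  # one feature vector per frame, not per utterance
```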
Scalable Topical Phrase Mining from Text Corpora | While most topic modeling algorithms model text corpora with unigrams, human
interpretation often relies on inherent grouping of terms into phrases. As
such, we consider the problem of discovering topical phrases of mixed lengths.
Existing work either performs post processing to the inference results of
unigram-based topic models, or utilizes complex n-gram-discovery topic models.
These methods generally produce low-quality topical phrases or suffer from poor
scalability on even moderately-sized datasets. We propose a different approach
that is both computationally efficient and effective. Our solution combines a
novel phrase mining framework to segment a document into single and multi-word
phrases, and a new topic model that operates on the induced document partition.
Our approach discovers high quality topical phrases with negligible extra cost
to the bag-of-words topic model in a variety of datasets including research
publication titles, abstracts, reviews, and news articles.
| 2,014 | Computation and Language |
FrameNet Resource Grammar Library for GF | In this paper we present ongoing research investigating the possibility
and potential of integrating frame semantics, particularly FrameNet, in the
Grammatical Framework (GF) application grammar development. An important
component of GF is its Resource Grammar Library (RGL) that encapsulates the
low-level linguistic knowledge about morphology and syntax of currently more
than 20 languages facilitating rapid development of multilingual applications.
In the ideal case, porting a GF application grammar to a new language would
only require introducing the domain lexicon - translation equivalents that are
interlinked via common abstract terms. While it is possible for a highly
restricted CNL, developing and porting a less restricted CNL requires above
average linguistic knowledge about the particular language, and above average
GF experience. Specifying a lexicon is mostly straightforward in the case of
nouns (incl. multi-word units), however, verbs are the most complex category
(in terms of both inflectional paradigms and argument structure), and adding
them to a GF application grammar is not a straightforward task. In this paper
we are focusing on verbs, investigating the possibility of creating a
multilingual FrameNet-based GF library. We propose an extension to the current
RGL, allowing GF application developers to define clauses on the semantic
level, thus leaving the language-specific syntactic mapping to this extension.
We demonstrate our approach by reengineering the MOLTO Phrasebook application
grammar.
| 2,012 | Computation and Language |
On the Use of Different Feature Extraction Methods for Linear and Non
Linear kernels | Speech feature extraction has been a key focus in robust speech recognition
research; it significantly affects recognition performance. In this paper, we
first study a set of different feature extraction methods such as linear
predictive coding (LPC), mel frequency cepstral coefficients (MFCC) and
perceptual linear prediction (PLP), with several feature normalization
techniques such as RASTA filtering and cepstral mean subtraction (CMS). Based
on this, a comparative evaluation of these features is performed on the task
of text-independent speaker identification using a combination of Gaussian
mixture models (GMM) and linear and non-linear kernels based on support vector
machines (SVM).
| 2,014 | Computation and Language |
Jabalin: a Comprehensive Computational Model of Modern Standard Arabic
Verbal Morphology Based on Traditional Arabic Prosody | The computational handling of Modern Standard Arabic is a challenge in the
field of natural language processing due to its highly rich morphology.
However, several authors have pointed out that the Arabic morphological system
is in fact extremely regular. The existing Arabic morphological analyzers have
exploited this regularity to variable extent, yet we believe there is still
some scope for improvement. Taking inspiration from traditional Arabic prosody,
we have designed and implemented a compact and simple morphological system
which in our opinion takes further advantage of the regularities encountered in
the Arabic morphological system. The output of the system is a large-scale
lexicon of inflected forms that has subsequently been used to create an Online
Interface for a morphological analyzer of Arabic verbs. The Jabalin Online
Interface is available at http://elvira.lllf.uam.es/jabalin/, hosted at the
LLI-UAM lab. The generation system is also available under a GNU GPL 3 license.
| 2,013 | Computation and Language |
Building DNN Acoustic Models for Large Vocabulary Speech Recognition | Deep neural networks (DNNs) are now a central component of nearly all
state-of-the-art speech recognition systems. Building neural network acoustic
models requires several design decisions including network architecture, size,
and training loss function. This paper offers an empirical investigation on
which aspects of DNN acoustic model design are most important for speech
recognition system performance. We report DNN classifier performance and final
speech recognizer word error rates, and compare DNNs using several metrics to
quantify factors influencing differences in task performance. Our first set of
experiments use the standard Switchboard benchmark corpus, which contains
approximately 300 hours of conversational telephone speech. We compare standard
DNNs to convolutional networks, and present the first experiments using
locally-connected, untied neural networks for acoustic modeling. We
additionally build systems on a corpus of 2,100 hours of training data by
combining the Switchboard and Fisher corpora. This larger corpus allows us to
more thoroughly examine performance of large DNN models -- with up to ten times
more parameters than those typically used in speech recognition systems. Our
results suggest that a relatively simple DNN architecture and optimization
technique produces strong results. These findings, along with previous work,
help establish a set of best practices for building DNN hybrid speech
recognition systems with maximum likelihood training. Our experiments in DNN
optimization additionally serve as a case study for training DNNs with
discriminative loss functions for speech tasks, as well as DNN classifiers more
generally.
| 2,015 | Computation and Language |
Do Proper Names Get Translated? A Study of a Multilingual Corpus (Les noms propres se traduisent-ils ? Étude d'un corpus multilingue) | In this paper, we tackle the problem of the translation of proper names. We
introduce our hypothesis according to which proper names can be translated more
often than most people seem to think. Then, we describe the construction of a
parallel multilingual corpus used to illustrate our point. We eventually
evaluate both the advantages and limits of this corpus in our study.
| 2,011 | Computation and Language |
WordRep: A Benchmark for Research on Learning Word Representations | WordRep is a benchmark collection for the research on learning distributed
word representations (or word embeddings), released by Microsoft Research. In
this paper, we describe the details of the WordRep collection and show how to
use it in different types of machine learning research related to word
embedding. Specifically, we describe how the evaluation tasks in WordRep are
selected, how the data are sampled, and how the evaluation tool is built. We
then compare several state-of-the-art word representations on WordRep, report
their evaluation performance, and make discussions on the results. After that,
we discuss new potential research topics that can be supported by WordRep, in
addition to algorithm comparison. We hope that this paper can help people gain
deeper understanding of WordRep, and enable more interesting research on
learning distributed word representations and related topics.
| 2,014 | Computation and Language |
KNET: A General Framework for Learning Word Embedding using
Morphological Knowledge | Neural network techniques are widely applied to obtain high-quality
distributed representations of words, i.e., word embeddings, to address text
mining, information retrieval, and natural language processing tasks. Recently,
efficient methods have been proposed to learn word embeddings from context that
captures both semantic and syntactic relationships between words. However, it
is challenging to handle unseen words or rare words with insufficient context.
In this paper, inspired by the study on word recognition process in cognitive
psychology, we propose to take advantage of seemingly less obvious but
essentially important morphological knowledge to address these challenges. In
particular, we introduce a novel neural network architecture called KNET that
leverages both contextual information and morphological word similarity built
based on morphological knowledge to learn word embeddings. Meanwhile, the
learning architecture is also able to refine the pre-defined morphological
knowledge and obtain more accurate word similarity. Experiments on an
analogical reasoning task and a word similarity task both demonstrate that the
proposed KNET framework can greatly enhance the effectiveness of word
embeddings.
| 2,014 | Computation and Language |
Lexpresso: a Controlled Natural Language | This paper presents an overview of `Lexpresso', a Controlled Natural Language
developed at the Defence Science & Technology Organisation as a bidirectional
natural language interface to a high-level information fusion system. The paper
describes Lexpresso's main features including lexical coverage, expressiveness
and range of linguistic syntactic and semantic structures. It also touches on
its tight integration with a formal semantic formalism and tentatively
classifies it against the PENS system.
| 2,014 | Computation and Language |
Inter-Rater Agreement Study on Readability Assessment in Bengali | An inter-rater agreement study is performed for readability assessment in
Bengali. A 1-7 rating scale was used to indicate different levels of
readability. We obtained moderate to fair agreement among seven independent
annotators on 30 text passages written by four eminent Bengali authors. As a
by-product of our study, we obtained a readability-annotated ground truth
dataset in Bengali.
| 2,014 | Computation and Language |
Assamese-English Bilingual Machine Translation | Machine translation is the process of translating text from one language to
another. In this paper, Statistical Machine Translation is performed between
Assamese and English by taking their respective parallel corpus. The
statistical phrase-based translation toolkit Moses is used here. To develop
the language model and to align the words we used two other tools, IRSTLM and
GIZA, respectively. The BLEU score is used to evaluate the performance of our
translation system. A difference in BLEU scores is obtained when translating
sentences from Assamese to English and vice versa. Since Indian languages are
morphologically very rich, translation is relatively harder from English to
Assamese, resulting in a low BLEU score. A statistical transliteration system
is also introduced with our translation system to deal with proper nouns and
OOV (out-of-vocabulary) words which are not present in our corpus.
| 2,014 | Computation and Language |
Quality Estimation Of Machine Translation Outputs Through Stemming | Machine Translation is a challenging problem for Indian languages. Every day
new machine translators are being developed, but high-quality automatic
translation is still a very distant dream. A correctly translated sentence for
Hindi is rarely found. In this paper, we focus on the English-Hindi language
pair, and in order to identify the best MT output we present a ranking system
which employs machine learning techniques and morphological features. No human
intervention is required in the ranking. We have also validated our results by
comparing them with human ranking.
| 2,014 | Computation and Language |
A Survey of Named Entity Recognition in Assamese and other Indian
Languages | Named Entity Recognition is important for major Natural Language Processing
tasks such as information extraction, question answering, machine translation
and document summarization, so in this paper we put forward a survey of Named
Entities in Indian Languages with particular reference to Assamese. There are
various rule-based and machine learning approaches available for Named Entity
Recognition. At the beginning of the paper we give an overview of the
available approaches for Named Entity Recognition, and then we discuss related
research in this field. Assamese, like other Indian languages, is
agglutinative and suffers from a lack of appropriate resources, as Named
Entity Recognition requires large data sets, gazetteer lists, dictionaries
etc., and some useful features, such as the capitalization found in English,
are absent in Assamese. Apart from this, we also describe some of the issues
faced in Assamese while doing Named Entity Recognition.
| 2,014 | Computation and Language |
Hidden Markov Model Based Part of Speech Tagger for Sinhala Language | In this paper we present fundamental lexical semantics of the Sinhala
language and a Hidden Markov Model (HMM) based Part of Speech (POS) tagger for
Sinhala. In any Natural Language Processing task, Part of Speech is a vital
topic, involving analysis of the construction, behaviour and dynamics of the
language; this knowledge can be utilized in computational linguistics analysis
and automation applications. Though Sinhala is a morphologically rich and
agglutinative language, in which words are inflected with various grammatical
features, tagging is essential for further analysis of the language. Our
research is based on a statistical approach, in which the tagging process is
done by computing the tag sequence probability and the word-likelihood
probability from the given corpus, where the linguistic knowledge is
automatically extracted from the annotated corpus. The current tagger can
reach more than 90% accuracy for known words.
| 2,014 | Computation and Language |
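The statistical core described above, tag-sequence probabilities plus word-likelihood probabilities decoded with dynamic programming, fits in a short sketch. The add-k smoothing, the Viterbi decoder, and the two-sentence toy corpus are illustrative assumptions rather than the paper's exact setup.

```python
import math
from collections import Counter

def train_hmm(tagged_sents, k=1.0):
    """Estimate tag-transition and word-likelihood log-probabilities from an
    annotated corpus, with add-k smoothing (the smoothing is an assumption)."""
    trans, emit = Counter(), Counter()
    prev_tot, tag_tot = Counter(), Counter()
    vocab, tags = set(), set()
    for sent in tagged_sents:
        prev = "<s>"
        for word, tag in sent:
            trans[(prev, tag)] += 1; prev_tot[prev] += 1
            emit[(tag, word)] += 1;  tag_tot[tag] += 1
            vocab.add(word); tags.add(tag); prev = tag
    V, T = len(vocab) + 1, len(tags)
    def log_t(p, t): return math.log((trans[(p, t)] + k) / (prev_tot[p] + k * T))
    def log_e(t, w): return math.log((emit[(t, w)] + k) / (tag_tot[t] + k * V))
    return log_t, log_e, sorted(tags)

def viterbi(words, log_t, log_e, tags):
    """Most probable tag sequence via log-space dynamic programming."""
    scores = [{t: (log_t("<s>", t) + log_e(t, words[0]), []) for t in tags}]
    for w in words[1:]:
        prev = scores[-1]
        scores.append({t: max((prev[p][0] + log_t(p, t) + log_e(t, w),
                               prev[p][1] + [p]) for p in tags) for t in tags})
    best = max(tags, key=lambda t: scores[-1][t][0])
    return scores[-1][best][1] + [best]

# Hypothetical two-sentence Sinhala toy corpus, purely for illustration.
corpus = [[("mama", "PRON"), ("gedara", "NOUN"), ("yanawa", "VERB")],
          [("oya", "PRON"), ("potha", "NOUN"), ("kiyawanawa", "VERB")]]
log_t, log_e, tags = train_hmm(corpus)
print(viterbi(["mama", "potha", "kiyawanawa"], log_t, log_e, tags))
```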