Titles | Abstracts | Years | Categories
---|---|---|---|
A Semantic approach for effective document clustering using WordNet | Nowadays, text documents are growing rapidly across the internet, e-mail and web pages, and they are stored in electronic database formats. It becomes difficult to arrange and browse these documents. To overcome this problem, document preprocessing, term selection, attribute reduction and maintaining the relationships between important terms using background knowledge (WordNet) become important steps in data mining. In this paper, several stages are carried out. Firstly, documents are preprocessed: stop words are removed, stemming is performed using the Porter stemmer algorithm, the WordNet thesaurus is applied to maintain relationships between important terms, and globally unique words and frequent word sets are generated. Secondly, a data matrix is formed. Thirdly, terms are extracted from the documents using the term selection approaches tf-idf, tf-df and tf2, based on their minimum threshold value. Further, the terms of each document are preprocessed, and the frequency of each term within the document is counted for its representation. The purpose of this approach is to reduce the number of attributes and to find the most effective term selection method using WordNet for better clustering accuracy. Experiments are evaluated on the Reuters transcription subsets (wheat, trade, money grain and ship), Reuters-21578, Classic 30, 20 Newsgroups (atheism), 20 Newsgroups (hardware) and 20 Newsgroups (computer graphics).
| 2013 | Computation and Language |
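Illustrative aside: a minimal sketch of the tf-idf style term weighting and threshold-based term selection described above, on an invented toy corpus; the threshold value is arbitrary and this is not the authors' implementation.

```python
import math
from collections import Counter

def tfidf_scores(documents):
    """Compute tf-idf scores per (document, term) for a list of tokenized documents."""
    n_docs = len(documents)
    # document frequency: number of documents containing each term
    df = Counter(term for doc in documents for term in set(doc))
    scores = []
    for doc in documents:
        tf = Counter(doc)
        scores.append({term: (count / len(doc)) * math.log(n_docs / df[term])
                       for term, count in tf.items()})
    return scores

# Hypothetical toy corpus (already stop-word-filtered and stemmed)
docs = [["wheat", "grain", "export"], ["ship", "trade", "grain"], ["wheat", "price"]]
weights = tfidf_scores(docs)

# Keep only terms above a minimum threshold, as in the term selection step
threshold = 0.1
selected = [{t: w for t, w in doc.items() if w >= threshold} for doc in weights]
print(selected)
```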
Japanese-Spanish Thesaurus Construction Using English as a Pivot | We present the results of research with the goal of automatically creating a
multilingual thesaurus based on the freely available resources of Wikipedia and
WordNet. Our goal is to increase resources for natural language processing
tasks such as machine translation targeting the Japanese-Spanish language pair.
Given the scarcity of resources, we use existing English resources as a pivot
for creating a trilingual Japanese-Spanish-English thesaurus. Our approach consists of extracting translation tuples from Wikipedia and disambiguating them by mapping them to WordNet word senses. We present results comparing two methods of disambiguation, the first using a VSM on Wikipedia article texts and WordNet definitions, and the second using categorical information extracted from Wikipedia. We find that mixing the two methods produces favorable results.
Using the proposed method, we have constructed a multilingual
Spanish-Japanese-English thesaurus consisting of 25,375 entries. The same
method can be applied to any pair of languages that are linked to English in
Wikipedia.
| 2008 | Computation and Language |
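Illustrative aside: a rough sketch of the VSM disambiguation step mentioned above, scoring candidate WordNet-style glosses against an article text by cosine similarity in a tf-idf space; the texts are invented and the real pipeline is more involved.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def best_sense(article_text, sense_glosses):
    """Pick the gloss most similar to the article text in a tf-idf vector space."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([article_text] + sense_glosses)
    sims = cosine_similarity(matrix[0], matrix[1:]).ravel()
    return int(sims.argmax()), sims

# Hypothetical article snippet and two candidate WordNet-style glosses
article = "A bank is a financial institution that accepts deposits and makes loans."
glosses = ["sloping land beside a body of water",
           "a financial institution that accepts deposits"]
idx, sims = best_sense(article, glosses)
print(idx, sims)
```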
Towards the Fully Automatic Merging of Lexical Resources: A Step Forward | This article reports on the results of research done towards the fully automatic merging of lexical resources. Our main goal is to show the generality of the proposed approach, which has previously been applied to merge Spanish Subcategorization Frame lexica. In this work we extend and apply the same technique to merge morphosyntactic lexica encoded in LMF. The experiments show that the technique is general enough to obtain good results on these two different tasks, which is an important step towards merging lexical resources fully automatically.
| 2012 | Computation and Language |
Automatic lexical semantic classification of nouns | The work we present here addresses cue-based noun classification in English
and Spanish. Its main objective is to automatically acquire lexical semantic
information by classifying nouns into previously known noun lexical classes.
This is achieved by using particular aspects of linguistic contexts as cues
that identify a specific lexical class. Here we concentrate on the task of
identifying such cues and the theoretical background that allows for an
assessment of the complexity of the task. The results show that, despite the a priori complexity of the task, cue-based classification is a useful tool in
the automatic acquisition of lexical semantic classes.
| 2012 | Computation and Language |
A Classification of Adjectives for Polarity Lexicons Enhancement | Subjective language detection is one of the most important challenges in
Sentiment Analysis. Because of their weight and frequency in opinionated texts, adjectives are considered a key piece in the opinion extraction process. These subjective units are increasingly collected in polarity lexicons, in which they appear annotated with their prior polarity. However, at the moment, no polarity lexicon takes prior polarity variations across domains into account. This paper shows that a majority of adjectives change their prior polarity value depending on the domain. We propose a distinction between domain-dependent and domain-independent adjectives. Moreover, our analysis led us to propose a further classification related to degree of subjectivity: constant, mixed and highly subjective adjectives. Following this classification, polarity values will provide better support for Sentiment Analysis.
| 2012 | Computation and Language |
Mining and Exploiting Domain-Specific Corpora in the PANACEA Platform | The objective of the PANACEA ICT-2007.2.2 EU project is to build a platform
that automates the stages involved in the acquisition, production, updating and
maintenance of the large language resources required by, among others, MT
systems. The development of a Corpus Acquisition Component (CAC) for extracting
monolingual and bilingual data from the web is one of the most innovative
building blocks of PANACEA. The CAC, which is the first stage in the PANACEA
pipeline for building Language Resources, adopts an efficient and distributed
methodology to crawl for web documents with rich textual content in specific
languages and predefined domains. The CAC includes modules that can acquire
parallel data from sites with in-domain content available in more than one
language. In order to extrinsically evaluate the CAC methodology, we have
conducted several experiments that used crawled parallel corpora for the
identification and extraction of parallel sentences using sentence alignment.
The corpora were then successfully used for domain adaptation of Machine
Translation Systems.
| 2012 | Computation and Language |
Automatic Detection of Non-deverbal Event Nouns for Quick Lexicon
Production | In this work we present the results of our experimental work on the development of lexical class-based lexica by automatic means. The objective is to assess the use of linguistic lexical-class-based information as a feature selection methodology for the use of classifiers in quick lexical development. The results show that the approach can help in significantly reducing the human effort required in the development of language resources.
| 2010 | Computation and Language |
Using qualia information to identify lexical semantic classes in an
unsupervised clustering task | Acquiring lexical information is a complex problem, typically approached by
relying on a number of contexts to contribute information for classification.
One of the first issues to address in this domain is the determination of such
contexts. The work presented here proposes the use of automatically obtained
FORMAL role descriptors as features used to draw nouns from the same lexical
semantic class together in an unsupervised clustering task. We have dealt with
three lexical semantic classes (HUMAN, LOCATION and EVENT) in English. The
results obtained show that it is possible to discriminate between elements from
different lexical semantic classes using only FORMAL role information, hence
validating our initial hypothesis. Also, iterating our method accurately
accounts for fine-grained distinctions within lexical classes, namely
distinctions involving ambiguous expressions. Moreover, a filtering and
bootstrapping strategy employed in extracting FORMAL role descriptors proved to
minimize effects of sparse data and noise in our task.
| 2012 | Computation and Language |
Probabilistic Topic and Syntax Modeling with Part-of-Speech LDA | This article presents a probabilistic generative model for text based on
semantic topics and syntactic classes called Part-of-Speech LDA (POSLDA).
POSLDA simultaneously uncovers short-range syntactic patterns (syntax) and
long-range semantic patterns (topics) that exist in document collections. This
results in word distributions that are specific to both topics (sports,
education, ...) and parts-of-speech (nouns, verbs, ...). For example,
multinomial distributions over words are uncovered that can be understood as
"nouns about weather" or "verbs about law". We describe the model and an
approximate inference algorithm and then demonstrate the quality of the learned
topics both qualitatively and quantitatively. Then, we discuss an NLP
application where the output of POSLDA can lead to strong improvements in
quality: unsupervised part-of-speech tagging. We describe algorithms for this
task that make use of POSLDA-learned distributions that result in improved
performance beyond the state of the art.
| 2013 | Computation and Language |
Types and forgetfulness in categorical linguistics and quantum mechanics | The role of types in categorical models of meaning is investigated. A general
scheme for how typed models of meaning may be used to compare sentences, regardless of their grammatical structure, is described, and a toy example is
used as an illustration. Taking as a starting point the question of whether the
evaluation of such a type system 'loses information', we consider the
parametrized typing associated with connectives from this viewpoint.
The answer to this question implies that, within full categorical models of
meaning, the objects associated with types must exhibit a simple but subtle
categorical property known as self-similarity. We investigate the category
theory behind this, with explicit reference to typed systems, and their
monoidal closed structure. We then demonstrate close connections between such
self-similar structures and dagger Frobenius algebras. In particular, we
demonstrate that the categorical structures implied by the polymorphically
typed connectives give rise to a (lax unitless) form of the special forms of
Frobenius algebras known as classical structures, used heavily in abstract
categorical approaches to quantum mechanics.
| 2013 | Computation and Language |
Expressing Ethnicity through Behaviors of a Robot Character | Achieving homophily, or association based on similarity, between a human user
and a robot holds a promise of improved perception and task performance.
However, no previous studies that address homophily via ethnic similarity with
robots exist. In this paper, we discuss the difficulties of evoking ethnic cues
in a robot, as opposed to a virtual agent, and an approach to overcome those
difficulties based on using ethnically salient behaviors. We outline our
methodology for selecting and evaluating such behaviors, and culminate with a
study that evaluates our hypotheses of the possibility of ethnic attribution of
a robot character through verbal and nonverbal behaviors and of achieving the
homophily effect.
| 2013 | Computation and Language |
An Adaptive Methodology for Ubiquitous ASR System | Achieving and maintaining the performance of a ubiquitous Automatic Speech Recognition (ASR) system is a real challenge. The main objective of this work is to develop a method that improves, and shows consistency in, the performance of a ubiquitous ASR system for real-world noisy environments. An adaptive methodology has been developed to achieve this objective by implementing the following: cleaning the speech signal as much as possible while preserving its originality/intelligibility using various modified filters and enhancement techniques; extracting features from speech signals using various parameter sizes; training the system for ubiquitous environments using multi-environment adaptation training methods; and optimizing the word recognition rate with an appropriate variable parameter size using a fuzzy technique. The consistency in performance is tested using standard noise databases as well as in real-world environments, and a good improvement is observed. This work will be helpful for giving discriminative training to a ubiquitous ASR system for better Human Computer Interaction (HCI) using a Speech User Interface (SUI).
| 2013 | Computation and Language |
A Multilingual Semantic Wiki Based on Attempto Controlled English and
Grammatical Framework | We describe a semantic wiki system with an underlying controlled natural
language grammar implemented in Grammatical Framework (GF). The grammar
restricts the wiki content to a well-defined subset of Attempto Controlled
English (ACE), and facilitates a precise bidirectional automatic translation
between ACE and language fragments of a number of other natural languages,
making the wiki content accessible multilingually. Additionally, our approach
allows for automatic translation into the Web Ontology Language (OWL), which
enables automatic reasoning over the wiki content. The developed wiki
environment thus allows users to build, query and view OWL knowledge bases via
a user-friendly multilingual natural language interface. As a further feature,
the underlying multilingual grammar is integrated into the wiki and can be
collaboratively edited to extend the vocabulary of the wiki or even customize
its sentence structures. This work demonstrates the combination of the existing
technologies of Attempto Controlled English and Grammatical Framework, and is
implemented as an extension of the existing semantic wiki engine AceWiki.
| 2013 | Computation and Language |
Estimating Confusions in the ASR Channel for Improved Topic-based
Language Model Adaptation | Human language is a combination of elemental languages/domains/styles that
change across and sometimes within discourses. Language models, which play a
crucial role in speech recognizers and machine translation systems, are
particularly sensitive to such changes, unless some form of adaptation takes
place. One approach to speech language model adaptation is self-training, in
which a language model's parameters are tuned based on automatically
transcribed audio. However, transcription errors can misguide self-training,
particularly in challenging settings such as conversational speech. In this
work, we propose a model that considers the confusions (errors) of the ASR
channel. By modeling the likely confusions in the ASR output instead of using
just the 1-best, we improve self-training efficacy by obtaining a more reliable
reference transcription estimate. We demonstrate improved topic-based language
modeling adaptation results over both 1-best and lattice self-training using
our ASR channel confusion estimates on telephone conversations.
| 2013 | Computation and Language |
Parameters Optimization for Improving ASR Performance in Adverse Real
World Noisy Environmental Conditions | From the existing research it has been observed that many techniques and methodologies are available for performing every step of an Automatic Speech Recognition (ASR) system, but the performance (minimization of the Word Error Rate, WER, and maximization of the Word Accuracy Rate, WAR) of a methodology does not depend only on the technique applied in that method. The research indicates that performance mainly depends on the category of the noise, the level of the noise, and the sizes of the window, frame and frame overlap considered in the existing methods. The main aim of the work presented in this paper is to use variable sizes of parameters such as window size, frame size and frame overlap percentage to observe the performance of algorithms for various categories and levels of noise, and also to train the system for all parameter sizes and categories of real-world noisy environments in order to improve the performance of the speech recognition system. This paper presents the results of Signal-to-Noise Ratio (SNR) and accuracy tests obtained by applying variable parameter sizes. It is observed that it is very hard to evaluate the test results and decide on a parameter size for ASR performance improvement and its resulting optimization. Hence, this study further suggests feasible and optimal parameter sizes using a Fuzzy Inference System (FIS) for enhancing the resulting accuracy in adverse real-world noisy environmental conditions. This work will be helpful for giving discriminative training to a ubiquitous ASR system for better Human Computer Interaction (HCI).
| 2012 | Computation and Language |
Adverse Conditions and ASR Techniques for Robust Speech User Interface | The main motivation for Automatic Speech Recognition (ASR) is efficient interfaces to computers, and for the interfaces to be natural and truly useful they should provide coverage for a large group of users. The purpose of this work is to further improve man-machine communication. ASR systems exhibit unacceptable degradations in performance when the acoustical environments used for training and testing the system are not the same. The goal of this research is to increase the robustness of speech recognition systems with respect to changes in the environment. A system can be labeled as environment-independent if the recognition accuracy for a new environment is the same as or higher than that obtained when the system is retrained for that environment. Attaining such performance is the dream of researchers. This paper elaborates some of the difficulties with Automatic Speech Recognition (ASR). These difficulties are classified into speaker characteristics and environmental conditions, and we try to suggest some techniques to compensate for variations in the speech signal. This paper focuses on robustness with respect to speaker variations and changes in the acoustical environment. We discuss several external environmental factors and physiological differences that affect the performance of a speech recognition system, followed by techniques that help in designing a robust ASR system.
| 2011 | Computation and Language |
SYNTAGMA. A Linguistic Approach to Parsing | SYNTAGMA is a rule-based parsing system, structured on two levels: a general
parsing engine and a language specific grammar. The parsing engine is a
language independent program, while grammar and language specific rules and
resources are given as text files, consisting of a list of constituent structures and a lexical database with word-sense-related features and constraints. Since its theoretical background is principally Tesniere's Elements de syntaxe, SYNTAGMA's grammar emphasizes the role of argument structure (valency) in constraint satisfaction, and also allows horizontal bounds, for instance in treating coordination. Notions such as Pro, traces and empty categories are derived from Generative Grammar, and some solutions are close to Government & Binding Theory, although they are the result of autonomous research. These properties allow SYNTAGMA to manage complex syntactic configurations and well-known weak points in parsing engineering. An important resource is the semantic network, which is used in disambiguation tasks. The parsing process follows a bottom-up, rule-driven strategy. Its behavior can be controlled and fine-tuned.
| 2016 | Computation and Language |
Exploring the Role of Logically Related Non-Question Phrases for
Answering Why-Questions | In this paper, we show that certain phrases, although not present in a given question/query, play a very important role in answering the question. Exploring the role of such phrases in answering questions not only reduces the dependency on matching question phrases for extracting answers, but also improves the quality of the extracted answers. Here, matching question phrases means phrases which co-occur in the given question and the candidate answers. To achieve this goal, we introduce a bigram-based word graph model populated with
semantic and topical relatedness of terms in the given document. Next, we apply
an improved version of ranking with a prior-based approach, which ranks all
words in the candidate document with respect to a set of root words (i.e.
non-stopwords present in the question and in the candidate document). As a
result, terms logically related to the root words are scored higher than terms
that are not related to the root words. Experimental results show that our
devised system performs better than the state of the art for the task of answering
Why-questions.
| 2013 | Computation and Language |
Extension of hidden markov model for recognizing large vocabulary of
sign language | Computers still have a long way to go before they can interact with users in a truly natural fashion. From a user's perspective, the most natural way to interact with a computer would be through a speech and gesture interface. Although speech recognition has made significant advances in the past ten years, gesture recognition has been lagging behind. Sign Languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one that is closely tied to their lexical and syntactic levels of organization. Statements dealing with sign language attract significant interest in the Automatic Natural Language Processing (ANLP) domain. In this work, we deal with sign language recognition, in particular French Sign Language (FSL). FSL has its own specificities, such as the simultaneity of several parameters, the important role of facial expression and movement, and the use of space for proper utterance organization. Unlike speech recognition, French Sign Language (FSL) events occur both sequentially and simultaneously. Thus, the computational processing of FSL is more complex than that of spoken languages. We present a novel approach based on HMMs to reduce the recognition complexity.
| 2013 | Computation and Language |
The risks of mixing dependency lengths from sequences of different
length | Mixing dependency lengths from sequences of different length is a common
practice in language research. However, the empirical distribution of
dependency lengths of sentences of the same length differs from that of
sentences of varying length and the distribution of dependency lengths depends
on sentence length for real sentences and also under the null hypothesis that
dependencies connect vertices located in random positions of the sequence. This
suggests that certain results, such as the distribution of syntactic dependency
lengths mixing dependencies from sentences of varying length, could be a mere
consequence of that mixing. Furthermore, differences in the global averages of
dependency length (mixing lengths from sentences of varying length) for two
different languages do not simply imply a priori that one language optimizes
dependency lengths better than the other because those differences could be due
to differences in the distribution of sentence lengths and other factors.
| 2014 | Computation and Language |
Hubiness, length, crossings and their relationships in dependency trees | Here tree dependency structures are studied from three different
perspectives: their degree variance (hubiness), the mean dependency length and
the number of dependency crossings. Bounds that reveal pairwise dependencies
among these three metrics are derived. Hubiness (the variance of degrees) plays
a central role: the mean dependency length is bounded below by hubiness while
the number of crossings is bounded above by hubiness. Our findings suggest that
the online memory cost of a sentence might be determined not just by the
ordering of words but also by the hubiness of the underlying structure. The 2nd
moment of degree plays a crucial role that is reminiscent of its role in large
complex networks.
| 2013 | Computation and Language |
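Illustrative aside: a small sketch of the three quantities related in the abstract above (hubiness as degree variance, mean dependency length, number of crossings), computed for a dependency tree given as head positions; the example tree is made up and the code only spells out the definitions.

```python
from itertools import combinations

def tree_metrics(heads):
    """heads[i] is the 1-based position of the head of word i+1; 0 marks the root."""
    n = len(heads)
    edges = [(min(i + 1, h), max(i + 1, h)) for i, h in enumerate(heads) if h != 0]
    degree = [0] * (n + 1)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    mean_deg = sum(degree[1:]) / n
    hubiness = sum((d - mean_deg) ** 2 for d in degree[1:]) / n   # variance of degrees
    mean_length = sum(b - a for a, b in edges) / len(edges)       # mean dependency length
    crossings = sum(1 for (a, b), (c, d) in combinations(edges, 2)
                    if a < c < b < d or c < a < d < b)            # crossing edge pairs
    return hubiness, mean_length, crossings

# Hypothetical 5-word sentence whose third word is the root
print(tree_metrics([3, 3, 0, 5, 3]))
```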
Sentiment Analysis : A Literature Survey | Our day-to-day life has always been influenced by what people think. Ideas
and opinions of others have always affected our own opinions. The explosion of
Web 2.0 has led to increased activity in Podcasting, Blogging, Tagging,
Contributing to RSS, Social Bookmarking, and Social Networking. As a result
there has been an eruption of interest in people to mine these vast resources
of data for opinions. Sentiment Analysis or Opinion Mining is the computational
treatment of opinions, sentiments and subjectivity of text. In this report, we
take a look at the various challenges and applications of Sentiment Analysis.
We will discuss in detail various approaches to performing a computational treatment of sentiments and opinions. Various supervised and data-driven techniques for SA such as Na\"ive Bayes, Maximum Entropy, SVM, and Voted Perceptrons
will be discussed and their strengths and drawbacks will be touched upon. We
will also see a new dimension of analyzing sentiments by Cognitive Psychology
mainly through the work of Janyce Wiebe, where we will see ways to detect
subjectivity, perspective in narrative and understanding the discourse
structure. We will also study some specific topics in Sentiment Analysis and
the contemporary works in those areas.
| 2013 | Computation and Language |
Dealing with natural language interfaces in a geolocation context | In the geolocation field, where high-level programs and low-level devices coexist, it is often difficult to find a friendly user interface to configure all the parameters. The challenge addressed in this paper is to propose intuitive and simple, thus natural, language interfaces to interact with low-level devices. Such interfaces contain natural language processing and fuzzy representations of words that facilitate the elicitation of business-level objectives in our context.
| 2013 | Computation and Language |
Question Answering Against Very-Large Text Collections | Question answering involves developing methods to extract useful information
from large collections of documents. This is done with specialised search
engines such as Answer Finder. The aim of Answer Finder is to provide an answer
to a question rather than a page listing related documents that may contain the
correct answer. So, a question such as "How tall is the Eiffel Tower" would
simply return "325m" or "1,063ft". Our task was to build on the current version
of Answer Finder by improving information retrieval, and also improving the
pre-processing involved in question series analysis.
| 2008 | Computation and Language |
An Improved Approach for Word Ambiguity Removal | Word ambiguity removal is the task of removing ambiguity from a word, i.e. the correct sense of the word is identified in ambiguous sentences. This paper describes a model that uses a Part of Speech tagger and three categories for word sense disambiguation (WSD). Improving interactions between users and computers is very much needed for Human Computer Interaction. For this, Supervised and Unsupervised methods are combined. The WSD algorithm is used to find the efficient and accurate sense of a word based on domain information. The accuracy of this work is evaluated with the aim of finding the most suitable domain for a word.
| 2013 | Computation and Language |
TimeML-strict: clarifying temporal annotation | TimeML is an XML-based schema for annotating temporal information over
discourse. The standard has been used to annotate a variety of resources and is
followed by a number of tools, the creation of which constitutes hundreds of thousands of man-hours of research work. However, the current state of
resources is such that many are not valid, or do not produce valid output, or
contain ambiguous or custom additions and removals. Difficulties arising from
these variances were highlighted in the TempEval-3 exercise, which included its
own extra stipulations over conventional TimeML as a response.
To unify the state of current resources, and to make progress toward easy
adoption of its current incarnation ISO-TimeML, this paper introduces
TimeML-strict: a valid, unambiguous, and easy-to-process subset of TimeML. We
also introduce three resources -- a schema for TimeML-strict; a validator tool
for TimeML-strict, so that one may ensure documents are in the correct form;
and a repair tool that corrects common invalidating errors and adds
disambiguating markup in order to convert documents from the laxer TimeML
standard to TimeML-strict.
| 2013 | Computation and Language |
Measuring Cultural Relativity of Emotional Valence and Arousal using
Semantic Clustering and Twitter | Researchers since at least Darwin have debated whether and to what extent
emotions are universal or culture-dependent. However, previous studies have
primarily focused on facial expressions and on a limited set of emotions. Given
that emotions have a substantial impact on human lives, evidence for cultural
emotional relativity might be derived by applying distributional semantics
techniques to a text corpus of self-reported behaviour. Here, we explore this
idea by measuring the valence and arousal of the twelve most popular emotion
keywords expressed on the micro-blogging site Twitter. We do this in three
geographical regions: Europe, Asia and North America. We demonstrate that in
our sample, the valence and arousal levels of the same emotion keywords differ
significantly with respect to these geographical regions: Europeans are, or at least present themselves as, more positive and aroused; North Americans are more negative; and Asians appear to be more positive but less aroused when compared to global valence and arousal levels of the same emotion keywords. Our work is the first of its kind to programmatically map large text corpora to a dimensional model of affect.
| 2013 | Computation and Language |
Machine Translation Systems in India | Machine Translation is the translation of one natural language into another
using automated and computerized means. For a multilingual country like India,
with the huge amount of information exchanged between various regions and in
different languages in digitized format, it has become necessary to find an automated translation process from one language to another. In this paper, we take a look at the various Machine Translation Systems in India which are specifically built for the purpose of translation between the Indian languages. We discuss the
various approaches taken for building the machine translation system and then
discuss some of the Machine Translation Systems in India along with their
features.
| 2013 | Computation and Language |
ManTIME: Temporal expression identification and normalization in the
TempEval-3 challenge | This paper describes a temporal expression identification and normalization
system, ManTIME, developed for the TempEval-3 challenge. The identification
phase combines the use of conditional random fields along with a
post-processing identification pipeline, whereas the normalization phase is
carried out using NorMA, an open-source rule-based temporal normalizer. We
investigate the performance variation with respect to different feature types.
Specifically, we show that the use of WordNet-based features in the
identification task negatively affects the overall performance, and that there
is no statistically significant difference in using gazetteers, shallow parsing
and propositional noun phrases labels on top of the morphological features. On
the test data, the best run achieved 0.95 (P), 0.85 (R) and 0.90 (F1) in the
identification phase. Normalization accuracies are 0.84 (type attribute) and
0.77 (value attribute). Surprisingly, the use of the silver data (alone or in
addition to the gold annotated ones) does not improve the performance.
| 2013 | Computation and Language |
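Illustrative aside: a compressed sketch of a conditional-random-field identifier in the spirit of the system above, using the third-party sklearn-crfsuite package with a tiny invented training example; the features shown are generic morphological ones and do not reproduce ManTIME's feature set.

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(sentence, i):
    """Simple morphological features for one token; the real system uses a much richer set."""
    word = sentence[i]
    return {
        "lower": word.lower(),
        "is_digit": word.isdigit(),
        "is_title": word.istitle(),
        "suffix3": word[-3:],
        "prev": sentence[i - 1].lower() if i > 0 else "<s>",
    }

def featurize(sentence):
    return [token_features(sentence, i) for i in range(len(sentence))]

# Hypothetical training data in BIO-style format (B/I = inside a temporal expression)
X_train = [featurize(["She", "arrived", "on", "12", "March", "2013", "."])]
y_train = [["O", "O", "O", "B", "I", "I", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict([featurize(["See", "you", "next", "Friday"])]))
```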
A quantum teleportation inspired algorithm produces sentence meaning
from word meaning and grammatical structure | We discuss an algorithm which produces the meaning of a sentence given
meanings of its words, and its resemblance to quantum teleportation. In fact,
this protocol was the main source of inspiration for this algorithm which has
many applications in the area of Natural Language Processing.
| 2013 | Computation and Language |
New Alignment Methods for Discriminative Book Summarization | We consider the unsupervised alignment of the full text of a book with a
human-written summary. This presents challenges not seen in other text
alignment problems, including a disparity in length and, consequent to this, a
violation of the expectation that individual words and phrases should align,
since large passages and chapters can be distilled into a single summary
phrase. We present two new methods, based on hidden Markov models, specifically
targeted to this problem, and demonstrate gains on an extractive book
summarization task. While there is still much room for improvement,
unsupervised alignment holds intrinsic value in offering insight into what
features of a book are deemed worthy of summarization.
| 2013 | Computation and Language |
A study for the effect of the Emphaticness and language and dialect for
Voice Onset Time (VOT) in Modern Standard Arabic (MSA) | The sound signal contains many different features, including Voice Onset Time (VOT), which is a very important feature of stop sounds in many languages. VOT values apply only to the subset of stop phonemes. This subset of consonant sounds, the stop phonemes, exists in the Arabic language and, in fact, in all languages. The pronunciation of these sounds is hard and unique, especially for less-educated Arabs and non-native Arabic speakers. VOT can be utilized by the human auditory system to distinguish between voiced and unvoiced stops such as /p/ and /b/ in English. This research focuses on computing and analyzing the VOT of Modern Standard Arabic (MSA), within the Arabic language, for all pairs of non-emphatic (namely, /d/ and /t/) and emphatic (namely, /d?/ and /t?/) sounds depending on carrier words. This research uses a database we built ourselves, with carrier words of the syllable structure CV-CV-CV. One of the main outcomes is that the VOT of the emphatic sounds (/d?/, /t?/) is consistently less than 50% of that of their non-emphatic counterparts (/d/, /t/). Also, VOT can be used to classify or detect a dialect in a language.
| 2013 | Computation and Language |
Opportunities & Challenges In Automatic Speech Recognition | Automatic speech recognition enables a wide range of current and emerging
applications such as automatic transcription, multimedia content analysis, and
natural human-computer interfaces. This paper provides a glimpse of the
opportunities and challenges that parallelism provides for automatic speech
recognition and related application research from the point of view of speech
researchers. The increasing parallelism in computing platforms opens three
major possibilities for speech recognition systems: improving recognition
accuracy in non-ideal, everyday noisy environments; increasing recognition
throughput in batch processing of speech data; and reducing recognition latency
in realtime usage scenarios. This paper describes technical challenges,
approaches taken, and possible directions for future research to guide the
design of efficient parallel software and hardware infrastructures.
| 2013 | Computation and Language |
An Overview of Hindi Speech Recognition | In this age of information technology, information access in a convenient
manner has gained importance. Since speech is a primary mode of communication
among human beings, it is natural for people to expect to be able to carry out spoken dialogue with a computer. A speech recognition system permits ordinary people to speak to the computer to retrieve information. It is desirable to have a human-computer dialogue in the local language. Hindi, being the most widely spoken language in India, is the natural primary candidate for human-machine interaction. There are five pairs of vowels in the Hindi language; one member of each pair is longer than the other. This paper describes an overview of a speech recognition system, including how speech is produced and the properties and characteristics of Hindi phonemes.
| 2013 | Computation and Language |
Rule-Based Semantic Tagging. An Application Undergoing Dictionary
Glosses | The project presented in this article aims to formalize criteria and
procedures in order to extract semantic information from parsed dictionary
glosses. The actual purpose of the project is the generation of a semantic
network (nearly an ontology) issued from a monolingual Italian dictionary,
through unsupervised procedures. Since the project involves rule-based Parsing,
Semantic Tagging and Word Sense Disambiguation techniques, its outcomes may
find an interest also beyond this immediate intent. The cooperation of syntactic and semantic features in meaning construction is investigated, and procedures which allow a translation of syntactic dependencies into semantic relations are discussed. The procedures that arise from this project can also be applied to text types other than dictionary glosses, as they convert the output of a parsing process into a semantic representation. In addition, some mechanisms are sketched that may lead to a kind of procedural semantics, through which multiple paraphrases of a given expression can be generated. This means that these techniques may also find application in 'query expansion' strategies, of interest to Information Retrieval, Search Engines and Question Answering Systems.
| 2013 | Computation and Language |
Binary Tree based Chinese Word Segmentation | Chinese word segmentation is a fundamental task for Chinese language
processing. The granularity mismatch problem is the main cause of segmentation errors. This paper shows that a binary tree representation can store outputs with
different granularity. A binary tree based framework is also designed to
overcome the granularity mismatch problem. There are two steps in this
framework, namely tree building and tree pruning. The tree pruning step is
specially designed to focus on the granularity problem. Previous work for
Chinese word segmentation such as the sequence tagging can be easily employed
in this framework. This framework can also provide quantitative error analysis
methods. The experiments showed that after using a more sophisticated tree
pruning function for a state-of-the-art conditional random field based
baseline, the error reduction can be up to 20%.
| 2013 | Computation and Language |
Random crossings in dependency trees | It has been hypothesized that the rather small number of crossings in real
syntactic dependency trees is a side-effect of pressure for dependency length
minimization. Here we answer a related important research question: what would
be the expected number of crossings if the natural order of a sentence was lost
and replaced by a random ordering? We show that this number depends only on the
number of vertices of the dependency tree (the sentence length) and the second
moment about zero of vertex degrees. The expected number of crossings is
minimum for a star tree (crossings are impossible) and maximum for a linear
tree (the number of crossings is of the order of the square of the sequence
length).
| 2017 | Computation and Language |
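Illustrative aside: a Monte Carlo sketch of the question posed above, estimating the expected number of edge crossings of a fixed tree when the linear order of its vertices is randomized; the tree is hypothetical and the simulation only illustrates the setup, not the paper's closed-form result.

```python
import random
from itertools import combinations

def crossings(edges, position):
    """Count crossings of tree edges when vertex v sits at position[v] in the sequence."""
    placed = [tuple(sorted((position[u], position[v]))) for u, v in edges]
    return sum(1 for (a, b), (c, d) in combinations(placed, 2)
               if a < c < b < d or c < a < d < b)

def expected_crossings(edges, n, samples=10000):
    """Average crossings over random orderings of the n vertices."""
    total = 0
    vertices = list(range(n))
    for _ in range(samples):
        order = vertices[:]
        random.shuffle(order)
        position = {v: i for i, v in enumerate(order)}
        total += crossings(edges, position)
    return total / samples

# Hypothetical 6-vertex linear tree 0-1-2-3-4-5
linear = [(i, i + 1) for i in range(5)]
print(expected_crossings(linear, 6))
```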
A probabilistic framework for analysing the compositionality of
conceptual combinations | Conceptual combination performs a fundamental role in creating the broad
range of compound phrases utilized in everyday language. This article provides
a novel probabilistic framework for assessing whether the semantics of
conceptual combinations are compositional, and so can be considered as a
function of the semantics of the constituent concepts, or not. While the
systematicity and productivity of language provide a strong argument in favor
of assuming compositionality, this very assumption is still regularly
questioned in both cognitive science and philosophy. Additionally, the
principle of semantic compositionality is underspecified, which means that
notions of both "strong" and "weak" compositionality appear in the literature.
Rather than adjudicating between different grades of compositionality, the
framework presented here contributes formal methods for determining a clear
dividing line between compositional and non-compositional semantics. In
addition, we suggest that the distinction between these is contextually
sensitive. Utilizing formal frameworks developed for analyzing composite
systems in quantum theory, we present two methods that allow the semantics of
conceptual combinations to be classified as "compositional" or
"non-compositional". Compositionality is first formalised by factorising the
joint probability distribution modeling the combination, where the terms in the
factorisation correspond to individual concepts. This leads to the necessary
and sufficient condition for the joint probability distribution to exist. A
failure to meet this condition implies that the underlying concepts cannot be
modeled in a single probability space when considering their combination, and
the combination is thus deemed "non-compositional". The formal analysis methods
are demonstrated by applying them to an empirical study of twenty-four
non-lexicalised conceptual combinations.
| 2014 | Computation and Language |
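Illustrative aside: a toy sketch of the factorisation idea described above, treating a combination as compositional when its joint distribution is (approximately) the product of its marginals; the numbers are invented and the paper's quantum-theoretic framework is considerably more general.

```python
import numpy as np

def is_factorisable(joint, tol=1e-2):
    """Check whether a 2-D joint distribution factorises into the product of its marginals."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    p_a = joint.sum(axis=1)            # marginal of the first concept
    p_b = joint.sum(axis=0)            # marginal of the second concept
    product = np.outer(p_a, p_b)
    return bool(np.max(np.abs(joint - product)) < tol)

# Hypothetical membership judgments for a two-concept combination
compositional = [[0.42, 0.28], [0.18, 0.12]]      # exactly the outer product of (0.7, 0.3) and (0.6, 0.4)
non_compositional = [[0.50, 0.05], [0.05, 0.40]]  # cannot be written as a product of marginals
print(is_factorisable(compositional), is_factorisable(non_compositional))
```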
An Inventory of Preposition Relations | We describe an inventory of semantic relations that are expressed by
prepositions. We define these relations by building on the word sense
disambiguation task for prepositions and propose a mapping from preposition
senses to the relation labels by collapsing semantically related senses across
prepositions.
| 2013 | Computation and Language |
Reduce Meaningless Words for Joint Chinese Word Segmentation and
Part-of-speech Tagging | Conventional statistics-based methods for joint Chinese word segmentation and
part-of-speech tagging (S&T) have generalization ability to recognize new words
that do not appear in the training data. An undesirable side effect is that a
number of meaningless words will be incorrectly created. We propose an
effective and efficient framework for S&T that introduces features to
significantly reduce meaningless word generation. A general lexicon, Wikipedia and a large-scale raw corpus of 200 billion characters are used to generate word-based features for wordhood. The word-lattice based framework consists
of a character-based model and a word-based model in order to employ our
word-based features. Experiments on Penn Chinese treebank 5 show that this
method has a 62.9% reduction of meaningless word generation in comparison with
the baseline. As a result, the F1 measure for segmentation is increased to
0.984.
| 2013 | Computation and Language |
Fast and accurate sentiment classification using an enhanced Naive Bayes
model | We have explored different methods of improving the accuracy of a Naive Bayes
classifier for sentiment analysis. We observed that a combination of methods
like negation handling, word n-grams and feature selection by mutual
information results in a significant improvement in accuracy. This implies that
a highly accurate and fast sentiment classifier can be built using a simple
Naive Bayes model that has linear training and testing time complexities. We
achieved an accuracy of 88.80% on the popular IMDB movie reviews dataset.
| 2013 | Computation and Language |
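Illustrative aside: a compact sketch of a Naive Bayes sentiment classifier with the simple negation-handling trick mentioned above (prefixing tokens that follow a negator) plus word n-grams; the training snippets are invented and the pipeline is far smaller than the authors' full system.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

NEGATORS = {"not", "no", "never", "n't"}

def mark_negation(text):
    """Prefix tokens after a negation word so 'not good' becomes 'not NOT_good'."""
    out, negate = [], False
    for tok in text.lower().split():
        out.append("NOT_" + tok if negate else tok)
        if tok in NEGATORS:
            negate = True
        elif tok.endswith((".", "!", "?", ",")):
            negate = False
    return " ".join(out)

# Hypothetical toy training data
texts = ["a great and enjoyable movie", "not good at all",
         "truly bad acting", "not bad , quite fun"]
labels = ["pos", "neg", "neg", "pos"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit([mark_negation(t) for t in texts], labels)
print(model.predict([mark_negation("not a good movie")]))
```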
Development of a Hindi Lemmatizer | We live in a translingual society; in order to communicate with people from different parts of the world we need to have expertise in their respective languages. Learning all these languages is not at all possible; therefore we need a mechanism which can do this task for us. Machine translators have emerged as a tool which can perform this task. In order to develop a machine translator we need to develop several different rules. The very first module that comes in the machine translation pipeline is morphological analysis. Stemming and lemmatization come under morphological analysis. In this paper we have created a lemmatizer which generates rules for removing affixes, along with rules for creating a proper root word.
| 2013 | Computation and Language |
Extended Lambek calculi and first-order linear logic | First-order multiplicative intuitionistic linear logic (MILL1) can be seen as
an extension of the Lambek calculus. In addition to the fragment of MILL1 which
corresponds to the Lambek calculus (of Moot & Piazza 2001), I will show
fragments of MILL1 which generate the multiple context-free languages and which correspond to the Displacement calculus of Morrill et al.
| 2013 | Computation and Language |
The User Feedback on SentiWordNet | With the release of SentiWordNet 3.0 the related Web interface has been
restyled and improved in order to allow users to submit feedback on the
SentiWordNet entries, in the form of the suggestion of alternative triplets of
values for an entry. This paper reports on the release of the user feedback
collected so far and on the plans for the future.
| 2013 | Computation and Language |
A framework for (under)specifying dependency syntax without overloading
annotators | We introduce a framework for lightweight dependency syntax annotation. Our
formalism builds upon the typical representation for unlabeled dependencies,
permitting a simple notation and annotation workflow. Moreover, the formalism
encourages annotators to underspecify parts of the syntax if doing so would
streamline the annotation process. We demonstrate the efficacy of this
annotation on three languages and develop algorithms to evaluate and compare
underspecified annotations.
| 2013 | Computation and Language |
"Not not bad" is not "bad": A distributional account of negation | With the increasing empirical success of distributional models of
compositional semantics, it is timely to consider the types of textual logic
that such models are capable of capturing. In this paper, we address
shortcomings in the ability of current models to capture logical operations
such as negation. As a solution we propose a tripartite formulation for a
continuous vector space representation of semantics and subsequently use this
representation to develop a formal compositional notion of negation within such
models.
| 2013 | Computation and Language |
The Quantum Challenge in Concept Theory and Natural Language Processing | The mathematical formalism of quantum theory has been successfully used in
human cognition to model decision processes and to deliver representations of
human knowledge. As such, quantum cognition inspired tools have improved
technologies for Natural Language Processing and Information Retrieval. In this
paper, we overview the quantum cognition approach developed in our Brussels
team during the last two decades, specifically our identification of quantum
structures in human concepts and language, and the modeling of data from
psychological and corpus-text-based experiments. We discuss our
quantum-theoretic framework for concepts and their conjunctions/disjunctions in
a Fock-Hilbert space structure, adequately modeling a large amount of data
collected on concept combinations. Inspired by this modeling, we put forward
elements for a quantum contextual and meaning-based approach to information
technologies in which 'entities of meaning' are inversely reconstructed from
texts, which are considered as traces of these entities' states.
| 2013 | Computation and Language |
Recurrent Convolutional Neural Networks for Discourse Compositionality | The compositionality of meaning extends beyond the single sentence. Just as
words combine to form the meaning of sentences, so do sentences combine to form
the meaning of paragraphs, dialogues and general discourse. We introduce both a
sentence model and a discourse model corresponding to the two levels of
compositionality. The sentence model adopts convolution as the central
operation for composing semantic vectors and is based on a novel hierarchical
convolutional neural network. The discourse model extends the sentence model
and is based on a recurrent neural network that is conditioned in a novel way
both on the current sentence and on the current speaker. The discourse model is
able to capture both the sequentiality of sentences and the interaction between
different speakers. Without feature engineering or pretraining and with simple
greedy decoding, the discourse model coupled to the sentence model obtains
state of the art performance on a dialogue act classification experiment.
| 2013 | Computation and Language |
An open diachronic corpus of historical Spanish: annotation criteria and
automatic modernisation of spelling | The IMPACT-es diachronic corpus of historical Spanish compiles over one
hundred books --containing approximately 8 million words-- in addition to a
complementary lexicon which links more than 10 thousand lemmas with
attestations of the different variants found in the documents. This textual
corpus and the accompanying lexicon have been released under an open license
(Creative Commons by-nc-sa) in order to permit their intensive exploitation in
linguistic research. Approximately 7% of the words in the corpus (a selection
aimed at enhancing the coverage of the most frequent word forms) have been
annotated with their lemma, part of speech, and modern equivalent. This paper
describes the annotation criteria followed and the standards, based on the Text Encoding Initiative recommendations, used to represent the texts in digital
form. As an illustration of the possible synergies between diachronic textual
resources and linguistic research, we describe the application of statistical
machine translation techniques to infer probabilistic context-sensitive rules
for the automatic modernisation of spelling. The automatic modernisation with
this type of statistical methods leads to very low character error rates when
the output is compared with the supervised modern version of the text.
| 2013 | Computation and Language |
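Illustrative aside: a short sketch of the character error rate used above to evaluate automatic modernisation, computed as character-level edit distance divided by reference length; the example strings are invented.

```python
def edit_distance(a, b):
    """Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(hypothesis, reference):
    return edit_distance(hypothesis, reference) / len(reference)

# Hypothetical automatically modernised output vs. the supervised modern version
print(character_error_rate("dixo la verdad", "dijo la verdad"))
```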
Discriminating word senses with tourist walks in complex networks | Patterns of topological arrangement are widely used by both animal and human brains in the learning process. Nevertheless, automatic learning techniques
frequently overlook these patterns. In this paper, we apply a learning
technique based on the structural organization of the data in the attribute
space to the problem of discriminating the senses of 10 polysemous words. Using
two types of characterization of meanings, namely semantic and topological approaches, we have observed significant accuracy rates in identifying the
suitable meanings in both techniques. Most importantly, we have found that the
characterization based on the deterministic tourist walk improves the
disambiguation process compared with the discrimination achieved with traditional complex network measurements such as assortativity and the clustering coefficient. To our knowledge, this is the first time that such a deterministic walk has been applied to this kind of problem. Therefore, our finding
suggests that the tourist walk characterization may be useful in other related
applications.
| 2013 | Computation and Language |
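Illustrative aside: a minimal sketch of a deterministic tourist walk of the kind referred to above, in which a walker repeatedly moves to the nearest node not visited within a memory window of size mu; the distance matrix is randomly generated toy data, not the paper's setup.

```python
import numpy as np

def tourist_walk(distances, start, mu, max_steps=50):
    """Deterministic tourist walk on a distance matrix with memory window mu."""
    path = [start]
    for _ in range(max_steps):
        current = path[-1]
        forbidden = set(path[-mu:])                     # recently visited nodes
        candidates = [j for j in range(len(distances)) if j not in forbidden]
        if not candidates:
            break
        path.append(min(candidates, key=lambda j: distances[current][j]))
    return path

# Hypothetical points in attribute space and their pairwise Euclidean distances
rng = np.random.default_rng(0)
pts = rng.random((5, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(tourist_walk(dist, start=0, mu=2))
```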
Dialogue System: A Brief Review | A Dialogue System is a system which interacts with humans in natural language. At present many universities are developing dialogue systems in their regional languages. This paper discusses dialogue systems, their components, challenges and evaluation. It helps researchers to obtain information regarding dialogue systems.
| 2013 | Computation and Language |
Punjabi Language Interface to Database: a brief review | Unlike most user-computer interfaces, a natural language interface allows
users to communicate fluently with a computer system with very little preparation. Databases are often hard for users to work with because of their rigid interfaces. A good NLIDB allows a user to enter commands and ask questions in the native language and then, after interpreting them, respond to the user in the native language. For a large number of applications requiring interaction between humans and computer systems, it would be convenient to provide an end-user-friendly interface. A Punjabi language interface to databases would prove fruitful for the native people of Punjab, as it makes it easy for them to use various e-governance applications like Punjab Sewa, Suwidha, Online Public Utility Forms, Online Grievance Cell, Land Records Management System, legacy matters, e-District, agriculture, etc. Punjabi is the mother tongue of more
than 110 million people all around the world. According to available
information, Punjabi ranks 10th from top out of a total of 6,900 languages
recognized internationally by the United Nations. This paper covers a brief
overview of the Natural language interface to database, its different
components, its advantages, disadvantages, approaches and techniques used. The
paper ends with the work done on Punjabi language interface to database and
future enhancements that can be done.
| 2013 | Computation and Language |
Supervised Topical Key Phrase Extraction of News Stories using
Crowdsourcing, Light Filtering and Co-reference Normalization | Fast and effective automated indexing is critical for search and personalized
services. Key phrases that consist of one or more words and represent the main
concepts of the document are often used for the purpose of indexing. In this
paper, we investigate the use of additional semantic features and
pre-processing steps to improve automatic key phrase extraction. These features
include the use of signal words and Freebase categories. Some of these features lead to significant improvements in the accuracy of the results. We also experimented with two forms of document pre-processing that we call light
filtering and co-reference normalization. Light filtering removes sentences
from the document, which are judged peripheral to its main content.
Co-reference normalization unifies several written forms of the same named
entity into a unique form. We also needed a "Gold Standard" - a set of labeled
documents for training and evaluation. While the subjective nature of key
phrase selection precludes a true "Gold Standard", we used Amazon's Mechanical
Turk service to obtain a useful approximation. Our data indicates that the
biggest improvements in performance were due to shallow semantic features, news
categories, and rhetorical signals (nDCG 78.47% vs. 68.93%). The inclusion of
deeper semantic features such as Freebase sub-categories was not beneficial by
itself, but in combination with pre-processing, did cause slight improvements
in the nDCG scores.
| 2013 | Computation and Language |
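Illustrative aside: a short sketch of the nDCG metric quoted in the results above (DCG of the predicted ranking divided by DCG of the ideal ranking, here with linear gains); the relevance scores are invented.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance scores."""
    return sum(rel / math.log2(rank + 1) for rank, rel in enumerate(relevances, 1))

def ndcg(predicted_relevances):
    ideal = sorted(predicted_relevances, reverse=True)
    return dcg(predicted_relevances) / dcg(ideal) if dcg(ideal) > 0 else 0.0

# Hypothetical relevance of the top-5 key phrases returned by an extractor
print(round(ndcg([3, 2, 3, 0, 1]), 4))
```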
Key Phrase Extraction of Lightly Filtered Broadcast News | This paper explores the impact of light filtering on automatic key phrase
extraction (AKE) applied to Broadcast News (BN). Key phrases are words and
expressions that best characterize the content of a document. Key phrases are
often used to index the document or as features in further processing. This
makes improvements in AKE accuracy particularly important. We hypothesized that
filtering out marginally relevant sentences from a document would improve AKE
accuracy. Our experiments confirmed this hypothesis. Elimination of as little as 10% of the document sentences leads to a 2% improvement in AKE precision and recall. Our AKE is built over the MAUI toolkit, which follows a supervised learning
approach. We trained and tested our AKE method on a gold standard made of 8 BN
programs containing 110 manually annotated news stories. The experiments were
conducted within a Multimedia Monitoring Solution (MMS) system for TV and radio
news/programs, running daily, and monitoring 12 TV and 4 radio channels.
| 2013 | Computation and Language |
Recognition of Named-Event Passages in News Articles | We extend the concept of Named Entities to Named Events - commonly occurring
events such as battles and earthquakes. We propose a method for finding
specific passages in news articles that contain information about such events
and report our preliminary evaluation results. Collecting "Gold Standard" data
presents many problems, both practical and conceptual. We present a method for
obtaining such data using the Amazon Mechanical Turk service.
| 2013 | Computation and Language |
A Computational Approach to Politeness with Application to Social
Factors | We propose a computational framework for identifying linguistic aspects of
politeness. Our starting point is a new corpus of requests annotated for
politeness, which we use to evaluate aspects of politeness theory and to
uncover new interactions between politeness markers and context. These findings
guide our construction of a classifier with domain-independent lexical and
syntactic features operationalizing key components of politeness theory, such
as indirection, deference, impersonalization and modality. Our classifier
achieves close to human performance and is effective across domains. We use our
framework to study the relationship between politeness and social power,
showing that polite Wikipedia editors are more likely to achieve high status
through elections, but, once elevated, they become less polite. We see a
similar negative correlation between politeness and power on Stack Exchange,
where users at the top of the reputation scale are less polite than those at
the bottom. Finally, we apply our classifier to a preliminary analysis of
politeness variation by gender and community.
| 2,013 | Computation and Language |
Competency Tracking for English as a Second or Foreign Language Learners | My system utilizes the outcomes feature found in Moodle and other learning
content management systems (LCMSs) to keep track of where students are in terms
of what language competencies they have mastered and the competencies they need
to get where they want to go. These competencies are based on the Common
European Framework for (English) Language Learning. This data can be available
for everyone involved with a given student's progress (e.g. educators, parents,
supervisors and the students themselves). A given student's record of past
accomplishments can also be meshed with those of his classmates. Not only are a
student's competencies easily seen and tracked, educators can view competencies
of a group of students that were achieved prior to enrollment in the class.
This should make curriculum decision making easier and more efficient for
educators.
| 2,013 | Computation and Language |
Arabizi Detection and Conversion to Arabic | Arabizi is Arabic text that is written using Latin characters. Arabizi is
used to present both Modern Standard Arabic (MSA) and Arabic dialects. It is
commonly used in informal settings such as social networking sites and is often
mixed with English. In this paper we address the problems of identifying
Arabizi in text and converting it to Arabic characters. We used word and
sequence-level features to identify Arabizi that is mixed with English. We
achieved an identification accuracy of 98.5%. As for conversion, we used
transliteration mining with language modeling to generate equivalent Arabic
text. We achieved 88.7% conversion accuracy, with roughly a third of errors
being spelling and morphological variants of the forms in ground truth.
| 2,013 | Computation and Language |
The DeLiVerMATH project - Text analysis in mathematics | A high-quality content analysis is essential for retrieval functionalities
but the manual extraction of key phrases and classification is expensive.
Natural language processing provides a framework to automate the process.
Here, a machine-based approach for the content analysis of mathematical texts
is described. A prototype for key phrase extraction and classification of
mathematical texts is presented.
| 2,013 | Computation and Language |
Improving Pointwise Mutual Information (PMI) by Incorporating
Significant Co-occurrence | We design a new co-occurrence based word association measure by incorporating
the concept of significant co-occurrence in the popular word association measure
Pointwise Mutual Information (PMI). By extensive experiments with a large
number of publicly available datasets we show that the newly introduced measure
performs better than other co-occurrence based measures and despite being
resource-light, compares well with the best known resource-heavy distributional
similarity and knowledge based word association measures. We investigate the
source of this performance improvement and find that of the two types of
significant co-occurrence - corpus-level and document-level, the concept of
corpus level significance combined with the use of document counts in place of
word counts is responsible for all the performance gains observed. The concept
of document level significance is not helpful for PMI adaptation.
| 2,013 | Computation and Language |
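For reference, the baseline measure the abstract above adapts is standard pointwise mutual information. The sketch below shows that baseline computed from document counts, as the authors advocate, with hypothetical numbers; the corpus-level significance correction described in the paper is not reproduced here.

```python
import math

def pmi(docs_x, docs_y, docs_xy, total_docs):
    """PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ), with probabilities
    estimated from document-level counts rather than word counts."""
    p_x, p_y, p_xy = docs_x / total_docs, docs_y / total_docs, docs_xy / total_docs
    return math.log2(p_xy / (p_x * p_y))

# Hypothetical counts: "doctor" appears in 1200 documents, "nurse" in 800,
# both co-occur in 300, out of a 100000-document corpus.
print(pmi(1200, 800, 300, 100000))  # ~4.97, i.e. a strong positive association
```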
Polyglot: Distributed Word Representations for Multilingual NLP | Distributed word representations (word embeddings) have recently contributed
to competitive performance in language modeling and several NLP tasks. In this
work, we train word embeddings for more than 100 languages using their
corresponding Wikipedias. We quantitatively demonstrate the utility of our word
embeddings by using them as the sole features for training a part of speech
tagger for a subset of these languages. We find their performance to be
competitive with near state-of-the-art methods in English, Danish and Swedish.
Moreover, we investigate the semantic features captured by these embeddings
through the proximity of word groupings. We will release these embeddings
publicly to help researchers in the development and enhancement of multilingual
applications.
| 2,014 | Computation and Language |
Intelligent Hybrid Man-Machine Translation Quality Estimation | Inferring evaluation scores based on human judgments is invaluable compared
to using current evaluation metrics which are not suitable for real-time
applications e.g. post-editing. However, these judgments are much more
expensive to collect especially from expert translators, compared to evaluation
based on indicators contrasting source and translation texts. This work
introduces a novel approach for quality estimation by combining learnt
confidence scores from a probabilistic inference model based on human
judgments, with selective linguistic features-based scores, where the proposed
inference model infers the credibility of given human ranks to solve the
scarcity and inconsistency issues of human judgments. Experimental results,
using challenging language-pairs, demonstrate improvement in correlation with
human judgments over traditional evaluation metrics.
| 2,013 | Computation and Language |
Improving the quality of Gujarati-Hindi Machine Translation through
part-of-speech tagging and stemmer-assisted transliteration | Machine Translation for Indian languages is an emerging research area.
Transliteration is one of the modules designed while building a translation
system; it maps source-language text into the target language. Simple mapping
decreases the efficiency of the overall translation system. We propose the use
of stemming and part-of-speech tagging to assist transliteration, since the
effectiveness of translation can be improved by part-of-speech tagging and
stemming-assisted transliteration. We have shown that much of the content in
Gujarati gets transliterated while being processed for translation to Hindi.
| 2,013 | Computation and Language |
Opinion Mining and Analysis: A survey | Current research focuses on the area of opinion mining, also called
sentiment analysis, because of the sheer volume of opinion-rich web resources,
such as discussion forums, review sites and blogs, available in digital form.
One important problem in sentiment analysis of product reviews is producing a
summary of opinions based on product features. In this paper, we survey and
analyze various techniques that have been developed for the key tasks of
opinion mining. We have provided an overall picture of what is involved in
developing a software system for opinion mining on the basis of our survey and
analysis.
| 2,013 | Computation and Language |
Genetic approach for arabic part of speech tagging | With the growing number of textual resources available, the ability to
understand them becomes critical. An essential first step in understanding
these sources is the ability to identify the part of speech in each sentence.
Arabic is a morphologically rich language, which presents a challenge for part
of speech tagging. In this paper, our goal is to propose, improve and implement
a part of speech tagger based on a genetic algorithm. The accuracy obtained with
this method is comparable to that of other probabilistic approaches.
| 2,013 | Computation and Language |
Part of Speech Tagging of Marathi Text Using Trigram Method | In this paper we present a part of speech tagger for Marathi, a
morphologically rich language spoken by the native people of Maharashtra. The
general approach used for developing the tagger is statistical, based on the
trigram method. The main idea of the trigram method is to find the most likely
POS tag for a token given the previous two tags, calculating probabilities to
determine the best tag sequence. In this paper we describe the development of
the tagger and the evaluation carried out.
| 2,013 | Computation and Language |
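The trigram scheme sketched in the abstract above scores a candidate tag by the probability of seeing it after the two preceding tags. The greedy toy illustration below is not the authors' implementation and uses hypothetical counts; a full tagger would smooth these probabilities and search for the best whole-sentence sequence, e.g. with the Viterbi algorithm.

```python
# Hypothetical training counts: how often tag t followed the tag pair (t2, t1).
trigram_counts = {("DET", "ADJ"): {"NOUN": 90, "ADJ": 8, "VERB": 2}}

def best_next_tag(prev2, prev1, counts, fallback="NOUN"):
    """Pick the tag t maximizing P(t | prev2, prev1), estimated by relative frequency."""
    followers = counts.get((prev2, prev1), {})
    if not followers:
        return fallback  # crude back-off when the tag context was never seen
    total = sum(followers.values())
    return max(followers, key=lambda t: followers[t] / total)

print(best_next_tag("DET", "ADJ", trigram_counts))  # -> "NOUN"
```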
Rule Based Transliteration Scheme for English to Punjabi | Machine transliteration has emerged as a very important
research area in the field of machine translation. Transliteration basically
aims to preserve the phonological structure of words. Proper transliteration of
named entities plays a very significant role in improving the quality of machine
translation. In this paper we perform machine transliteration for the
English-Punjabi language pair using a rule-based approach. We have constructed
rules for syllabification, the process of extracting or separating syllables
from words. We calculate probabilities for named entities (proper names and
locations). For words which do not fall under the category of named entities,
separate probabilities are calculated using relative frequency through the
statistical machine translation toolkit MOSES. Using these probabilities we
transliterate the input text from English to Punjabi.
| 2,013 | Computation and Language |
Says who? Automatic Text-Based Content Analysis of Television News | We perform an automatic analysis of television news programs, based on the
closed captions that accompany them. Specifically, we collect all the news
broadcasted in over 140 television channels in the US during a period of six
months. We start by segmenting, processing, and annotating the closed captions
automatically. Next, we focus on the analysis of their linguistic style and on
mentions of people using NLP methods. We present a series of key insights about
news providers, people in the news, and we discuss the biases that can be
uncovered by automatic means. These insights are contrasted by looking at the
data from multiple points of view, including qualitative assessment.
| 2,013 | Computation and Language |
Good Debt or Bad Debt: Detecting Semantic Orientations in Economic Texts | The use of robo-readers to analyze news texts is an emerging technology trend
in computational finance. In recent research, a substantial effort has been
invested to develop sophisticated financial polarity-lexicons that can be used
to investigate how financial sentiments relate to future company performance.
However, based on experience from other fields, where sentiment analysis is
commonly applied, it is well-known that the overall semantic orientation of a
sentence may differ from the prior polarity of individual words. The objective
of this article is to investigate how semantic orientations can be better
detected in financial and economic news by accommodating the overall
phrase-structure information and domain-specific use of language. Our three
main contributions are: (1) establishment of a human-annotated finance
phrase-bank, which can be used as benchmark for training and evaluating
alternative models; (2) presentation of a technique to enhance financial
lexicons with attributes that help to identify expected direction of events
that affect overall sentiment; (3) development of a linearized phrase-structure
model for detecting contextual semantic orientations in financial and economic
news texts. The relevance of the newly added lexicon features and the benefit
of using the proposed learning-algorithm are demonstrated in a comparative
study against previously used general sentiment models as well as the popular
word frequency models used in recent financial studies. The proposed framework
is parsimonious and avoids the explosion in feature-space caused by the use of
conventional n-gram features.
| 2,013 | Computation and Language |
Clustering Algorithm for Gujarati Language | Natural language processing is still an active research area and now
attracts researchers worldwide. It includes analyzing a language based on its
structure and then tagging each word appropriately with its grammatical base.
Here we have a set of 50,000 tagged words, and we try to cluster these Gujarati
words using an algorithm of our own design. Many clustering techniques are
available, e.g. single linkage, complete linkage and average linkage. Since the
number of clusters to be formed is not known in advance, it depends on the type
of data set provided. Clustering serves as a preprocessing step for stemming,
the process of extracting the root from a word, e.g. cats = cat + s, where cat
is a noun and the word is its plural form.
| 2,013 | Computation and Language |
Speaker Independent Continuous Speech to Text Converter for Mobile
Application | An efficient speech to text converter for mobile application is presented in
this work. The prime motive is to formulate a system which would give optimum
performance in terms of complexity, accuracy, delay and memory requirements for
mobile environment. The speech to text converter consists of two stages namely
front-end analysis and pattern recognition. The front end analysis involves
preprocessing and feature extraction. The traditional voice activity detection
algorithms which track only energy cannot successfully identify potential
speech from input because the unwanted part of the speech also has some energy
and appears to be speech. In the proposed system, a VAD that calculates the
energy of the high-frequency part separately, as a zero crossing rate, is used
to differentiate noise from speech. Mel Frequency Cepstral Coefficient (MFCC)
is used as the
feature extraction method and Generalized Regression Neural Network is used as
recognizer. MFCC provides low word error rate and better feature extraction.
Neural Network improves the accuracy. Thus a small database containing all
possible syllable pronunciation of the user is sufficient to give recognition
accuracy closer to 100%. Thus the proposed technique enables the realization of
real-time speaker-independent applications like mobile phones, PDAs etc.
| 2,013 | Computation and Language |
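The voice activity detection step described above relies on more than frame energy alone. The sketch below is a generic frame-level illustration combining short-time energy with the zero-crossing rate under assumed thresholds; it is not the paper's tuned detector, and the thresholds and test signal are hypothetical.

```python
import numpy as np

def frame_features(frame):
    """Short-time energy and zero-crossing rate of a single audio frame."""
    energy = float(np.sum(frame.astype(np.float64) ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0)
    return energy, zcr

def is_speech(frame, energy_thresh=0.01, zcr_thresh=0.25):
    """Rough voice-activity decision: voiced speech tends to show high energy
    and a comparatively low zero-crossing rate, while noise shows the opposite."""
    energy, zcr = frame_features(frame)
    return energy > energy_thresh and zcr < zcr_thresh

# Hypothetical 20 ms frame of a 200 Hz tone sampled at 16 kHz (320 samples).
frame = 0.1 * np.sin(2 * np.pi * 200 * np.arange(320) / 16000)
print(is_speech(frame))  # True for this synthetic voiced-like frame
```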
Human and Automatic Evaluation of English-Hindi Machine Translation | Research in machine translation has been going on for the past 60 years, and
new techniques for this field are being developed every day. As a result, we
have witnessed the development of many automatic machine translators. The
manager of a machine translation development project needs to know how the
performance increases or decreases after changes have been made to the system.
Due to this reason, a need for evaluation of machine translation systems was
felt. In this article, we shall present the evaluation of some machine
translators. This evaluation will be done by a human evaluator and by some
automatic evaluation metrics, which will be done at sentence, document and
system level. In the end we shall also discuss the comparison between the
evaluations.
| 2,013 | Computation and Language |
Information content versus word length in natural language: A reply to
Ferrer-i-Cancho and Moscoso del Prado Martin [arXiv:1209.1751] | Recently, Ferrer i Cancho and Moscoso del Prado Martin [arXiv:1209.1751]
argued that an observed linear relationship between word length and average
surprisal (Piantadosi, Tily, & Gibson, 2011) is not evidence for communicative
efficiency in human language. We discuss several shortcomings of their approach
and critique: their model critically rests on inaccurate assumptions, is
incapable of explaining key surprisal patterns in language, and is incompatible
with recent behavioral results. More generally, we argue that statistical
models must not critically rely on assumptions that are incompatible with the
real system under study.
| 2,013 | Computation and Language |
Learning Frames from Text with an Unsupervised Latent Variable Model | We develop a probabilistic latent-variable model to discover semantic
frames---types of events and their participants---from corpora. We present a
Dirichlet-multinomial model in which frames are latent categories that explain
the linking of verb-subject-object triples, given document-level sparsity. We
analyze what the model learns, and compare it to FrameNet, noting it learns
some novel and interesting frames. This document also contains a discussion of
inference issues, including concentration parameter learning; and a small-scale
error analysis of syntactic parsing accuracy.
| 2,013 | Computation and Language |
Connecting Language and Knowledge Bases with Embedding Models for
Relation Extraction | This paper proposes a novel approach for relation extraction from free text
which is trained to jointly use information from the text and from existing
knowledge. Our model is based on two scoring functions that operate by learning
low-dimensional embeddings of words and of entities and relationships from a
knowledge base. We empirically show on New York Times articles aligned with
Freebase relations that our approach is able to efficiently use the extra
information provided by a large subset of Freebase data (4M entities, 23k
relationships) to improve over existing methods that rely on text features
alone.
| 2,013 | Computation and Language |
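The scoring functions mentioned above operate on low-dimensional embeddings of words, entities and relationships. The sketch below is not the authors' model; it only illustrates the general shape of embedding-based scoring, using a translation-style distance and randomly initialized, hypothetical embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Hypothetical embedding tables; in practice these are learned from text and a KB.
entity_emb = {"Barack_Obama": rng.normal(size=dim), "Hawaii": rng.normal(size=dim)}
relation_emb = {"place_of_birth": rng.normal(size=dim)}

def score(head, relation, tail):
    """Higher is better: negative distance between (head + relation) and tail,
    a translation-style score used here only to illustrate embedding scoring."""
    h, r, t = entity_emb[head], relation_emb[relation], entity_emb[tail]
    return -float(np.linalg.norm(h + r - t))

print(score("Barack_Obama", "place_of_birth", "Hawaii"))
```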
Extracting Connected Concepts from Biomedical Texts using Fog Index | In this paper, we establish Fog Index (FI) as a text filter to locate the
sentences in texts that contain connected biomedical concepts of interest. To
do so, we have used 24 random papers each containing four pairs of connected
concepts. For each pair, we categorize sentences based on whether they contain
both, any or none of the concepts. We then use FI to measure difficulty of the
sentences of each category and find that sentences containing both of the
concepts have low readability. We rank sentences of a text according to their
FI and select 30 percent of the most difficult sentences. We use an association
matrix to track the most frequent pairs of concepts in them. This matrix
reports that the first filter produces some pairs that hold almost no
connections. To remove these unwanted pairs, we use the Equally Weighted
Harmonic Mean of their Positive Predictive Value (PPV) and Sensitivity as a
second filter. Experimental results demonstrate the effectiveness of our
method.
| 2,013 | Computation and Language |
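The Fog Index used as a filter above is the standard Gunning readability formula, 0.4 x (average sentence length + percentage of words with three or more syllables). The sketch below is a minimal illustration with a naive vowel-group syllable heuristic; it is not the authors' code.

```python
import re

def count_syllables(word):
    """Crude vowel-group heuristic; real syllable counters are more careful."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    """Gunning Fog Index: 0.4 * (words/sentences + 100 * complex_words/words),
    where complex words are those with three or more syllables."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100 * len(complex_words) / len(words))

# A harder sentence scores higher (is less readable) than a simpler one.
print(fog_index("Angiotensin receptors modulate cardiovascular homeostasis."))
print(fog_index("The cat sat on the mat."))
```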
Exploring The Contribution of Unlabeled Data in Financial Sentiment
Analysis | With the proliferation of its applications in various industries, sentiment
analysis by using publicly available web data has become an active research
area in text classification during these years. It is argued by researchers
that semi-supervised learning is an effective approach to this problem since it
is capable to mitigate the manual labeling effort which is usually expensive
and time-consuming. However, there was a long-term debate on the effectiveness
of unlabeled data in text classification. This was partially caused by the fact
that many assumptions in theoretic analysis often do not hold in practice. We
argue that this problem may be further understood by adding an additional
dimension in the experiment. This allows us to address this problem in the
perspective of bias and variance in a broader view. We show that the well-known
performance degradation issue caused by unlabeled data can be reproduced as a
subset of the whole scenario. We argue that if the bias-variance trade-off is
to be better balanced by a more effective feature selection method unlabeled
data is very likely to boost the classification performance. We then propose a
feature selection framework in which labeled and unlabeled training samples are
both considered. We discuss its potential in achieving such a balance. Besides,
the application to financial sentiment analysis is chosen because it not only
exemplifies an important application, but its data also possesses better
illustrative power. The implications of this study for text classification and
financial sentiment analysis are both discussed.
| 2,013 | Computation and Language |
Context Specific Event Model For News Articles | We present a new context based event indexing and event ranking model for
News Articles. The context event clusters formed from the UNL Graphs use a
modified scoring scheme for segmenting events, followed by clustering of
events. From the context clusters obtained, three models are developed:
identification of main and sub-events, event indexing, and event ranking. Based
on the properties considered from the UNL Graphs for the modified scoring, main
events and the sub-events associated with them are identified. The temporal
details obtained from the context clusters are stored using a hashmap data
structure. These details are the place where the event took place, the person
involved in the event, and the time when the event took place. Based on the
information collected from the context clusters, three indices are generated: a
time index, a person index, and a place index. These indices give complete
details about every event obtained from the context clusters. A new scoring
scheme is introduced for ranking the events. The scoring scheme for event
ranking gives weightage based on the priority level of the events. The priority level
includes the occurrence of the event in the title of the document, event
frequency, and inverse document frequency of the events.
| 2,013 | Computation and Language |
Boundary identification of events in clinical named entity recognition | The problem of named entity recognition in the medical/clinical domain has
gained increasing attention due to its vital role in a wide range of clinical
decision support applications. The identification of a complete and correct term
span is vital for further knowledge synthesis (e.g., coding/mapping concepts to
thesauruses and classification standards). This paper investigates boundary
adjustment by sequence labeling representation models and post-processing
techniques in the problem of clinical named entity recognition (recognition of
clinical events). Using current state-of-the-art sequence labeling algorithm
(conditional random fields), we show experimentally that sequence labeling
representation and post-processing can be significantly helpful in strict
boundary identification of clinical events.
| 2,013 | Computation and Language |
Logical analysis of natural language semantics to solve the problem of
computer understanding | An object-oriented approach to creating a natural language understanding
system is considered. The understanding program is a formal system built on
predicate calculus. Horn clauses are used as well-formed formulas, and
inference is based on the resolution principle. Sentences of natural
language are represented as sets of typical predicates. These predicates
describe physical objects and processes, abstract objects, categories and
semantic relations between objects. Predicates for concrete assertions are
saved in a database. To describe the semantics of classes for physical objects,
abstract concepts and processes, a knowledge base is applied. The proposed
representation of natural language sentences is a semantic net. Nodes of such
net are typical predicates. This approach is promising because, firstly, such
typification of nodes substantially simplifies the construction of processing
algorithms and object descriptions; secondly, the effectiveness of the
algorithms is increased (particularly for large numbers of nodes); and thirdly,
encyclopedic knowledge is used to describe the semantics of words, which makes
it possible to substantially extend the class of problems that can be solved.
| 2,013 | Computation and Language |
The Royal Birth of 2013: Analysing and Visualising Public Sentiment in
the UK Using Twitter | Analysis of information retrieved from microblogging services such as Twitter
can provide valuable insight into public sentiment in a geographic region. This
insight can be enriched by visualising information in its geographic context.
Two underlying approaches for sentiment analysis are dictionary-based and
machine learning. The former is popular for public sentiment analysis, and the
latter has found limited use for aggregating public sentiment from Twitter
data. The research presented in this paper aims to extend the machine learning
approach for aggregating public sentiment. To this end, a framework for
analysing and visualising public sentiment from a Twitter corpus is developed.
A dictionary-based approach and a machine learning approach are implemented
within the framework and compared using one UK case study, namely the royal
birth of 2013. The case study validates the feasibility of the framework for
analysis and rapid visualisation. One observation is that there is good
correlation between the results produced by the popular dictionary-based
approach and the machine learning approach when large volumes of tweets are
analysed. However, for rapid analysis to be possible faster methods need to be
developed using big data techniques and parallel methods.
| 2,016 | Computation and Language |
Exploratory Analysis of Highly Heterogeneous Document Collections | We present an effective multifaceted system for exploratory analysis of
highly heterogeneous document collections. Our system is based on intelligently
tagging individual documents in a purely automated fashion and exploiting these
tags in a powerful faceted browsing framework. Tagging strategies employed
include both unsupervised and supervised approaches based on machine learning
and natural language processing. As one of our key tagging strategies, we
introduce the KERA algorithm (Keyword Extraction for Reports and Articles).
KERA extracts topic-representative terms from individual documents in a purely
unsupervised fashion and is revealed to be significantly more effective than
state-of-the-art methods. Finally, we evaluate our system in its ability to
help users locate documents pertaining to military critical technologies buried
deep in a large heterogeneous sea of information.
| 2,013 | Computation and Language |
Hidden Structure and Function in the Lexicon | How many words are needed to define all the words in a dictionary?
Graph-theoretic analysis reveals that about 10% of a dictionary is a unique
Kernel of words that define one another and all the rest, but this is not the
smallest such subset. The Kernel consists of one huge strongly connected
component (SCC), about half its size, the Core, surrounded by many small SCCs,
the Satellites. Core words can define one another but not the rest of the
dictionary. The Kernel also contains many overlapping Minimal Grounding Sets
(MGSs), each about the same size as the Core, each part-Core, part-Satellite.
MGS words can define all the rest of the dictionary. They are learned earlier,
more concrete and more frequent than the rest of the dictionary. Satellite
words, not correlated with age or frequency, are less concrete (more abstract)
words that are also needed for full lexical power.
| 2,013 | Computation and Language |
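The Kernel/Core/Satellite decomposition described above rests on standard strongly-connected-component analysis of the graph whose edges run from defining words to the words they define. The sketch below applies that analysis to a toy dictionary (not the study's data), assuming the networkx library is available.

```python
import networkx as nx

# Toy "dictionary": each word maps to the words used in its definition.
definitions = {
    "animal": ["living", "thing"],
    "living": ["thing", "animal"],
    "thing": ["animal"],
    "cat": ["animal", "small"],
    "small": ["thing"],
}

# Draw an edge from each defining word to the word it helps define.
G = nx.DiGraph()
for word, defining_words in definitions.items():
    for d in defining_words:
        G.add_edge(d, word)

sccs = list(nx.strongly_connected_components(G))
print(max(sccs, key=len))            # largest mutually-defining set: {'animal', 'living', 'thing'}
print(sorted(len(c) for c in sccs))  # the remaining words fall into singleton components
```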
B(eo)W(u)LF: Facilitating recurrence analysis on multi-level language | Discourse analysis may seek to characterize not only the overall composition
of a given text but also the dynamic patterns within the data. This technical
report introduces a data format intended to facilitate multi-level
investigations, which we call the by-word long-form or B(eo)W(u)LF. Inspired by
the long-form data format required for mixed-effects modeling, B(eo)W(u)LF
structures linguistic data into an expanded matrix encoding any number of
researcher-specified markers, making it ideal for recurrence-based analyses.
While we do not necessarily claim to be the first to use methods along these
lines, we have created a series of tools utilizing Python and MATLAB to enable
such discourse analyses and demonstrate them using 319 lines of the Old English
epic poem, Beowulf, translated into modern English.
| 2,013 | Computation and Language |
System and Methods for Converting Speech to SQL | This paper concerns the conversion of a Spoken English Language Query
into SQL for retrieving data from an RDBMS. A user submits a query as a speech
signal through the user interface and gets the result of the query in the text
format. We have developed the acoustic and language models using which a speech
utterance can be converted into English text query and thus natural language
processing techniques can be applied on this English text query to generate an
equivalent SQL query. For conversion of speech into English text HTK and Julius
tools have been used and for conversion of English text query into SQL query we
have implemented a System which uses rule based translation to translate
English Language Query into SQL Query. The translation uses lexical analyzer,
parser and syntax directed translation techniques like in compilers. JFLex and
BYACC tools have been used to build lexical analyzer and parser respectively.
The system is domain independent, i.e. it can run on different databases, as it
generates lex files from the underlying database.
| 2,013 | Computation and Language |
Implementation Of Back-Propagation Neural Network For Isolated Bangla
Speech Recognition | This paper is concerned with the development of Back-propagation Neural
Network for Bangla Speech Recognition. In this paper, ten bangla digits were
recorded from ten speakers and have been recognized. The features of these
speech digits were extracted by the method of Mel Frequency Cepstral
Coefficient (MFCC) analysis. The mfcc features of five speakers were used to
train the network with Back propagation algorithm. The mfcc features of ten
bangla digit speeches, from 0 to 9, of another five speakers were used to test
the system. All the methods and algorithms used in this research were
implemented using the features of Turbo C and C++ languages. From our
investigation it is seen that the developed system can successfully encode and
analyze the mfcc features of the speech signal for recognition. The developed
system achieved recognition rate about 96.332% for known speakers (i.e.,
speaker dependent) and 92% for unknown speakers (i.e., speaker independent).
| 2,013 | Computation and Language |
Natural Language Web Interface for Database (NLWIDB) | It is a long-term desire of computer users to minimize the communication
gap between the computer and a human. On the other hand, almost all ICT
applications store information in to databases and retrieve from them.
Retrieving information from the database requires knowledge of technical
languages such as Structured Query Language. However majority of the computer
users who interact with the databases do not have a technical background and
are intimidated by the idea of using languages such as SQL. For above reasons,
a Natural Language Web Interface for Database (NLWIDB) has been developed. The
NLWIDB allows the user to query the database in a language more like English,
through a convenient interface over the Internet.
| 2,013 | Computation and Language |
Consensus Sequence Segmentation | In this paper we introduce a method to detect words or phrases in a given
sequence of alphabets without knowing the lexicon. Our linear time unsupervised
algorithm relies entirely on statistical relationships among alphabets in the
input sequence to detect location of word boundaries. We compare our algorithm
to previous approaches from unsupervised sequence segmentation literature and
provide superior segmentation over a number of benchmarks.
| 2,013 | Computation and Language |
An Investigation of the Sampling-Based Alignment Method and Its
Contributions | By investigating the distribution of phrase pairs in phrase translation
tables, the work in this paper describes an approach to increase the number of
n-gram alignments in phrase translation tables output by a sampling-based
alignment method. This approach consists in enforcing the alignment of n-grams
in distinct translation subtables so as to increase the number of n-grams.
Standard normal distribution is used to allot alignment time among translation
subtables, which results in adjustment of the distribution of n-grams. This
leads to better evaluation results on statistical machine translation tasks
than the original sampling-based alignment approach. Furthermore, the
translation quality obtained by merging phrase translation tables computed from
the sampling-based alignment method and from MGIZA++ is examined.
| 2,013 | Computation and Language |
Can inferred provenance and its visualisation be used to detect
erroneous annotation? A case study using UniProtKB | A constant influx of new data poses a challenge in keeping the annotation in
biological databases current. Most biological databases contain significant
quantities of textual annotation, which often contains the richest source of
knowledge. Many databases reuse existing knowledge: during the curation process,
annotations are often propagated between entries. However, this is often not
made explicit. Therefore, it can be hard, potentially impossible, for a reader
to identify where an annotation originated from. Within this work we attempt to
identify annotation provenance and track its subsequent propagation.
Specifically, we exploit annotation reuse within the UniProt Knowledgebase
(UniProtKB), at the level of individual sentences. We describe a visualisation
approach for the provenance and propagation of sentences in UniProtKB which
enables a large-scale statistical analysis. Initially levels of sentence reuse
within UniProtKB were analysed, showing that reuse is heavily prevalent, which
enables the tracking of provenance and propagation. By analysing sentences
throughout UniProtKB, a number of interesting propagation patterns were
identified, covering over 100,000 sentences. Over 8000 sentences remain in the
database after they have been removed from the entries where they originally
occurred. Analysing a subset of these sentences suggest that approximately 30%
are erroneous, whilst 35% appear to be inconsistent. These results suggest that
being able to visualise sentence propagation and provenance can aid in the
determination of the accuracy and quality of textual annotation. Source code
and supplementary data are available from the authors' website.
| 2,013 | Computation and Language |
A Literature Review: Stemming Algorithms for Indian Languages | Stemming is the process of extracting the root word from a given inflected
word. It plays a significant role in numerous applications of Natural
Language Processing (NLP). The stemming problem has been addressed in many
contexts and by researchers in many disciplines. This expository paper presents
a survey of some of the latest developments in stemming algorithms in data
mining and also presents solutions for various Indian language stemming
algorithms along with their results.
| 2,013 | Computation and Language |
Linear models and linear mixed effects models in R with linguistic
applications | This text is a conceptual introduction to mixed effects modeling with
linguistic applications, using the R programming environment. The reader is
introduced to linear modeling and assumptions, as well as to mixed
effects/multilevel modeling, including a discussion of random intercepts,
random slopes and likelihood ratio tests. The example used throughout the text
focuses on the phonetic analysis of voice pitch data.
| 2,013 | Computation and Language |
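For orientation, the simplest model such a tutorial builds up to, a linear model with a random intercept per subject, can be written as below; this is the textbook form, not a formula quoted from the text itself.

```latex
y_{ij} = \beta_0 + \beta_1 x_{ij} + u_j + \varepsilon_{ij},
\qquad u_j \sim \mathcal{N}(0, \sigma_u^2),
\qquad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)
```

Here $i$ indexes observations and $j$ indexes subjects (or speakers); adding a random slope additionally lets the coefficient of $x_{ij}$ vary by subject.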
NRC-Canada: Building the State-of-the-Art in Sentiment Analysis of
Tweets | In this paper, we describe how we created two state-of-the-art SVM
classifiers, one to detect the sentiment of messages such as tweets and SMS
(message-level task) and one to detect the sentiment of a term within a
message (term-level task). Our submissions stood first in both tasks on tweets,
obtaining an F-score of 69.02 in the message-level task and 88.93 in the
term-level task. We implemented a variety of surface-form, semantic, and
sentiment features. We also generated two large word-sentiment association
lexicons, one from tweets with sentiment-word hashtags, and one from tweets
with emoticons. In the message-level task, the lexicon-based features provided
a gain of 5 F-score points over all others. Both of our systems can be
replicated using freely available resources.
| 2,013 | Computation and Language |
Crowdsourcing a Word-Emotion Association Lexicon | Even though considerable attention has been given to the polarity of words
(positive and negative) and the creation of large polarity lexicons, research
in emotion analysis has had to rely on limited and small emotion lexicons. In
this paper we show how the combined strength and wisdom of the crowds can be
used to generate a large, high-quality, word-emotion and word-polarity
association lexicon quickly and inexpensively. We enumerate the challenges in
emotion annotation in a crowdsourcing scenario and propose solutions to address
them. Most notably, in addition to questions about emotions associated with
terms, we show how the inclusion of a word choice question can discourage
malicious data entry, help identify instances where the annotator may not be
familiar with the target term (allowing us to reject such annotations), and
help obtain annotations at sense level (rather than at word level). We
conducted experiments on how to formulate the emotion-annotation questions, and
show that asking if a term is associated with an emotion leads to markedly
higher inter-annotator agreement than that obtained by asking if a term evokes
an emotion.
| 2,013 | Computation and Language |
Computing Lexical Contrast | Knowing the degree of semantic contrast between words has widespread
application in natural language processing, including machine translation,
information retrieval, and dialogue systems. Manually-created lexicons focus on
opposites, such as {\rm hot} and {\rm cold}. Opposites are of many kinds such
as antipodals, complementaries, and gradable. However, existing lexicons often
do not classify opposites into the different kinds. They also do not explicitly
list word pairs that are not opposites but yet have some degree of contrast in
meaning, such as {\rm warm} and {\rm cold} or {\rm tropical} and {\rm
freezing}. We propose an automatic method to identify contrasting word pairs
that is based on the hypothesis that if a pair of words, $A$ and $B$, are
contrasting, then there is a pair of opposites, $C$ and $D$, such that $A$ and
$C$ are strongly related and $B$ and $D$ are strongly related. (For example,
there exists the pair of opposites {\rm hot} and {\rm cold} such that {\rm
tropical} is related to {\rm hot,} and {\rm freezing} is related to {\rm
cold}.) We will call this the contrast hypothesis. We begin with a large
crowdsourcing experiment to determine the amount of human agreement on the
concept of oppositeness and its different kinds. In the process, we flesh out
key features of different kinds of opposites. We then present an automatic and
empirical measure of lexical contrast that relies on the contrast hypothesis,
corpus statistics, and the structure of a {\it Roget}-like thesaurus. We show
that the proposed measure of lexical contrast obtains high precision and large
coverage, outperforming existing methods.
| 2,013 | Computation and Language |
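The contrast hypothesis stated above can be checked mechanically once an opposites list and a relatedness measure are in hand. The sketch below is only a toy illustration of that logic; the thesaurus and corpus statistics used in the paper are replaced by hypothetical stand-ins.

```python
# Hypothetical seed opposites and a toy relatedness relation (sets of related words).
opposites = [("hot", "cold"), ("big", "small")]
related = {
    "hot": {"tropical", "warm", "hot"},
    "cold": {"freezing", "chilly", "cold"},
    "big": {"huge", "large", "big"},
    "small": {"tiny", "little", "small"},
}

def contrasting(a, b):
    """A and B are deemed contrasting if some opposite pair (C, D) exists such
    that A is strongly related to C and B to D (or the other way around)."""
    for c, d in opposites:
        if (a in related[c] and b in related[d]) or (a in related[d] and b in related[c]):
            return True
    return False

print(contrasting("tropical", "freezing"))  # True, via the opposite pair (hot, cold)
print(contrasting("tropical", "huge"))      # False
```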
Tagging Scientific Publications using Wikipedia and Natural Language
Processing Tools. Comparison on the ArXiv Dataset | In this work, we compare two simple methods of tagging scientific
publications with labels reflecting their content. As a first source of labels
Wikipedia is employed; the second label set is constructed from the noun phrases
occurring in the analyzed corpus. We examine the statistical properties and the
effectiveness of both approaches on the dataset consisting of abstracts from
0.7 million scientific documents deposited in the ArXiv preprint collection.
We believe that obtained tags can be later on applied as useful document
features in various machine learning tasks (document similarity, clustering,
topic modelling, etc.).
| 2,014 | Computation and Language |
Advances in the Logical Representation of Lexical Semantics | The integration of lexical semantics and pragmatics in the analysis of the
meaning of natural language has prompted changes to the global framework
derived from Montague. In those works, the original lexicon, in which words
were assigned an atomic type of a single-sorted logic, has been replaced by a
set of many-facetted lexical items that can compose their meaning with salient
contextual properties using a rich typing system as a guide. Having related our
proposal for such an expanded framework \LambdaTYn, we present some recent
advances in the logical formalisms associated, including constraints on lexical
transformations and polymorphic quantifiers, and ongoing discussions and
research on the granularity of the type system and the limits of transitivity.
| 2,013 | Computation and Language |
Learning to answer questions | We present an open-domain Question-Answering system that learns to answer
questions based on successful past interactions. We follow a pattern-based
approach to Answer-Extraction, where (lexico-syntactic) patterns that relate a
question to its answer are automatically learned and used to answer future
questions. Results show that our approach contributes to the system's best
performance when it is conjugated with typical Answer-Extraction strategies.
Moreover, it allows the system to learn with the answered questions and to
rectify wrong or unsolved past questions.
| 2,013 | Computation and Language |
Analysing Quality of English-Hindi Machine Translation Engine Outputs
Using Bayesian Classification | This paper considers the problem of estimating the quality of machine
translation outputs independently of human intervention, which is generally
addressed using machine learning techniques. There are various measures through
which a machine learns translation quality. Automatic evaluation metrics
produce good correlation at the corpus level but cannot produce the same
results at the segment or sentence level. In this paper 16
features are extracted from the input sentences and their translations and a
quality score is obtained based on Bayesian inference produced from training
data.
| 2,013 | Computation and Language |
Rank-frequency relation for Chinese characters | We show that the Zipf's law for Chinese characters perfectly holds for
sufficiently short texts (few thousand different characters). The scenario of
its validity is similar to the Zipf's law for words in short English texts. For
long Chinese texts (or for mixtures of short Chinese texts), rank-frequency
relations for Chinese characters display a two-layer, hierarchic structure that
combines a Zipfian power-law regime for frequent characters (first layer) with
an exponential-like regime for less frequent characters (second layer). For
these two layers we provide different (though related) theoretical descriptions
that include the range of low-frequency characters (hapax legomena). The
comparative analysis of rank-frequency relations for Chinese characters versus
English words illustrates the extent to which the characters play for Chinese
writers the same role as the words for those writing within alphabetical
systems.
| 2,014 | Computation and Language |
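The Zipfian regime for frequent characters referred to above is the usual power law f(r) proportional to r^(-alpha). The sketch below estimates the exponent from rank-frequency data with a log-log least-squares fit; the counts are hypothetical, not taken from the paper's corpora.

```python
import numpy as np

# Hypothetical frequencies of the most frequent characters, already sorted descending.
freqs = np.array([9500, 5200, 3400, 2600, 2100, 1750, 1500, 1320, 1180, 1060], dtype=float)
ranks = np.arange(1, len(freqs) + 1)

# Zipf's law f(r) ~ C * r**(-alpha) is a straight line in log-log space,
# so the negated slope of a least-squares fit estimates alpha.
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"estimated alpha = {-slope:.2f}")
```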